xmlgraphics-fop-dev mailing list archives

From Kelly Campbell <c...@channelpoint.com>
Subject RE: external-graphic, tif images, jimi, and *Reader.java
Date Fri, 15 Dec 2000 23:01:11 GMT
Hi Neal,

You're right that TIFFs can supply their own compression, as can most
graphics formats. Support for keeping graphics in their compressed form is
still evolving in FOP. The data coming from APIs like Jimi and JAI is really
meant for screen presentation or Java2D manipulation, where it has to be
uncompressed. Getting at the data in the file's raw compressed format is a
little more difficult, so as far as I know that hasn't been implemented yet.
Eric had some good ideas about what needs to change for this to work, from a
discussion on the list about a month ago on this same subject.
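
Just to illustrate what "keeping the compressed format" means: instead of
asking Jimi for decoded pixels, you'd hand the PDF layer the file's own
bytes, roughly like this (a sketch only, the class name is made up and this
isn't code that's in FOP today):

import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class RawImageDataSketch {
    // Read the graphic file verbatim, without decoding it; the bytes stay
    // in whatever compression the file uses (CCITT, LZW, ...).
    public static byte[] readRawBytes(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        in.close();
        return out.toByteArray();
    }
}

The hard part isn't reading the bytes, it's knowing enough about them (strip
layout, photometric interpretation, and so on) to describe them correctly to
the PDF.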

Some other issues here are making sure we can pull all the relevant flags
and options out of the graphics file, and making sure the compressed data
fits into what the PDF standard supports.
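
For example, a Group 4 tiff would need its parameters carried over into the
image's dictionary in the PDF, along these lines (a sketch, not FOP code;
the Columns/Rows values would come from the file's own tags):

public class CCITTDictSketch {
    // The kind of entries a CCITT Group 4 image needs in its PDF stream
    // dictionary; K = -1 selects Group 4 encoding.
    public static String ccittDictEntries(int columns, int rows) {
        return "/Filter /CCITTFaxDecode "
             + "/DecodeParms << /K -1 /Columns " + columns
             + " /Rows " + rows + " /BlackIs1 false >>";
    }
}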

I'm working on cleaning up the filtering and compression support right now
(See the "FOP and PDF Size" messages that have been going back and forth the
last couple of days) so all streams (not just images) are compressed in the
PDF. I expect to have that ready to commit sometime this weekend.
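
The Flate side of that is pretty simple in Java; it boils down to something
like this (a rough sketch, not the actual code I'll be committing):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DeflaterOutputStream;

public class FlateSketch {
    // Deflate-compress a PDF stream's bytes; the stream's dictionary then
    // gets /Filter /FlateDecode and /Length set to the compressed size.
    public static byte[] flate(byte[] data) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DeflaterOutputStream out = new DeflaterOutputStream(baos);
        out.write(data);
        out.close(); // finishes the deflate stream
        return baos.toByteArray();
    }
}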

-Kelly

-----Original Message-----
From: Neal C. Evans [mailto:nce@uab.edu]
Sent: Friday, December 15, 2000 2:06 PM
To: fop-dev@xml.apache.org
Subject: RE: external-graphic, tif images, jimi, and *Reader.java


I was able to use your tiff reader to successfully read a tif file into a
pdf (woohoo! thanks), but I have discovered a few other issues.

1.  The cool thing about tiffs, according to our graphics person, is that
they use an extremely efficient compression algorithm and can represent a
ton of data in very small files.  Now, looking at JimiImage.loadImage(), it
looks like the image data is being _uncompressed_ and placed into
this.m_bitmaps.  On very large tiff files, say with a width of 2320 and a
height of 3408, this results in an array size of 2320 x 3408 x 3 = 23719680
bytes!  Obviously, most JVMs will barf on the 'this.m_bitmaps = new
byte[this.m_bitmapsSize];' line when this.m_bitmapsSize is that large.

2.  Which leads to the question: why do you uncompress the image before
placing it into the pdf?  In the above example, the 2320 x 3408 image is
represented by a tiff with a filesize of ~40 kb.  It seems like
FopImage.getBitmaps() could be modified so that it returns a byte[] array of
the actual compressed data.  I also noticed in PDFXObject.output() (one
place where FopImage.getBitmaps() is called) that there are a few
interesting lines:

		imgStream.setData(fopimage.getBitmaps());
		imgStream.encode(new PDFFilter(PDFFilter.FLATE_DECODE));
		imgStream.encode(new PDFFilter(PDFFilter.ASCII_HEX_DECODE));

In PDFFilter.java, the following is also defined:

	public static int CCITT_FAX_DECODE = 5;

I happen to know the tiffs are encoded with the CCITT algorithm, so if
'PDFFilter.FLATE_DECODE' were replaced with 'PDFFilter.CCITT_FAX_DECODE' and
fopimage.getBitmaps() returned a byte[] array of the compressed tif, would a
valid pdf result?
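
In other words, something roughly like this is what I'm picturing, which
would also avoid the huge uncompressed array from point 1
(getRawCompressedData() is made up here just to show the idea, and
presumably the CCITT filter would also need its parameters set from the
tiff's tags):

		imgStream.setData(fopimage.getRawCompressedData()); // hypothetical method
		imgStream.encode(new PDFFilter(PDFFilter.CCITT_FAX_DECODE));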

Why not use compression for all image types?  Wouldn't this mean smaller
pdfs?

Thanks for your help,

Neal


Neal Evans, Ph.D.
Senior Applications Architect
Knowledge Management Objects, LLC
(703) 841-4287


