
Re: Compressive images test

From: Chris Lilley <chris@w3.org>
Date: Mon, 16 Sep 2013 17:00:01 +0200
Message-ID: <349555861.20130916170001@w3.org>
To: Marcos Caceres <w3c@marcosc.com>
CC: Frédéric Kayser <f.kayser@free.fr>, <public-respimg@w3.org>, Jason Grigsby <jason@cloudfour.com>
Hello Marcos,

Monday, September 16, 2013, 1:04:00 PM, you wrote:

> Hi Frédéric,

> On Friday, September 13, 2013 at 7:34 PM, Frédéric Kayser wrote:

>> Pardon my French, but apparently these people don't even have a clue about how JPEG compression works internally: no word about chroma subsampling effects, and they use a 300x200-pixel image sample that doesn't fit nicely into MCUs.
>> Here are a few explanations about it:
>> - the file sizes of about 20 images at different sizes ranging from 240x160 to 300x200 pixels (resized using ImageMagick convert -resize XxY\! -quality 76% -sampling-factor 2x2 -unsharp 1.5x1+0.7+0.02, a rough equivalent to Photoshop Save For Web quality 50); notice the inflection points around 240x160, 264x176 and 288x192: some images at 288x192 or 264x176 weigh even less than their 255x190 and 261x174 counterparts!
>> - a sample 288x192 image with its DCT matrices count
>> - the same at 291x194 where new DCT matrices have been added into the JPEG file to hold the 3 extra columns and the 2 extra rows (now guess why the file size made such a jump between 288x192 and 291x194)
>> - finally all the pixels the file holds in the outer matrices even those out of the visible frame (JPEGsnoop (http://www.impulseadventure.com/photo/jpeg-snoop.html) can display those but sadly only for sequential JPEGs).

> I'd be surprised if anyone on this list has a clue about the
> above.

Prepare to be surprised, then. Chroma subsampling, for example, is
pretty easy to understand (and the test images should not have had any
such subsampling applied).
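For readers unfamiliar with the term: 2x2 (JPEG 4:2:0) subsampling keeps luma at full resolution but stores only one chroma sample per 2x2 pixel block. A minimal sketch of the resulting sample counts (the image dimensions are illustrative, not taken from the test set):

```python
import math

def sample_counts(width, height, h_factor=2, v_factor=2):
    """Samples stored per plane in a subsampled YCbCr image.

    Y keeps full resolution; Cb and Cr are each reduced by the given
    horizontal and vertical factors (2x2 corresponds to JPEG 4:2:0).
    """
    luma = width * height
    chroma = math.ceil(width / h_factor) * math.ceil(height / v_factor)
    return luma, chroma, chroma

# For a 300x200 image, each chroma plane carries 1/4 of the samples:
y, cb, cr = sample_counts(300, 200)
print(y, cb, cr)  # 60000 15000 15000
```

This is why subsampling matters in a compression test: roughly half of the raw sample data is discarded before any DCT quantisation happens at all.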

> It all sounds very interesting (though a bit like rocket
> surgery),

Not really.

>  but it will be hard for us to apply the information above
> in a useful way. If you can articulate the above as tests or in a
> manner digestible to the CG, that would be helpful.  

It's simple.

When testing image compression, start with uncompressed imagery that
has never been subject to any lossy compression, then encode it in
whatever image format you are testing. Otherwise you are actually
testing how well previously compressed images can be recompressed
(i.e. the interaction between two compression methods, in the presence
of cropping).
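A sketch of that methodology, assuming Pillow as a stand-in for the ImageMagick pipeline quoted above (the gradient source, size list and quality value are illustrative; the point is only that the source has never been through a lossy codec):

```python
from io import BytesIO
from PIL import Image

# Synthetic, never-compressed source image (a smooth gradient), so we
# measure the codec itself rather than its interaction with a prior
# round of JPEG compression.
src = Image.new("RGB", (600, 400))
src.putdata([(x % 256, y % 256, (x + y) % 256)
             for y in range(400) for x in range(600)])

sizes = {}
for w, h in [(240, 160), (264, 176), (288, 192), (291, 194), (300, 200)]:
    buf = BytesIO()
    src.resize((w, h)).save(buf, "JPEG", quality=76)
    sizes[(w, h)] = buf.tell()
    print(f"{w}x{h}: {sizes[(w, h)]} bytes")
```

Repeating this with real uncompressed sources (scans, raw captures) at each target size would give file-size curves whose inflection points can be compared against the MCU boundaries Frédéric describes.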

On the other hand, Frédéric Kayser appears to suggest that all images
on the web should be an integral multiple of an 8x8 pixel square in
size. Sorry, that constraint is not realistic. People will need other
sizes.
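For concreteness, the padding behaviour behind the file-size jumps Frédéric observed can be sketched as follows (my own illustration, not part of the original test: with 2x2 subsampling an MCU covers 16x16 pixels, and the encoder pads each dimension up to the MCU grid):

```python
import math

def mcu_grid(width, height, mcu=16):
    """MCU count and padded dimensions a JPEG encoder actually codes.

    With 2x2 chroma subsampling each MCU spans 16x16 pixels; partial
    MCUs at the right and bottom edges are padded out to full size.
    """
    cols = math.ceil(width / mcu)
    rows = math.ceil(height / mcu)
    return cols * rows, cols * mcu, rows * mcu

# 288x192 fits the 16x16 grid exactly; 291x194 needs a whole extra
# column and row of MCUs, so the encoded area jumps to 304x208.
for w, h in [(288, 192), (291, 194)]:
    n, pw, ph = mcu_grid(w, h)
    print(f"{w}x{h}: {n} MCUs, encoded as {pw}x{ph}")
```

Going from 288x192 to 291x194 adds 3 pixel columns and 2 pixel rows but 31 extra MCUs (216 to 247, about 14% more coded blocks), which is consistent with the file-size jump Frédéric points out, even though no realistic sizing policy can hold every web image to such boundaries.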

Best regards,
 Chris                            mailto:chris@w3.org
Received on Monday, 16 September 2013 15:00:04 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 20:12:40 UTC