
Re: what about progressively encoded images and http range requests?

From: Le Roux Bodenstein <lerouxb@gmail.com>
Date: Tue, 27 Mar 2012 21:37:57 +0200
Message-ID: <CAD_aen5bsoMtKMnsD5rB09=BCmEpvDiZWM7-6JdPs+GgK3YyZw@mail.gmail.com>
To: Adrian Roselli <Roselli@algonquinstudios.com>
Cc: public-respimg@w3.org
> I had not. That puts the full burden on the server, though, meaning web
> folk without full server control couldn't use it. That's not to say I
> think the idea is good or bad, but that point would be a concern to me.

Sorry I wasn't very clear about this: there is only one image file and
the server is completely dumb. It doesn't do any re-encoding or
conversion or anything. All you need is a server that supports HTTP
range requests, which I would think all web servers already do - that's
how resumable downloads and (I'm guessing) seeking in HTML5 video
work. All the magic happens inside the browser.
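As a rough illustration of the mechanism (not any real browser or server's behaviour), here is a toy Python sketch: a minimal server that honours the Range header, and a client that asks for only the first 64 bytes. The payload and URL path are invented for the demo.

```python
# Toy sketch of an HTTP range request: the server answers
# 206 Partial Content with just the bytes asked for.
import http.server
import re
import threading
import urllib.request

PAYLOAD = bytes(range(256))  # invented stand-in for an image file

class RangeHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        m = re.match(r"bytes=(\d+)-(\d+)", self.headers.get("Range", ""))
        if m:
            start, end = int(m.group(1)), int(m.group(2))
            chunk = PAYLOAD[start:end + 1]
            self.send_response(206)  # Partial Content
            self.send_header("Content-Range",
                             "bytes %d-%d/%d" % (start, end, len(PAYLOAD)))
        else:
            chunk = PAYLOAD
            self.send_response(200)  # no Range header: full download
        self.send_header("Content-Length", str(len(chunk)))
        self.end_headers()
        self.wfile.write(chunk)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    "http://127.0.0.1:%d/photo" % server.server_port,
    headers={"Range": "bytes=0-63"})
with urllib.request.urlopen(req) as resp:
    body = resp.read()
print(resp.status, len(body))  # 206 64
server.shutdown()
```

The point is that the server side is generic byte slicing with no image knowledge at all; everything image-specific happens in the client.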

The file also doesn't contain different sizes. I'm no expert on how
progressive image encoding works, but the way I understand it, JPEGs
as they exist on disk and over the wire aren't bitmaps: they just
contain enough information to recreate those bitmaps. (Thumbnails are
also just representations of those same high-res bitmaps, but I
digress..) Progressive image formats, as far as I understand, contain
the information to produce a very low resolution image (imagine 10
pixel by 10 pixel or even larger blobs of colour) and then
progressively more information to "colour those blurry blocks in"
further and further. I know video encoding does similar trickery.
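To make the "colour the blurry blocks in" idea concrete, here is a toy sketch with invented pixel values, on a single row, which is nothing like a real codec: a coarse pass plus a correction pass reconstructs the original exactly.

```python
# Toy two-pass refinement: a blurry (pixel-doubled) pass plus
# per-pixel corrections reproduces the full-resolution row.
full = [10, 12, 30, 34, 50, 52, 70, 74]  # one row of pixels

# Pass 1: one value per 2-pixel block (the blurry version).
coarse = [full[i] for i in range(0, len(full), 2)]
upscaled = [v for v in coarse for _ in range(2)]  # pixel doubling

# Pass 2: only the differences needed to fix the doubled pixels.
deltas = [f - u for f, u in zip(full, upscaled)]
restored = [u + d for u, d in zip(upscaled, deltas)]
print(restored == full)  # True
```

Stopping after pass 1 gives you a usable half-resolution image; the later bytes only carry the corrections, which is the property the proposal depends on.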

I'm theorizing that it might be possible to tune things so that a
2048x2048 image starts as a 64x64 image that's scaled up (pixel
doubled), with more info to turn it into a 128x128 image, which is
doubled to 256x256, then 512x512, 1024x1024 and finally 2048x2048. You
can also imagine the 64x64 pixel version is made up of 32x32 pixels or
blobs of colour. My point is that at various points in the file you
would have enough info to reconstruct the image at various smaller
sizes very efficiently, and I doubt the quality would be significantly
worse than if you had manually created the thumbnail. So if you knew
where those offsets were, you would be able to download only the first
X bytes. Those bytes aren't wasted either, since the next size up just
builds on top of them.
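Here is a back-of-the-envelope sketch of what such an offset index could look like. The byte costs are completely invented (a real encoder's scan sizes would differ); each level records the offset up to which you would have to download to reconstruct that size.

```python
# Hypothetical index for a doubling pyramid: each level adds only
# the detail missing from the previous one, so the cost per level
# is roughly proportional to its extra pixels.
def level_offsets(base=64, full=2048, bytes_per_pixel=0.25):
    offsets = []
    total = 0
    size = base
    while size <= full:
        prev_pixels = (size // 2) ** 2 if offsets else 0
        total += int((size * size - prev_pixels) * bytes_per_pixel)
        offsets.append((size, total))  # (edge in px, end byte offset)
        size *= 2
    return offsets

for size, end in level_offsets():
    print("%4dx%-4d -> first %d bytes" % (size, size, end))
```

With numbers like these, fetching a 256x256 version costs only the first 16 KB or so of a file whose full contents run to a megabyte - and those 16 KB are reused if the browser later upgrades to a bigger size.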

(Clearly this example is contrived - real images aren't all square and
their dimensions aren't perfect powers of two so it wouldn't be that
simple. I'm also no expert in how image compression really works -
perhaps I'm completely wrong.)

All the server needs to know about is a new image MIME type, and it
needs to understand range requests. I guess the browser would see a
URL used as an image where the URL has an extension matching the new
MIME type. It could do a HEAD request to check the Content-Type header
to make sure. The problem is that it would then have to ask for the
first kilobyte or so (I'm not sure what would be most efficient) so it
can get the image's metadata in order to proceed. This could all be a
bit inefficient in terms of number of requests, but so are all the
other proposals I've seen so far[1]. I guess you could also "opt-in"
by adding a new attribute to the <img> tag, and obviously older image
types wouldn't support this behaviour anyway.
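To sketch that browser-side flow, here is a toy Python version. Everything in it (the MIME type name, the index format, the helper functions) is invented for illustration; no browser works this way today.

```python
# Hypothetical browser logic: confirm the type, then pick the byte
# range that covers the display size actually needed.
PROGRESSIVE_TYPE = "image/x-progressive"  # invented MIME type

def bytes_needed(index, target_px):
    """index: list of (edge_px, end_offset) pairs from the file's
    metadata. Return the smallest prefix reaching target_px, or the
    whole file if the target exceeds every level."""
    for edge, end in index:
        if edge >= target_px:
            return end
    return index[-1][1]

def plan_request(content_type, index, target_px):
    if content_type != PROGRESSIVE_TYPE:
        return None  # fall back to a normal full download
    return "bytes=0-%d" % (bytes_needed(index, target_px) - 1)

# Invented index, matching the doubling idea above.
index = [(64, 1024), (128, 4096), (256, 16384), (512, 65536)]
print(plan_request(PROGRESSIVE_TYPE, index, 200))  # bytes=0-16383
```

If the image later needs to be shown larger, the browser would issue a follow-up request starting where the first one ended, since the earlier bytes still count toward the bigger size.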

What I really like about this is that an image resource is now just
one resource, which fits in with the REST philosophy. And there is now
no reason not to put giant or original images up there, straight from
your camera or out of your photo manipulation software. If the browser
or the user doesn't want the larger image they don't have to download
it, but it is there if they want it. The other thing I really like
about it is that it can be completely automated. There is nothing for
the content author to do or decide.

Le Roux

[1] If you send different image sizes in a picture tag, you would have
to download entirely new images as the screen orientation changes or
the user zooms or resizes the browser. Not being able to zoom in on
the 320 pixel wide image you sent to the mobile browser that's
browsing via wifi is ridiculous when you're sending a 940 pixel wide
image to the desktop browser. You have the extra detail, so why can't
the mobile user see it? You would also have to decide on which image
sizes to go with ahead of time, you would have to store all those
individual images that are just different sized versions of exactly
the same thing, you would have to decide on a url naming
convention... and everything else I listed in my initial proposal.
Received on Tuesday, 27 March 2012 19:38:26 GMT
