Re: image width & height, summary

Larry Masinter wrote:
> For example, imagine running a browser on Flikr, and opening
> up a "set" of 70,000 photos, with a HTML page of 70,000
> images, but only some of which are visible at any one time.

Just as a note, in this situation current browsers will load all 70,000 
images (possibly running out of memory and crashing in the process, of 
course).

And web pages most definitely depend on the behavior that images that 
are not visible are still loaded eagerly.
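To make the dependency concrete, here's a minimal sketch of the kind of preload pattern I mean. The names (`cache`, `preload`, `makeImage`) are mine, purely for illustration; in a real page `makeImage` would just be `new Image()` and the images would never be inserted into the document at all:

```javascript
// Hypothetical sketch of a common preloading pattern: images that are
// never visible, but whose data the page script expects to be fetched.
const cache = {};

function preload(urls, makeImage) {
  for (const url of urls) {
    const img = makeImage(); // in a browser: new Image()
    img.src = url;           // the fetch starts even though img is never shown
    cache[url] = img;        // the page later reads dimensions etc. from here
  }
}
```

A browser that skipped fetching these because they're "not visible" would break every page written this way.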

> does the browser have to have
> cached all of the thumbnails that have ever been viewed, in
> order to meet this MUST requirement?

The browser needs to load all images the page script has access to 
before firing the 'load' event.  Anything else breaks pages.  In 
practice, once an image is loaded dropping the image data also breaks 
pages.  See https://bugzilla.mozilla.org/show_bug.cgi?id=466586 for an 
example (and note that the broken site was high-profile enough that this 
was considered a stop-ship bug for Gecko 1.9.1).
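The ordering requirement here can be modeled like so. This is my own sketch of the constraint, not browser code; `whenLoaded` is a stand-in for the real per-image completion machinery, not a DOM API:

```javascript
// Model of the requirement under discussion: the document's 'load' event
// must not fire until every image the page references has finished loading.
function fireLoadWhenReady(images, onLoad) {
  let pending = images.length;
  if (pending === 0) { onLoad(); return; }
  for (const img of images) {
    img.whenLoaded(() => {
      pending -= 1;
      if (pending === 0) onLoad(); // only once ALL images are complete
    });
  }
}
```

Scripts in 'load' handlers assume exactly this: that dimensions and data for every image are already available when the handler runs.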

> In a specification that is attempting to be precise,
> reasonable implementations shouldn't be non-conformant.

I would welcome any suggestion for how the 70,000 image case can be 
handled without at the same time breaking assumptions other pages make...

> Why is something that can't actually be promised -- that all
> images on a page MUST be cached once they have been loaded
> -- a requirement at all?

Because web pages depend on it....  I'm no happier about this than you 
are, but they do.

> Authors of software and JavaScript libraries would be able
> to read and interpret the compliance statements and
> understand which constraints they can depend on, without
> having to decipher an (incompletely specified) algorithm.

Which I fully agree is a good thing, if it's possible to come up with a 
clear set of such constraints that is exhaustive.  I think Ian agreed 
with that too, for what it's worth.

-Boris

Received on Wednesday, 3 June 2009 03:31:49 UTC