W3C home > Mailing lists > Public > public-html@w3.org > June 2009

RE: image width & height, summary

From: Larry Masinter <masinter@adobe.com>
Date: Tue, 2 Jun 2009 22:45:47 -0700
To: Boris Zbarsky <bzbarsky@MIT.EDU>
CC: HTML WG <public-html@w3.org>
Message-ID: <8B62A039C620904E92F1233570534C9B0118CD95F239@nambx04.corp.adobe.com>
(a few hours left, one more post after this one)

> The browser needs to load all images the page script has access to 
> before firing the 'load' event.  Anything else breaks pages.

Where does it say this? I read that images can go from
'not available' to 'available' asynchronously, and that
an image can go from 'available' to 'not available' if there's
an error during a load (even if it's a "temporarily unavailable"
error?). It sounded as though you could have 5 images, with only
3 of them available initially and the others becoming available
later. If I have 5 images, do they all have to be available
before any script runs?

This sounds like a much more stringent requirement than
what the spec says.
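To make my reading concrete, here is a small sketch of the state model I took away from the spec. The state names, helper functions, and transitions are my paraphrase for illustration, not normative text from the spec:

```javascript
// Hypothetical model of the availability states as I read them.
// Names and transitions are my paraphrase, not quoted spec text.
const UNAVAILABLE = 'unavailable';
const AVAILABLE = 'available';

function createImageState() {
  return { state: UNAVAILABLE };
}

// An asynchronous fetch completes: the image becomes available.
function onLoadComplete(img) {
  img.state = AVAILABLE;
}

// An error during a load (even a "temporarily unavailable" one?)
// can send an available image back to unavailable.
function onLoadError(img) {
  img.state = UNAVAILABLE;
}

// Five images; only three are available initially, the others
// would become available later, asynchronously.
const images = Array.from({ length: 5 }, createImageState);
images.slice(0, 3).forEach(onLoadComplete);
const availableCount =
  images.filter(i => i.state === AVAILABLE).length;
// availableCount is 3 here; nothing in this model forces all
// five to be available before a script runs.
```

Under this reading, a script can observe a mixed population of available and unavailable images at any given moment.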

>> In a specification that is attempting to be precise,
>> reasonable implementations shouldn't be non-conformant.

> I would welcome any suggestion for how the 70,000 image case can be 
> handled without at the same time breaking assumptions other pages make...

Well, there are a lot of pages that "assume" things that aren't true;
for example, assuming that images are always available. Maybe
the JavaScript will throw an error if the image isn't available,
and maybe that will or won't be a big deal.

Suppose some high-profile site made one assumption
("all images are available at load time") and some other
site made a different assumption ("there is no limit to
the number of images on a page, because only the visible
segment of the page's images are cached or guaranteed to
be loaded"), and it's impossible to build a browser that
always satisfies both sets of assumptions, even though
both sets are true *most* of the time. What's the design
principle for deciding who wins? Are high-profile sites'
assumptions more valuable?

> Which I fully agree is a good thing, if it's possible
>  to come with a  clear set of such constraints that
> is exhaustive.  

I believe that from many points of view, it is better to
have an incomplete set of normative constraints that
authors *can* depend on, along with an informative "sample"
or "example" implementation; if there is an implementation
guide, it is better for that part to be explanatory. In the
case of image width and height, that would involve expressing the
constraints on the values of width and height; if it's
necessary to describe the states an image can be in, and the
constraints on state transitions, that's fine, but it's
not clear that it's even necessary.

This would allow careful authors and authoring tools to create
robust code that does not rely unnecessarily on assumptions
that are difficult to ensure, while the algorithmic description
provides guidance on how to implement something that doesn't
"break" current pages (or pages written with incorrect
assumptions), while allowing vendors more flexibility.
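As a sketch of the kind of defensive code a careful author could write under such constraints: rather than assuming an image is available, check before using its dimensions. The helper name below is mine, and a plain object stands in for a real HTMLImageElement (whose `complete` and `naturalWidth` properties this sketch assumes):

```javascript
// Defensive dimension lookup for an HTMLImageElement-like object.
// Helper name and null-return convention are illustrative, mine.
function safeDimensions(img) {
  // An image that is not (yet) available has no usable dimensions;
  // returning null forces callers to handle the "not available"
  // case instead of silently computing with a width of 0.
  if (!img.complete || img.naturalWidth === 0) {
    return null;
  }
  return { width: img.naturalWidth, height: img.naturalHeight };
}

// Plain objects stand in for real <img> elements here.
const pending = { complete: false, naturalWidth: 0, naturalHeight: 0 };
const loaded  = { complete: true, naturalWidth: 640, naturalHeight: 480 };

safeDimensions(pending); // null: caller must wait or fall back
safeDimensions(loaded);  // { width: 640, height: 480 }
```

Code written this way keeps working whether the browser guarantees all images are available at load time or not.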

I think the distinction between what is "normative" -- what is
a "must" vs. what is a "should" -- matters a lot in a technical
specification. The normative algorithms basically don't
distinguish between constraints that are mandatory and those
that are advisory.

I'm picking on this little example, which doesn't matter
much in the grand scheme of things, to keep the discussion
focused on the technology. But I think an examination
of other sections (chosen at random) will bring more
clarity to the discussion, and make it clear that we're
actually talking about specification quality, 
applicability and scope, and not just name-calling.


Received on Wednesday, 3 June 2009 05:46:28 UTC
