RE: image width & height, summary

From: Ian Hickson <ian@hixie.ch>
Date: Wed, 3 Jun 2009 06:20:21 +0000 (UTC)
To: Larry Masinter <masinter@adobe.com>
Cc: Boris Zbarsky <bzbarsky@MIT.EDU>, HTML WG <public-html@w3.org>
Message-ID: <Pine.LNX.4.62.0906030606480.16244@hixie.dreamhostps.com>
On Tue, 2 Jun 2009, Larry Masinter wrote:
> > 
> > The browser needs to load all images the page script has access to 
> > before firing the 'load' event.  Anything else breaks pages.
>
> Where does it say this?

# 4.8.2 The img element
# [...]
# Unless [various conditions], when an img is created with a src 
# attribute [...], the user agent must resolve the value of that 
# attribute, relative to the element, and if that is successful must then 
# fetch that resource.
# [...]
# Fetching the image must delay the load event [...]

...where "delay the load event" links to:

# 9.2.6 The end:
# [...]
# Once everything that delays the load event of the document has 
# completed, the user agent must run the following steps:
# [...]
# 2. [...] fire a simple event called load [...]
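The interaction between those two requirements can be sketched with a minimal page (illustrative only; the image URL and the logged messages are hypothetical, not from the spec):

```html
<!DOCTYPE html>
<title>delay-the-load-event demo</title>
<script>
  // Per 4.8.2, creating an img with a src attribute starts a fetch,
  // and per 9.2.6 that fetch delays the document's "load" event.
  var img = new Image();
  img.src = "photo.jpg"; // hypothetical URL

  document.addEventListener("DOMContentLoaded", function () {
    // Fires when parsing finishes; the image may still be in flight.
    console.log("DOMContentLoaded; img.complete =", img.complete);
  });

  window.addEventListener("load", function () {
    // By the time "load" fires, the image fetch has completed (or
    // failed), so page script can rely on the fetched dimensions.
    console.log("load; width =", img.width, "height =", img.height);
  });
</script>
```

A page script that reads `img.width`/`img.height` from a `load` handler therefore sees the final values, which is exactly the behavior that would break if a browser fired `load` before fetching all such images.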

> I believe that from many points of view, it is better to have an 
> incomplete set of normative constraints that authors *can* depend on, 
> along with an informative "sample" or "example" implementation; if there 
> is an implementation guide, it is better for that part to be 
> explanatory.  In the case of image width and height, that would involve 
> expressing the constraints on the values of width and height; if it's 
> necessary to describe the states an image can be in and constraints on 
> state transitions, that's fine, but it's not clear that it's even 
> necessary.
>
> This would allow careful authors and authoring tools to create robust 
> code that does not rely unnecessarily on assumptions that are difficult 
> to ensure, while the algorithmic description provides guidance on how to 
> implement something that doesn't "break" current pages, or those written 
> with incorrect assumptions, while allowing more vendor flexibility.
>
> I think what is "normative" -- what is "must" vs what is "should" 
> matters a lot in a technical specification. The normative algorithms 
> basically don't distinguish between those constraints that are mandatory 
> and those that are advisory.

I think that the world you describe would be wonderful, but I think in 
practice we have found that most authors don't read the specs, and so we 
have to build a platform that is far more resilient than this. Yes, this 
means that the platform is overconstrained for expert or careful authors, 
but they are in the minority, sadly. It's worth noting that it is vendors 
who have been most vocal in asking for this flexibility to be removed from 
the specs -- the flexibility which you describe was present, for instance, 
in HTML4, but ended up costing vendors a lot in trying to reverse-engineer 
each other's implementations to work out what the actual algorithms should 
be. This 
is one of the things the HTML5 effort has tried to short-circuit. This 
should make it significantly cheaper for new vendors to write browsers, 
which should further help increase the level of competition in this space.

Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
Received on Wednesday, 3 June 2009 06:20:57 UTC