
Re: [XHTML 2] 24.2 Referenced objects not yet defined (PR#7752)

From: Lachlan Hunt <lachlan.hunt@lachy.id.au>
Date: Fri, 27 Jan 2006 00:10:54 +1100
Message-ID: <43D8CA5E.6060600@lachy.id.au>
To: Steven Pemberton <steven.pemberton@cwi.nl>
CC: www-html-editor@w3.org

Steven Pemberton wrote:
> Lachlan Hunt wrote:
>> How can you possibly achieve any interoperable implementations without 
>> relying on browsers reverse engineering each other when you simply 
>> fail to define what UAs should do with the documents they receive?
> You don't need interoperability for incorrect documents, just for 
> correct ones.

No, you need interoperability for all documents, regardless of whether 
they're correct or not.  If you need proof, examine the current state of 
text/html on the web.


> If you send an incorrect document to a browser, there is no agreement on 
> how it should behave. If the browser does something you didn't intend, 
> it is your fault, not the browser's.

If an author writes an incorrect document but only tests it in the 
market-leading browser of the time, which happens to give the intended 
result, then the author won't know there is a problem.  If other 
browsers, which the author failed to test in, perform different error 
recovery and the page doesn't work the same in them, then all of a 
sudden you're back where we were during the browser wars.

Authors don't write to specifications; they write to implementations.  
If the implementations don't agree, they'll pick their favourite, stick 
a "Best Viewed with X" message on it and use some browser sniffing to 
redirect everyone else to a "Please upgrade your browser [Firefox 1.5] 
to Internet Explorer 5 or later".  Then that browser's behaviour will 
become the de facto standard and, regardless of whether it agrees with 
the specification or not, it's what browsers will be forced to implement 
in order to remain competitive in the marketplace.
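The kind of browser sniffing described above can be sketched server-side 
in a few lines.  This is a hypothetical illustration of my own (the 
function name, file names and user-agent strings are made up, not taken 
from any spec or real site):

```python
# Hypothetical server-side browser sniffing: serve the page only to the
# browser the author tested in, redirect everyone else to an upgrade page.
def pick_page(user_agent: str) -> str:
    # "MSIE 5"/"MSIE 6" stand in for whatever the market leader is.
    if "MSIE 5" in user_agent or "MSIE 6" in user_agent:
        return "site.html"           # the page that "works"
    return "please-upgrade.html"     # everyone else is turned away

# A conforming Firefox user is redirected regardless of capability.
print(pick_page("Mozilla/5.0 (Windows; rv:1.8) Gecko/20051111 Firefox/1.5"))
```

The point is that nothing in this logic depends on what the page 
actually needs; it codifies one browser's error recovery as the only 
acceptable behaviour.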

> Classically browsers have tried to work out what the user intended, in 
> my opinion a grave error that has made a mess of the web, and done most 
> web authors a disservice.

Yes, and I fully agree that the decisions made by browser vendors in the 
past have been huge mistakes, but they're mistakes we have to accept and 
continue to work with for the foreseeable future.  Although vendors have 
learned many lessons and are unlikely to make such serious mistakes 
again, they will still make mistakes, and it is up to the specification 
to reduce the chances of a vendor making a bad decision by defining 
precisely what they should do.

> Having written browsers myself, I know that it 
> is extremely easy to display a warning to the user that there is 
> something wrong with a document, such as a grumpy face in the status bar.

You seem to have this notion that browsers should display an error to 
the user when they encounter an erroneous document, and you somehow 
expect that this is what browsers will actually do.  Yet you fail to 
realise that this is precisely what browser vendors will not do unless 
it is defined in the specification somewhere.

> What should a browser do if you send it a Fortran program with media type
> text/html? Sniff the content and execute it?  The answer should be: who
> cares what it does? It's an error, and we don't need interoperability in
> these cases.

It should attempt to parse the file as HTML, and when it inevitably 
encounters errors, the parser should perform error recovery as defined 
in the SGML and HTML specs.  But since that error recovery is not well 
defined at all, browsers take a best-guess approach.  If, for example, a 
Fortran program were served as XML, error handling is well defined and 
it would very likely result in a well-formedness error.
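To illustrate the contrast (this is my own example, not anything from 
either spec): feeding non-markup text to a conforming XML parser 
produces a single, defined outcome, a fatal well-formedness error, 
whereas an HTML parser is left to guess.  A sketch using Python's 
standard-library XML parser:

```python
# Non-markup content (a Fortran fragment) handed to a conforming XML
# parser: the spec requires a fatal well-formedness error, so every
# conforming parser behaves the same way.
import xml.etree.ElementTree as ET

fortran_source = "      PROGRAM HELLO\n      PRINT *, 'HELLO'\n      END\n"

try:
    ET.fromstring(fortran_source)
except ET.ParseError as err:
    print("well-formedness error:", err)
```

Every conforming XML processor stops at the same point for the same 
reason; no reverse engineering of a competitor's error recovery is 
needed to interoperate.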

Lachlan Hunt
Received on Thursday, 26 January 2006 13:11:25 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 7 January 2015 15:08:54 UTC