W3C home > Mailing lists > Public > www-forms@w3.org > September 2006

Re: some technical thoughts about incremental improvements to forms

From: Lachlan Hunt <lachlan.hunt@lachy.id.au>
Date: Wed, 06 Sep 2006 23:45:10 +1000
Message-ID: <44FED0E6.9020208@lachy.id.au>
To: Dave Raggett <dsr@w3.org>
CC: www-forms@w3.org

Dave Raggett wrote:
> On Wed, 6 Sep 2006, Lachlan Hunt wrote:
>> See the Parsing section of the Web Apps 1.0 spec [1].  At least 3 
>> major browser vendors (Mozilla, Opera and Safari) are committed to 
>> implementing that algorithm which is being reverse engineered 
>> primarily from the 4 major browsers.
> I can appreciate why browser vendors might want to align their error 
> handling, but it may have the effect of encouraging more content 
> developers to produce malformed markup.

What evidence do you have to support such a claim?  The fact is that 
authors already write a significant amount of broken code and will 
continue to do so regardless of browser handling.  The only difference 
will be whether the garbage they write will be compatible with one or 
many browsers.
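As a small illustration (my own toy sketch, not anything from the spec): 
even a deliberately lenient tokenizer, like the one in Python's standard 
library, consumes misnested markup without complaint.  Tokenizing the 
garbage is the easy part; the historically browser-specific part is the 
tree construction that follows, which is exactly what the Web Apps 1.0 
parsing algorithm pins down.

```python
from html.parser import HTMLParser

# Illustrative only: log the tag events a lenient tokenizer emits for
# misnested, unclosed markup.  It reports the tokens as-is; deciding
# what DOM tree they produce is what the parsing spec standardises.
class TagLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))

    def handle_endtag(self, tag):
        self.events.append(("end", tag))

parser = TagLogger()
# Misnested </b> inside <i>, and <p> and <i> never close at all.
parser.feed("<p>some <b>bold <i>text</b> that never closes")
print(parser.events)
# [('start', 'p'), ('start', 'b'), ('start', 'i'), ('end', 'b')]
```

No error is raised; four tag events come out in document order, stray 
end tag and all.  Four browsers can agree on that token stream and 
still build four different trees from it, which is the problem the 
defined algorithm removes.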

I think you need to look at it from a different perspective. 
Standardised error handling is not a goal in itself; rather, it's a 
means to an end.  The ultimate goal of redefining HTML parsing is to 
increase interoperability, so that fewer and fewer pages depend upon 
the bugs in the dominant browser of the time and, as a consequence, 
browsers can stop the continual cycle of reverse engineering each other.

There's plenty of evidence to show that many authors will do one of two 
things:

1. Pick one browser (usually the dominant one) and build for it.  Just 
look at the number of pages that have been built to work only in IE!
2. Use as many hacks as necessary to achieve interoperability between 
browsers.
In both of these cases, improved interoperability between browsers is a 
major benefit.

In the past, when Netscape was the dominant browser and IE was a 
newcomer to the playing field, Microsoft invested a lot into reverse 
engineering Netscape so that they could handle pages built for it.  In 
the process, IE also introduced its own fair share of bugs and 
extensions, and eventually took over the market.  After this, more and 
more authors began writing for IE only and, as a direct result, other 
browsers have had to reverse engineer IE.

It's a cycle that, I'm sure you will agree, must stop.  It's not only 
one of the reasons why HTML was considered a dying language and the 
move to XML began; it's also a cycle that will continue to repeat for 
as long as pages are built in a market of non-interoperable browsers.  
New browsers will enter the market, the market leadership will 
eventually change, and authors will either write broken pages for only 
the new dominant browser or spend a long time working around all the 
different bugs between it and the competition.

Standardising the parsing will hopefully put an end to that cycle, at 
least insofar as handling HTML is concerned.  Pages will, theoretically, 
no longer be written to work in only one browser because all browsers 
should handle the page the same way.

As for specifically defining error handling, there is other evidence to 
show that the positives far outweigh the negatives (if any).  Just look 
at the well-defined error handling in CSS and XML.  Of course, errors 
are still made and bugs are still present.  But the point is that the 
situation with CSS and XML is much better than that with HTML.
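For a concrete sense of what well-defined error handling buys, here is a 
minimal sketch (my own toy code, not a real CSS parser; the property 
list is a made-up stand-in) of the CSS rule that an invalid declaration 
is simply dropped and parsing resumes at the next declaration:

```python
# Toy model of CSS declaration-level error recovery: every conforming
# parser must skip a malformed or unknown declaration and continue at
# the next semicolon, so all implementations keep the same valid parts.
KNOWN_PROPERTIES = {"color", "margin", "font-size"}  # stand-in list

def parse_declarations(block: str) -> dict:
    """Parse 'prop: value; ...' pairs, silently dropping invalid ones."""
    result = {}
    for decl in block.split(";"):
        if ":" not in decl:
            continue  # malformed declaration: ignore it, keep going
        prop, _, value = decl.partition(":")
        prop, value = prop.strip().lower(), value.strip()
        if prop in KNOWN_PROPERTIES and value:
            result[prop] = value  # valid declaration: keep it
        # unknown property or empty value: ignored, as a CSS UA would

    return result

styles = parse_declarations("color: red; bogus!!; margin 0; font-size: 12px")
print(styles)
# {'color': 'red', 'font-size': '12px'}
```

Because the recovery rule is part of the specification rather than each 
vendor's private behaviour, every parser discards `bogus!!` and 
`margin 0` the same way, and authors get identical results everywhere 
despite their errors.  That is the property the HTML parsing algorithm 
aims to give HTML.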

Lachlan Hunt
Received on Wednesday, 6 September 2006 13:45:38 UTC
