
Re: Support Existing Content

From: Gareth Hay <gazhay@gmail.com>
Date: Fri, 4 May 2007 08:37:56 +0100
Message-Id: <7B9B4699-61B5-4C93-88B5-8B6F27F53E52@gmail.com>
Cc: Maciej Stachowiak <mjs@apple.com>, matt@builtfromsource.com, public-html@w3.org
To: Jonas Sicking <jonas@sicking.cc>


On 3 May 2007, at 22:24, Jonas Sicking wrote:

>
> These are the arguments against "draconian error handling" that I  
> can see:
>
> 1.
> If we're making something that is that backwards incompatible, why  
> not instead go all the way and do something like XHTML2 that is a  
> completely new language. That way we could get rid of tags that  
> we're only keeping around for backwards compatibility anyway.
> And at that point we might as well also use XML rather than create  
> a new language that needs a parser written for it. Most UAs need an  
> XML parser anyway.
>
I've heard UA vendors complaining that they already have two parsers,  
so maybe it makes sense to make HTML more XML-like. I'm not  
advocating that, just adding the point that most browsers already  
have both parsers, whichever way this goes.
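To make the two-parser point concrete, here is a minimal sketch (my own example, Python standard library only) of how the two parsers most UAs already ship treat the same broken markup: a draconian XML parser must stop at the first well-formedness error, while a lenient HTML tokenizer recovers and keeps going:

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

bad_markup = "<p>an unclosed paragraph <b>and misnested <i>tags</b></i>"

# Draconian: the XML parser refuses the document outright.
try:
    ET.fromstring(bad_markup)
    xml_accepted = True
except ET.ParseError:
    xml_accepted = False

# Lenient: the HTML parser tokenizes everything without complaint.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(bad_markup)

print(xml_accepted)    # False: draconian handling rejects the input
print(collector.tags)  # ['p', 'b', 'i']: lenient handling carries on
```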
> 2.
> It's hard for authors to get things perfect. Writing bug-free code  
> has nothing to do with being lazy or uninformed. When did you ever  
> run into a bug-free software program? If you want to generate  
> something with parsing rules as strict as that, you probably want  
> to write code that provably creates good output. The only way I  
> can think of to do that would be to let servers generate DOM-like  
> data structures that are then serialized before being sent over  
> the wire.
> While this sounds like a good design to me, it would be a big  
> change from how servers work today and would significantly raise  
> the bar for authors adopting HTML5.
>
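The server-side design Jonas describes (servers building a DOM-like structure in memory and letting a serializer emit it, so the output is well-formed by construction) could be sketched like this; this is my own illustration using Python's standard library, and the page content is invented:

```python
import xml.etree.ElementTree as ET

# Build the document as a tree rather than by string concatenation.
html = ET.Element("html")
body = ET.SubElement(html, "body")
heading = ET.SubElement(body, "h1")
heading.text = "Hello"
para = ET.SubElement(body, "p")
para.text = "Every tag below is closed for us."

# The serializer, not the author, is responsible for matching tags,
# so an unclosed or misnested element simply cannot be emitted.
markup = ET.tostring(html, encoding="unicode")
print(markup)
```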
This issue isn't about bug-free code. I think you would concede that  
even buggy code has at least compiled? Just because logic written  
according to the syntax rules turns out to be flawed doesn't mean  
it's the compiler's fault.
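The compiler analogy can be made concrete. Here is a minimal sketch (my own example, in Python) of the distinction: a parser only enforces syntax, and it cannot, and is not expected to, catch flawed logic in code that parses cleanly:

```python
import ast

syntax_error = "def add(a, b) return a + b"        # missing colon: rejected
logic_error = "def add(a, b):\n    return a - b"   # parses fine, but wrong

def parses(source):
    """Return True if the source passes the syntax check."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

print(parses(syntax_error))  # False: the parser's job
print(parses(logic_error))   # True: the bug is the author's job
```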

> 3.
> The "cleanup" of the web it would accomplish is actually fairly  
> small. Most quirks and inconsistencies are in how things behave  
> after they have been parsed. The biggest is in how things are  
> rendered, but the DOM behaves inconsistently too.
> And while there is some value for UA developers, since they'd have  
> an easier time writing the parser, I see little to no value for web  
> authors over having relaxed, but consistent, error handling in the  
> various browsers.
>
I completely disagree. Though it won't happen overnight, this  
approach would educate authors to write better code, and over time  
the tag soup would begin to get cleaner.
>
> The result is that the price you pay for such strict error handling  
> (1 and 2) is very high, while the value you get (3) is pretty small.
>
In your opinion.

I was thinking about this issue overnight, and I think I need some  
clarification.
Is it not correct that each browser currently handles errors in its  
own manner?
People here are aiming to document this inconsistent error handling  
as a basis for the spec.
Common ground will be found, and that will become the specified  
behaviour for the future.

So if this is correct, then I don't understand: some UAs will have  
to change their error handling, breaking the web as much as  
"draconian" error handling would.
OK, so they would be changing to consistent handling, but wouldn't  
any change at all lead to as much disruption as what is being  
suggested?

I'm sure this can't be correct, so can someone please correct me?

Thanks

Gareth
Received on Friday, 4 May 2007 07:38:18 UTC
