
RE: Why Microsoft's authoritative=true won't work and is a bad idea

From: Henrik Nordstrom <henrik@henriknordstrom.net>
Date: Tue, 08 Jul 2008 01:49:16 +0200
To: Justin James <j_james@mindspring.com>
Cc: "'HTTP Working Group'" <ietf-http-wg@w3.org>, public-html@w3.org
Message-Id: <1215474556.20418.229.camel@henriknordstrom.net>
On Mon, 2008-07-07 at 18:56 -0400, Justin James wrote:

> The problem with the concept of HTML specifying its own URLs, from my
> viewpoint, is that developers need one standard to follow, not 3 (URI,
> URL, IRI).

But I am still not aware of the problem which triggered this. I linger
on the HTTP WG, not the HTML one, and am therefore unaware of what
problems the HTTP URL/URI/IRI specifications cause for HTML.
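
(As an aside, for readers outside the HTTP WG: the practical difference
between the three specs is roughly that an IRI, per RFC 3987, may contain
raw Unicode, while a URI, per RFC 3986, is ASCII-only, so an address an
HTML author types as an IRI must be percent-encoded before it goes on the
wire. A minimal sketch of that mapping, using a hypothetical path purely
for illustration:)

```python
# Sketch: converting an IRI path (Unicode allowed) to its URI form
# (ASCII-only) by percent-encoding the UTF-8 bytes of non-ASCII chars.
# The path "/wiki/Smörgåsbord" is a made-up example, not from the thread.
from urllib.parse import quote

iri_path = "/wiki/Smörgåsbord"          # what the author writes in HTML
uri_path = quote(iri_path, safe="/")    # what HTTP actually transmits
print(uri_path)  # /wiki/Sm%C3%B6rg%C3%A5sbord
```
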
> Any spec which is not properly followed by the majority of developers
> a majority of the time (where pertinent, of course) is not a
> "standard" and is a broken spec.

There is a large grey zone there. But yes, if every implementer considers
what the spec says in some area to be nonsense and implements something
other than what the spec says, then the spec is most likely broken. But in
quite a few cases it's just a poor choice of language making the
intentions of the specification less than obvious.

The spec is not broken, though, if every implementer implements something
else because, while what the spec says is correct, the will to
interoperate with existing/older broken implementations is greater than
the will to keep a sane implementation. And especially not when there are
multiple such areas for historical reasons (of which HTTP has its
noticeable share, with 3.5 generations in less than a handful of years).

> Sometimes, it is broken outside of the spec itself, such as being
> sponsored or ratified by an unrecognized body.

Or implemented before the effects have been properly analyzed..

> Other times it is broken within the spec, like 800 page specs
> describing a floor sweeping process or something.

Yes.. and unfortunately many specifications are heading in that
direction, growing uncontrollably large with huge amounts of legacy.

But quite often it's better to clearly define the original intents using
the original mechanisms and encourage compliance, than to reinvent the
same things again only because most implementers got it wrong the first
time.

> Sometimes it is just a marketing problem (like so many of the X*
> specs, like XHTML, XForms, XPath, and a zillion other X* specs which
> few people use).


> From what I can tell, the W3C has very, very hard time producing specs
> which don't qualify as "broken" by that measure, and HTML is heading
> that list.

Can't comment. HTML is not my main field; I stay mostly in the area of
protocols and bits. But I do still feel a significant gap between HTML
(and related) specifications and user agent implementations, and quite
different gaps depending on the implementation... But I still have faith
that things will improve over time if one has a little patience, and
converge towards the specifications instead of diverging even further.

A really big problem is how to get rid of legacy from earlier
specifications whose design choices perhaps weren't the best. Once a
feature gets into a standard and is implemented in more than one
implementation, it's likely to stay for a considerable time even if it
turns out to be a very bad idea.

Things which are only implemented but not officially standardised, or
only in the standards but never implemented, are a whole lot easier to
change, as you can always claim that one of the two is wrong/broken.

The same goes for when implementations misread specifications, resulting
in unintentional deviations from the specification, most often from not
understanding the specification or how it applies to what they do. Such
mistakes are often relatively easy to get corrected once the right people
are made aware of the issue and why it's important to follow the specs.


Received on Monday, 7 July 2008 23:50:00 UTC
