
Re: XHTML 1.0, section C14

From: Benjamin Hawkes-Lewis <bhawkeslewis@googlemail.com>
Date: Sat, 25 Nov 2006 17:13:30 +0000
To: www-html <www-html@w3.org>
Cc: Shane McCarron <shane@aptest.com>
Message-Id: <1164474810.8068.158.camel@galahad>

On Sat, 2006-11-25 at 09:12 -0600, Shane McCarron wrote:
> Let's all roll over and keep using 1997 technology and hacking around
> using weird-ass abstraction libraries to implement "Web 2.0" (gag-me)
> on top of incompatible underlying implementations rather than
> attempting to help the Internet evolve toward something light-weight,
> fast, and extensible like XML/XHTML.  Tag soup is sooo much better.

Forgive me, but from my perspective, it is precisely the folks serving
XHTML as text/html who are relying on "incompatible underlying
implementations" and the caprice of tag soup parsers. In particular, I
don't see how content management software churning out "XHTML" that
barely exploits semantic markup, often doesn't validate, and is served
as text/html, where the only requirement on rendering user agents is
that they try to copy each other's bugs (RFC 2854), helps the Internet
evolve anywhere. All it's done is reduce XHTML to a buzzword for human
resources. At least conforming HTML 4.01 Strict has specified parsing
and rendering behaviour and can be converted directly to XHTML 1.1 with
tools like HTML Tidy.
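
(To make the parsing point concrete, here is a sketch of my own, using
only Python's standard library and nothing normative: an XML processor
must treat a well-formedness error as fatal, while a tag-soup tokeniser
cheerfully carries on and leaves error recovery to whoever consumes the
output.)

```python
# My own illustration of strict XML parsing vs. tag-soup tolerance,
# using only the Python standard library.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

soup = "<p>Unclosed paragraph<br><b>bold<i>both</b>italic</i>"

# An XML parser rejects this outright: every well-formedness error is fatal.
try:
    ET.fromstring(soup)
    xml_ok = True
except ET.ParseError:
    xml_ok = False

# A tag-soup tokeniser accepts the same input without complaint and
# simply reports the start tags it encounters.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(soup)

print(xml_ok)          # False: the XML parser refused the soup
print(collector.tags)  # ['p', 'br', 'b', 'i']
```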

Far from standing in the way of progress, I am saying that, however good
an idea it seemed at the time, for the most part Appendix C isn't
working. (The fact that it's so confusing doesn't help either.) I am
saying that W3C has latent power to push /real/ XML-based markup that
it's not exercising. It is time to take the sword from the stone and use
it. If anything, it's W3C that's caving in to pessimism, albeit in a
somewhat shambolic fashion. TBL says:

> the attempt to get the world to switch to XML, including quotes around
> attribute values and slashes in empty tags and namespaces all at once
> didn't work. The large HTML-generating public did not move, largely
> because the browsers didn't complain.


Beyond obstinately refusing to improve HTML and weakly recommending
XHTML, I see little evidence of there having been an energetic attempt
at conversion.

(In saying this, I don't mean to dismiss virtuous efforts to improve
HTML.)

>  In my world I always personally ignore */* in the accept header.

But no Accept header at all is equivalent to "*/*". So if you ignore
"*/*", is there some sort of default list of types you assume all user
agents can render? If so, shouldn't that list itself be a standard, and
shouldn't user agents be required to render those types properly for
UAAG conformance?
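
(To show what I mean by that equivalence, here is a rough sketch of
server-side media-type selection. It is my own illustration, not any
normative algorithm, and the offered-type list and preference order are
my assumptions; the one grounded rule is that a missing Accept header
means the client accepts anything, i.e. "*/*".)

```python
# A rough sketch of server-side media-type selection. The offered types
# and their preference order are my own assumptions, not a normative list.
def choose_media_type(accept_header):
    """Pick a response type given an HTTP Accept header value.

    A missing Accept header means the client accepts anything,
    i.e. it is treated as equivalent to "*/*".
    """
    offered = ["application/xhtml+xml", "text/html"]  # server's preference order
    if accept_header is None:
        accept_header = "*/*"
    # Split "type/subtype;q=..." items, ignoring quality values for brevity.
    accepted = [item.split(";")[0].strip() for item in accept_header.split(",")]
    for media_type in offered:
        type_wildcard = media_type.split("/")[0] + "/*"
        if media_type in accepted or type_wildcard in accepted or "*/*" in accepted:
            return media_type
    return None  # nothing acceptable: the server should answer 406

print(choose_media_type("application/xhtml+xml,text/html;q=0.9"))  # application/xhtml+xml
print(choose_media_type("text/html"))  # text/html
print(choose_media_type(None))         # application/xhtml+xml (no header = "*/*")
```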

> Groups like the OMA have declared that you cannot use it that way for
> this very reason.

So what does the Accept header mean for OMA? Is it merely a claim about
what media types mobile browsers can render? And is this declaration
incorporated into any W3C checklists of compliance for desktop user
agents or servers? If not, why not?

> You all do whatever you want.

See what I mean? I know we're backward, but please stop giving up on
us. :)

Benjamin Hawkes-Lewis
Received on Saturday, 25 November 2006 17:21:38 UTC
