
Re: XHTML Considered Harmful

From: William F. Hammond <hammond@csc.albany.edu>
Date: Tue, 26 Jun 2001 14:22:24 -0400 (EDT)
Message-Id: <200106261822.f5QIMOF03034@pluto.math.albany.edu>
To: www-talk@w3.org
Arjun Ray <aray@q2.net> writes, 24 Jun 2001 22:33:50 -0400 (EDT):

> On Sat, 23 Jun 2001, Ian Hickson wrote:
> 
> > What's wrong with XHTML sent as text/xml?

Isn't it acceptable provided that user agents extend no tolerance in
regard to XML conformance and that any tags outside the default
namespace comply with the namespace rules?  (Please see the comments
about validation below before taking issue with this.)

Is Amaya's behavior wrong?  (Amaya does know how to yell.)

On the other hand, a rigorous handler of text/xml will need to do
a great deal more triage than is required for, say, HTML TagSoup,
HTML 2.0, HTML 3.2, HTML 4.0, HTML 4.01, XHTML 1.0, XHTML 1.1, or
XHTML 1.1 plus MathML 2.0.  So I agree that text/html is really a
better place than text/xml for any namespace extension of XHTML,
since it is more specific.
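To illustrate the triage point, here is a minimal sketch (my own, not
anything from the thread) of what a rigorous text/xml handler faces:
it must parse the payload and inspect the root element's namespace
before it even knows what vocabulary it has been handed, whereas
text/html promises a known family of document types up front.

```python
# Sketch: triage for a text/xml payload.  A text/xml handler cannot
# assume XHTML; it has to look at the root element to find out what
# document type it actually received.
import xml.etree.ElementTree as ET

XHTML_NS = "http://www.w3.org/1999/xhtml"

def triage_text_xml(payload: bytes) -> str:
    """Return a rough classification of a text/xml payload."""
    root = ET.fromstring(payload)
    # ElementTree spells a namespaced tag as "{namespace-uri}localname".
    if root.tag == "{%s}html" % XHTML_NS:
        return "xhtml"
    return "generic-xml"
```

A text/html handler, by contrast, gets to skip this step entirely: the
media type itself narrows the field to the HTML family.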

> deliverable as text/html, it is less than edifying to learn that the
> compatibility in practice involves a reality that the W3C has spent
> years denying, because it takes *ignorance* of SGML for all this
> XML-ized stuff to "work" in "HTML user agents".  Inter alia, the
> hapless innocent who doesn't read between the lines is left to find
> out the hard way that validation of XHTML and of HTML4 documents are
> distinct and incompatible considerations.  That's the fate of the few
> who commit to taking W3C specs *seriously* - double the work for no
> gain in benefits.  

Overstated.

While it is true that a given instance will not validate as both
classical HTML and as XHTML, this is no more serious than saying that
a given instance of HTML 4.0 may not validate as HTML 3.2.  In the W3C
family of classical HTML specs there have been at least 3 different
underlying SGML declarations.  Any correct validating system for
classical HTML needs to comprehend that fact and needs to digest the
document type declaration before picking the correct SGML declaration
and, hence, before parsing.
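As a sketch of that two-step procedure (the public identifiers are
real FPIs, but the declaration file names and the lookup table are
hypothetical, purely for illustration):

```python
# Sketch: a validating system must digest the DOCTYPE's public
# identifier before it can pick the SGML declaration to parse under.
# The FPIs below are real; the .decl file names are made up.
import re

SGML_DECLS = {
    "-//IETF//DTD HTML 2.0//EN":                 "html-2.0.decl",
    "-//W3C//DTD HTML 3.2 Final//EN":            "html-3.2.decl",
    "-//W3C//DTD HTML 4.01//EN":                 "html-4.01.decl",
    # All XML document types share one common SGML declaration:
    "-//W3C//DTD XHTML 1.0 Strict//EN":          "xml.decl",
    "-//W3C//DTD XHTML 1.1 plus MathML 2.0//EN": "xml.decl",
}

def pick_sgml_decl(document: str) -> str:
    """Read the document type declaration, then choose the SGML declaration."""
    m = re.search(r'<!DOCTYPE\s+\S+\s+PUBLIC\s+"([^"]+)"', document)
    if m is None:
        raise ValueError("no document type declaration")
    return SGML_DECLS[m.group(1)]
```

Note that every XML document type maps to the same entry, which is
what makes XHTML cheap to bolt onto such a system.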

When XHTML is served (as, for example, with the root instance at W3C),
the same procedure works, since treating the XML category as a
subcategory of SGML amounts to using one appropriate SGML declaration
that is common to all XML document types.

It is, therefore, a nearly trivial matter to add XHTML to a correct
pre-existing validating system for classical HTML.

Furthermore, in regard to namespace extensions of XHTML, the crucial
case in point at this time is MathML.

The modular version of XHTML works well for this.  See specifically
the Carlisle/Altheim flattened DTD for the FPI

            "-//W3C//DTD XHTML 1.1 plus MathML 2.0//EN" 

found at

        http://www.w3.org/TR/MathML2/dtd/xhtml-math11-f.dtd  .
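For concreteness, a minimal instance using that FPI might look like
the following.  (This is my own illustrative sketch, with an arbitrary
MathML fragment; it is not guaranteed to validate without adjustment.)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1 plus MathML 2.0//EN"
    "http://www.w3.org/TR/MathML2/dtd/xhtml-math11-f.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>a squared</title></head>
  <body>
    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <msup><mi>a</mi><mn>2</mn></msup>
    </math>
  </body>
</html>
```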

I've got instances that validate (with minor noise due to small
dtd glitches).

It's really quite workable, any examples of Gresham's law (bad markup
drives out good) notwithstanding.

One of the main points of not requiring validation for XML documents
is to relieve client-side agents of that responsibility.

But I agree that it is sheer madness for a content provider to serve
something of this nature without prior (one-time) server side
validation.

                                    -- Bill

P.S.  Ian, some of the Moz examples need minor cleanup to validate
against Carlisle/Altheim.  Validation errors may obstruct Amaya.
Received on Tuesday, 26 June 2001 14:23:06 GMT
