Relation between markup and transport

Arjun Ray writes:

> On Sun, 1 Jul 2001, Ian Hickson wrote:
> > . . .
> I think we're agreed that this - the desirability/advisability of
> intending XHTML documents for Tag Soup processors - is a bass-ackwards
> approach to progress.

Nobody ever advocated that in earlier discussion.  What was said is
that XHTML+MathML does not cause problem-level breakage in extant
mass-market web browsers.
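
A rough illustration of that claim (my own sketch, not any browser's
actual code): tag-soup parsers skip tags they do not recognize but
still pass their character data through, so a MathML island degrades
to readable text instead of breaking the page.  In present-day
Python:

    from html.parser import HTMLParser

    # Crude stand-in for a tag-soup browser: unknown elements are
    # dropped as tags, but their text content still comes through.
    class TagSoup(HTMLParser):
        def handle_data(self, data):
            text = data.strip()
            if text:
                print(text, end=" ")

    TagSoup().feed(
        '<p>One half: <math><mfrac><mn>1</mn><mn>2</mn></mfrac></math></p>'
    )
    # Prints: One half: 1 2   (ugly, but not problem-level breakage)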

(And I don't find the summary at "damowmow" to be neutral.)

> > I'm still looking for a good reason to write websites in XHTML _at
> > the moment_, given that the majority of web browsers don't grok
> > XHTML.

1. More reliable CSS styling.

2. Namespace extensions.

3. Client-side XSL styling some day?
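
On point 2, a short sketch of what a namespace extension buys you
once the document really is XML (the fragment and the code are mine,
for illustration): any namespace-aware tool can pick the MathML
island out of the XHTML without special-casing either vocabulary.

    import xml.etree.ElementTree as ET

    doc = """<html xmlns="http://www.w3.org/1999/xhtml">
      <body><p>One half:
        <math xmlns="http://www.w3.org/1998/Math/MathML">
          <mfrac><mn>1</mn><mn>2</mn></mfrac>
        </math>
      </p></body>
    </html>"""

    root = ET.fromstring(doc)
    MATHML = "{http://www.w3.org/1998/Math/MathML}"
    # A namespace-qualified search finds the math island no matter
    # what XHTML surrounds it.
    for frac in root.iter(MATHML + "mfrac"):
        num, den = list(frac)
        print(num.text, "/", den.text)    # -> 1 / 2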

> >         The only reason I was given (by Dan Connolly [1]) is that
> > it makes managing the content using XML tools easier... but it
> > would be just as easy to convert the XML to tag soup or HTML
> > before publishing it, so I'm not sure I understand that.

Misleading, since that answer was specifically addressing the
question of why W3C is doing what it is doing.  Remember that W3C's
Amaya handles MathML under either transport content type.

> Agreed.  (And at that, why restrict oneself to XML tools?  SGML tools 
> work too.)

To understand the sensible model, consider two kinds of documents:

A. Oxford-TEI-Pizza-Chef-custom-brew-with-math under XML.  Serve as
   "text/xml".  The browser provides a tree portrayal if no external
   application has been specified (through a general triage facility
   covering all XML applications).

B. Server-side translation of above as XHTML+MathML.  Serve as
   "text/html".  New browsers can show school children at home and in
   public libraries a properly typeset "1/2".  There is no
   problem-level breakage in old browsers, and users begin to perceive
   a need to upgrade.  The web actually gets better.
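
For the curious, the server-side step in B is a single XSLT
application.  A sketch using present-day tools (the stylesheet and
file names are hypothetical, and lxml stands in for whatever XSLT
engine the server actually runs):

    from lxml import etree   # third-party XSLT engine; any other works

    # Hypothetical stylesheet mapping the custom TEI-with-math
    # document type down to XHTML+MathML.
    to_xhtml = etree.XSLT(etree.parse("tei-math-to-xhtml.xsl"))

    source = etree.parse("paper.xml")   # model A: serve as "text/xml"
    result = to_xhtml(source)           # model B: serve as "text/html"

    with open("paper.html", "wb") as out:
        out.write(etree.tostring(result))

The point is that one source feeds both transport types; nothing
about A forecloses B, or vice versa.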

> >    . . .                 And even _then_, if the person in control
> > of the content is using XML tools and so on, they are almost
> > certainly in control of the website as well, so why not do the

The hypothesis is seldom satisfied in large organizations where, for
security reasons, distributed desktop platforms are not permitted to
run HTTP servers.

> > UA authors to spend their already restricted resources on
> > implementing content type sniffing?

Sniffing is not required.  Reading the first line of an HTTP body
(an XML declaration, for instance) is not sniffing.  Attaching
meaning to comments is sniffing.
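
To make the distinction concrete, here is a sketch (mine, with a
made-up dispatch result) of the principled reading: consult only the
declared structure on the first line, and refuse to go hunting for
hints elsewhere in the body.

    # Sketch of the distinction, not any actual UA's code.  Dispatch
    # looks only at declared structure at the top of the body.
    def document_kind(body: bytes) -> str:
        first_line = body.lstrip().split(b"\n", 1)[0]
        if first_line.startswith(b"<?xml"):
            return "xml"     # XML declaration: declared, not guessed
        if first_line.upper().startswith(b"<!DOCTYPE HTML"):
            return "html"
        return "unknown"     # no rummaging through comments for clues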

> Is the perceived lack of Content Negotiation the real problem here,
> that we have to scrounge for workarounds?

No.  (I've never been impressed with HTTP content negotiation as an
idea.)

The issue is about two models for the relation between markup and
transport content type.

The other model makes both "text/html" (for tag soup) and "text/xml"
(for general XML) the exclusive domain of mass-market user agents.
It thereby forecloses external handling of those XML document types
that a mass-market user agent does not render (or process) well when
they are served under the transport type "text/xml".

                                    -- Bill

Received on Monday, 2 July 2001 13:02:47 UTC