W3C home > Mailing lists > Public > www-math@w3.org > July 2001

Relation between markup and transport

From: William F. Hammond <hammond@csc.albany.edu>
Date: Mon, 2 Jul 2001 13:01:56 -0400 (EDT)
Message-Id: <200107021701.f62H1uk22461@pluto.math.albany.edu>
To: www-talk@w3.org
Arjun Ray writes:

> On Sun, 1 Jul 2001, Ian Hickson wrote:
> > . . .
> I think we're agreed that this - the desirability/advisability of
> intending XHTML documents for Tag Soup processors - is a bass-ackwards
> approach to progress.

Nobody ever advocated that in earlier discussion.  What was said is
that XHTML+MathML does not cause problem-level breakage in extant
mass market web browsers.

(And I don't find the summary at "damowmow" to be neutral.)

> > I'm still looking for a good reason to write websites in XHTML _at
> > the moment_, given that the majority of web browsers don't grok
> > XHTML.

1. More reliable CSS styling.

2. Namespace extensions.

3. Client-side XSL styling some day?

> >         The only reason I was given (by Dan Connolly [1]) is that
> > it makes managing the content using XML tools easier... but it
> > would be just as easy to convert the XML to tag soup or HTML
> > before publishing it, so I'm not sure I understand that.

Misleading, since that was specifically an answer to the question of
why W3C is doing what it is doing.  Remember that W3C's Amaya handles
MathML under either transport content type.

> Agreed.  (And at that, why restrict oneself to XML tools?  SGML tools 
> work too.)

To understand the sensible model, consider two kinds of documents:

A. Oxford-TEI-Pizza-Chef-custom-brew-with-math under XML.  Serve as
   "text/xml".  Browser provides tree portrayal if no external
   application (with a general triage facility for all XML
   applications) is specified.

B. Server-side translation of above as XHTML+MathML.  Serve as
   "text/html".  New browsers can show school children at home and in
   public libraries a properly typeset "1/2".  There is no
   problem-level breakage in old browsers, and users begin to perceive
   a need to upgrade.  The web actually gets better.
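For concreteness, that "1/2" in case B would travel as a MathML island
inside the XHTML.  A minimal sketch (the fragment itself is my own
illustration) showing that ordinary XML tools see the namespaced
markup cleanly:

```python
# Sketch: an XHTML fragment carrying a MathML island for "1/2",
# visible as plain namespaced XML to any XML toolchain.
# The fragment is invented for illustration.
import xml.etree.ElementTree as ET

fragment = """<div xmlns="http://www.w3.org/1999/xhtml">
  <p>One half:
    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <mfrac><mn>1</mn><mn>2</mn></mfrac>
    </math>
  </p>
</div>"""

root = ET.fromstring(fragment)
# ElementTree exposes namespaces in Clark notation, {uri}localname,
# so the XHTML and MathML vocabularies stay cleanly distinguishable.
math = root.find(".//{http://www.w3.org/1998/Math/MathML}math")
print(math.tag)  # -> {http://www.w3.org/1998/Math/MathML}math
```

An old tag-soup browser simply skips the unknown elements and shows
the text content, which is the "no problem-level breakage" point.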

> >    . . .                 And even _then_, if the person in control
> > of the content is using XML tools and so on, they are almost
> > certainly in control of the website as well, so why not do the

The hypothesis is seldom satisfied in large organizations where, for
security reasons, distributed desktop platforms are not permitted to
run HTTP servers.

> > UA authors to spend their already restricted resources on
> > implementing content type sniffing?

Sniffing is not required.  Reading the first line of an http body is
not sniffing.  Attaching meaning to comments is sniffing.
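To illustrate the distinction, a sketch (the helper name is mine) of
reading only the first line of the body, as opposed to sniffing:

```python
# Sketch: decide whether an http body is XML by reading only its
# first line -- no scanning of comments or deeper content.
# The helper name is invented for illustration.
def first_line_is_xml(body: bytes) -> bool:
    first_line = body.split(b"\n", 1)[0].lstrip()
    # An XML declaration, if present, must open the entity.
    return first_line.startswith(b"<?xml")

print(first_line_is_xml(b'<?xml version="1.0"?>\n<doc/>'))  # True
print(first_line_is_xml(b"<html><!-- xml? --></html>"))     # False
```

Nothing here attaches meaning to comments or guesses from content
buried deeper in the body; it stops after the first line.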

> Is the perceived lack of Content Negotiation the real problem here,
> that we have to scrounge for workarounds?

No.  (I've never been impressed with http content negotiation as an
idea.)

The issue is about two models for the relation between markup and
transport content type.

The other model makes both "text/html" (for tag soup) and "text/xml"
(for general XML) the exclusive domain of mass-market user agents.  It
thereby forecloses external handling of XML document types that a
mass-market user agent does not render (or process) well when they are
served under the transport type "text/xml".

                                    -- Bill
Received on Monday, 2 July 2001 13:02:46 GMT
