
Re: Relation between markup and transport

From: Ian Hickson <ian@hixie.ch>
Date: Mon, 2 Jul 2001 16:12:17 -0700 (Pacific Daylight Time)
To: <www-talk@w3.org>
Message-ID: <Pine.WNT.4.31.0107021459051.1728-100000@HIXIE.netscape.com>

On Mon, 2 Jul 2001, William F. Hammond wrote:
>
> (And I don't find the summary at "damowmow" to be neutral.)

It's hard to be neutral about something which one has strong views
about. If you have any specific criticism, then I will be eager to try
to make it more neutral. Saying "I don't find it to be neutral"
doesn't really help, sorry.


> Arjun Ray writes:
>> On Sun, 1 Jul 2001, Ian Hickson wrote:
>>> I'm still looking for a good reason to write websites in XHTML _at
>>> the moment_, given that the majority of web browsers don't grok
>>> XHTML.
>
> 1. More reliable CSS styling.

Can you give me an example of a page written in XHTML that is
rendered more reliably than a page written in HTML 4?


> 2. Namespace extensions.

You cannot, while complying with the spirit of current W3C technologies,
send XHTML containing non-XHTML namespaced content as text/html. Any
XHTML document that mentions namespaces beyond the xmlns attribute on
the root <html> element is, of course, invalid XHTML to start with, but
even setting that aside, section 5.1 of the XHTML 1.0 specification
states that only documents that, by virtue of following Appendix C, are
compatible with existing UAs may be sent as text/html. Documents
containing namespaces are almost certainly not backwards compatible.
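
For instance, an Appendix C document cannot safely carry a fragment
like the following (a hypothetical fragment; the prefix is
illustrative), because a legacy text/html parser has no concept of
prefixed elements and simply discards the structure:

```xml
<p xmlns:m="http://www.w3.org/1998/Math/MathML">
  One half:
  <m:math>
    <m:mfrac><m:mn>1</m:mn><m:mn>2</m:mn></m:mfrac>
  </m:math>
</p>
```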


> 3. Client-side XSL styling some day?

By the time that becomes a reasonable option, won't most browsers on
the market that don't support XSL support XML+CSS instead? In practice,
of course, XSL is rarely going to be used to style XHTML documents,
since CSS makes that so much easier.


>>> The only reason I was given (by Dan Connolly [1]) is that it makes
>>> managing the content using XML tools easier... but it would be
>>> just as easy to convert the XML to tag soup or HTML before
>>> publishing it, so I'm not sure I understand that.
>
> Misleading since that was specifically answering the question why
> W3C did what it is doing.

Fair point.


> Remember that W3C's Amaya handles MathML under either transport
> content type.

And does so by using a heuristic that makes it unable to render valid
HTML documents:

   http://damowmow.com/mozilla/html-not-xml.html

...which are rendered correctly by some UAs (e.g., Mozilla or the W3C
validator).


>> Agreed. (And at that, why restrict oneself to XML tools? SGML tools
>> work too.)
>
> To understand the sensible model consider two kinds of documents:
>
> A. Oxford-TEI-Pizza-Chef-custom-brew-with-math under XML. Serve as
>    "text/xml". Browser provides tree portrayal if no external
>    application (with a general triage facility for all XML
>    applications) is specified.

...or if the application can render the document natively, for example
if there is any indication that the document contains CSS, any
JavaScript, any XLinks, or whatever.

I add this clause because it is vital that a web browser that
understands simple XLinks, like, say, Mozilla, be able to correctly
render this document:

   <example xmlns="http://www.example.org/"
            xmlns:xlink="http://www.w3.org/1999/xlink"
            xlink:type="simple" xlink:href="http://www.w3.org/">
     A well-formed XML document with XLinks.
   </example>

This is based on these test cases:

    http://www.hixie.ch/tests/adhoc/xml/xlink/001.xml
    http://hixie.ch/tests/evil/xml/001.xml
    http://hixie.ch/tests/evil/xml/002.xml

...which Mozilla passes.


> B. Server-side translation of above as XHTML+MathML.  Serve as
>    "text/html".  New browsers can show school children at home and in
>    public libraries a properly typeset "1/2".

Unlikely, since <mfrac> has no explicit "/". "1+2" maybe.
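
For reference, a typeset one-half in MathML is marked up with <mfrac>,
with no "/" character anywhere in the source (a minimal sketch):

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <mn>1</mn>
    <mn>2</mn>
  </mfrac>
</math>
```

A renderer draws the numerator over the denominator with a horizontal
bar, so what the school children see is a stacked fraction, not a
slash.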


>    There is no problem-level breakage in old browsers, and users
>    begin to perceive a need to upgrade. The web actually gets
>    better.

 C. Server-side translation of above as XHTML+MathML with a CSS
    stylesheet. Stylesheet works fine in all browsers tested (say, IE
    and Netscape 4.x). When a compliant browser loads the page as XML
    and renders the document using the stylesheet, the document looks
    terrible. Users perceive a need to avoid the new browser. The web
    doesn't improve at all.

Lest you think "C" is unlikely -- I have already seen it happen at
least a dozen times during the last year, and that's just with people
experimenting with XML, not even sending their documents as text/html.
Why would it happen? Because CSS element selectors are case-insensitive
when the document is parsed as text/html, but case-sensitive when it is
parsed as XML.
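
A minimal sketch of the trap (the selector and property are
illustrative):

```css
/* Stylesheet written against tag-soup habits: */
P { color: red }

/* Parsed as text/html, element names match case-insensitively,
   so this rule styles every <p>. Parsed as XML, names are
   case-sensitive: the selector "P" never matches a lowercase
   <p> element, and the rule silently does nothing. */
```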

Don't fall into the trap of thinking users upgrade because of web
standards. They don't. They upgrade for one of three reasons: they
like the user features of the newer browser, they are locked out of
many sites if they use their older browser, or they are forced to by
their system administrator.

Just look at Netscape 6.0: at the time it was released, it was by far
the most standards-compliant browser available. It was also one of the
worst browsers in terms of user experience. Very few people use 6.0.

(We can but hope that users like 6.1 better!)


>>> UA authors to spend their already restricted resources on
>>> implementing content type sniffing?
>
> Sniffing is not required. Reading the first line of an http body is
> not sniffing. Attaching meaning to comments is sniffing.

Reading the first line of the following:

   http://www.damowmow.com/mozilla/html-not-xml.html

...wouldn't help.


> The issue is about two models for the relation between markup and
> transport content type.
>
> The other model makes both "text/html" (for tag soup) and "text/xml"
> (for general XML) the exclusive domain of mass-market user agents
> and forecloses the possibility of external handling of XML document
> types transported by a mass market user agent that are not rendered
> (or processed) well by the agent when served under the transport
> type "text/xml".

Why?

My "model" is:

   text/html -> tag soup
   text/xml -> XML processing
                 |
                 +-- If the root namespace is recognised by a helper
                 |   application, pass the entire thing over to the
                 |   helper application.
                 |
                 +-- If the document is in any way styled (including
                 |   taking into account what namespaces are natively
                 |   recognised by the UA, e.g. XHTML or MathML), then
                 |   render it natively.
                 |
                 +-- Ask the user what to do or render a tree or show
                     the source or whatever.

Rendering the document natively would also include handing any
elements from registered namespaces to namespace-specific plugins.
This is related to the W3C's EDOM and plugin standardisation work and
is very much in progress right now. Windows IE uses "Binary
Behaviours" to perform this step.

-- 
Ian Hickson                                            )\     _. - ._.)   fL
Invited Expert, CSS Working Group                     /. `- '  (  `--'
The views expressed in this message are strictly      `- , ) - > ) \
personal and not those of Netscape or Mozilla. ________ (.' \) (.' -' ______
Received on Monday, 2 July 2001 19:12:28 GMT
