
Re: Supporting MathML and SVG in text/html, and related topics

From: Paul Libbrecht <paul@activemath.org>
Date: Wed, 16 Apr 2008 12:30:03 +0200
Message-Id: <0E480B16-FAB6-4A16-BEB0-F60BEC43F63A@activemath.org>
Cc: David Carlisle <davidc@nag.co.uk>, jirka@kosek.cz, whatwg@whatwg.org, public-html@w3.org, www-math@w3.org, www-svg@w3.org
To: Henri Sivonen <hsivonen@iki.fi>

On 16 Apr 2008, at 12:16, Henri Sivonen wrote:
> On Apr 16, 2008, at 12:58, Paul Libbrecht wrote:
>>> In fact, the reason why the proportion of Web pages that get  
>>> parsed as XML is negligible is that the XML approach totally  
>>> failed to plug into the existing text/html network effects[...]
>>
>> My hypothesis here is that this problem is mostly a parsing  
>> problem and not a model problem. HTML5 mixes the two.
>
> For backwards compatibility in scripted browser environments, the  
> HTML DOM can't behave exactly like the XHTML5 DOM. For non-scripted  
> non-browser environments, using an XML data model (XML DOM, XOM,  
> JDOM, dom4j, SAX, ElementTree, lxml, etc., etc.) works fine.
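A minimal sketch of the quoted point, using ElementTree (one of the XML data models named above) from Python's standard library: outside the browser, an XHTML document with embedded MathML is just namespaced XML, and a generic XML data model handles it with no HTML-specific machinery. The document snippet is illustrative, not from the thread.

```python
import xml.etree.ElementTree as ET

# An XHTML fragment with inline MathML; both vocabularies are
# distinguished purely by namespace, as in any XML document.
xhtml = (
    '<html xmlns="http://www.w3.org/1999/xhtml">'
    '<body><p>x = <math xmlns="http://www.w3.org/1998/Math/MathML">'
    '<mi>y</mi></math></p></body></html>'
)

root = ET.fromstring(xhtml)

# Elements carry their namespace; HTML and MathML coexist in one tree.
p = root.find('.//{http://www.w3.org/1999/xhtml}p')
mi = root.find('.//{http://www.w3.org/1998/Math/MathML}mi')
print(p is not None, mi.text)  # True y
```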

I don't know how great the cost of breaking backwards compatibility
would be, but that is what should be quantified, rather than the
number of URLs served under each serialization's MIME type!

You seem to be speaking of an XHTML5 DOM... maybe I have missed
something about that in the mail torrent. I was talking about XHTML3
vs. HTML5.

The point you make above about backwards compatibility seems to say
that HTML5's DOM is not the same as HTML4's DOM (or their
implementations). I feel this is acceptable if the
backwards-compatibility break is not too big; hence the request to
quantify it.

The question remains: can't all of HTML5's enhancements to the HTML
model be made within an XML model, decoupled from parsing?
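The question above can be sketched in code: a tag-soup parser feeding a generic XML data model, i.e. the parsing layer swapped out while the model stays the same. This is a hypothetical illustration using Python's standard library (html.parser driving an ElementTree TreeBuilder), not any algorithm specified by HTML5 or proposed in this thread.

```python
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

class TreeMaker(HTMLParser):
    """Feed HTML parse events into an XML tree builder."""
    def __init__(self):
        super().__init__()
        self.builder = ET.TreeBuilder()

    def handle_starttag(self, tag, attrs):
        self.builder.start(tag, dict(attrs))

    def handle_endtag(self, tag):
        self.builder.end(tag)

    def handle_data(self, data):
        self.builder.data(data)

tm = TreeMaker()
tm.feed('<html><body><p class="a">hello</p></body></html>')
root = tm.builder.close()

# The result is an ordinary ElementTree element, queryable like any
# XML document, even though the input was parsed as HTML.
print(root.tag, root.find('.//p').get('class'))  # html a
```

(A real decoupling would of course need HTML5's error-recovery rules in the parsing layer; the point is only that the resulting model need not be HTML-specific.)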

paul

Received on Wednesday, 16 April 2008 10:31:05 GMT
