On Apr 16, 2008, at 12:16, Henri Sivonen wrote:
> On Apr 16, 2008, at 12:58, Paul Libbrecht wrote:
>>> In fact, the reason why the proportion of Web pages that get
>>> parsed as XML is negligible is that the XML approach totally
>>> failed to plug into the existing text/html network effects[...]
>>
>> My hypothesis here is that this problem is mostly a parsing
>> problem and not a model problem. HTML5 mixes the two.
>
> For backwards compatibility in scripted browser environments, the
> HTML DOM can't behave exactly like the XHTML5 DOM. For non-scripted
> non-browser environments, using an XML data model (XML DOM, XOM,
> JDOM, dom4j, SAX, ElementTree, lxml, etc., etc.) works fine.
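(As a concrete illustration of that last point, a minimal sketch, assuming the Python html5lib package and its etree tree builder: tag-soup text/html parses straight into an ordinary XML data model.)

    import html5lib  # implements the HTML5 parsing algorithm in pure Python

    XHTML = "{http://www.w3.org/1999/xhtml}"

    # Parse non-well-formed text/html; the result is a plain ElementTree
    # element in the XHTML namespace, usable like any other XML tree.
    root = html5lib.parse("<p>Hello<br>world", treebuilder="etree")
    print(root.tag)                                                # {http://www.w3.org/1999/xhtml}html
    print(root.find(XHTML + "body/" + XHTML + "p") is not None)   # True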
I don't know how much weight the holy name of backwards compatibility really carries, but that is what should be quantified, rather than quantifying the number of URLs served with each serialization's MIME type!
You seem to be speaking of an XHTML5 DOM... maybe I have missed something about that in the mail torrent. I was talking about XHTML3 vs. HTML5.
The point you make above about backwards compatibility seems to say that HTML5's DOM is not the same as HTML4's DOM (or their implementations); that sounds acceptable to me if the backwards-compatibility break is not too big, hence the request to quantify it.
The question remains: couldn't all the enhancements that HTML5 makes to the HTML model be made within an XML model, decoupled from parsing?
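To make the kind of decoupling I have in mind concrete, here is a minimal sketch (assuming Python with html5lib alongside the standard xml.etree module; the load function is just illustrative): the same XML data model comes out whether the bytes arrived as XML or as text/html, so the model can evolve independently of the parser.

    import xml.etree.ElementTree as ET
    import html5lib

    XHTML = "{http://www.w3.org/1999/xhtml}"

    def load(document, is_xml):
        # One data model (ElementTree), two parsers.
        if is_xml:
            return ET.fromstring(document)                         # XML parsing
        return html5lib.parse(document, treebuilder="etree")       # HTML5 parsing

    xhtml_src = ("<html xmlns='http://www.w3.org/1999/xhtml'>"
                 "<head/><body><p>hi</p></body></html>")
    html_src = "<p>hi"

    for src, is_xml in ((xhtml_src, True), (html_src, False)):
        root = load(src, is_xml)
        print(root.find(XHTML + "body/" + XHTML + "p").text)       # "hi" both times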
paul