- From: Shane McCarron <shane@aptest.com>
- Date: Sun, 01 Apr 2007 09:21:14 -0500
- To: Lachlan Hunt <lachlan.hunt@lachy.id.au>
- CC: "Henry S. Thompson" <ht@inf.ed.ac.uk>, Dan Connolly <connolly@w3.org>, www-tag@w3.org
Lachlan Hunt wrote:
> Shane McCarron wrote:
>> Does it always matter from a validation perspective what the producer
>> intended? No. Not *always*.
>
> Validation seems to be the only remotely valid argument put forth in
> favour of versioning, but its usefulness for such purposes has been,
> IMHO, successfully disputed [1].
>
> [1] http://www.w3.org/mid/A9925841-9449-4E5C-B149-EF07E1598735@iki.fi

The argument at [1] seems to be that versioning is bad because some careless user agent manufacturers have done it wrong? That's not a successful argument against versioning. That's a successful argument *for* better testing. Separate discussion, though. It does point out that the original version of the XHTML Mobile Profile used M12N incorrectly. For what it's worth, we know - and the new version fixes that. We knew at the time, but somehow our complaints about that got lost.

It also seems to argue that since some early draft of some notional HTML5 spec doesn't have versioning, it isn't needed. That viewpoint, to me, seems entirely HTML5-centric. In the broader world, there are lots of markup languages. Sometimes you *want* a walled garden. This discussion isn't about whether some tag-soup markup language should declare its version. It's about whether arbitrary XML data streams should be identifiable. At least, it is for me.

Finally, that article seems to argue that validation should be done using some massive superset of possible content, because then you will automatically include every possible interesting subset. That is absolutely wrong - mathematically and philosophically.

XHTML Modularization makes it relatively easy to extend XHTML, and only slightly more difficult to create completely new markup languages using common building blocks. Additional building blocks that conform to M12N, such as XML Events, xhtml-rdfa (coming soon), xhtml-role, XForms, MathML, and SVG, expand on this capability. When end users create such markup languages, they are potentially wildly different - they would have different content models, potentially different semantics when common elements are used in unusual ways, different default presentation when used in visual user agents, etc.

Without some declaration of what grammar is used for the content, how can a validator know what schema to apply? How can a user agent possibly be expected to interpret the content correctly? How can a validating user agent know what to do? (And don't say there are none - for years the mobile web gateways have validated content before sending it on to web-enabled phones. Perhaps the new ones do not; but to my mind that would be sad.)

Now, I would *love* it if the versioning were done in some better manner than using DOCTYPE. That mechanism doesn't scale very well. An XML PI that smells like DOCTYPE would be great. A common attribute on the root element would be less great, because philosophically it requires "opening the envelope" - but I would get over it. I don't think the xsi:schemaLocation attribute is adequate for the task, and I don't think the XML Schema Working Group would think it is either. (Rough sketches of these options appear below.)

The HTML Working Group has discussed this many times, but in the end felt that trying to create a mechanism like that was way beyond our scope. However, if someone wants to task us with doing this, we could surely take a shot at it.
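To make those options concrete, here is a rough sketch of each. The DOCTYPE and xsi:schemaLocation forms are real; the "xml-grammar" PI and the version attribute are invented for illustration only - no spec defines them today:

    <!-- Today: a DOCTYPE declaration identifies the grammar -->
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
        "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">

    <!-- A hypothetical PI that smells like DOCTYPE; the name and
         pseudo-attributes are purely illustrative -->
    <?xml-grammar public="-//W3C//DTD XHTML 1.1//EN"
        system="http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"?>

    <!-- A hypothetical common attribute on the root element; this is
         the "opening the envelope" option -->
    <html xmlns="http://www.w3.org/1999/xhtml" version="XHTML 1.1">

    <!-- xsi:schemaLocation only pairs a namespace with some schema
         document (a relative URI here); it names a schema, not a
         markup language, which is part of why I think it is inadequate -->
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://www.w3.org/1999/xhtml xhtml11.xsd">

Note that only the first two can be read without parsing into the document itself; the attribute-based forms require a consumer to open the envelope before it knows what it is holding.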
-- 
Shane P. McCarron                          Phone: +1 763 786-8160 x120
Managing Director                            Fax: +1 763 786-8180
ApTest Minnesota                            Inet: shane@aptest.com

Received on Sunday, 1 April 2007 14:22:22 UTC