- From: Robin Berjon <robin@berjon.com>
- Date: Mon, 27 Feb 2012 22:05:18 +0100
- To: David Carlisle <davidc@nag.co.uk>
- Cc: public-xml-er@w3.org
On Feb 27, 2012, at 19:19, David Carlisle wrote:

> The second phase is (currently) described as building a DOM tree but it
> only uses the language of nodes and attributes, so DOM is just being
> used as an abstract tree description. It doesn't use any methods from a
> DOM API as far as I can see.

Right, the point is to have a well-defined common understanding of what an element, an attribute, etc. are. The fact that it maps onto something that one can immediately implement is, I find, very helpful in making it concrete.

> So long as the final wording makes it clear that it is conformant to
> implement an xml-er parser by (say) representing the final output tree
> as a series of SAX events (or as a string representing a well-formed
> document) then I don't have any particular issues with the style in the
> current draft.

I don't think that would be a problem. (It would be technically difficult to do this usefully for an HTML parser, given the amount of backtracking that it can find itself needing, but I don't see that being the case here.)

> The current draft doesn't really do anything (much) towards specifying a
> processor (even one using a DOM API). Nothing about how the input is fed
> in, or how the results and/or errors are fed back out.

Exactly, it only uses the DOM as a concrete definition of the data model that is used to interpret the document from the token stream. That's why I'm having trouble seeing what we gain from dropping that.

-- 
Robin Berjon - http://berjon.com/ - @robinberjon

Coming up soon: I'm teaching a W3C online course on Mobile Web Apps
http://www.w3devcampus.com/writing-great-web-applications-for-mobile/
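[Editor's illustration, not part of the original thread: the point that a DOM-style tree and a stream of SAX events are equivalent output representations can be sketched as below. The `Element` class and `to_events` function are hypothetical names for illustration only, not from any draft or spec.]

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """A minimal DOM-style node: name, attributes, ordered children."""
    name: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)  # Element or str (text)

def to_events(node):
    """Walk the tree depth-first, yielding SAX-style events.

    This shows why a parser specified as building a tree can
    conformantly be implemented as an event stream instead: the
    two representations carry the same information.
    """
    yield ("startElement", node.name, node.attributes)
    for child in node.children:
        if isinstance(child, Element):
            yield from to_events(child)
        else:
            yield ("characters", child)
    yield ("endElement", node.name)

# The tree <doc id="1">hello<em>world</em></doc> as events:
tree = Element("doc", {"id": "1"}, ["hello", Element("em", {}, ["world"])])
events = list(to_events(tree))
```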
Received on Monday, 27 February 2012 21:05:47 UTC