Re: What problem is this task force trying to solve and why?

> It's clear to me that the draconian error handling rule is far and away
> the biggest reason for XML's failure on the Web.

Interesting; I wouldn't have expected that.

Incorrect XML can come from two places: hand-authored XML, or XML 
generated by buggy software. I wouldn't expect to see much hand-authored 
XML, and most of what there is, I would expect to be produced with 
editing tools that get the syntax right. I do sometimes see bad XML 
generated by bad software, and on the whole I think that rejecting such 
XML is the right thing to do, because it forces people to fix the 
software. I think we could change XML (or the XML parsing rules) to be a 
bit more tolerant, e.g. of unescaped ampersands, but I wouldn't want to 
change it so that anything goes.
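
To make that concrete, here is a rough sketch (browser JavaScript, 
using the DOMParser available in modern browsers; the fragment is 
invented for illustration) of what the draconian rule means for a 
single unescaped ampersand:

    // The same fragment, parsed strictly as XML and leniently as HTML.
    const fragment = '<p>Fish & Chips</p>';  // unescaped ampersand
    const parser = new DOMParser();

    // Draconian rule: the XML parser rejects the whole document and
    // hands back a <parsererror> document instead of usable content.
    const asXml = parser.parseFromString(fragment, 'application/xml');
    console.log(asXml.getElementsByTagName('parsererror').length > 0); // true

    // The HTML parser silently repairs the same error.
    const asHtml = parser.parseFromString(fragment, 'text/html');
    console.log(asHtml.body.textContent); // "Fish & Chips"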

I would have said that a much bigger factor in "XML's failure" (on the 
client) was that it was only around 2008 that reasonably adequate 
support for XML processing arrived across all the browsers, and by then 
the window of opportunity had passed. In fact, the lack of 
standardisation across browsers meant that delivering dynamic content 
required server-side processing, and if you're going to do server-side 
processing, then you might as well generate HTML as generate XML. It's 
not clear to me that changes to XML will change that situation.

The other factor is that web developers have a choice of two languages 
for processing XML. For many, XSLT is off-putting because it is so 
unlike anything they have encountered before (and because support for 
it is still not universal); but the alternative, JavaScript plus the 
DOM, is hopelessly laborious to code in and very hard to debug. The 
attraction of JSON has nothing to do with its qualities as a data 
encoding; it is entirely due to its good fit with the programming 
environment.
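
To illustrate the difference, a hedged sketch in browser JavaScript, 
with an invented record, reading the same values through the DOM and 
through JSON.parse:

    // DOM route: parse, then walk the tree node by node.
    const xmlSrc = '<person><name>Kay</name><lang>XSLT</lang>' +
                   '<lang>Java</lang></person>';
    const doc = new DOMParser().parseFromString(xmlSrc, 'application/xml');
    const langsViaDom = Array.from(doc.getElementsByTagName('lang'),
                                   el => el.textContent);

    // JSON route: the parsed result is already a native object.
    const jsonSrc = '{"name": "Kay", "langs": ["XSLT", "Java"]}';
    const langsViaJson = JSON.parse(jsonSrc).langs;

    console.log(langsViaDom);   // ["XSLT", "Java"]
    console.log(langsViaJson);  // ["XSLT", "Java"]

Both routes give the same answer; only the JSON route gives it without 
a traversal API in the way.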

Michael Kay

Received on Saturday, 18 December 2010 18:58:34 UTC