- From: Robin Berjon <robin@w3.org>
- Date: Mon, 12 Aug 2013 12:04:59 +0200
- To: Noah Mendelsohn <nrm@arcanedomain.com>
- CC: "www-tag@w3.org" <www-tag@w3.org>, dlee@marklogic.com
On 12/08/2013 03:23, Noah Mendelsohn wrote:
> David Lee has written an interesting analysis of JSON vs. XML
> performance. There's lots of detail and (seemingly) very careful
> measurements.
>
> [1] http://www.balisage.net/Proceedings/vol10/html/Lee01/BalisageVol10-Lee01.html

This is an interesting paper, but there are a few issues.

As Sergey notes, he's parsing with eval() rather than JSON.parse(). I don't know whether engines currently optimise the latter specifically, but they could (and at the very least it would be more correct). Using the parser for a full-blown language rather than one for a data format could have an impact: eval() takes its scope into account, which may incur overhead. It is certainly not the "best practice" the author asserts it to be. I'm not sure what the value of parsing with jQuery is (using json2.js might, however, be interesting).

He indicates that he's parsing XML in "native JS". I'm not sure what that's supposed to mean; it sounds as if the XML parser were implemented in JS. But the code shows that he's clearly using the underlying C++ parser (which is fine, it just isn't explained well). Again, I'm not sure what the added value of having the XML parsed by jQuery is.

The querying section is unrealistic. Walking the entire tree to count nodes is not very representative, and it shouldn't be surprising that the code looks much the same for pretty much any format. I always understood the "fat-free" statement as being much more about API complexity than anything else, i.e. data.books[17].title vs document.documentElement.childNodes.item(17).getAttributeNS(null, "title"). It would make more sense to time that sort of operation.

He also finds that jQuery imposes a lot of overhead for both JSON and XML. That's not surprising: he's using $.each(), which calls a function for each node in both cases (the JS code is also a bit strange in places, though probably not in a way that hurts). He seems mostly to be measuring the overhead of iterating with callbacks, and I wouldn't be surprised if he found similar results using Array.forEach().

That said, he does have a number of interesting data points.
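To make a couple of those points concrete, here is the sort of thing I'd want to see measured. First, the eval() point: a rough, untested sketch (none of this is the paper's code, and the data is made up) that builds a large-ish JSON string and times the two parsers on the same input:

    // Rough sketch, not the paper's code: build a large-ish JSON string,
    // then time JSON.parse() and eval() on the same input.
    var books = [];
    for (var i = 0; i < 100000; i++) {
        books.push({ title: "Book " + i, pages: i % 500 });
    }
    var jsonText = JSON.stringify({ books: books });

    console.time("JSON.parse");
    var parsed1 = JSON.parse(jsonText);
    console.timeEnd("JSON.parse");

    console.time("eval");
    // Parentheses so the object literal isn't parsed as a block,
    // as the usual eval-based JSON trick does.
    var parsed2 = eval("(" + jsonText + ")");
    console.timeEnd("eval");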
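For the "fat-free" point, timing repeated access through the two APIs rather than a full-tree walk would look something like this (again just a sketch, run in a browser since it uses DOMParser, with made-up data):

    // Rough sketch: time repeated field access through the JSON object
    // and through the DOM, which is closer to the API-complexity argument
    // than walking the whole tree.
    var books2 = [];
    var xmlParts = ["<books>"];
    for (var i = 0; i < 1000; i++) {
        books2.push({ title: "Book " + i });
        xmlParts.push('<book title="Book ' + i + '"/>');
    }
    xmlParts.push("</books>");
    var data = { books: books2 };
    // No whitespace between elements, so childNodes only contains books.
    var doc = new DOMParser().parseFromString(xmlParts.join(""), "application/xml");

    var reps = 1000000, title;

    console.time("JSON access");
    for (var j = 0; j < reps; j++) {
        title = data.books[17].title;
    }
    console.timeEnd("JSON access");

    console.time("DOM access");
    for (var k = 0; k < reps; k++) {
        title = doc.documentElement.childNodes.item(17).getAttributeNS(null, "title");
    }
    console.timeEnd("DOM access");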
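And for the iteration overhead, comparing $.each() with Array.forEach() and a plain loop over the same array (assumes jQuery is already loaded on the page):

    // Rough sketch: iterate the same array with $.each(), Array.forEach()
    // and a plain for loop, doing identical work in each case.
    var items = [];
    for (var i = 0; i < 1000000; i++) {
        items.push({ title: "Book " + i });
    }
    var total;

    console.time("$.each");
    total = 0;
    $.each(items, function (index, item) { total += item.title.length; });
    console.timeEnd("$.each");

    console.time("Array.forEach");
    total = 0;
    items.forEach(function (item) { total += item.title.length; });
    console.timeEnd("Array.forEach");

    console.time("for loop");
    total = 0;
    for (var j = 0; j < items.length; j++) { total += items[j].title.length; }
    console.timeEnd("for loop");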
-- 
Robin Berjon - http://berjon.com/ - @robinberjon

Received on Monday, 12 August 2013 10:04:48 UTC