- From: Danny Ayers <danny@panlanka.net>
- Date: Fri, 18 May 2001 22:14:05 +0600
- To: <jos.deroo.jd@belgium.agfa.com>
- Cc: <www-rdf-logic@w3.org>
>> I was talking yesterday to a guy who has started three successful
>> companies in this area, who told me they had tried using XML and
>> discovered that the notation was such a crock that about 90% of their
>> transmission traffic was being used up sending meaningless notational
>> strings back and forth, causing performance problems; so they just
>> trashed XML and wrote their own notation.
>
> There's indeed a point here. Yesterday I was doing a testcase
> with 200001 concepts used in 100000 statements (no real application,
> just stress testing some inference engines). In that particular
> testcase I found that the RDF/XML file could be zipped 20 times.
> Using RDF/N3 this was just 4 times. So the XML file is 10 MB, the
> N3 file is 2 MB and the binary compressed file is 0.5 MB. Needless
> to say that this is having an impact on communication, storage and
> processing. We found the best balance with N3 [1][2][3][4].

Your figures speak for themselves, but I'm not sure of your implication - that N3 should be used in preference to RDF/XML? Wouldn't this be throwing the baby out with the bathwater? Performance and efficiency lie on a continuum; interoperability comes in big discrete chunks - do we really want an extra N converters?

When the binary XML brigade on xml-dev have come up with something workable, that perhaps will be worth considering.

Imagine the compression ratios that could be achieved with the messages in this thread ;-)
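For reference, a rough sketch of how a comparison like the one quoted above could be reproduced. The serializations are hand-rolled simplifications and the URIs are made up for illustration, so the absolute sizes won't match Jos's test, but the gap between the two notations (and why RDF/XML compresses so much harder) shows up the same way:

# Rough sketch, not the actual test case: emit the same 100000 statements
# over 200000 made-up resources in a simplified RDF/XML and N3 form,
# then compare raw and gzip-compressed sizes.
import gzip

N = 100000  # number of statements, as in the stress test described above

def as_rdf_xml(n):
    header = ('<?xml version="1.0"?>\n'
              '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"\n'
              '         xmlns:ex="http://example.org/terms#">\n')
    rows = (f'  <rdf:Description rdf:about="http://example.org/thing/n{i}">\n'
            f'    <ex:relatedTo rdf:resource="http://example.org/thing/n{i + n}"/>\n'
            f'  </rdf:Description>\n' for i in range(n))
    return header + "".join(rows) + "</rdf:RDF>\n"

def as_n3(n):
    header = ('@prefix ex: <http://example.org/terms#> .\n'
              '@prefix t:  <http://example.org/thing/> .\n')
    rows = (f't:n{i} ex:relatedTo t:n{i + n} .\n' for i in range(n))
    return header + "".join(rows)

def report(label, text):
    raw = text.encode("utf-8")
    packed = gzip.compress(raw)
    print(f"{label:8s} raw {len(raw) / 1e6:6.1f} MB   "
          f"gzip {len(packed) / 1e6:6.1f} MB   "
          f"ratio {len(raw) / len(packed):5.1f}x")

report("RDF/XML", as_rdf_xml(N))
report("N3", as_n3(N))

Most of what gzip removes from the RDF/XML file is the repeated element and attribute markup, which is exactly the "meaningless notational strings" complaint in the first quote.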
Received on Friday, 18 May 2001 12:18:56 UTC