- From: David Orchard <dorchard@bea.com>
- Date: Tue, 30 Oct 2007 16:30:44 -0700
- To: "Karl Dubost" <karl@w3.org>, "John Cowan" <cowan@ccil.org>
- Cc: "Dan Connolly" <connolly@w3.org>, "W3C-TAG Group WG" <www-tag@w3.org>
The EXI WG did a huge amount of measurement work, referenced in my original
message as http://www.w3.org/TR/2007/WD-exi-measurements-20070725/.
Section 9.1.3 shows the Processing Efficiency Analysis Details:
http://www.w3.org/TR/2007/WD-exi-measurements-20070725/#Ax-details-pe

Out of curiosity, Karl and John, did you not see this section, or does it
not address your needs? I asserted that the document is too detailed for any
but the most diligent reader, so you can offer your own data points on that.

Cheers,
Dave

> -----Original Message-----
> From: Karl Dubost [mailto:karl@w3.org]
> Sent: Tuesday, October 30, 2007 3:24 PM
> To: John Cowan
> Cc: Dan Connolly; David Orchard; W3C-TAG Group WG
> Subject: Re: Review of EXI
>
> John Cowan (31 oct. 2007 - 05:43) :
> > Dan Connolly scripsit:
> >>> These leads us to wonder whether a combination GZip with improved
> >>> technologies such as Parsers, JDKs, VMs, or even Stack Integration
> >>> technology (that is Schema aware and hence covered under Both and
> >>> Schema) would suffice for the community.
> >
> > gzip decoding is expensive in both speed and space, and the speed cost
> > is raised even higher because decoding and parsing are typically
> > performed as separate steps.
>
> That would be cool to have a benchmarks table (tests with
> different scenarios).
> So we could put it online. That would be a useful reference.
> Someone has the details handy somewhere?
>
> --
> Karl Dubost - W3C
> http://www.w3.org/QA/
> Be Strict To Be Cool
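
For what it's worth, a minimal sketch of the kind of micro-benchmark Karl asks for might look like the following. It is only an illustration, not part of the EXI measurements work: it assumes a local gzipped XML sample (the file name sample.xml.gz and the run count are placeholders) and compares the two-step "gunzip to a buffer, then parse" path John mentions with a single pass that streams decompressed bytes straight into the parser.

```python
# Editorial sketch, not from the thread: roughly compare "gunzip then parse"
# as two separate steps against streaming decompressed bytes into the parser.
# File name and run count are placeholders; real measurements (as in the EXI
# framework) control for warm-up, document mix, memory, and much more.
import gzip
import time
import xml.etree.ElementTree as ET

SAMPLE = "sample.xml.gz"   # hypothetical gzipped XML test document
RUNS = 20                  # arbitrary repetition count

def decode_then_parse(path):
    # Two separate steps: full decompression into memory, then parsing.
    with gzip.open(path, "rb") as f:
        data = f.read()
    return ET.fromstring(data)

def streaming_parse(path):
    # Single pass: the parser pulls decompressed bytes incrementally.
    with gzip.open(path, "rb") as f:
        return ET.parse(f).getroot()

def average_seconds(fn):
    start = time.perf_counter()
    for _ in range(RUNS):
        fn(SAMPLE)
    return (time.perf_counter() - start) / RUNS

if __name__ == "__main__":
    print("decode then parse: %.4f s" % average_seconds(decode_then_parse))
    print("streaming parse:   %.4f s" % average_seconds(streaming_parse))
```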
Received on Tuesday, 30 October 2007 23:31:17 UTC