
RE: Andrew Layman and Don Box Analysis of XML Optimization Techniques

From: Bullard, Claude L (Len) <len.bullard@intergraph.com>
Date: Thu, 7 Apr 2005 10:05:37 -0500
Message-ID: <15725CF6AFE2F34DB8A5B4770B7334EE07206DA2@hq1.pcmail.ingr.com>
To: "'noah_mendelsohn@us.ibm.com'" <noah_mendelsohn@us.ibm.com>
Cc: Andrew Layman <andrewl@microsoft.com>, 'Don Box' <dbox@microsoft.com>, "Rice, Ed (HP.com)" <ed.rice@hp.com>, Paul Cotton <pcotton@microsoft.com>, www-tag@w3.org, klawrenc@us.ibm.com, haggar@us.ibm.com

From: noah_mendelsohn@us.ibm.com [mailto:noah_mendelsohn@us.ibm.com]

>With respect, I don't think the measure of success for HTTP, HTML or SOAP 
>was primarily performance.   If it were, I would have thought the 
>community would have wanted to get quite a bit of shared experience with 
>benchmarks and performance models before agreeing to standardization.

Also with respect, that is revisionist.  Those applications were fielded 
without too much discussion (as I recall, all that is needed is running 
code and rough consensus).  RSS and CSS are other examples.  They succeeded 
based on a perceived need and clear utility even though there were 
acceptable alternatives.

There is a need for the binary (real-time systems, primarily).  So it is 
a matter of standardization, not fielding.  Markets drive this.  Given a 
lossless encoding, isn't a binary just another encoding on the Save As menu?
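A minimal sketch of the point: a lossless binary encoding round-trips the 
original document exactly, so it can sit beside text XML as just another 
serialization.  Here gzip merely stands in for a real binary XML encoding 
such as Fast Infoset; the document is an invented example.

```python
import gzip

# A hypothetical XML document; gzip stands in for any lossless
# binary encoding (e.g. Fast Infoset) on the "Save As" menu.
xml_text = '<order id="42"><item qty="3">widget</item></order>'

binary = gzip.compress(xml_text.encode("utf-8"))    # "Save As" binary
restored = gzip.decompress(binary).decode("utf-8")  # "Open" again

# Lossless: the round trip preserves the document byte for byte.
assert restored == xml_text
print(len(xml_text.encode("utf-8")), "bytes as text,",
      len(binary), "bytes as binary")
```

Whether the binary form is smaller or faster depends on the codec and the 
document; losslessness is the property that makes it interchangeable.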

>I am aware that Sun has done FastInfoset benchmarks.  Having spent nearly 
>4 years leading teams doing high performance XML implementations, I can 
>tell you that any benchmarks have to be run with great care.  

If accurate benchmarks are the make-or-break test, I agree.  In this case, 
I don't think they are, and in any case, there are requirements for the 
binary that are separable from pure benchmark performance.  While the 
requirements you outline are interesting (e.g., self-description, a 
requirement XML does not meet except by a priori agreement about the 
syntax), these are standardization design issues, not the issues that 
determine the need.

There may be need for debate on the content of the standard.  I think 
the need for better XML performance is self-evident in cases for which 
this has been proposed.  So this comes down to fielding and sales.  Is 
this a disruptive technology?  Likely yes.  Incumbents never like those 
and do whatever they can to slow them down: proposing overloaded 
requirements, humor pieces designed by showmen to muddy discussions, 
FUD, whatever, until they have their own products ready.  After that, 
they seek standardization as legitimization.  Game as played.

So if you need benchmarks, now is definitely the time to be running the 
tests and publishing the results.  If, however, sales and deployment are 
imminent, those aren't relevant.  The community that evaluates the product 
is the developers using it and the content creators building with it.
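For the case where benchmarks are wanted now, a hedged sketch of the kind 
of careful, reproducible measurement being discussed: time repeated parses 
of the same document and report the best of several runs to reduce noise.  
The document and run counts are invented; a real comparison would also need 
the binary codec and representative workloads.

```python
import timeit
import xml.etree.ElementTree as ET

# A small synthetic document; real benchmarks should use
# representative production documents, not toy data.
doc = ("<root>"
       + "".join(f'<item id="{i}">value</item>' for i in range(100))
       + "</root>")

def parse_text():
    # The text-XML side of the comparison; a binary decoder
    # would be timed the same way for an apples-to-apples result.
    ET.fromstring(doc)

# Best-of-5 runs of 200 parses each: taking the minimum discounts
# transient interference from the OS and other processes.
best = min(timeit.repeat(parse_text, number=200, repeat=5))
print(f"best of 5 runs, 200 parses each: {best:.4f}s")
```

Publishing the harness alongside the numbers is what makes such results 
reproducible rather than anecdotal.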

>In summary, I think it is important to have a public debate about 
>quantitative performance issues, preferably based on carefully run and 
>reproducible benchmarks. 

That debate has been going on for quite a while.  I want to see benchmarks 
for the best royalty-free standard, but the question of whether or not a 
binary is desirable is, in my opinion, moot.

len
Received on Thursday, 7 April 2005 15:05:41 GMT
