
Re: [Simple] Some thoughts on XCAP's resource architecture

From: Lisa Dusseault <lisa@osafoundation.org>
Date: Wed, 24 Nov 2004 11:49:11 -0800
Message-Id: <E93F2602-3E51-11D9-B57F-000A95B2BB72@osafoundation.org>
Cc: HTTP working group <ietf-http-wg@w3.org>, "simple@ietf.org" <simple@ietf.org>
To: Cullen Jennings <fluffy@cisco.com>

>> 1c) Performance: HTTP is designed to batch requests in a certain way
>> based on the granularity assumptions.  Recall that latency is a much
>> bigger problem than bandwidth above a certain (low) bandwidth, and in
>> modern Internet applications it's usually the latency that kills you.
>> A more granular approach to resources doesn't in itself kill
>> performance but it does if you stay with HTTP's request granularity.
>> What XCAP is saving in bandwidth it will lose, in many use cases, in
>> latency costs.
> I'm not understanding here. How can we best design stuff to batch in a
> way to have optimal performance?

If you have multiple changes to make to a document of 1 MB or less, 
batch them together if possible, even if that means uploading the 
whole document afresh.  The current design of XCAP encourages changes 
to be made independently, and each change then requires a full 
round trip (no pipelining is possible, because you must wait for the 
server to respond with an ETag each time).
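To make the arithmetic concrete, here is a toy latency model (my own illustration; the round-trip time, bandwidth, and sizes are assumptions, not figures from the XCAP draft). Because each conditional PUT needs the ETag from the previous response, the requests serialize, and latency rather than bytes dominates:

```python
# Hypothetical latency model: serialized per-change PUTs vs. one batched PUT.
RTT = 0.10       # assumed round-trip time in seconds
BANDWIDTH = 1e6  # assumed link speed in bytes/second

def per_change_time(n_changes, change_bytes, rtt=RTT, bw=BANDWIDTH):
    """Each change waits for the previous ETag: one full round trip apiece."""
    return n_changes * (rtt + change_bytes / bw)

def batched_time(doc_bytes, rtt=RTT, bw=BANDWIDTH):
    """One PUT of the whole document: one round trip plus one transfer."""
    return rtt + doc_bytes / bw

# Ten 200-byte changes to a 100 kB document:
print(per_change_time(10, 200))   # ~1.0 s, dominated by latency
print(batched_time(100_000))      # ~0.2 s, despite uploading 500x the bytes
```

Under these assumptions the batched upload wins by a factor of five even though it moves far more data, which is exactly the latency-over-bandwidth point above.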

Similarly, even though a whole document may be large, up to a MB or so 
it's probably better to download (sync) the whole document than to 
do several round trips to get individual nodes.  Download requests 
can be pipelined, but not all client libraries support that yet.  
If you can pipeline, then you can do more node requests before the 
tradeoff of preferring to get the whole document kicks in.
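A rough break-even sketch (again, my own numbers) shows how much pipelining moves that tradeoff: without it, every node GET pays a round trip; with it, one round trip is amortized over all of them:

```python
# Break-even sketch: n individual node GETs vs. one full-document GET.
def node_fetch_time(n, node_bytes, rtt, bw, pipelined):
    if pipelined:
        return rtt + n * node_bytes / bw   # one RTT amortized over all GETs
    return n * (rtt + node_bytes / bw)     # one RTT per GET

def doc_fetch_time(doc_bytes, rtt, bw):
    return rtt + doc_bytes / bw

# 1 MB document, 500-byte nodes, 100 ms RTT, 1 MB/s link:
print(doc_fetch_time(1_000_000, 0.1, 1e6))        # ~1.1 s for the whole thing
print(node_fetch_time(10, 500, 0.1, 1e6, False))  # ~1.0 s: 10 GETs already cost as much
print(node_fetch_time(10, 500, 0.1, 1e6, True))   # ~0.1 s: pipelined, far cheaper
```

With these assumed numbers, a non-pipelining client breaks even at only about ten node fetches, while a pipelining client can make hundreds before the whole-document download wins.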

The optimal granularity for performance is probably somewhere between 1 
kB and 10 MB for XCAP's use cases.  A single XML node is probably too 
small; an indivisible 10 MB file is probably too large.  Compare to 
byte ranges -- 10 MB files are often broken into byte ranges for 
reliable download, but the byte ranges aren't chosen to be too small, or 
the performance cost would outweigh the reliability win.
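The byte-range analogy can be made concrete with a small helper (a sketch of standard HTTP `Range` header values, not anything from XCAP) that shows why chunk size matters:

```python
# Split a download into HTTP Range header values of a chosen chunk size.
def range_headers(total_bytes, chunk_bytes):
    """Return Range header values covering total_bytes in chunk_bytes pieces."""
    headers = []
    for start in range(0, total_bytes, chunk_bytes):
        end = min(start + chunk_bytes, total_bytes) - 1
        headers.append(f"bytes={start}-{end}")
    return headers

# A 10 MB file in 1 MB chunks is 10 requests -- a sane middle ground.
chunks = range_headers(10_000_000, 1_000_000)
print(len(chunks))   # 10
print(chunks[0])     # bytes=0-999999
# 1 kB chunks would mean 10,000 requests: the per-request cost swamps the
# reliability benefit, which is the same mistake as per-node XCAP access.
```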

My size estimates here are intuition based on experience, but I'm 
confident in saying that HTTP is not ideal for ferrying around tiny 
things at high performance (just try using the MSDAIPP library to build 
client applications that pretend the HTTP server is a local 
database).  The overhead per request/response is not insignificant.
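That per-request overhead is easy to quantify. The request below is illustrative only (the hostname, credentials, and XCAP-style node selector are made up), but it shows that for a tiny payload the HTTP envelope is most of the bytes on the wire:

```python
# Toy measure of per-request overhead with an illustrative request.
REQUEST = (
    "GET /docs/index/~~/resource-lists/list%5b@name=%22friends%22%5d HTTP/1.1\r\n"
    "Host: xcap.example.com\r\n"
    "Authorization: Basic dXNlcjpwYXNz\r\n"
    'If-None-Match: "abc123"\r\n'
    "\r\n"
)

def overhead_ratio(payload_bytes):
    """Fraction of the exchange spent on the request envelope alone."""
    envelope = len(REQUEST.encode())
    return envelope / (envelope + payload_bytes)

print(overhead_ratio(40))       # a 40-byte XML node: mostly overhead
print(overhead_ratio(100_000))  # a 100 kB document: overhead is negligible
```

And this counts only the request; the response status line and headers roughly double the envelope again.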

Since XCAP does allow addressable nodes to be grouped into larger 
addressable nodes, it will be possible for a smart XCAP client to do 
some of this batching anyway, but many clients will do the simple and 
obvious thing regardless.  So I don't expect this to be the killer 
argument for changing XCAP -- if this were the only consideration, the 
answer would likely be "let the implementor beware".  It's not the 
only consideration, however -- I consider the implementability and 
interoperability of HTTP extensions to be the bigger consideration.

Received on Wednesday, 24 November 2004 19:49:32 UTC
