RE: Interest in standardizing Batch methods?

Greg,

If I understand you correctly, you are saying that issuing all requests at
once (without waiting for the responses) will help to reduce the
communications overhead.
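
To make the distinction concrete, here is a minimal sketch of what pipelining looks like on the wire (the host and paths are invented for the example): the client writes several complete requests before reading any response, but each request still arrives at the server as a separate message.

```python
def pipelined_deletes(host, paths):
    """Build one byte buffer containing back-to-back DELETE requests.

    With HTTP/1.1 pipelining, all requests can be written to the socket
    before any response is read -- but the server still parses and
    handles them one at a time.
    """
    requests = []
    for path in paths:
        requests.append(
            f"DELETE {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "\r\n"
        )
    return "".join(requests).encode("ascii")

# Three pipelined requests in a single write: less latency,
# but still three distinct requests from the server's point of view.
buf = pipelined_deletes("example.org", ["/a.txt", "/b.txt", "/c.txt"])
```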

Yet the *server* still sees separate requests, and at that point it is
hard to detect that these requests have something in common and could be
optimized internally (for instance, by making one database call instead
of many).

I think that if there are common operations that involve multiple resources
at once, it makes sense to define the protocol so that the server can
optimize their execution...
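
As an illustration of the kind of server-side optimization a batch method would enable, here is a toy sketch (the handler, the status layout, and the `delete_many` database API are all invented for the example): because the full list of targets arrives in one request, the server can turn N deletes into a single database operation and still report per-resource results.

```python
def handle_batch_delete(targets, db):
    """Hypothetical handler for a batch-DELETE request.

    The whole target list is visible at once, so the server can issue
    one database call instead of one per resource.
    """
    deleted = db.delete_many(targets)  # single round trip to the database
    # A batch response would need a status per resource.
    return {t: (200 if t in deleted else 404) for t in targets}


class FakeDB:
    """Stand-in for a real database, just for this sketch."""

    def __init__(self, resources):
        self.resources = set(resources)

    def delete_many(self, targets):
        found = self.resources & set(targets)
        self.resources -= found
        return found


db = FakeDB(["/a.txt", "/b.txt"])
statuses = handle_batch_delete(["/a.txt", "/b.txt", "/missing.txt"], db)
```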

Julian


> -----Original Message-----
> From: w3c-dist-auth-request@w3.org
> [mailto:w3c-dist-auth-request@w3.org]On Behalf Of Greg Stein
> Sent: Tuesday, January 08, 2002 3:33 AM
> To: Jim Whitehead
> Cc: WebDAV
> Subject: Re: Interest in standardizing Batch methods?
>
>
> On Fri, Jan 04, 2002 at 01:11:03PM -0800, Jim Whitehead wrote:
> >...
> > To address these performance issues, several "Batch" methods
> were developed
> > as relatively simple extensions to existing WebDAV methods. Switching
> > Outlook Web Access to use these methods resulted in
> approximately an order
> > of magnitude performance increase (obviously, the performance benefit of
> > going from N round-trips to 1 round trip depends on N). From the user
> > perspective, the observed elapsed time for executing an
> operation went from
> > multiple seconds down to close to a second (depending on latency, of
> > course). It was a significant performance improvement. The batch
> methods are:
>
> I wonder whether the performance suffered because the requests were
> performed in a request/response fashion, rather than as a series of
> pipelined requests. If you can pipeline requests, not waiting for
> an answer,
> then a series of DELETE operations is simply a "larger request"
> and then you
> handle a "larger response". Yes, each request/response has more overhead
> than a batched operation.
>
> Personally, I'm going to guess they didn't pipeline requests, so a batch
> mechanism was a must to get around deficiencies in their protocol stack.
>
>
> That said, it is important to recognize the overhead in a
> sequence of, say,
> DELETE requests and their responses, relative to a potential batch
> operation. Specifically, you're going to have a lot of duplicate
> headers on
> the requests and responses (there are no bodies in this case).
> How much does
> this pose over a batch delete with a list of URLs? Maybe 3x or 4x in the
> number of bytes? Maybe 10x? When you're talking over a modem (which is
> typically the case for MSFT's Hotmail servers), then that 10x can
> be rather
> significant.
>
> Ah, it's all a numbers game. Personally, I'm not interested in batch
> operations. I would guess that most of their benefit is obviated by
> pipelining requests.
>
> Cheers,
> -g
>
> ps. yes, this is mostly supposition; I'm not about to sit down and start
> measuring byte counts and network traffic; I don't know whether
> they were or
> were not pipelining; but my intuition tells me "no" and that pipelining is
> the answer...
>
> --
> Greg Stein, http://www.lyra.org/
>

Received on Tuesday, 8 January 2002 05:54:04 UTC