
Re: pipelined client/server behaviour

From: Jamie Lokier <jamie@shareable.org>
Date: Sun, 1 Apr 2007 16:06:11 +0100
To: Adrian Chadd <adrian@creative.net.au>
Cc: Adrien de Croy <adrien@qbik.com>, ietf-http-wg@w3.org
Message-ID: <20070401150611.GA9941@mail.shareable.org>

Adrian Chadd wrote:
> > So overall statistically we saw it as worse, and the benefit on the 
> > sites that supported it seemed very small as well.  You're only talking 
> > about saving one RTT per request by pipelining rather than sending 
> > requests serialized (each after the previous response is received).  The 
> > biggest performance improvement 
> > we saw in testing came about from the reuse of the connection.
> > 
> > I guess if all servers and intermediaries supported pipelining, it would 
> > perform better overall, but I don't think it's widespread enough yet, so 
> > we decided to trade a small performance loss for an improvement in 
> > stability and ease of implementation.  We may still revert on this.
> 
> Did you benchmark it over higher-latency links? It might not give great
> performance boosts under 100 ms, but anecdotally it seems to load
> pages in Mozilla faster when the site RTT is ~300 to ~400 ms (think a
> .eu site accessing a .au site, or vice versa).

Here's what I think, based on thinking and hand-waving (not measurement):

The main problem with pipelining is that a large or slow response
delays subsequent responses.  Sometimes the delay can be very long,
especially for a response generated on the fly.  Sometimes a response
is a stream which does not terminate.

The client often cannot predict which responses will be large or slow,
which makes it difficult to decide when to pipeline requests, quite
apart from proxy/server bugs.  (If HTTP allowed fragments of different
responses to multiplex out of order, it would be fine.)  Therefore,
there are many requests which the client should not pipeline.
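Hand-waving in the same spirit, the head-of-line blocking can be put in a toy model (Python here, purely illustrative; the numbers are made up, not measurements):

```python
# Toy model of head-of-line blocking on a pipelined connection (an
# illustrative sketch, not a real HTTP client): responses must be
# sent in request order, so each response completes no earlier than
# every response queued before it.

def pipelined_completion_times(service_times, rtt):
    """Completion time of each response when all requests are
    pipelined at t=0 over one connection (times in seconds)."""
    times = []
    elapsed = rtt  # one network round trip before any bytes arrive
    for s in service_times:
        elapsed += s  # earlier responses must finish first
        times.append(elapsed)
    return times

# A slow 5 s response (e.g. generated on the fly) delays two fast
# ones: the 0.1 s responses complete only after ~5.4 s and ~5.5 s.
print(pipelined_completion_times([5.0, 0.1, 0.1], rtt=0.3))
```

A streaming response that never terminates is the degenerate case: its service time is unbounded, so everything pipelined behind it waits forever.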

The main benefits of pipelining are that: (a) multiple requests and
multiple responses can share packets; (b) latency in responses is
reduced, by approximately one RTT per response.  In principle, the
server could also initiate computation or I/O for later requests in
parallel, but I've not heard of any server actually doing so.
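Benefit (b) is simple arithmetic.  A back-of-envelope sketch (illustrative figures only, latency alone, ignoring service time and bandwidth):

```python
# Serialized requests pay one RTT each, because each request is sent
# only after the previous response arrives; pipelined requests over
# one connection pay roughly one RTT in total.

def serialized_latency_ms(n_requests, rtt_ms):
    return n_requests * rtt_ms  # each request waits a full round trip

def pipelined_latency_ms(n_requests, rtt_ms):
    return rtt_ms  # all requests go out together

rtt_ms = 350  # e.g. a ~350 ms .eu <-> .au path
print(serialized_latency_ms(10, rtt_ms))  # 3500
print(pipelined_latency_ms(10, rtt_ms))   # 350
```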

But the network latency, one RTT per response, might be reduced in a
different way than by pipelining: just use more persistent connections,
and open them in advance of needing them.  E.g. when a browser fetches
a web page, it could open multiple connections immediately, sending
the first HTTP request over the first connection to be established.
The other connections are unused at that point, but their handshakes
complete in parallel, on a similar time scale.  Subsequent requests
could then be sent in parallel over the set of (persistent but not
pipelined) connections.
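A minimal sketch of that idea (Python sockets; the host/port are placeholders, and a real client would poll the pending connects with select()):

```python
# Open several spare TCP connections in advance.  The non-blocking
# connects are all started at once, so the handshakes proceed in
# parallel and together cost roughly one RTT, not one RTT each.
import socket

def open_connections(host, port, count):
    """Begin `count` non-blocking connects; returns the sockets with
    their handshakes in progress (or already complete)."""
    socks = []
    for _ in range(count):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setblocking(False)
        try:
            s.connect((host, port))
        except BlockingIOError:
            pass  # handshake in progress; wait with select() later
        socks.append(s)
    return socks
```

The first HTTP request goes out on whichever socket becomes writable first; the rest stay warm for the requests that follow.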

That uses more network packets, but avoids the bugs which may be
triggered by pipelining, and avoids the problem of serialised
responses and unpredictable delays due to slow/large responses.  On a
link with high bandwidth-RTT product, I'm guessing that would perform
better than pipelining.  (Where there is high RTT but low bandwidth,
the additional packets may cost more than the potential benefit).
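That last trade-off can be put in rough numbers too (every constant below is an illustrative assumption, not a measurement):

```python
# Rough cost model for extra parallel connections: each one adds
# handshake and teardown packets, which matter on low-bandwidth links.

HANDSHAKE_PACKETS = 3       # SYN, SYN-ACK, ACK
TEARDOWN_PACKETS = 4        # FIN/ACK in each direction
PACKET_OVERHEAD_BYTES = 40  # IPv4 + TCP headers, no options

def extra_bytes(extra_connections):
    pkts = (HANDSHAKE_PACKETS + TEARDOWN_PACKETS) * extra_connections
    return pkts * PACKET_OVERHEAD_BYTES

def transfer_time_s(nbytes, bandwidth_bps):
    return nbytes * 8 / bandwidth_bps

# Five spare connections cost ~1400 header bytes; on a 28.8 kbit/s
# modem that is ~0.4 s, comparable to one long RTT saved.
print(transfer_time_s(extra_bytes(5), 28_800))
```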

My $0.02,
-- Jamie
Received on Sunday, 1 April 2007 16:05:16 GMT
