W3C home > Mailing lists > Public > ietf-http-wg@w3.org > January to March 2007

RE: pipelined client/server behaviour

From: Eric Lawrence <ericlaw@exchange.microsoft.com>
Date: Tue, 27 Mar 2007 20:49:58 -0700
To: Adrian Chadd <adrian@creative.net.au>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-ID: <8301DE7F96C0074C8DA98484623D7E51135B50930A@DF-MASTIFF-MSG.exchange.corp.microsoft.com>

My understanding is that Opera invested in heuristic detection to determine whether pipelining is supported by an upstream proxy or server.  I'm not sure if their algorithms are public.

Trying to support pipelining was probably one of the biggest challenges I encountered in developing Fiddler.  Supporting pipelining in a user-agent is even more challenging, because you often must decide whether queueing a given request into a pipeline is likely to improve performance (vs. using another connection) without knowing how large the remote file is or how long it will take to generate.
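A rough illustration of that decision, with hypothetical names and thresholds; this is not Fiddler's (or any browser's) actual heuristic, just one way the trade-off could be framed:

```python
# Hypothetical sketch: decide whether to append a request to an existing
# pipeline or open a fresh connection. The thresholds and the "estimated
# queued bytes" input are illustrative assumptions.

def should_pipeline(queued_bytes_est, rtt_ms, idle_connections,
                    max_queue_bytes=256 * 1024):
    """Return True if queueing behind in-flight responses likely beats
    paying a fresh connection setup (at least one RTT for TCP)."""
    if idle_connections > 0:
        # An idle persistent connection costs nothing extra: prefer it.
        return False
    if queued_bytes_est is None:
        # Unknown response sizes ahead of us: head-of-line blocking risk.
        return False
    # Rough comparison: time to drain the queued responses vs. time to
    # set up a new connection (assume ~1 MB/s throughput for illustration).
    drain_ms = queued_bytes_est / 1000.0
    return drain_ms < rtt_ms and queued_bytes_est < max_queue_bytes
```

The point of the sketch is only that the inputs you would want (the sizes of the responses already queued) are exactly the ones a user-agent does not have.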


-----Original Message-----
From: ietf-http-wg-request@w3.org [mailto:ietf-http-wg-request@w3.org] On Behalf Of Adrian Chadd
Sent: Tuesday, March 27, 2007 8:27 PM
To: ietf-http-wg@w3.org
Subject: pipelined client/server behaviour


I'm doing up a specification for some proxy software I'm writing, and I'm stuck
trying to describe "good enough" behaviour for HTTP pipelining between
clients and servers.

Pipelining seems to be one of those rarely-touched subjects, save for "it's
horribly broken" but also "clients notice slower downloads when they're
behind a proxy/cache that doesn't do pipelining" (e.g. Squid; and no, it's
not because "it's Squid" anymore). It's noticeable for clients who are a few hundred
milliseconds away from origin servers, like Australians talking to American
web servers.

The one example of an open-source web proxy/cache trying to take advantage of
HTTP pipelining is Polipo. It jumps through quite a few hoops to try to
detect whether a server is "good" or "bad" at pipelining.
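A simplified probe-then-classify scheme along those lines might look like the following. This is illustrative only, not Polipo's actual algorithm; the class and state names are made up:

```python
# Illustrative sketch: classify each origin server as pipeline-friendly
# or not, based on how it handles a short probe pipeline.

class ServerPipelineTracker:
    UNKNOWN, PROBING, GOOD, BAD = range(4)

    def __init__(self):
        self.state = {}  # hostname -> classification

    def status(self, host):
        return self.state.get(host, self.UNKNOWN)

    def start_probe(self, host):
        # Send a short two-request pipeline before trusting the server.
        if self.status(host) == self.UNKNOWN:
            self.state[host] = self.PROBING

    def record_success(self, host):
        # Both pipelined responses arrived, in order: promote to GOOD.
        if self.status(host) == self.PROBING:
            self.state[host] = self.GOOD

    def record_failure(self, host):
        # Connection reset or garbled response mid-pipeline: mark BAD and
        # fall back to one outstanding request per connection.
        self.state[host] = self.BAD
```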

About the only sane method I can see of handling pipelining in an HTTP proxy
is to pin the server and client connections together and issue pipelined
requests to the server just as we receive them from the client. Trying
to handle pipelined requests from the client by distributing them across free
persistent connections or new server connections (a la what Squid tried to
do) seems to result in the broken behaviour everyone loathes.
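That pinned-connection approach might be sketched like so; the transport callbacks and class name are illustrative assumptions, not anyone's actual implementation:

```python
# Minimal sketch of the "pinned connection" approach: every request read
# from one client connection is forwarded, in order, to exactly one
# upstream connection, and responses are paired back in the same order.

from collections import deque

class PinnedPipelineProxy:
    def __init__(self, upstream_send, upstream_recv):
        self._send = upstream_send   # writes one request upstream
        self._recv = upstream_recv   # reads one complete response
        self._outstanding = deque()  # requests awaiting responses

    def on_client_request(self, request):
        # Forward immediately, preserving arrival order on the single
        # pinned upstream connection.
        self._send(request)
        self._outstanding.append(request)

    def on_upstream_response(self):
        # HTTP/1.1 requires pipelined responses to come back in request
        # order, so pair the next response with the oldest request.
        request = self._outstanding.popleft()
        return request, self._recv()
```

Because there is only ever one upstream connection per client connection, the proxy never has to reorder or re-match responses, which is where the distribute-across-connections schemes go wrong.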

So now my questions:

* Does anyone here who is involved in writing HTTP clients, servers and
  intermediaries have any light to shed on HTTP pipelining and its behaviours
  in given environments, and

* Is there any interest in trying to update the HTTP/1.1 specification to be
  clearer on how pipelined HTTP requests are to be handled and how errors
  are to be treated?


Received on Wednesday, 28 March 2007 03:52:21 UTC
