pipelined client/server behaviour

Hi,

I'm drawing up a specification for some proxy software I'm writing and I'm stuck
trying to describe "good enough" behaviour for HTTP pipelining between
clients and servers.

Pipelining seems to be one of those rarely-touched subjects, save for "it's
horribly broken" but also "clients notice slower downloads when they're
behind a proxy/cache that doesn't do pipelining" (e.g. Squid, and no, it's
not because "it's Squid" anymore.) It's noticeable for clients who are a few
hundred milliseconds away from origin servers, like Australians talking to
American web servers.
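To put rough, purely illustrative numbers on that (assume a 300 ms round-trip
time and ten small objects fetched over one persistent connection):

    without pipelining: 10 requests x ~300 ms RTT = ~3000 ms of waiting
    with pipelining:     1 batch    x ~300 ms RTT = ~300 ms plus transfer time

For distant clients it's the serialised request/response round trips, not the
bandwidth, that dominate.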

The one example of an open source web proxy/cache trying to take advantage of
HTTP pipelining is Polipo. It jumps through quite a few hoops to try to
detect whether a server is "good" or "bad" at pipelining.
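I haven't gone through Polipo's actual logic, but the general shape of such a
heuristic is something like the sketch below (the class name, scoring and
threshold are all made up for illustration, not Polipo's code):

    # Hypothetical per-origin pipelining "health" score -- not Polipo's code.
    # A proxy would consult this before deciding to pipeline to a server.
    class PipelineHealth:
        def __init__(self):
            self.scores = {}  # origin host -> integer score

        def record_success(self, host):
            # The server answered every pipelined request on the connection.
            self.scores[host] = self.scores.get(host, 0) + 1

        def record_failure(self, host):
            # The server dropped the connection mid-pipeline, stalled, or
            # otherwise misbehaved; penalise it heavily.
            self.scores[host] = self.scores.get(host, 0) - 5

        def may_pipeline(self, host):
            # Only pipeline to servers that have behaved well so far; everyone
            # else gets plain one-request-at-a-time persistent connections.
            return self.scores.get(host, 0) >= 0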

About the only sane method I can see of handling pipelining in an HTTP proxy
is pinning the client and server connections together and issuing pipelined
requests to the server in the same order we get them from the client. Trying
to handle pipelined requests from the client by distributing them across free
persistent connections or new server connections (a la what Squid tried to
do) seems to result in the broken behaviour everyone loathes.
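As a rough illustration of what I mean by "pinning" (only a sketch; a real
proxy still has to parse the HTTP messages for caching, Connection handling,
error recovery and so on, none of which is shown here):

    import select
    import socket

    # Hypothetical sketch: each client connection is tied to exactly one
    # origin-server connection and bytes are relayed in both directions
    # without re-ordering, so pipelined requests and their responses stay
    # in the order the client issued them.
    def relay_pinned(client_sock, origin_host, origin_port=80):
        server_sock = socket.create_connection((origin_host, origin_port))
        pair = {client_sock: server_sock, server_sock: client_sock}
        try:
            while True:
                readable, _, _ = select.select(list(pair), [], [])
                for sock in readable:
                    data = sock.recv(65536)
                    if not data:
                        return                # either side closed; tear down
                    pair[sock].sendall(data)  # forward as-is, no redistribution
        finally:
            server_sock.close()

The point is simply that the ordering guarantee comes for free once the two
connections are bound together, which is exactly what gets lost when requests
are farmed out across a connection pool.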

So now my questions:

* Does anyone here who is involved in writing HTTP clients, servers and
  intermediaries have any light to shed on HTTP pipelining and how it behaves
  in real-world deployments, and

* Is there any interest in trying to update the HTTP/1.1 specification to be
  clearer on how pipelined HTTP requests are to be handled and how errors
  are to be treated?

Thanks,



Adrian
