Pipelining and compression effect on HTTP/1.1 proxies

In the performance paper that Jim sent a reference to a couple of days ago

	http://www.w3.org/pub/WWW/Protocols/HTTP/Performance/Pipeline.html

we state that pipelining is an essential part of making HTTP/1.1 outperform
HTTP/1.0 in terms of speed. What the paper does not state directly is the
impact pipelining has on proxies.

Right now, if a client does pipelining through a non-pipelining but
otherwise HTTP/1.1 compliant proxy, the effect will be lost. In a
worst-case scenario, non-pipelined requests over a single TCP connection
will be significantly slower than HTTP/1.0 using multiple connections. In
order to provide the pipelining client with an "equivalent" bandwidth, the
proxy will have to open multiple connections, in which case we are back to
all the HTTP/1.0 problems. The end result will likely be an overall
performance degradation when going through proxies.
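To make the mechanism concrete, here is a minimal sketch of what a
pipelined batch looks like on the wire: several HTTP/1.1 requests written
back-to-back on one TCP connection before any response is read. The host
and paths are illustrative, not taken from the paper.

```python
def build_pipeline(host, paths):
    """Concatenate HTTP/1.1 GET requests for sending in one write."""
    requests = []
    for path in paths:
        requests.append(
            "GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "\r\n" % (path, host)
        )
    return "".join(requests).encode("ascii")

batch = build_pipeline("example.org", ["/", "/a.gif", "/b.gif"])
# All three requests can go out in one write on a single connection; a
# non-pipelining proxy would instead serialize them, waiting for each
# response before forwarding the next request upstream.
print(batch.count(b"GET "))
```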

However, proxies are among the applications that are likely to gain the
most from pipelined requests. The situation where pipelining really wins
is cache validation, which until now has been almost as expensive (TCP
packet wise) as getting the full messages. As this has now become
relatively cheap, it allows the proxy to do much more real work than
shuffling around TCP connections.
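A rough illustration of why pipelined validation is cheap: a conditional
GET carries only headers, so many validations fit in a single TCP segment.
The host, path, and date below are made up for the example.

```python
def build_validation(host, path, last_modified):
    """Build a conditional GET that revalidates a cached entry."""
    return (
        "GET %s HTTP/1.1\r\n"
        "Host: %s\r\n"
        "If-Modified-Since: %s\r\n"
        "\r\n" % (path, host, last_modified)
    ).encode("ascii")

probe = build_validation("example.org", "/logo.gif",
                         "Tue, 18 Feb 1997 10:00:00 GMT")
# A 304 Not Modified reply is similarly tiny, so a batch of pipelined
# validations costs roughly one round trip instead of one round trip each.
print(len(probe))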

Compression will also have a positive impact, as it allows proxies to
maintain the same compressed representation of an object in their
persistent cache, hence leaving room for more objects on disk and in memory.
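As a sketch of that idea (purely illustrative, not from the paper): the
proxy stores the gzip'd entity once and can hand the same bytes to any
client that accepts gzip, decompressing only for clients that do not.

```python
import gzip

# A toy cached entity; real cache keys and metadata are omitted.
body = b"<html>" + b"the same boilerplate " * 100 + b"</html>"
cached = gzip.compress(body)          # what goes on disk / in memory

assert gzip.decompress(cached) == body
print(len(cached), "<", len(body))    # more objects fit in the cache
```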

I would therefore urge proxy implementors to have a close look at the paper
and consider how this will work in their proxies.

Thanks,

Henrik
--
Henrik Frystyk Nielsen, <frystyk@w3.org>
World Wide Web Consortium, MIT/LCS NE43-346
545 Technology Square, Cambridge MA 02139, USA

Received on Wednesday, 19 February 1997 08:28:35 UTC