
Re: Suggestion for NEW Issue: Pipelining problems

From: Roy T. Fielding <fielding@gbiv.com>
Date: Mon, 23 Jul 2007 16:57:21 -0700
Message-Id: <8220FF0D-0BE6-4C91-954F-A61EAD43584F@gbiv.com>
Cc: Harald Tveit Alvestrand <harald@alvestrand.no>, Jamie Lokier <jamie@shareable.org>, yngve@opera.com, HTTP Working Group <ietf-http-wg@w3.org>
To: Jeffrey Mogul <Jeff.Mogul@hp.com>

On Jul 23, 2007, at 2:02 PM, Jeffrey Mogul wrote:

> Roy writes:
>     Pipelined requests actually increase congestion because any
>     messages left unsatisfied have to be sent again on a new
>     connection.
> Just curious: can you point us to the experimental evidence for this?

No, just ad hoc observation while viewing systems through tcpdump.

> That is, evidence that shows that the effect you described outweighs
> the congestion that might be avoided when successful pipelining, for
> example, reduces the burstiness of non-pipelined TCP connections.

It depends on what kind of connections you are talking about.
If a connection is so heavily used that it remains persistent
(either by accident or through special configuration), then
there is no question that congestion will be reduced on average
versus multiple TCP connections.  We can just sum the bits for that.
However, the burstiness of real HTTP traffic exists because the
applications have bursty needs.  I don't see how pipelining can
change that, short of an artificial request profile or an
unusual application (like Google's spider).

What I've seen in traces is that pipelining does have substantial
benefits up until the point of reaching an application steady-state
(for a browser, that means a web page with all inlined resource
requests complete).  If the client closes the connection at that
point, instead of waiting for the next request to find out if the
connection was kept open by the server, then it minimizes its use
of the network.  Likewise, pipelining has huge benefits for
specialized HTTP services, such as Subversion (when it uses it)
and content management systems.
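The close-at-steady-state idea above can be sketched in a few lines.
This is my illustration, not anything from the thread: a batch of GET
requests is serialized back-to-back for one pipelined connection, and
the final request carries "Connection: close" so the client releases
the connection once the page and its inlined resources are done,
instead of waiting to discover a server-side timeout later.  The host
and paths are hypothetical placeholders.

```python
def build_pipeline(host, paths, close_last=True):
    """Serialize a batch of GET requests for one pipelined connection.

    With close_last=True the final request includes 'Connection: close',
    so the server closes the connection after the last response and the
    client never races a keep-alive timeout on a follow-up request.
    """
    out = []
    for i, path in enumerate(paths):
        headers = [f"GET {path} HTTP/1.1", f"Host: {host}"]
        if close_last and i == len(paths) - 1:
            headers.append("Connection: close")
        out.append("\r\n".join(headers) + "\r\n\r\n")
    return "".join(out).encode("ascii")
```

The resulting bytes would be written with a single sendall() on one
socket, and the responses read back in order.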

If the connections are used infrequently and are susceptible to
timeouts occurring while the next request is in transit, then the
data sent across the network far exceeds any average
*congestion-control* benefit obtained by avoiding the separate
connections. A new connection will be required anyway and the
request message is sent twice (though whether that message hit
the network or not seems to depend on the TCP implementation).
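For concreteness, a minimal sketch of that failure mode (again my
illustration, with a hypothetical host/port and a one-retry policy):
when the persistent connection has been timed out by the server while
the request was in transit, the client must open a new connection and
send the same request a second time, doubling the bytes on the wire
for that exchange.

```python
import socket

def send_with_retry(host, port, request_bytes, timeout=2.0):
    """Send one request, retrying once on a fresh connection.

    The first attempt may land on a connection the server has already
    decided to close; in that case the request is transmitted again on
    a brand-new connection -- the per-request cost this paragraph is
    weighing against the congestion-control benefit.
    """
    for attempt in range(2):
        sock = socket.create_connection((host, port), timeout=timeout)
        try:
            sock.sendall(request_bytes)
            first = sock.recv(4096)
            if first:          # server answered; done
                return first
        except (ConnectionResetError, BrokenPipeError, TimeoutError):
            pass               # stale connection: retry on a fresh one
        finally:
            sock.close()
    raise ConnectionError("request failed on both attempts")
```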

That should be testable to find the actual trade-off point where
a client should close the connection, but I have no experimental
evidence and no time to set up an experiment.  In any case,
pipelining is still worthwhile for many reasons, particularly
when we have control over both sides of the connection, so we need
it defined in HTTP regardless of any single client's request profile.

BTW, do you know of any good (experiment-quality) evaluations
of HTTP over SCTP?

Received on Monday, 23 July 2007 23:57:58 UTC
