- From: David Morris <dwm@xpasc.com>
- Date: Mon, 23 Jul 2007 17:56:57 -0700 (PDT)
- cc: HTTP Working Group <ietf-http-wg@w3.org>
Just a footnote ... with content from the likes of CNN, I've measured 150+
objects retrieved to compose a single page. Not all from the same server,
but a significant opportunity for pipelining/persistent connections, etc.

On Mon, 23 Jul 2007, Roy T. Fielding wrote:

> On Jul 23, 2007, at 2:02 PM, Jeffrey Mogul wrote:
>
> > Roy writes:
> >
> > > Pipelined requests actually increase congestion because any
> > > messages left unsatisfied have to be sent again on a new
> > > connection.
> >
> > Just curious: can you point us to the experimental evidence for this?
>
> No, just ad hoc observation while viewing systems through tcpdump.
>
> > That is, evidence that shows that the effect you described outweighs
> > the congestion that might be avoided when successful pipelining, for
> > example, reduces the burstiness of non-pipelined TCP connections.
>
> It depends on what kind of connections you are talking about.
> If the connection is so heavily used that the persistence remains
> open (either by accident or through special configuration) then
> there is no question that congestion will be reduced on average
> versus multiple TCP connections. We can just sum the bits for that.
> However, the burstiness of real HTTP traffic is because the
> applications have bursty needs. I don't see how pipelining can
> change that without being an artificial request profile or an
> unusual application (like Google's spider).
>
> What I've seen in traces is that pipelining does have substantial
> benefits up until the point of reaching an application steady-state
> (for a browser, that means a web page with all inlined resource
> requests complete). If the client closes the connection at that
> point, instead of waiting for the next request to find out if the
> connection was kept open by the server, then it minimizes its use
> of the network. Likewise, pipelining has huge benefits for
> specialized HTTP services, such as Subversion (when it uses it)
> and content management systems.
> If the connections are used infrequently and are susceptible to
> timeouts occurring while the next request is in transit, then the
> data sent across the network far exceeds any average
> *congestion-control* benefit obtained by avoiding the separate
> connections. A new connection will be required anyway and the
> request message is sent twice (though whether that message hit
> the network or not seems to depend on the TCP implementation).
>
> That should be testable to find the actual trade-off point where
> a client should close the connection, but I have no experimental
> evidence and no time to set up an experiment. In any case,
> pipelining is still worthwhile for many reasons, particularly
> when we have control over both sides of the connection, so we need
> it defined in HTTP regardless of any single client's request profile.
>
> BTW, do you know of any good (experiment-quality) evaluations
> of HTTP over SCTP?
>
> ....Roy
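[Editor's note: the mechanism under discussion — writing several HTTP/1.1 requests on one persistent connection before reading any response — can be sketched as below. This is an illustrative toy, not any browser's or server's actual implementation; the in-process server is a hypothetical stand-in so the sketch is self-contained.]

```python
# Sketch of HTTP/1.1 pipelining on a persistent connection.
# Both requests are written before the first response is read; the
# server answers them in order on the same TCP connection.
import socket
import threading

def tiny_server(srv):
    """Minimal stand-in origin: answers each request with a short body."""
    conn, _ = srv.accept()
    buf, served = b"", 0
    while served < 2:
        buf += conn.recv(4096)
        # Each complete request header block ends with a blank line.
        while b"\r\n\r\n" in buf:
            _, buf = buf.split(b"\r\n\r\n", 1)
            body = b"ok%d" % served
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s"
                         % (len(body), body))
            served += 1
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=tiny_server, args=(srv,))
t.start()

cli = socket.create_connection(("127.0.0.1", port))
# Pipelining: two GETs go out back-to-back on one connection.
cli.sendall(b"GET /a HTTP/1.1\r\nHost: example\r\n\r\n"
            b"GET /b HTTP/1.1\r\nHost: example\r\n\r\n")
data = b""
while b"ok1" not in data:           # read until the second body arrives
    data += cli.recv(4096)
cli.close()
t.join()

responses = data.count(b"200 OK")
print(responses)  # → 2
```

The point Roy makes about the failure mode is visible here too: if the server had timed out and closed after the first response, the second request would have to be resent on a fresh connection.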
Received on Tuesday, 24 July 2007 00:57:08 UTC