- From: Roy T. Fielding <fielding@gbiv.com>
- Date: Tue, 5 Jul 2011 18:39:22 -0700
- To: Brian Pane <brianp@brianp.net>
- Cc: httpbis Group <ietf-http-wg@w3.org>
On Jul 5, 2011, at 5:30 PM, Brian Pane wrote:

> On Tue, Jul 5, 2011 at 12:43 PM, Roy T. Fielding <fielding@gbiv.com> wrote:
>> On Jul 5, 2011, at 12:25 PM, Brian Pane wrote:
>>> Speaking of the 101/Upgrade mechanism... it seems fundamentally
>>> incompatible with request pipelining.
>>
>> Generally speaking, it is unwise to pipeline the very first request
>> series with a new server (at least until you get the first response
>> code and headers back indicating that the connection is persistent).
>
> That's orthogonal to the pipelining-vs-Upgrade incompatibility,
> though. It doesn't matter whether the request in which the client
> sends an Upgrade header is the first request on a connection or the
> Nth; it's the presence of HTTP requests in the pipeline after that
> particular request that's the problem.

True, but the only case I cared to design for was the one where a new
(incompatible) version of HTTP is being introduced using existing http
identifiers. For that case, we always send Upgrade on the first request
and we always want the server to respond in one round trip. Keep in
mind that this was designed in 1995, when httpNG was still a goal.

> As a tangential issue, I'm curious: what's the rationale for
> recommending against pipelining the first series of requests on a new
> connection in the general case? When I look at it from a performance
> perspective, optimistically pipelining seems like a quite reasonable
> tradeoff. Consider the common case of a client that has just
> determined it needs to GET several small resources from a site to
> which it doesn't already have any connection:

The problem is that if the server does not want to support persistent
connections, then sending a lot more data than the server is willing
to consume is likely to result in a TCP RST, which may cancel the
buffer before the client has finished reading the first response.
Apache's lingering close is only going to help with that some of
the time.
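[The conservative rule above — wait for the first status line and headers
before pipelining — can be sketched as a small decision function. This is
an editor's illustration, not code from the thread; the function name and
the dict-of-headers interface are assumptions, and it deliberately treats
anything other than HTTP/1.1 as non-persistent.]

```python
def connection_is_persistent(status_line: str, headers: dict) -> bool:
    """Conservative check: given the first response's status line and
    headers, decide whether it is safe to start pipelining further
    requests on this connection.

    Hypothetical sketch of the rule discussed above; real clients also
    track per-server history and handle HTTP/1.0 keep-alive, which this
    conservatively ignores.
    """
    version = status_line.split(" ", 1)[0]   # e.g. "HTTP/1.1"
    if version != "HTTP/1.1":
        return False                         # assume non-persistent
    # An explicit "Connection: close" means the server will tear the
    # connection down after this response, so pipelined requests would
    # be lost (and may draw a TCP RST, as noted above).
    connection = headers.get("connection", "").lower()
    return "close" not in connection
```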
> In an aggressive implementation, the client opens the connection and
> immediately sends all the requests in a pipeline. In the best-case
> scenario, where the server allows the persistent connection, the total
> elapsed time is 2xRTT. In the worst-case scenario, where the server
> doesn't allow the persistent connection, the client must resubmit each
> of the lost requests on a separate connection; if it establishes those
> connections in parallel, the total elapsed time is 4xRTT.

The worst case is that both ends get confused and the entire sequence
has to be started over after a fairly long timeout.

> In a conservative implementation, the client opens the connection,
> sends the first request, and waits for the response before deciding
> whether to pipeline the rest. In the best-case scenario, the total
> elapsed time is 3xRTT. In the worst-case scenario, where the server
> doesn't allow the persistent connection, the total elapsed time is
> 4xRTT.

Yes, but the normal request profile is to request one page and then
parse it for a bunch of embedded requests (images, stylesheets, etc.).
In other words, when accessing a server for the first time, the client
is usually going to wait for the first response anyway. After that
first time, the client can remember how the server responded and make
a reasonable estimate of what it can do with pipelining for later
pages. Even if the resources are partitioned across multiple servers,
there is very little gained by pushing multiple requests down the pipe
right away because the connection is going to be stuck in slow start
anyway.

> Thus the aggressive implementation performs 1xRTT faster in the best
> case. Yes, it wastes upstream bandwidth in the worst case, because it
> ends up sending all but one of the requests twice.
> But for a typical web browser issuing a series of GETs, upstream
> bandwidth is an abundant resource and network round trips are
> expensive, so I'd rather spend bandwidth less efficiently in the
> worst case to save on round trips in the best case.

Reasonable people will disagree. Try convincing Jim Gettys that it
won't have an effect on network buffers. ;-)

....Roy
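[The round-trip arithmetic traded back and forth above can be written
down as a tiny model. This is an editor's sketch that simply encodes the
2x/3x/4xRTT figures stated in the thread; the function name and the
strategy labels are invented for illustration, and the worst case
assumes the retried requests go out on parallel connections, as Brian's
scenario does.]

```python
def total_rtts(strategy: str, server_persistent: bool) -> int:
    """Elapsed time, in round trips, to fetch several small resources
    over a brand-new connection, per the numbers in the thread."""
    if strategy == "aggressive":
        # 1 RTT for the TCP handshake + 1 RTT for the pipelined batch;
        # if the server closes the connection, + ~2 more RTTs to retry
        # the lost requests on parallel connections.
        return 2 if server_persistent else 4
    if strategy == "conservative":
        # 1 RTT handshake + 1 RTT for the probe request + 1 RTT for the
        # pipelined remainder; same 4xRTT worst case with retries.
        return 3 if server_persistent else 4
    raise ValueError(f"unknown strategy: {strategy}")
```

Roy's point is that the model understates the aggressive worst case: a
confused server can stall both ends until a long timeout, which no RTT
count captures.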
Received on Wednesday, 6 July 2011 01:39:47 UTC