- From: Brian Pane <brianp@brianp.net>
- Date: Tue, 5 Jul 2011 17:30:18 -0700
- To: httpbis Group <ietf-http-wg@w3.org>
On Tue, Jul 5, 2011 at 12:43 PM, Roy T. Fielding <fielding@gbiv.com> wrote:

> On Jul 5, 2011, at 12:25 PM, Brian Pane wrote:
>> Speaking of the 101/Upgrade mechanism... it seems fundamentally
>> incompatible with request pipelining.
>
> Generally speaking, it is unwise to pipeline the very first request
> series with a new server (at least until you get the first response
> code and headers back indicating that the connection is persistent).

That's orthogonal to the pipelining-vs-Upgrade incompatibility, though.
It doesn't matter whether the request in which the client sends an
Upgrade header is the first request on a connection or the Nth; it's
the presence of HTTP requests in the pipeline after that particular
request that's the problem.

As a tangential issue, I'm curious: what's the rationale for
recommending against pipelining the first series of requests on a new
connection in the general case? From a performance perspective,
optimistic pipelining seems like quite a reasonable tradeoff.

Consider the common case of a client that has just determined it needs
to GET several small resources from a site to which it doesn't already
have any connection:

In an aggressive implementation, the client opens the connection and
immediately sends all the requests in a pipeline. In the best-case
scenario, where the server allows the persistent connection, the total
elapsed time is 2xRTT. In the worst-case scenario, where the server
doesn't allow the persistent connection, the client must resubmit each
of the lost requests on a separate connection; if it establishes those
connections in parallel, the total elapsed time is 4xRTT.

In a conservative implementation, the client opens the connection,
sends the first request, and waits for the response before deciding
whether to pipeline the rest. In the best-case scenario, the total
elapsed time is 3xRTT. In the worst-case scenario, where the server
doesn't allow the persistent connection, the total elapsed time is
4xRTT.

Thus the aggressive implementation is 1xRTT faster in the best case.
Yes, it wastes upstream bandwidth in the worst case, because it ends up
sending all but one of the requests twice. But for a typical web
browser issuing a series of GETs, upstream bandwidth is an abundant
resource and network round trips are expensive, so I'd rather spend
bandwidth less efficiently in the worst case to save on round trips in
the best case.

-Brian
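[A minimal sketch of the round-trip accounting above, in Python. The RTT
totals are the ones given in the message; the function names and the
simplified cost model (one RTT for the TCP handshake, one RTT per
request/response exchange, and parallel reconnects counted as one extra
handshake plus one extra exchange) are illustrative assumptions rather
than anything specified by HTTP.]

    # Assumed cost model: 1 RTT for the TCP handshake, 1 RTT per
    # request/response exchange, and parallel reconnects counted as one
    # extra handshake plus one extra exchange.

    def aggressive_rtts(server_allows_persistent: bool) -> int:
        """Client pipelines every request immediately after connecting."""
        rtts = 1       # TCP handshake
        rtts += 1      # pipelined requests go out, responses come back
        if not server_allows_persistent:
            # Connection dropped after the first response: reopen
            # connections in parallel and resend the lost requests.
            rtts += 1  # parallel handshakes
            rtts += 1  # resent requests and their responses
        return rtts

    def conservative_rtts(server_allows_persistent: bool) -> int:
        """Client sends one request, waits, then pipelines the rest."""
        rtts = 1       # TCP handshake
        rtts += 1      # first request/response reveals persistence
        if server_allows_persistent:
            rtts += 1  # pipeline the rest on the same connection
        else:
            rtts += 1  # parallel handshakes
            rtts += 1  # remaining requests and their responses
        return rtts

    for persistent in (True, False):
        label = "persistent" if persistent else "non-persistent"
        print(f"{label}: aggressive={aggressive_rtts(persistent)}xRTT, "
              f"conservative={conservative_rtts(persistent)}xRTT")
    # persistent: aggressive=2xRTT, conservative=3xRTT
    # non-persistent: aggressive=4xRTT, conservative=4xRTT

[Under these assumptions the numbers match the comparison in the message:
the aggressive client wins by one round trip when the server keeps the
connection open, and ties, while resending most requests, when it does
not.]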
Received on Wednesday, 6 July 2011 00:31:05 UTC