- From: Brian Pane <brianp@brianp.net>
- Date: Wed, 4 May 2011 19:04:57 -0700
- To: HTTP Working Group <ietf-http-wg@w3.org>
On Wed, Apr 27, 2011 at 12:13 AM, Mark Nottingham <mnot@mnot.net> wrote:
>
> On 27/04/2011, at 10:43 AM, Brian Pane wrote:
>
>> On Mon, Apr 25, 2011 at 11:56 PM, Mark Nottingham <mnot@mnot.net> wrote:
>> [...]
>>> A fair amount of time has passed since the first version (or even most
>>> recent version!) of the draft, and in my conversations with vendors --
>>> especially Moz's Patrick McManus -- I've come to realise that the draft
>>> is probably too conservative. I.e., There's a desire to have pipelining on
>>> by default, without any opt-in or special mechanisms from the server,
>>> using heuristics to back off if a problem is encountered.
>>
>> Does this also imply the use of heuristics up-front to decide whether a
>> given request is a suitable candidate for pipelining? E.g., I can imagine
>> a client implementation doing something like this: "if method is GET and
>> request-URI doesn't contain a query string and the request was not
>> issued via JavaScript then assume it's safe to pipeline." If so, I also
>> anticipate that web app developers will start designing toward the
>> browsers' heuristics.
>
> Well, nothing prohibits a browser from doing that, but the heuristics
> that I'm seeing are very carefully watching for errors and slowdowns,
> and adjusting appropriately.

What sort of adjustment semantics are you seeing? If a client issues
pipelined requests R1 through Rn and then detects a problem with R1
(slow or malformed response), I'm assuming it will just continue
awaiting responses for R2 through Rn; the client can't reissue those
requests on different connections because it can't tell whether the
server has begun processing them, and it doesn't know whether the
processing is idempotent.

Thanks,
-Brian
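P.S. A minimal sketch, in Python, of the kind of up-front heuristic I
described above. The Request type and its field names are hypothetical,
for illustration only; a real client would presumably combine a check
like this with per-server error and latency tracking so it can back off
when pipelining misbehaves:

```python
# Hypothetical sketch of an up-front pipeline-safety heuristic:
# treat a request as a pipelining candidate only if it is a GET,
# its request-URI carries no query string, and it was not issued
# via JavaScript.

from dataclasses import dataclass
from urllib.parse import urlsplit

@dataclass
class Request:            # hypothetical type, not from any real browser
    method: str
    uri: str
    initiated_by_script: bool  # e.g. issued via XMLHttpRequest

def is_pipeline_candidate(req: Request) -> bool:
    """Return True if the request looks safe to pipeline."""
    if req.method != "GET":
        return False
    if urlsplit(req.uri).query:  # query string suggests dynamic content
        return False
    if req.initiated_by_script:
        return False
    return True

# Static resources pass; query-string or script-driven requests do not.
assert is_pipeline_candidate(
    Request("GET", "http://example.com/style.css", False))
assert not is_pipeline_candidate(
    Request("GET", "http://example.com/search?q=x", False))
```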
Received on Thursday, 5 May 2011 02:05:44 UTC