
Re: Fwd: I-D Action:draft-nottingham-http-pipeline-00.txt

From: Willy Tarreau <w@1wt.eu>
Date: Tue, 10 Aug 2010 09:52:04 +0200
To: "Thomson, Martin" <Martin.Thomson@andrew.com>
Cc: Mark Nottingham <mnot@mnot.net>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20100810075204.GA25898@1wt.eu>
Hi Martin,

On Tue, Aug 10, 2010 at 02:00:34PM +0800, Thomson, Martin wrote:
> Hi Willy,
> >      HTTP/1.1 200 OK
> >      Request-Id: 3
> >      Connection: Request-Id
> >      Content-length: 40
> > 
> > Any opinion ?
> There's a big difference between what you are proposing and what I understand Mark is trying to achieve.
> Anything that requires action from any entity is inherently not going to be necessary if all you are looking to do is make pipelining work.  After all, if the implementer has the sophistication to deploy these sorts of mechanisms, then they can equally be told to stop mucking with pipelining requests.  End of story; no new protocol mechanisms required.

I agree that in theory this should be enough. But if Mark posted this
proposal, it's precisely because we have several difficulties in the
real world.

> In part, this draft looks at ways for a server and client to collude in detecting bad intermediaries.  There's not a lot that a document like this can do for bad servers aside from documenting some detection techniques - there certainly aren't any protocol mechanisms that you could deploy.

I agree with your point concerning bad servers. But my main point is
to detect bad intermediaries without too high a cost for servers.

> My first reaction was to use a request identifier, but then that messes with caches that don't understand the new header.  Going hop-by-hop, as you suggest runs afoul of the principle above.

In fact, in my opinion, going hop-by-hop would be the right solution.
With a response header from the server, we'd have to wait for servers
to adopt it, and given the cost for some servers, we won't see that
happen soon.

With just a per-connection request counter, we can have that progressively
deployed where pipelining matters the most. Right now, as Mark said,
browsers disable it by default. The request identifier would allow a
smooth deployment, which would work like this:

  1) browsers keep pipelining disabled by default

  2) browsers can start emitting the request identifier, and agree to
     automatically switch pipelining ON for the origin server (in case
     of direct access), or for the proxy (when passing through one).
     That means that at first, nothing will change; we'll keep the
     current default setting, which is disabled.

  3) proxies that support pipelining will be able to quickly adopt the
     principle and start responding with a request ID in responses, thus
     immediately allowing their local users to pipeline requests. This
     will even be a valuable add-on for proxy vendors, so it should be
     quickly adopted.

  4) proxies that are modified to respond to clients should also start
     emitting request IDs in the requests they send to servers.

  5) at this point, browsers and proxies are able to cooperate and will
     speed up adoption, as happened with the messy Proxy-Connection header.

  6) servers will get many requests with an identifier, either from the
     wide base of clients or from some progressively upgraded proxies.
     Servers will then be encouraged to put that ID in the response.
     Since it's cheap to do, it should not take much time to see this
     happen.
  7) bad intermediaries will sometimes be identified (same goal as with
     the Assoc-Req header). Those will have to be fixed, either by
     disabling pipelining (a config option) or by making it work (which
     takes longer, but given the support around the component, there
     will be more incentive to do so).
> Personally, I wonder whether the judicious application of Content-Location and Content-MD5 would not be enough.  It's close to what is suggested, but less definitive. 

I'm not sure it would be an efficient alternative, nor that such
controls would be correctly performed. Also, the pipelining issue really
is a hop-by-hop issue and not an end-to-end one. So in my opinion, by
trying to get it fixed end-to-end, we won't succeed, because no actor in
the chain has any motivation to try to improve the situation when all
the other actors around it are known to be bad.
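For comparison, the Content-MD5 check you suggest would amount to
something like the following: per RFC 2616, the header carries the
base64-encoded MD5 digest of the entity body. The function name is mine:

```python
import base64
import hashlib

def matches_content_md5(body: bytes, header_value: str) -> bool:
    """Check a response body against its Content-MD5 header
    (base64 of the MD5 digest, per RFC 2616 section 14.15)."""
    digest = base64.b64encode(hashlib.md5(body).digest()).decode("ascii")
    return digest == header_value.strip()
```

Note that this only tells the client that some body was corrupted or
swapped end-to-end; it says nothing about which hop misordered the
responses, which is exactly why I think a hop-by-hop mechanism fits the
problem better.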

That's the current situation BTW: browsers support pipelining but
disable it, so intermediaries have no reason to work hard on making it
work for a small minority of users, considering that they won't even
know whether servers will correctly process it.

> If you want to talk about out of order responses, then we're in a whole 'nother ballpark.  That's where discussion of SCTP, SPDY, HTTP/2.0, et. al. come in.

Yes, you're right, this is something different.

Received on Tuesday, 10 August 2010 07:52:38 UTC
