
Re: I-D Action:draft-nottingham-http-pipeline-00.txt

From: Willy Tarreau <w@1wt.eu>
Date: Wed, 11 Aug 2010 10:53:59 +0200
To: Mark Nottingham <mnot@mnot.net>
Cc: Adrien de Croy <adrien@qbik.com>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20100811085359.GA659@1wt.eu>
On Wed, Aug 11, 2010 at 05:29:32PM +1000, Mark Nottingham wrote:
> It's interesting, but it would require browsers to spew Req-MD5 headers into requests unconditionally... something that I can't imagine they're likely to want to do (especially at first, when adoption on the server side is low).

Well, it's cheap and scalable. Also, it's hard to imagine that browsers
would want pipelining to work yet do nothing to help it along. The
"Connection: pipeline" method suggested by Martin looks perfect to me
in this regard, but it does not address your goal, which is to detect
bad intermediaries without having to upgrade them.
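As I understand Martin's proposal, the trick is that "Connection" is a hop-by-hop header: a conforming intermediary must strip it, so a "pipeline" token can only reach the client if every hop understood and re-asserted it. A rough sketch of the client-side check (the function and header parsing here are purely illustrative, not from the draft):

```python
def supports_pipelining(response_headers: dict) -> bool:
    """Hypothetical check: did a 'pipeline' connection option survive
    the whole chain of intermediaries? (Illustrative sketch only.)"""
    # "Connection" is hop-by-hop: a conforming intermediary strips it,
    # so the token only reaches us if every hop chose to re-assert it.
    tokens = response_headers.get("Connection", "")
    return "pipeline" in (t.strip().lower() for t in tokens.split(","))

# A response whose whole path was pipeline-aware:
assert supports_pipelining({"Connection": "keep-alive, pipeline"})
# One that crossed at least one non-upgraded hop:
assert not supports_pipelining({"Connection": "keep-alive"})
```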

> OTOH just doing the MD5 in the response is potentially interesting,

If the server computes the MD5 itself, we're back to the problem I raised
about reverse proxies having to do the job themselves whenever they rewrite
requests. This will remain broken for a very long time, until those
reverse proxies know how to write the header. The question of which
component has to compute it, depending on the request path, also remains.
In practice, a browser may receive 4-5 different values for the same
response header because multiple pipeline-capable reverse proxies will
each have added theirs.

That's why I really think that having the server echo back the information
it received is nice. That way, even with multiple actors involved, we can
ensure the reported values are the correct ones.
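A minimal sketch of that echo-back comparison (what exactly gets hashed, and the helper below, are my assumptions for illustration; the draft's actual Req-MD5 definition may differ):

```python
import hashlib

def request_digest(method: str, target: str) -> str:
    """MD5 over the request line the sender believes was emitted
    (what exactly gets hashed is an assumption for illustration)."""
    return hashlib.md5(f"{method} {target}".encode("ascii")).hexdigest()

# Client side: digest of the request as originally sent.
sent = request_digest("GET", "/index.html")

# Server side: digest of the request as actually received, echoed back
# in a response header (in the spirit of the draft's Req-MD5).
received = request_digest("GET", "/index.html")

# If an intermediary rewrote the request in flight, the digests differ
# and the client can stop pipelining on this connection.
assert sent == received  # no rewrite happened here
```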

That said, for the long run, my preference still goes to the
"Connection: pipeline" method, which would take longer to deploy but
is more in line with the real needs :-)

> although having the URI available in the response may be useful for other purposes (e.g., verifying that we're using the right URL for a base URI, etc.). As Martin pointed out, Content-Location could serve that purpose as well.

Yes, though using it to detect a perfect match is still quite hard due to
possible rewrites along the way (e.g. URL prefixing, double-slash
normalization, the '+' that becomes '%20', etc.).
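For instance (a hypothetical sketch, not from the draft), all of these variants can name the same resource to a server, yet a byte-for-byte comparison against Content-Location fails on every one of them:

```python
from urllib.parse import unquote_plus

original = "/search?q=a+b"      # '+' as sent by the client
rewritten = "/search?q=a%20b"   # an intermediary re-encoded the space
prefixed = "/app/search?q=a+b"  # a reverse proxy prepended a path prefix
doubled = "//search?q=a+b"      # a double slash introduced by rewriting

# All four are byte-different, so a strict match fails...
assert len({original, rewritten, prefixed, doubled}) == 4
# ...even though decoding shows the first two carry the same query value.
assert unquote_plus("a+b") == unquote_plus("a%20b") == "a b"
```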

Regards,
Willy
Received on Wednesday, 11 August 2010 08:54:36 GMT
