
Re: Pipeline hinting revisited

From: Willy Tarreau <w@1wt.eu>
Date: Sat, 13 Aug 2011 01:18:33 +0200
To: "Roy T. Fielding" <fielding@gbiv.com>
Cc: Brian Pane <brianp@brianp.net>, ietf-http-wg@w3.org
Message-ID: <20110812231833.GG12235@1wt.eu>

Hi Roy,

On Fri, Aug 12, 2011 at 02:44:53PM -0700, Roy T. Fielding wrote:
> Pipeline failures are almost never due to HTTP issues and cannot
> be hinted at within HTTP.  They are due to 
> 
>   a) non-HTTP intermediaries interfering with communication, and
>   b) specific resources that hog or close the connection

I don't fully agree with you on this point. I have myself written buggy
HTTP intermediaries even while trying hard to get pipelining right.
Silly things such as buffer wrap-arounds are not always easy to handle,
especially when the buffer is full. I also remember causing a freeze by
waiting for a response buffer to be released before processing a request.
My point is that even when you genuinely want your HTTP intermediary to
behave correctly, it is easy to introduce bugs that make it mishandle
pipelining.
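To illustrate the kind of silly bug I mean, here is a hypothetical minimal
ring buffer of the sort an intermediary might use: with only head and tail
indices, head == tail is ambiguous (it can mean either "empty" or "full"),
which is exactly the full-buffer case that is easy to get wrong. Tracking
an explicit count (as below) or always leaving one slot free resolves it.
The structure and names here are illustrative, not from any real proxy:

```c
#include <stddef.h>

#define RB_SIZE 4

struct ring {
    char   data[RB_SIZE];
    size_t head;   /* next slot to write */
    size_t tail;   /* next slot to read  */
    size_t count;  /* disambiguates head == tail (0 = empty, RB_SIZE = full) */
};

static int rb_push(struct ring *rb, char c)
{
    if (rb->count == RB_SIZE)
        return -1;                       /* full: head == tail here too */
    rb->data[rb->head] = c;
    rb->head = (rb->head + 1) % RB_SIZE; /* index wraps around the end */
    rb->count++;
    return 0;
}

static int rb_pop(struct ring *rb, char *c)
{
    if (rb->count == 0)
        return -1;                       /* empty: head == tail as well */
    *c = rb->data[rb->tail];
    rb->tail = (rb->tail + 1) % RB_SIZE;
    rb->count--;
    return 0;
}
```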

> Hinting in HTTP (aside from "Connection: close") will not help
> either one.  Hinting within the mark-up and/or links might help (b).
> Using a session-level MUX below HTTP would work, of course, but
> can't actually be deployed because of (a).
> 
> In short, I think it is time to stop trying to make HTTP fix all
> the things wrong with TCP.  It is just as hard to deploy HTTP
> changes now as it was to deploy TCP changes in 1994.

Speaking of TCP, another TCP-specific issue with pipelining is that
some TCP stacks send an RST and flush any unsent data when data is
received after one side has closed. Pipelining makes this case common
whenever the server has to close the connection for whatever reason,
resulting in blank pages or truncated objects on the client *before*
the aborted requests, while the server believes it has sent everything
(and indeed handed it all to the kernel). This issue is quite hard to
fix at a reasonable cost, and from what I've seen, only Apache and, to
some extent, Squid get it right.
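The usual mitigation, in the spirit of the "lingering close" that Apache
performs, is to half-close the write side and drain whatever the client
still has in flight for a short grace period before really closing, so
that late pipelined requests don't trigger an RST that destroys the
queued tail of the response. A rough sketch, where the function name and
the two-second grace period are my own illustrative choices:

```c
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical sketch of a lingering close. Instead of calling close()
 * while the client may still be sending pipelined requests (which can
 * make some TCP stacks emit an RST and discard unsent response data),
 * we half-close the write side and discard incoming data until the peer
 * closes or a grace period expires. */
static void lingering_close(int fd)
{
    char buf[1024];
    fd_set rfds;
    struct timeval tv;

    /* Tell the peer we are done writing; response data already queued
     * in the kernel is still delivered normally. */
    shutdown(fd, SHUT_WR);

    for (;;) {
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        tv.tv_sec  = 2;        /* arbitrary grace period */
        tv.tv_usec = 0;

        if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
            break;             /* timeout or error: give up */

        /* Discard pipelined requests the client had in flight;
         * a read of 0 means the peer closed its side too. */
        if (read(fd, buf, sizeof(buf)) <= 0)
            break;
    }
    close(fd);
}
```

The cost Apache pays here is holding the descriptor (and a process or
thread, depending on the MPM) for up to the grace period, which is why
doing this cheaply at high connection counts is the hard part.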

> What might
> work, without protocol changes, is to use a host or pathname
> convention that says all of the resources on *this* site are
> good for pipelining.  Ugly as heck, but it would work well enough
> for the use cases where pipelining helps the most (embedded stuff).

Well, there are sane situations (mainly between clients and proxies)
where pipelining can be reliable and efficient for any site. I'm
convinced we can find a way to make it work reliably, even with
intercepting proxies.

> Or simply convince the browser developers to implement visible
> (transient and non-interrupting) error messages when pipelining fails.

Users are always afraid of errors. Turning the issue around might work,
though: for instance, showing a bonus for sites where pipelining worked
(e.g. an estimate of how much time the browser's ability to boost the
connection saved).

Regards,
Willy
Received on Friday, 12 August 2011 23:19:06 GMT