- From: Brian Pane <brianp@brianp.net>
- Date: Fri, 12 Aug 2011 16:15:39 -0700
- To: ietf-http-wg@w3.org
On Fri, Aug 12, 2011 at 2:44 PM, Roy T. Fielding <fielding@gbiv.com> wrote:
> Pipeline failures are almost never due to HTTP issues and cannot
> be hinted at within HTTP. They are due to
>
> a) non-HTTP intermediaries interfering with communication, and
> b) specific resources that hog or close the connection
>
> Hinting in HTTP (aside from "Connection: close") will not help
> either one. Hinting within the mark-up and/or links might help (b).
Agreed so far; I'm thinking of out-of-band hinting, specifically
hinting within markup.
> Using a session-level MUX below HTTP would work, of course, but
> can't actually be deployed because of (a).
You lost me on this part, though. All that's needed for session-level
multiplexing is a stream between the client and the origin server (or
some intermediary that terminates the session) within which a framing
protocol can run. In an environment where one can't create that
stream, CONNECT won't work either.
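(To make the "framing protocol within a stream" point concrete, here is a rough
sketch; the frame layout is invented purely for illustration, not a proposal.
Each HTTP message travels in a frame tagged with a stream ID, so many exchanges
can interleave on a single TCP connection:

  +-----------+----------------+------------------------------+
  | stream ID | payload length | HTTP request/response bytes  |
  +-----------+----------------+------------------------------+

Each endpoint just reads frames off the stream and reassembles the bytes for
each stream ID independently.)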
> In short, I think it is time to stop trying to make HTTP fix all
> the things wrong with TCP.
If I'm interpreting this right, it sounds like you're arguing in favor
of either a lighter transport layer or a session multiplexing layer
between TCP and HTTP, so that a client can issue an arbitrarily large
number of concurrent requests without the inefficiencies created by a
large number of TCP connections. I'm of two minds. On one hand,
lightweight multiplexing somewhere underneath layer 7 does indeed
solve the problem in a very general manner, and without the
head-of-line blocking that plagues pipelined designs. On the
other hand, getting the world to support HTTP over
SCTP or SPDY could take a long time. My conjecture is that a
backward-compatible change to HTTP could achieve a much faster rollout
than a relayering of HTTP on top of a new session or transport
protocol.
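(To illustrate the head-of-line-blocking point, with made-up resources and
headers omitted: HTTP/1.1 pipelining requires responses in request order, so
one slow resource stalls everything queued behind it.

  GET /slow-report.cgi HTTP/1.1    <- takes seconds to generate
  GET /images/a.png HTTP/1.1       <- ready immediately, but its response
  GET /images/b.png HTTP/1.1          can't be sent until the first one
                                      completes

A mux layer below HTTP avoids this by letting the server send whichever
response is ready first.)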
> It is just as hard to deploy HTTP
> changes now as it was to deploy TCP changes in 1994. What might
> work, without protocol changes, is to use a host or pathname
> convention that says all of the resources on *this* site are
> good for pipelining. Ugly as heck, but it would work well enough
> for the use cases where pipelining helps the most (embedded stuff).
My preference would be a mechanism within HTML and CSS to specify URI
prefixes that are good for pipelining. E.g.,
<link rel="quick" href="http://www.example.com/images/*">
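Given that <link> element, a client could pipeline requests for URIs that
match the declared prefix and fetch everything else the old way; for example
(resource names made up for illustration):

  <img src="http://www.example.com/images/logo.png">    <- matches the prefix;
  <img src="http://www.example.com/images/photo1.jpg">     safe to pipeline
  <script src="http://ads.example.net/tracker.js">      <- no hint; use a
                                                            separate connection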
But there's still the problem of broken intermediaries.
> Or simply convince the browser developers to implement visible
> (transient and non-interrupting) error messages when pipelining fails.
I'm not sure this will actually work, given Darin's observation
earlier in this thread:
> Apache 2.0 and the latest IIS seemed to work great,
> except sometimes. Sometimes there was a mysterious
> failure when communicating with one of the "good" servers.
> It seemed that some intermediary must be to blame. The
> failure modes were fascinating too. Sometimes the response
> would be nothing, sometimes it would be garbage bytes, and
> other times the server would simply never reply (timeout).
If the "garbage bytes" scenario involves a syntactically valid HTTP
response header followed by a corrupted response body, the client may
have no reliable way to detect the problem. That's the rationale behind my
proposal of a new method name: to trigger an obvious error response
from incompatible intermediaries.
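(For concreteness, the kind of exchange I have in mind, with a placeholder
method name rather than the one from the proposal: an intermediary that
doesn't understand the method should fail loudly with something like a 501,
instead of silently corrupting a pipelined response.

  NEWMETHOD /images/logo.png HTTP/1.1    <- placeholder method name
  Host: www.example.com

  HTTP/1.1 501 Not Implemented           <- an incompatible intermediary fails
                                            visibly, so the client knows to
                                            fall back to unpipelined requests)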
-Brian
Received on Friday, 12 August 2011 23:16:07 UTC