Re: Pipelining in HTTP 1.1

Preethi Natarajan (prenatar) wrote:
> > BEEP over TCP shows how to solve this, though the spec leaves 
> > you to figure out for yourself why.  BEEP itself is a 
> > protocol to multiplex logical streams over a single transport 
> > stream.  The BEEP over TCP spec adds a bit more syntax, so 
> > each logical stream advertises its own independent receive 
> > window.  This prevents a single slow process from stalling 
> > the streams of others.
> 
> I assume by "stall" you mean that the transport is ready to deliver data
> but the application process is busy doing something else.

Yes.

> If yes, wouldn't this problem go away when a single thread/process reads
> data from the socket and moves the data to appropriate response buffers
> (as done in Firefox) instead of multiple threads/processes, one per
> response, reading from the socket? 

Yes, but that requires unbounded buffering in the receiving
demultiplexer.  That's why I said the problem is _either_ stalling the
transport _or_ requiring unbounded buffering.

That's ok for applications like Firefox, where all the received data
is stored in the application anyway.  If the server sends too much in
one response, the application will complain regardless.

But imagine a proxy which is forwarding the individual responses to
different processes, each with independent behaviours.  (It's a
requirement that processes can consume independently - otherwise
what's the point of multiplexing different responses at all?)

One of the processes might be consuming a very large (even infinite)
stream, at its own rate.  If that process stops reading, the proxy
must buffer that large stream in order to continue forwarding
responses to the other processes.
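To make the dilemma concrete, here is a minimal sketch (invented
names, not any real proxy's code) of a demultiplexer that always keeps
reading from the shared transport and queues frames per response.  If
one consumer stops draining its queue, that queue grows without bound;
the only alternative is to stop reading, which stalls every other
response behind the slow one:

```python
# Hypothetical demultiplexer sketch.  Frames arrive in order on one
# shared transport stream; each carries a stream id and a payload.
from collections import defaultdict, deque

class Demux:
    def __init__(self):
        # stream_id -> queue of payloads awaiting that stream's consumer
        self.queues = defaultdict(deque)

    def on_frame(self, stream_id, payload):
        # Always buffer, so other streams are never blocked.  The cost:
        # nothing bounds this queue if the consumer never reads.
        self.queues[stream_id].append(payload)

    def buffered_bytes(self, stream_id):
        return sum(len(p) for p in self.queues[stream_id])

demux = Demux()
# Stream 1's consumer has stopped reading; frames keep arriving anyway.
for _ in range(1000):
    demux.on_frame(1, b"x" * 1024)   # 1 KiB per frame, never drained
    demux.on_frame(2, b"y" * 16)     # another stream proceeds normally

print(demux.buffered_bytes(1))  # 1024000 - and still growing
```

The only way out of this trade-off, as the text argues, is to give the
sender per-stream feedback so it stops sending on the stalled stream.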

I said it's ok for applications like Firefox.  But even Firefox may
have internal processes, with one of them consuming a large or
infinite stream of data in one response, while other internal
processes handle other responses.  This happens in certain AJAX models.

So you need to handle this even for things like Firefox.  And hence
you need to handle it for any extension to HTTP which multiplexes
different messages out of order.

The basic issue is that the rate at which some abstract process
consumes data must somehow be conveyed to the sender, to avoid either
unbounded intermediate buffering or head-of-line blocking.  Per-stream
flow control achieves this when there's more than one stream, although
it might not be the optimal solution.  With just one stream at a time,
the TCP window does this by itself - and ordinary HTTP implementations
rely on it.
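A per-stream flow-control scheme of the kind BEEP over TCP uses can be
sketched as a credit window per stream (this is only the idea, with
invented names - not the BEEP wire syntax): the receiver grants each
stream some credit, the sender may transmit only within its
outstanding credit, and credit is replenished as the consumer actually
drains data.  A stalled consumer then bounds its own buffering without
blocking the other streams:

```python
# Credit-based per-stream flow control, simplified.
class StreamWindow:
    def __init__(self, initial_credit):
        self.credit = initial_credit   # bytes the sender may still send

    def can_send(self, n):
        return n <= self.credit

    def on_send(self, n):
        assert self.can_send(n)
        self.credit -= n               # sender consumes credit

    def on_consumed(self, n):
        # Receiver's application drained n bytes; grant more credit.
        self.credit += n

slow = StreamWindow(4096)
fast = StreamWindow(4096)

# The sender fills the slow stream's window; its consumer never drains.
while slow.can_send(1024):
    slow.on_send(1024)
print(slow.can_send(1))    # False: this stream alone is blocked...

# ...while the fast stream, whose consumer keeps draining, is unaffected.
fast.on_send(1024)
fast.on_consumed(1024)
print(fast.can_send(4096))  # True
```

Buffering on the receiver is now bounded by the advertised window, at
the cost of the window-update traffic flowing back to the sender.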

> Still, I think that the BEEP approach would suffer from head-of-line
> blocking during packet losses, which is what SCTP streams solve. In
> BEEP, the logical streams are multiplexed over a single TCP bytestream
> -- if a TCP PDU is lost, successive TCP PDUs (belonging to different
> responses) will not be delivered to BEEP/app. 

Yes.  SCTP is superior in this respect.  But inferior in the sense
that I've never seen a NAT or firewall which will forward SCTP :-)
You'll need SCTP-over-UDP to get anywhere with that nowadays.

BEEP also suffers from excessive round trips to set up new streams.

There's a BEEP extension proposal to support lightweight substreams
within streams to get around this, but that proposal doesn't let you
arbitrarily interleave messages from different substreams in the same
stream, making it somewhat pointless for what we're talking about.

-- Jamie

Received on Saturday, 4 April 2009 02:58:56 UTC