
Re: Pipelining in HTTP 1.1

From: Salvatore Loreto <salvatore.loreto@ericsson.com>
Date: Sun, 05 Apr 2009 12:05:49 +0300
Message-ID: <49D8746D.6090607@ericsson.com>
To: Jamie Lokier <jamie@shareable.org>
CC: "Preethi Natarajan (prenatar)" <prenatar@cisco.com>, "Roy T. Fielding" <fielding@gbiv.com>, ietf-http-wg@w3.org, Jonathan Leighton <leighton@cis.udel.edu>, "Paul D. Amer" <amer@cis.udel.edu>, "Fred Baker (fred)" <fred@cisco.com>
Jamie Lokier wrote:
> Preethi Natarajan (prenatar) wrote:
>   
>>> BEEP over TCP shows how to solve this, though the spec leaves 
>>> you to figure out for yourself why.  BEEP itself is a 
>>> protocol to multiplex logical streams over a single transport 
>>> stream.  The BEEP over TCP spec adds a bit more syntax, so 
>>> each logical stream advertises its own independent receive 
>>> window.  This prevents a single slow process from stalling 
>>> the streams of others.
>>>       
>> I assume by "stall" you mean that the transport is ready to deliver data
>> but the application process is busy doing something else.
>>     
>
> Yes.
>
>   
>> If yes, wouldn't this problem go away when a single thread/process reads
>> data from the socket and moves the data to appropriate response buffers
>> (as done in Firefox) instead of multiple threads/processes, one per
>> response, reading from the socket? 
>>     
>
> Yes but that requires unbounded buffering in the receiving
> demultiplexer.  That's why the problem is _either_ stalling the
> transport or unbounded buffering required.
>
> That's ok for applications like Firefox, where all the received data
> is stored in the application anyway.  If the server sends too much in
> one response, the application will complain anyway.
>
> But imagine a proxy which is forwarding the individual responses to
> different processes each with independent behaviours.  (It's a
> requirement that processes can consume independently - otherwise
> what's the point in multiplexing different responses anyway?)
>
> One of the processes might be consuming a very large stream (even
> infinite), at its own rate.  If that process stops reading, the proxy
> must buffer that large stream to continue forwarding responses to the
> other processes.
>
> I said it's ok for applications like Firefox.  But even Firefox may
> have internal processes, with one of them consuming a large or
> infinite stream of data in one response, while other internal
> processes handle other responses.  This happens in certain AJAX models.
>
> So you need to handle this even for things like Firefox.  And hence,
> you need to handle this for an extension to HTTP which multiplexes
> different messages out of order.
>
> The basic issue is that the rate at which some abstract process
> consumes data must be conveyed to the sender somehow to avoid
> unbounded intermediate buffering or head-of-line blocking.  Per-stream flow
> control when there's more than one stream achieves this, although it
> might not be the optimal solution.  With just one stream at a time,
> the TCP window does this by itself - and ordinary HTTP implementations
> rely on this.
>   
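Jamie's per-stream window argument can be illustrated with a minimal sketch (my own toy model, not BEEP or SCTP itself): the demultiplexer grants each logical stream its own receive credit, so a slow consumer only stalls its own stream, and the buffering needed per stream is bounded by the window it advertised.

```python
class Stream:
    def __init__(self, window):
        self.credit = window       # bytes the peer may still send us
        self.buffer = bytearray()  # bounded by the advertised window

class Demux:
    def __init__(self, window=4096):
        self.streams = {}
        self.window = window

    def open(self, sid):
        self.streams[sid] = Stream(self.window)

    def on_frame(self, sid, payload):
        """Called when a frame arrives on logical stream `sid`."""
        s = self.streams[sid]
        if len(payload) > s.credit:
            raise RuntimeError("peer overran the advertised window")
        s.credit -= len(payload)
        s.buffer.extend(payload)

    def consume(self, sid, n):
        """The application reads up to n bytes; returns (data, update),
        where `update` is the window refresh to send back to the peer."""
        s = self.streams[sid]
        data = bytes(s.buffer[:n])
        del s.buffer[:n]
        s.credit += len(data)      # re-open the window
        return data, len(data)
```

If the consumer of stream 1 stops calling `consume`, only stream 1's credit runs out; other streams keep flowing, and no unbounded buffering is required anywhere.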
First, I think the major benefit of using SCTP instead of TCP lies on the
proxy-to-proxy or proxy-to-server side, where a proxy can use separate
streams to handle requests coming from different clients.
Suppose proxy X receives requests from N clients that all want to contact
server Y: instead of opening N TCP connections, proxy X opens a single
SCTP association with N streams, and uses one dedicated stream to carry,
in order, the request/response exchanges of each specific client.
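The client-to-stream mapping could look roughly like this (a toy model of my own, not a real SCTP binding; the stream-id assignment and the `ProxyMux` name are assumptions for illustration):

```python
from collections import defaultdict
import itertools

class ProxyMux:
    """Toy model of proxy X multiplexing N clients onto one SCTP
    association to server Y: each client gets a dedicated stream id,
    so ordering is preserved per client, while a loss on one stream
    need not delay delivery on the others."""

    def __init__(self):
        self._next_sid = itertools.count()
        self._sid_of = {}              # client id -> stream id
        self.sent = defaultdict(list)  # stream id -> requests, in order

    def stream_for(self, client_id):
        if client_id not in self._sid_of:
            self._sid_of[client_id] = next(self._next_sid)
        return self._sid_of[client_id]

    def forward(self, client_id, request):
        sid = self.stream_for(client_id)
        # A real implementation would send this with something like
        # sctp_sendmsg() using sinfo_stream = sid; here we only
        # record the per-stream ordering.
        self.sent[sid].append(request)
        return sid
```

Each client's requests stay ordered on its own stream, which is exactly the in-order-per-client guarantee described above.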

I am still not convinced about using SCTP on the client-to-proxy or
client-to-server side.

about the "stall" problem Jamie is raising, I agree that the fact that 
SCTP use a single receive buffer
per Association is a problem, one possibility I can immagine in SCTP is 
to use the
"Ancillary Data" to communicate to the Sending side to slow done or even 
stop sending data on a specific stream.
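That idea could be sketched roughly as follows (a toy model only; SCTP today has no such per-stream flow-control message, so the PAUSE/RESUME verbs and the `Sender` shape are assumptions):

```python
class Sender:
    """Toy sender that honours per-stream pause/resume signals from
    the receiver, so one slow stream can be throttled without
    stalling the whole association."""

    def __init__(self):
        self.paused = set()
        self.queued = {}   # stream id -> payloads held back locally

    def on_control(self, verb, sid):
        # A control message travelling back from the receiver,
        # e.g. carried as ancillary data alongside normal traffic.
        if verb == "PAUSE":
            self.paused.add(sid)
        elif verb == "RESUME":
            self.paused.discard(sid)
            return self.queued.pop(sid, [])  # now allowed onto the wire
        return []

    def send(self, sid, payload):
        if sid in self.paused:
            self.queued.setdefault(sid, []).append(payload)
            return False   # held back locally, not on the wire
        return True        # would be transmitted
```

The point is that backpressure applies per stream: pausing stream 3 leaves traffic on every other stream untouched, which is what the single association-wide receive buffer cannot express on its own.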

>   
>> Still, I think that the BEEP approach would suffer from head-of-line
>> blocking during packet losses, which is what SCTP streams solve. In
>> BEEP, the logical streams are multiplexed over a single TCP bytestream
>> -- if a TCP PDU is lost, successive TCP PDUs (belonging to different
>> responses) will not be delivered to BEEP/app. 
>>     
>
> Yes.  SCTP is superior in this respect.  But inferior in the sense
> that I've never seen a NAT or firewall which will forward SCTP :-)
> You'll need SCTP-over-UDP to get anywhere with that nowadays.
>   
Yes, I agree that SCTP does not traverse NATs today,
but hopefully the situation will improve in the near future.

> BEEP also suffers from excessive round trips to set up new streams.
>
> There's a BEEP extension proposal to support lightweight substreams
> within streams to get around this, but that proposal suffers from not
> letting you arbitrarily interleave messages of substreams in the same
> stream, making it somewhat pointless for what we're talking about.
>
> -- Jamie
>
>   
Received on Sunday, 5 April 2009 09:06:30 GMT
