Re: Question on flow control for a single file transfer

Amos,

I agree with what you said, but again only when there is more than one
active stream. HTTP/2 flow control is harmless at best when there is only
one active stream.

But you don't have to believe me. Just set up a test with a browser that
does flow control, add a few % loss and 200 ms of latency, and see whether
you are able to download large files faster with flow control on or off.
The flow-control-off case should never lose, assuming the loss/latency are
consistent and your test runs long enough.
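
For concreteness, here's roughly how I'd set up the impaired link on a
Linux test box (a sketch only; the interface name, URL, and the use of
tc/netem are my assumptions, so adjust for your environment):

    #!/usr/bin/env python3
    # Impair the link with tc/netem, then time a large download.
    # Assumes Linux, root privileges, and an interface named "eth0";
    # the URL is a placeholder.
    import subprocess, time

    IFACE = "eth0"
    URL = "http://example.test/largefile.bin"

    def impair(loss_pct, delay_ms):
        # Replace the root qdisc with a netem one adding loss and delay.
        subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root",
                        "netem", "loss", "%d%%" % loss_pct,
                        "delay", "%dms" % delay_ms], check=True)

    def timed_download():
        start = time.monotonic()
        subprocess.run(["curl", "-s", "-o", "/dev/null", URL], check=True)
        return time.monotonic() - start

    impair(2, 200)  # a few % loss and 200 ms latency
    print("download took %.1f s" % timed_download())

Run it once with flow control enabled in the client and once with it
disabled (however your test browser exposes that toggle) and compare the
times.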

Peter


On Sun, Nov 3, 2013 at 9:55 PM, Amos Jeffries <squid3@treenet.co.nz> wrote:

> On 4/11/2013 6:19 p.m., Peter Lepeska wrote:
>
>> Roberto,
>>
>> I'm not getting your point. When the sender is the browser (for
>> uploads), the data is buffered in the TCP stack on the end user's
>> machine, so this would have no impact on servers at scale. When the
>> sender is the server (for downloads), there should be no
>> scalability-related difference between buffering data in the user-mode
>> application and buffering it in the kernel's TCP buffer. Why is there a
>> scalability advantage to buffering in the user-mode process (when we
>> respect the remote peer's receive window) as opposed to in the kernel's
>> TCP stack?
>>
>> I recognize that I have less implementation experience at scale, but I
>> still don't understand your argument. I'm trying to, though...
>>
>> "Speaking as someone who has some implementation experience at scale,
>> when the receiver asserts flow control or other policy, and the csender
>> fails to respect it, it will be assumed to be malicious and the connection
>> is far likelier to be terminated."
>>
>> We don't have app-level flow control today (in HTTP/1.x
>> implementations), so why would this be assumed to be malicious? I'm just
>> suggesting we put in the spec that receiver-advertised windows not be
>> respected when there is only one active stream. If that is the standard
>> and this behavior is in standards-compliant browsers, why would we assume
>> it to be malicious?
>>
>> Peter
>>
>>
> You are thinking only of the case where a client is directly plugged into
> the origin server.
>
> HTTP contains proxies, meaning there are at least two TCP buffers in
> series to be traversed.
>  * In HTTP/1 this is not a problem: for the period when data is being
> relayed end-to-end the proxy is simply pushing client bytes to the server
> and vice versa, and it can pause reading/writing at any point.
>  * With HTTP/2 multiplexing this protection disappears. Two or more
> clients' requests may be delivered over the same server connection
> simultaneously. If the proxy is not explicitly informed of each client's
> receive limit, it has no way to reliably limit data from the server to
> that client, and it may hit a deadlock situation where its buffer is
> filled with frames awaiting delivery to the slowest client while the
> faster client is starved (see the sketch below).
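>
> A toy model makes the failure concrete. This is purely illustrative
> (the buffer limit, frame schedule, and drain rates are made up), not
> real proxy code:
>
>     from collections import deque
>
>     BUF_LIMIT = 4
>     buf = deque()
>
>     # Frames as they arrive, in order, on the single multiplexed
>     # server connection; the server happens to send a burst of the
>     # slow client's response first.
>     arriving = deque(["slow"] * 6 + ["fast", "slow"] * 6)
>
>     drain = {"fast": 2, "slow": 0}  # frames per tick each client accepts
>
>     for tick in range(5):
>         # The proxy reads frames in arrival order; with the buffer
>         # full of undeliverable "slow" frames it can never reach the
>         # "fast" ones behind them.
>         while arriving and len(buf) < BUF_LIMIT:
>             buf.append(arriving.popleft())
>         for client, rate in drain.items():
>             for _ in range(rate):
>                 if client in buf:
>                     buf.remove(client)
>         print(tick, list(buf))
>
>     # Prints ['slow', 'slow', 'slow', 'slow'] every tick: the fast
>     # client never receives a frame even though it is ready for data.
>
> The same dynamic drives the N:M case below: whichever client drains
> slowest ends up owning the shared buffer.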
>
> The more complex case, where multiple clients with multiple streams are
> split N:M across several faster server connections, can hit contention on
> the flow back to the clients. If any one client connection is slower than
> the combined output of the server connections destined for it, the sheer
> flood of data being delivered will cause the same kind of server-connection
> buffer inside the proxy to fill and block with that client's data, starving
> the other clients.
>
> These are strong cases for per-stream WINDOW_UPDATE flow control. I'm not
> sure what the use case for connection-wide flow control is. Perhaps that
> is best left to the TCP level.
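>
> For what it's worth, the per-stream bookkeeping this implies on the
> sending side is small. A rough sketch (the class name, method names,
> and 64 KB initial window are illustrative, not spec text):
>
>     class StreamWindow:
>         def __init__(self, initial=65536):
>             self.window = initial      # bytes we may still send
>
>         def can_send(self, nbytes):
>             return nbytes <= self.window
>
>         def on_send(self, nbytes):
>             # Shrink the window as DATA frames go out.
>             assert self.can_send(nbytes)
>             self.window -= nbytes
>
>         def on_window_update(self, increment):
>             # The receiver has granted us more room.
>             self.window += increment
>
>     # A proxy keeps one StreamWindow per stream and only forwards DATA
>     # for a stream while can_send() holds, so a slow client can never
>     # pin more proxy buffer than the window it advertised.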
>
> Amos
>
>
