Re: Question on flow control for a single file transfer

Hi Peter,
On Nov 3, 2013, at 9:19 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:

> Roberto,
> 
> I'm not getting your point. When the sender is the browser (for uploads), the data is buffered in the TCP stack on the end user's machine, so this would have no impact on servers at scale. When the sender is the server (for downloads), there should be no scalability-related difference whether the data is buffered in the user-mode application or in the kernel's TCP buffer. Why is there a scalability advantage to buffering in the user-mode process (when we respect the remote peer's receive window) as opposed to in the TCP stack in the kernel?

I don't have Roberto's experience either, but I do share some experience with people who write intermediaries. Receiver memory (whether the receiver is the server for an upload, the client for a download, or the intermediary in either case) is a constrained resource, especially for the server and the intermediary. For TCP to work well, the stack needs enough buffer space to avoid having to slow the client down by dropping packets. A client uploading at top speed can overwhelm the server software's ability to consume the data and end up filling the TCP buffers on the server. If one client does that, that's OK. If many clients do it, TCP memory runs out, and that limits the number of clients a single server can support.
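
To make the upload case concrete, here is a rough sketch in Go (purely illustrative; the port, buffer sizes and timings are all made up) of a server that drains each connection slower than the client fills it. The kernel's receive buffer fills up, TCP shrinks the advertised window to slow the client, and per-connection kernel memory is bounded only by SO_RCVBUF, so total TCP memory grows with the number of slow clients:

package main

import (
	"log"
	"net"
	"time"
)

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		// Cap kernel-side buffering for this connection (SO_RCVBUF).
		// Once this fills, TCP advertises a shrinking receive window
		// and the client is slowed by the transport, not the application.
		if tc, ok := conn.(*net.TCPConn); ok {
			tc.SetReadBuffer(64 * 1024)
		}
		go func(c net.Conn) {
			defer c.Close()
			buf := make([]byte, 16*1024)
			for {
				// Simulate an application that empties the buffer
				// slower than the client can fill it.
				time.Sleep(100 * time.Millisecond)
				if _, err := c.Read(buf); err != nil {
					return
				}
			}
		}(conn)
	}
}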

Even for downloads, if the server application gets no signal from the client application, all it can do is send the whole resource and either let its TCP stack buffer the whole thing or block the application; either way the entire resource is waiting in memory. That is, of course, a naive way to implement a server, and even today servers query TCP to find out how much data they can send without blocking and/or overfilling the buffers. So for downloads it is solvable. But for uploads, there is nothing the server can do about its TCP buffers filling up faster than it can empty them. Having such signaling at the HTTP layer gives both sides better control over their use of buffer space (whether that buffer is in kernel or user mode), as the sketch below tries to illustrate.
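
Here is a toy credit scheme in the spirit of HTTP/2 WINDOW_UPDATE (not the actual protocol; the names and sizes are made up). The receiver grants credit only as its application actually consumes data, so the sender can never force more than the granted window into the receiver's buffers, kernel or user mode:

package main

import "fmt"

// flowWindow models the sender's view of the peer's flow-control window.
type flowWindow struct {
	credit chan int // credit grants from the receiver, in bytes
	avail  int      // granted credit not yet spent
}

// send blocks until enough credit has been granted, then "transmits" n bytes.
func (w *flowWindow) send(n int) {
	for w.avail < n {
		w.avail += <-w.credit // wait for the receiver to free buffer space
	}
	w.avail -= n
	fmt.Printf("sent %d bytes within the granted window\n", n)
}

// grant models the receiving application draining n bytes from its buffer
// and returning that space to the sender as new credit (a WINDOW_UPDATE).
func (w *flowWindow) grant(n int) {
	w.credit <- n
}

func main() {
	w := &flowWindow{credit: make(chan int, 16)}
	w.grant(65535)    // initial window (HTTP/2's default size)
	w.send(16384)     // fits within the initial grant
	go w.grant(16384) // receiver consumes data and re-grants the space
	w.send(65535)     // blocks until the new grant arrives
}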

Yoav

Received on Monday, 4 November 2013 05:59:06 UTC