- From: 陈智昌 <willchan@chromium.org>
- Date: Mon, 4 Nov 2013 08:45:08 -0800
- To: Peter Lepeska <bizzbyster@gmail.com>
- Cc: Michael Sweet <msweet@apple.com>, Yoav Nir <ynir@checkpoint.com>, "<ietf-http-wg@w3.org>" <ietf-http-wg@w3.org>, Martin Thomson <martin.thomson@gmail.com>
- Message-ID: <CAA4WUYjemF7Sn3Lh5a0pcA-00N2dJQ=LMrUUqgB5=zPfD-Yn=A@mail.gmail.com>
I feel like a key thing people have missed here is the API.

TCP kernel flow control API: stop calling read() and let the kernel buffers build up. The problem is that you are not allowed to read data that may be sitting in the kernel buffers just because you want to assert flow control. If an application protocol like HTTP/2 *requires* you to read data from the socket, then this is broken: you *must* read data from the socket to process control frames promptly, or else things may break. Not processing WINDOW_UPDATEs may lead to data transfer deadlocks. Not processing PING frames may lead to the peer terminating the connection, since it may be using PINGs as a liveness check; if you don't respond, the peer will conclude the connection is dead and tear down the transport connection.

HTTP/2 API: everything will typically be in userspace, so you can assert HTTP/2-level flow control and still process all data available on the socket. This is a more powerful API. The problem is that flow control window sizing is a hard problem. Don't use it unless you have to :)

On Mon, Nov 4, 2013 at 8:25 AM, Peter Lepeska <bizzbyster@gmail.com> wrote:

> Hi Michael,
>
> We have no choice but to rely on TCP-based flow control. The question is
> whether there is anything to be gained by also relying on HTTP-level flow
> control when there is only one active transfer. As per the emails with
> Yoav and William, I get it that there are cases where a server operator
> may decide that the benefits of reduced per-connection buffering are
> worth the loss in performance, and in that case HTTP flow control may
> make sense.
>
> By default, though, it should be no slower than 1.x.
>
> Peter
>
> On Mon, Nov 4, 2013 at 8:04 AM, Michael Sweet <msweet@apple.com> wrote:
>
>> Peter,
>>
>> On Nov 3, 2013, at 11:04 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>>
>> If a receiver cannot absorb any more data, it will not make a buffer
>> available to TCP.
>>
>> Don't forget that in HTTP 1.x we don't do flow control. We leave that to
>> the transport layer, and this works well. Layering flow control on top
>> of flow control can only result in slower flows. This slowdown is
>> necessary when two or more streams are being sent at once, but let's not
>> take this hit in the simple case of one stream.
>>
>> The problem with relying on TCP-based flow control is that you are
>> forcing retransmissions and log-jamming all access to the other end. If
>> instead you send your file in chunks sized to the receiver's
>> capabilities, then you can either a) do other useful work or b) go to
>> sleep until the receiver tells you it can accept more data. Add a small
>> amount of rate-tracking code on the sending side and you should be able
>> to keep the receiver window near full.
>>
>> Peter
>>
>> On Sunday, November 3, 2013, William Chan (陈智昌) wrote:
>>
>>> http://en.wikipedia.org/wiki/Flow_control_(data) says "In data
>>> communications, flow control is the process of managing the rate of
>>> data transmission between two nodes to prevent a fast sender from
>>> overwhelming a slow receiver."
>>>
>>> Guesstimating BDP is only important if the receiver cares about
>>> maximizing throughput. Which hopefully it does, but there's no
>>> guarantee. Sometimes, due to resource constraints, the receiver cannot
>>> accept that much data, and it asserts flow control in this case. And
>>> senders *need* to respect that. Otherwise a receiver with any sense,
>>> like a highly scalable server, will terminate the connection, since the
>>> peer is misbehaving.
>>>
>>> On Sun, Nov 3, 2013 at 7:44 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>>>
>>>> Sloppiness? I don't get that. The sender's job is to transmit the data
>>>> as fast as possible, not to respect the receiver's best guesstimate of
>>>> available bandwidth sent ½ RTT ago.
>>>> In this case, the sender's job is to keep the TCP buffer full of data
>>>> so it can send it when it has the opportunity to.
>>>>
>>>> Respecting the peer's receive window in the single-file send case is
>>>> harmless at best and detrimental otherwise.
>>>>
>>>> Peter
>>>>
>>>> On Sunday, November 3, 2013, William Chan (陈智昌) wrote:
>>>>
>>>>> I don't feel comfortable encouraging such sloppiness; I worry about
>>>>> future interop. Respecting a peer's receive window isn't hard. Just
>>>>> do it :)
>>>>>
>>>>> And even though wget doesn't support upload (to my knowledge, but I'm
>>>>> not an expert), a command-line tool may upload, in which case it
>>>>> should definitely respect the peer's receive window.
>>>>>
>>>>> On Nov 3, 2013 6:22 PM, "Yoav Nir" <ynir@checkpoint.com> wrote:
>>>>>
>>>>>> On Nov 3, 2013, at 1:25 PM, William Chan (陈智昌)
>>>>>> <willchan@chromium.org> wrote:
>>>>>>
>>>>>> It's probably understood already, but just to be clear, this is
>>>>>> receiver-controlled and directional. Unless you control both
>>>>>> endpoints, you must implement flow control in order to respect the
>>>>>> peer's receive windows, even if you disable your own receive
>>>>>> windows. Cheers.
>>>>>>
>>>>>> This discussion started with tools like wget. If all you're ever
>>>>>> sending is one single request equivalent to "GET xxx", you're likely
>>>>>> fine not considering the server receive window.
>>>>>>
>>>>>> For a single file, the data that the client sends to the server
>>>>>> never exceeds the default server receive window.
>>
>> ____________________________________________________________
>> Michael Sweet, Senior Printing System Engineer, PWG Chair
Received on Monday, 4 November 2013 16:45:36 UTC