- From: Roberto Peon <grmocg@gmail.com>
- Date: Mon, 4 Nov 2013 16:08:53 -0800
- To: Peter Lepeska <bizzbyster@gmail.com>
- Cc: Mike Bishop <Michael.Bishop@microsoft.com>, Amos Jeffries <squid3@treenet.co.nz>, HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAP+FsNfTdCiaD0wDCdKuxYC63ws1d0-sb5BNA72ORdHQsSB3Cw@mail.gmail.com>
I'll point out again that the proposal of ignoring flow control at any time implies that one MUST have a synchronization event when one transitions from ignoring to adhering to flow control. With the proposal of ignoring when only having one stream, *which is the common case for the beginning of browsing*, it would cause de-synchronization, possible deadlock, etc. Even if I liked the proposal, which I don't, it would need to be completed with adequate synchronization for the transition from 1 stream to multiple streams, and vice versa (what happens when you go from 2-1-2 streams?). Dealing with this would require additional complexity far in excess of simply following the flow control semantic as it is defined today.

-=R

On Mon, Nov 4, 2013 at 2:20 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:

> Okay, I get the misunderstanding now.
>
> I'm not proposing that the sender determines that there is just one active stream end to end. It just checks to see if it knows of other active streams. If it does not, then it ignores flow control. If in fact there are other active streams end to end, then it will not be in the "ignoring flow control" state for long.
>
> Peter
>
> On Mon, Nov 4, 2013 at 2:15 PM, Mike Bishop <Michael.Bishop@microsoft.com> wrote:
>
>> No, the sender *doesn’t* know if there’s a single stream end-to-end. It knows whether there’s a single stream on the first hop, and it doesn’t know whether that hop is the only hop.
>>
>> If a client is directly connected to a server and that’s the only request that will ever be made, then you’re correct -- you may get some improved performance by disabling flow control, and we provide that option in the spec. But the server, at least, doesn’t know whether it’s dealing with a proxy or a direct client. The client can inform its next hop whether it wants flow control on that hop. If that next hop is a proxy, it’s the proxy’s decision whether it wants flow control, and that’s an independent choice.
>>
>> Sent from Windows Mail
>>
>> *From:* Peter Lepeska <bizzbyster@gmail.com>
>> *Sent:* Monday, November 4, 2013 1:17 PM
>> *To:* Amos Jeffries <squid3@treenet.co.nz>
>> *Cc:* HTTP Working Group <ietf-http-wg@w3.org>
>>
>> On Mon, Nov 4, 2013 at 1:02 PM, Amos Jeffries <squid3@treenet.co.nz> wrote:
>>
>>> On 2013-11-05 05:05, Peter Lepeska wrote:
>>>
>>>> Amos,
>>>>
>>>> I agree with what you said, but again only when there is more than one active stream. Again, HTTP/2 flow control is harmless at best when there is only one active stream.
>>>
>>> Part of my point was that there is absolutely no way to determine that the one-active-stream case exists all the way along the path. Middleware exists (whether it is visible to the endpoints or not) and the "single stream" may be sharing any HTTP hop with one or more other streams.
>>>
>>> "At best" this single stream will be able to avoid contention in the more common cases where it ceases being a single end-to-end stream at some middle hop. So no, I think the best case is rather better than you are saying.
>>
>> The sender knows if there are other active streams at the time it has data to send.
>>
>>>> But you don't have to believe me. Just set up a test with a browser that does flow control, add a few % loss and 200 ms latency, and see whether you are able to download large files faster with flow control on or off. The flow control off case should never lose, assuming the loss/latency are regular and your test is long enough.
>>>
>>> At what size data frames? And what relative TCP and HTTP layer buffer sizes? Over how many hops?
>>>
>>> In the grand scheme of HTTP, a single client going to a single server, with a single stream and nothing in between, is a rather rare occurrence. Just like it is a rather rare and artificial occurrence to see only a single isolated TCP connection today.
>>
>> Those questions will not impact your results. When there is available buffering at the TCP layer, HTTP flow control makes one of two decisions -- send now or send later. When flow control is disabled the answer is always send now. Therefore, having no HTTP flow control will always be as fast or faster.
>>
>> Actually, I don't know for sure, but I'd bet the single-stream case is the most common from an overall bytes-sent perspective due to HTTP streaming of movies from services like Netflix. In any case, there are many uses of HTTP that involve one-at-a-time file transfers.
>>
>> Peter
>>
>>> Amos
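To make the disagreement concrete, here is a minimal, hypothetical sketch (not taken from the thread or from any HTTP/2 draft) of the sender-side decision being debated: respect the per-stream flow-control window, or ignore it while only one stream appears active. The `Sender` and `Stream` names, the `ignore_when_single` flag, and the 65,535-byte initial window value used here are illustrative assumptions, not spec text.

```python
# Hypothetical sketch of the sender-side "send now vs. send later" decision.
# All names and the simplified bookkeeping are invented for illustration.

DEFAULT_WINDOW = 65_535  # assumed initial per-stream window, for illustration only


class Stream:
    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.send_window = DEFAULT_WINDOW  # sender's view of the receiver's window


class Sender:
    def __init__(self, ignore_when_single=False):
        self.streams = {}
        self.ignore_when_single = ignore_when_single  # the proposal under discussion

    def writable_bytes(self, stream, pending):
        """How many of `pending` bytes may be sent right now."""
        if self.ignore_when_single and len(self.streams) == 1:
            # Peter's proposal: with only one known active stream, always "send now".
            # Roberto's objection: while in this mode the sender's send_window no
            # longer matches the receiver's accounting, so switching back to normal
            # flow control (e.g. when a second stream opens) needs an explicit
            # resynchronization step or the connection can stall or deadlock.
            return pending
        return min(pending, stream.send_window)

    def on_window_update(self, stream, increment):
        # Receiver grants more credit; only meaningful while the window is respected.
        stream.send_window += increment


if __name__ == "__main__":
    s = Sender(ignore_when_single=True)
    s.streams[1] = Stream(1)
    print(s.writable_bytes(s.streams[1], 1_000_000))  # 1000000: window ignored
    s.streams[3] = Stream(3)
    print(s.writable_bytes(s.streams[1], 1_000_000))  # 65535: window enforced again
```

The sketch also shows why Mike's point matters: `len(self.streams)` only counts streams the sender itself knows about on its own hop, so a proxy further along the path may still be multiplexing that "single" stream with others.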
Received on Tuesday, 5 November 2013 00:09:20 UTC