- From: Roberto Peon <grmocg@gmail.com>
- Date: Sun, 26 Jan 2014 15:36:58 -0800
- To: Martin Thomson <martin.thomson@gmail.com>
- Cc: Mike Belshe <mike@belshe.com>, HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAP+FsNe22+9TwCiR-dboK-SFJ4-VHnX0x+t12-wjzbROTbcwGA@mail.gmail.com>
Example of when to send a BLOCKED frame for flow control:

  if socket_writable and want_to_send:
    if flow_control_window[stream_id] == 0 && can_send_blocked_frame[stream_id]:
      enqueue_blocked_frame(stream_id, FLOW_CONTROL)
      can_send_blocked_frame[stream_id] = false
    if flow_control_window[0] == 0 && can_send_blocked_frame[0]:
      enqueue_blocked_frame(0, FLOW_CONTROL)
      can_send_blocked_frame[0] = false

When receiving WINDOW_UPDATE, set:
  can_send_blocked_frame[window_update_frame.stream_id] = true

Our experience has been that the lack of this was painful. Or, if one
prefers it said this way: its inclusion is likely to help solve
non-theoretical implementation pain when either side is buggy, or where
either side is unable to examine TCP state.

The gains from getting this right are: a substantially better ability to
debug flow control ( :) ), better adaptation to poor implementations
(i.e. send GOAWAY/alt-svc, etc. for buggy remote implementations), and
giving implementations the ability to auto-tune and adapt session and
stream flow control to meet changing network conditions.

It is thoroughly impractical to attempt to build this into a separate
tool: getting this information after the fact when debugging is both
difficult and of less utility. Using a separate tool also disallows
proper tuning of the stack on (the majority of deployed) devices which
cannot examine TCP parameters.

-=R

On Sun, Jan 26, 2014 at 1:27 PM, Martin Thomson <martin.thomson@gmail.com> wrote:
> I'm not sure about redundant, but I'm certainly not that excited by
> the idea. I think that this is going to be very hard to extract
> useful information from, and it's probably going to be difficult to
> generate properly, let alone interpret.
>
> I can appreciate the value of having an interoperable mechanism for
> this sort of reporting channel, but I think that it's probably better
> to build this sort of feedback into a specialized tool, rather than
> the protocol itself.
> There are plenty of sites out there that test how good your
> site/server/client is with respect to various aspects of HTTP, web
> performance and so forth, and I don't think that this is any
> different from those. The incremental gain seems minor, but a
> protocol element, particularly at this stage, is expensive.
>
> On 25 January 2014 05:33, Mike Belshe <mike@belshe.com> wrote:
> > Questions:
> > - is there any existing implementation experience with BLOCKED?
> > - Why can't each side already infer blocked based on the window size being
> > closed? (I do realize the max-conns-reached case could benefit, but I think
> > that is less valuable)
> >
> > I ask because the definition seems unclear. "When an endpoint is blocked on
> > flow control, but the socket is writable". What exactly does this mean?
> > If the socket is not writable for 60 seconds, do you send BLOCKED once or
> > more than once?
> >
> > Example 1:
> > - the server has sent a WINDOW_UPDATE, but it is in flight
> > - client flushes enough data that there is 1 byte of space in the socket buffer
> > - send a BLOCKED frame?
> >
> > Example 2:
> > - server sets window to 64KB
> > - client has 65KB to send, so it sends 64KB followed by a BLOCKED?
> >
> > Example 3:
> > - server sets window to 1024 bytes
> > - client immediately sends a BLOCKED
> > - 3 seconds later, you're still blocked, do you send BLOCKED again?
> >
> > Example 4: (perhaps a chatty version of example 1)
> > - server sends WINDOW_UPDATE opening 1 frame of space (1400 bytes)
> > - client sends 1400 bytes
> > - client sends BLOCKED
> > - server sends WINDOW_UPDATE opening 1 frame of space (1400 bytes)
> > - client sends 1400 bytes
> > - client sends BLOCKED
> > - server sends WINDOW_UPDATE opening 1 frame of space (1400 bytes)
> > - client sends 1400 bytes
> > - client sends BLOCKED
> >
> > Overall, I must be misunderstanding, because I think the BLOCKED frame is
> > redundant?
> >
> > Mike
> >
> > On Fri, Jan 24, 2014 at 6:11 PM, Roberto Peon <grmocg@gmail.com> wrote:
> >>
> >> I added a new issue for the blocked frame.
> >>
> >> As a reminder, the expectation, based on our implementation experience, is
> >> that flow control and other settings limiting state size are helpful, but
> >> can cause issues with the user experience when tuned improperly, as these
> >> settings invariably are...
> >> Worse, finding bugs in flow control implementations is extremely annoying
> >> and difficult without explicit signaling. (Why did the client/server stop
> >> sending us data? What is going on?!)
> >>
> >> A BLOCKED frame helps with both of these issues:
> >> When an endpoint is blocked on flow control, but the socket is writable,
> >> it emits a frame saying so, and the remote end now has explicit signaling
> >> about the tuning of the flow control size.
> >> When you get a BLOCKED frame, if you didn't expect the other side to be
> >> blocked (i.e. you believe the window isn't closed), you can log the fact for
> >> investigation.
> >>
> >> Flow control is currently the most common (and arguably the most important)
> >> use for BLOCKED, but an endpoint can also be blocked on the max-concurrency
> >> limit, e.g. 10 max connections for gmail. It would be extremely helpful to
> >> know how often this is occurring so as to tune these parameters.
> >>
> >> -=R
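[Editor's note: the once-per-window-closure gating that Roberto sketches at the top of the thread, which also answers Mike's Example 3 (no repeated BLOCKED while the window stays closed), can be illustrated with a small runnable sketch. Names such as `BlockedSignaler` and `maybe_send_blocked` are illustrative only, not taken from any HTTP/2 draft.]

```python
# Sketch of the BLOCKED-frame gating described in the thread: send at most
# one BLOCKED per stream per window closure; a WINDOW_UPDATE re-arms the flag.
# Stream 0 stands for the session-level flow-control window.

FLOW_CONTROL = "FLOW_CONTROL"  # illustrative reason code


class BlockedSignaler:
    def __init__(self):
        self.flow_control_window = {}     # stream_id -> remaining window bytes
        self.can_send_blocked_frame = {}  # stream_id -> True if re-armed
        self.sent = []                    # BLOCKED frames "emitted" (for demo)

    def open_stream(self, stream_id, window):
        self.flow_control_window[stream_id] = window
        self.can_send_blocked_frame[stream_id] = True

    def maybe_send_blocked(self, stream_id, socket_writable, want_to_send):
        if not (socket_writable and want_to_send):
            return
        # Stream-level window exhausted: signal once, then disarm.
        if (self.flow_control_window[stream_id] == 0
                and self.can_send_blocked_frame[stream_id]):
            self.sent.append((stream_id, FLOW_CONTROL))
            self.can_send_blocked_frame[stream_id] = False
        # Session-level window (stream 0) exhausted: same rule, own flag.
        if (self.flow_control_window[0] == 0
                and self.can_send_blocked_frame[0]):
            self.sent.append((0, FLOW_CONTROL))
            self.can_send_blocked_frame[0] = False

    def on_window_update(self, stream_id, increment):
        self.flow_control_window[stream_id] += increment
        self.can_send_blocked_frame[stream_id] = True  # re-arm


s = BlockedSignaler()
s.open_stream(0, 0)   # session window already exhausted
s.open_stream(1, 0)   # stream 1 window already exhausted
s.maybe_send_blocked(1, socket_writable=True, want_to_send=True)
s.maybe_send_blocked(1, socket_writable=True, want_to_send=True)  # no repeat
s.on_window_update(1, 100)  # re-arms stream 1
```

Under this gating, Mike's "chatty" Example 4 is the intended behavior (one BLOCKED per closure, re-armed by each WINDOW_UPDATE), while Example 3 produces exactly one BLOCKED however long the window stays closed.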
Received on Sunday, 26 January 2014 23:37:25 UTC