- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Sun, 26 Jan 2014 13:27:51 -0800
- To: Mike Belshe <mike@belshe.com>
- Cc: Roberto Peon <grmocg@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
I'm not sure about redundant, but I'm certainly not that excited by the idea. I think that this is going to be very hard to extract useful information from, and it's probably going to be difficult to generate properly, let alone interpret. I can appreciate the value of having an interoperable mechanism for this sort of reporting channel, but I think that it's probably better to build this sort of feedback into a specialized tool, rather than the protocol itself. There are plenty of sites out there that test how good your site/server/client is with respect to various aspects of HTTP, web performance and so forth, and I don't think that this is any different from those. The incremental gain seems minor, but a protocol element, particularly at this stage, is expensive.

On 25 January 2014 05:33, Mike Belshe <mike@belshe.com> wrote:
> Questions:
> - is there any existing implementation experience with BLOCKED?
> - Why can't each side already infer blocked based on the window size being
>   closed? (I do realize the max-conns-reached case could benefit, but I think
>   that is less valuable)
>
> I ask because the definition seems unclear. "When an endpoint is blocked on
> flow control, but the socket is writable". What exactly does this mean?
> If the socket is not writable for 60 seconds, do you send BLOCKED once or
> more than once?
>
> Example 1:
> - the server has sent a WINDOW_UPDATE, but it is in flight
> - client flushes enough data that there is 1 byte of space in the socket buffer
> - send a BLOCKED frame?
>
> Example 2:
> - server sets window to 64KB
> - client has 65KB to send, so it sends 64KB followed by a BLOCKED?
>
> Example 3:
> - server sets window to 1024 bytes
> - client immediately sends a BLOCKED
> - 3 seconds later, you're still blocked, do you send BLOCKED again?
>
> Example 4: (perhaps a chatty version of example 1)
> - server sends WINDOW_UPDATE opening 1 frame of space (1400 bytes)
> - client sends 1400 bytes
> - client sends BLOCKED
> - server sends WINDOW_UPDATE opening 1 frame of space (1400 bytes)
> - client sends 1400 bytes
> - client sends BLOCKED
> - server sends WINDOW_UPDATE opening 1 frame of space (1400 bytes)
> - client sends 1400 bytes
> - client sends BLOCKED
>
> Overall, I must be misunderstanding, because I think the blocked frame is
> redundant?
>
> Mike
>
> On Fri, Jan 24, 2014 at 6:11 PM, Roberto Peon <grmocg@gmail.com> wrote:
>>
>> I added a new issue for the blocked frame.
>>
>> As a reminder, the expectation, based on our implementation experience, is
>> that flow control and other settings limiting state size are helpful, but
>> can cause issues with the user experience when tuned improperly, as these
>> settings invariably are...
>> Worse, finding bugs in flow control implementations is extremely annoying
>> and difficult without explicit signaling (Why did the client/server stop
>> sending us data? What is going on?!)
>>
>> A BLOCKED frame helps with both of these issues:
>> When an endpoint is blocked on flow control, but the socket is writable,
>> it emits a frame saying so, and the remote end now has explicit signaling
>> about the tuning of the flow control size.
>> When you get a BLOCKED frame, if you didn't expect the other side to be
>> blocked (i.e. you believe the window isn't closed), you can log the fact
>> for investigation.
>>
>> Flow control is the current most-common (and arguably the most important)
>> use for BLOCKED, but an endpoint can also be blocked on the max-concurrency
>> limit, e.g. 10 max connections for gmail. It would be extremely helpful to
>> know how often this is occurring so as to tune these parameters.
>>
>> -=R
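
A minimal sketch of the sender-side behaviour Roberto describes, under one plausible answer to Mike's "once or more than once?" question: emit BLOCKED at most once per window-exhaustion episode, re-armed by the next WINDOW_UPDATE. The types and names below (stream, sendWindow, writeData) are hypothetical and not drawn from any real implementation.

```go
package main

import "fmt"

// stream tracks sender-side flow-control state for one stream.
type stream struct {
	id          uint32
	sendWindow  int32 // bytes the peer currently allows us to send
	blockedSent bool  // BLOCKED already reported for this episode?
}

// frame is a stand-in for an HTTP/2 frame queued for the connection.
type frame struct {
	kind     string
	streamID uint32
	payload  []byte
}

// writeData sends as much of p as the window allows, then emits a single
// BLOCKED frame if data remains while the window is closed.
func (s *stream) writeData(p []byte, out chan<- frame) []byte {
	n := int32(len(p))
	if n > s.sendWindow {
		n = s.sendWindow
	}
	if n > 0 {
		out <- frame{kind: "DATA", streamID: s.id, payload: p[:n]}
		s.sendWindow -= n
		p = p[n:]
	}
	// Blocked: data left to send, window exhausted, socket still writable.
	if len(p) > 0 && s.sendWindow == 0 && !s.blockedSent {
		out <- frame{kind: "BLOCKED", streamID: s.id}
		s.blockedSent = true
	}
	return p // caller retries the remainder after a WINDOW_UPDATE
}

// onWindowUpdate reopens the window and re-arms BLOCKED reporting.
func (s *stream) onWindowUpdate(increment int32) {
	s.sendWindow += increment
	s.blockedSent = false
}

func main() {
	out := make(chan frame, 8)
	s := &stream{id: 1, sendWindow: 1400}
	rest := s.writeData(make([]byte, 2000), out) // 1400B DATA, then BLOCKED
	s.onWindowUpdate(1400)
	s.writeData(rest, out) // remaining 600B DATA, no BLOCKED
	close(out)
	for f := range out {
		fmt.Println(f.kind, f.streamID, len(f.payload))
	}
}
```

Under this reading, Example 3 produces one BLOCKED no matter how long the window stays closed, and Example 4's chattiness is bounded to one BLOCKED per window refill rather than one per write attempt.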
Received on Sunday, 26 January 2014 21:28:23 UTC