Re: CONTINUATION proposal w/ minimum change

On Jul 1, 2014, at 10:10 AM, Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com> wrote:

> 
> 
> 
> On Tue, Jul 1, 2014 at 11:48 PM, Jason Greene <jason.greene@redhat.com> wrote:
> 
> On Jul 1, 2014, at 8:24 AM, Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com> wrote:
> 
>> 
>> 
>> 
>> On Tue, Jul 1, 2014 at 7:38 PM, <K.Morgan@iaea.org> wrote:
>> **Is it worth continuing with this proposal?**
>> 
>> 
>> I think Michael brought up some really valid points.
>> 
>> 
>> Assuming the following changes:
>> 
>> + opcodes and their associated literal values MUST fit within the initial HEADERS frame
>> 
>> + opcodes and their associated literal values MAY span CONTINUATION frames
>> 
>> + static table references are OK in CONTINUATION
>> 
>> + same-stream muxing between HEADERS and CONTINUATION is disallowed
>> 
>> + reference set emitted at the end of HEADERS/PUSH_PROMISE
>> 
>> 
>> Does anyone think this proposal is still worth pursuing?
>> 
>> 
>> Personally I prefer the current CONTINUATION spec to the proposed one.
>> The proposed solution removes some restrictions, but introduces a lot of complexity.
>> And that complexity is just for "only 0.02% of requests and 0.006% of requests" in the world.
>> I think it is probably not worth the cost.
>> 
>> Servers always have the power to terminate the connection if the header size is too large for them.
> 
> Can they? The blacklist proposal that was raised today suggests that browsers won’t talk to them if they actually do.
> 
> 
> I don't recall the blacklist proposal, but
> today, nginx, Apache, and other servers have their own limits on header fields.
> HTTP/2-enabled servers are no exception here.
> 
> 
> 
>> It is already a good incentive and pressure on the peer not to abuse HEADERS, because a large request will more probably result in the connection being lost.
>> 
>> We already know of headers > 16K (e.g., Kerberos), so we need CONTINUATION for them anyway.
>> Flow controlling CONTINUATION unfairly penalizes those valid HTTP requests by arbitrarily delaying transmission because of flow control and scheduling.  In addition, as already discussed, headers are usually kept in memory while the entire request is being read, so applying flow control to them just complicates both the specification and implementations without adding real value.
> 
> How is it arbitrarily delayed? Senders are under pressure to send them as fast as they can. The only delay flow control introduces is allowing other streams to progress, which is exactly what should happen in a multiplexed protocol.
> 
> 
> Connection-level flow control fully blocks CONTINUATION and DATA in a connection.  Transmission of CONTINUATION is delayed until a WINDOW_UPDATE is received.  Also, in the client scheduler, it may get a very small send window under contention.  If we can send the entire request headers without flow control, we have less blocking and servers can use their time more usefully.

That’s not arbitrary delay, that’s fair, controlled delay. If a receiver wants more data it simply sets the window appropriately. As you mention, we are talking about the 0.02% of large cases here. Making that 0.02% stand in line like everyone else is *good*. I do agree with you that scheduling problems are bad, and there is a huge one in the current draft, which allows one stream to starve the rest. So for the sake of argument, let’s say the flow control aspect is dropped. Do you still prefer a HOL-blocking multiplexed protocol?
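To make the starvation point concrete, here is a toy back-of-the-envelope sketch (all names and sizes are illustrative, not from the draft): because CONTINUATION frames for one header block must be sent contiguously on the connection, a single-frame request on another stream cannot go out until the entire block has been transmitted.

```python
FRAME = 16_384  # default max frame payload in the HTTP/2 draft

def slots_until_small_request(big_header_bytes, interleave_allowed):
    """Frame slots before a 1-frame request on another stream can be sent.

    With contiguous CONTINUATION (current draft), the small request waits
    for the whole header block.  If interleaving were permitted, it could
    be scheduled right after the first frame of the big block.
    """
    big_frames = -(-big_header_bytes // FRAME)  # ceiling division
    if interleave_allowed:
        return 1
    return big_frames  # HEADERS plus all CONTINUATION frames first

# A 1 MiB header block (e.g. an oversized Kerberos token):
print(slots_until_small_request(1 << 20, interleave_allowed=False))  # 64
print(slots_until_small_request(1 << 20, interleave_allowed=True))   # 1
```

So on this toy model, the competing stream waits 64 frame slots instead of 1 — the starvation scales linearly with the size of the header block.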

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat

Received on Tuesday, 1 July 2014 16:08:00 UTC