No, it was reduced to ensure that people mux with segments small enough that
the priority inversion delay, which occurs when higher-priority data is
blocked behind lower-priority data, stays small enough for the protocol to
work properly.
That it helps to prevent the backup generator problem is icing.
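(To make the priority-inversion point concrete: the worst-case wait for a
high-priority frame is one maximum-size lower-priority frame already on the
wire. A minimal back-of-the-envelope sketch, with an assumed 1 Mbps link
speed that is illustrative only and not from this thread:)

```python
def hol_blocking_delay(frame_bytes: int, link_bps: float) -> float:
    """Worst-case seconds a high-priority frame waits behind one
    maximum-size lower-priority frame that is already being sent."""
    return frame_bytes * 8 / link_bps

# Assumed link speed for illustration: 1 Mbps.
for size in (16 * 1024, 64 * 1024):
    ms = hol_blocking_delay(size, 1_000_000) * 1000
    print(f"{size // 1024:>3} KB max frame -> up to {ms:.0f} ms of blocking")
# 16 KB caps the inversion at ~131 ms on this link; 64 KB allows ~524 ms.
```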
On May 28, 2014 5:02 PM, "David Krauss" <potswa@gmail.com> wrote:
>
> On 2014–05–29, at 1:23 AM, Martin Thomson <martin.thomson@gmail.com>
> wrote:
>
> > On 28 May 2014 08:51, Richard Wheeldon (rwheeldo) <rwheeldo@cisco.com>
> wrote:
> >> The following are based off yesterday's CWS traffic. ~ 6BN requests of
> which only 123 fall into the > 64K category. So, yes they exist but they're
> a tiny edge case.
> >> Header sizes in each case are rounded down to the nearest KB.
> >
> > Awesome, thanks! If we are interested in discussing who to throw off
> > the bus, 64K seems like good break point to discuss. Though that
> > doesn't avoid the need for continuations entirely.
>
> It does if the max frame size goes back up to 64K. It was only reduced to
> artificially make continuations more likely, right?
>
> As for common-case head of queue blocking, DATA frame payloads can still
> be limited to 16K if we like. Such a limit disparity also solves the
> padding granularity problem.
>
> Again I’ll suggest that nobody gets “thrown off the bus” if the canonical
> translation to and from HTTP/1.1 uses an initial sequence of header blocks,
> with routing information going into the first block.