Re: #540: "jumbo" frames

On Wed, Jun 25, 2014 at 11:48:15AM -0700, Nicholas Hurley wrote:
> (co-)Implementer of 2 implementations chiming in...
> 
> Exactly. Jumbo frames seem like prime candidates for an extension,
> especially since no one can seem to come together on a single plan for what
> they would look like.

Which is precisely the purpose of this discussion.

> And given that we've already removed things from the
> spec that had some actual implementation experience (or at least broad
> implementer interest - I'm thinking ALTSVC and BLOCKED), I see no reason to
> add something to the spec just before WGLC.

You know, this is the well-known principle that the cost of an error
multiplies by ten at each step it takes towards the end user. Fixing a
bug before compiling is cheaper than fixing it after, which is cheaper
than fixing it after release, which is cheaper than fixing it once it
reaches customers, which is cheaper still than fixing it once those
customers hit their own customers, and so on.

> If the procedural concerns aren't enough (they don't seem to have been in
> the past, so I'm not sure they will be now), let's talk about actual
> experience. We have running code, right now, from a good number of
> implementers, that works with the regular-sized frames as currently in the
> spec.

Don't get me wrong, I don't want to dismiss anyone's work on the subject,
but you also need to accept that for some low-level components such as
intermediaries, implementing such a protocol is a tremendous amount of
work, and even the smallest change can become a nightmare to adapt to,
so it is normal to observe a cool-down period. To be clear, I *hope* to
be able to implement end-to-end HTTP/2 in haproxy within one year, but
I'm sure I'm dreaming...

> The current frame layout is simple, easy to parse, and easy to make
> decisions about. With all the perceived "complexity" that people have been
> complaining about, I'm honestly a little surprised that people are asking
> to add even more complexity (some of whom seem to be the same people
> complaining earlier!)

I'm personally not complaining about the protocol's complexity itself;
I'm trying to warn people about choices that will definitely hurt
implementations. The difficulty with the protocol is not parsing it,
it's dealing with multiplexing between N clients and M servers in
products designed to pass data blocks between the two sides at the
fastest possible speed. And among the issues, we observe that one of
the crucial capabilities these products rely on, the ability to
transfer large blocks between the two sides, will no longer be
possible (a 16kB-1 frame payload is fine for 1 Gbps, but that's about
all).
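
To put rough numbers on it, here is a back-of-the-envelope sketch (my
own arithmetic, not anything taken from the draft); it assumes
full-sized DATA frames with the current 16383-byte maximum payload and
ignores the frame header and TCP/TLS overhead:

   /* DATA frames per second an intermediary must forward, per
    * direction, when streaming bulk data with 16383-byte (2^14 - 1)
    * payloads; full frames assumed, header and TCP/TLS overhead
    * ignored. */
   #include <stdio.h>

   int main(void)
   {
       const double max_payload = 16383.0;
       const double gbps[] = { 1.0, 10.0, 40.0 };

       for (unsigned i = 0; i < sizeof(gbps) / sizeof(gbps[0]); i++) {
           double bytes_per_sec = gbps[i] * 1e9 / 8.0;
           printf("%4.0f Gbps -> about %.0f DATA frames/s\n",
                  gbps[i], bytes_per_sec / max_payload);
       }
       return 0;
   }

That's roughly 7600 frames per second at 1 Gbps, but 76000 at 10 Gbps
and over 300000 at 40 Gbps, and the intermediary pays the parsing and
forwarding cost for every single one of them.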

> I'm not convinced by the arguments around large file downloads - those are
> the exception rather than the rule.

Most network operators and equipment providers disagree with you. On
the page below, Cisco claims that video already represents 66% of all
the internet's IP traffic, and they expect it to reach 79% by 2018:

   http://www.cisco.com/c/en/us/solutions/service-provider/visual-networking-index-vni/index.html

And you'll easily find similar figures cited by Akamai and other CDNs.

> Optimize for the common case.

That's what we're trying to do :-)

> In HTTP,
> that means viable multiplexing and priority, both of which are much more
> effective with frame size limited to 16k like we have it now.

I absolutely agree, and we're precisely discussing the use case of the
vast majority of internet traffic, which is not concerned by that
scenario. OK, we found out how to finally make sites load fast, in
large part thanks to the good research work done by the SPDY team. Now
we're realizing that only *this* aspect was covered (it was the most
difficult one, so that's normal), that some of the design decisions
made for this use case hurt the rest of the traffic, and that it's not
too late to fix this with the most minimal changes.

I think that's the right approach to the problem: getting the best of
both use cases.
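
To illustrate what a minimal change could look like, here is a purely
hypothetical sketch; the mechanism and all names below are invented
for illustration and are not taken from the draft. If a peer could
somehow advertise a larger maximum frame payload, a sender would use
that larger limit towards it and keep the existing 16383-byte cap
otherwise, so the default behaviour would stay exactly as it is today:

   /* Hypothetical sketch only: the "advertised limit" mechanism and
    * all names are invented, not taken from the draft. */
   #include <stddef.h>
   #include <stdint.h>
   #include <stdio.h>

   #define DEFAULT_MAX_PAYLOAD 16383u  /* 2^14 - 1, current draft limit */

   struct peer_settings {
       /* hypothetical advertised payload limit; 0 means "not
        * advertised", in which case the default applies */
       uint32_t max_frame_payload;
   };

   /* payload size to use for the next DATA frame towards this peer */
   static size_t next_data_frame_size(const struct peer_settings *peer,
                                      size_t pending_bytes)
   {
       size_t limit = DEFAULT_MAX_PAYLOAD;

       if (peer->max_frame_payload > limit)
           limit = peer->max_frame_payload;

       return pending_bytes < limit ? pending_bytes : limit;
   }

   int main(void)
   {
       /* peer hypothetically advertising a 1 MB limit */
       struct peer_settings peer = { .max_frame_payload = 1048576u };

       printf("next frame payload: %zu bytes\n",
              next_data_frame_size(&peer, (size_t)10 * 1048576));
       return 0;
   }

Endpoints that never advertise anything keep exchanging 16kB-1 frames,
so the multiplexing and priority behaviour of the common web case is
left untouched.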

Regards,
Willy

Received on Wednesday, 25 June 2014 19:17:11 UTC