Re: http2 Working Code

On 13 Jun 2014, at 8:42 am, Greg Wilkins <gregw@intalio.com> wrote:

> 
> So Jetty's had a working http2 implementation for less than 48 hours and I'm already feeling the effect of working code, inasmuch as some of my vocal opposition to some hpack/http2 design is melting away.  When asked "should we make a provision for optionally flow controlling headers?", my response was: nah, it is too difficult to do without radical change, so we'll just have to reject connections that send big headers.
> 
> However, I'm not 100% sure if my acceptance is really for good technical reasons, or just because of some kind of Stockholm syndrome.

Welcome :)

>    But anyway, for those that +1'd some of my recent criticisms, here's an explanation of why I've capitulated :)
> 
> When you see hpack at work it is pretty good.  The Huffman coding is OK, with a 50% saving on average, but the differential headers do produce great savings: you can see whole new requests pop out of header frames that are only a few bytes long, even empty header frames.  Plus it is kind of fun to play with different encoding strategies.
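For anyone who hasn't played with it yet, the differential idea can be sketched in a few lines (this is illustrative pseudo-HPACK, not the real wire format; the function names and dict-based table are my own simplification):

```python
# Illustrative sketch of differential header encoding (not real HPACK):
# both endpoints keep a shared table of headers already seen, and a
# request only transmits the headers that differ from that shared state.

def encode(headers, table):
    """Emit only the headers not already in the shared table."""
    delta = {k: v for k, v in headers.items() if table.get(k) != v}
    table.update(delta)          # both ends apply the same update
    return delta                 # the "frame" that goes on the wire

def decode(delta, table):
    """Rebuild the full header set from the delta plus shared state."""
    table.update(delta)
    return dict(table)

enc_table, dec_table = {}, {}
req1 = {":method": "GET", ":path": "/index.html", "host": "example.com"}
req2 = {":method": "GET", ":path": "/style.css", "host": "example.com"}

frame1 = encode(req1, enc_table)   # first request: all three headers sent
frame2 = encode(req2, enc_table)   # second request: only :path changed
assert decode(frame1, dec_table) == req1
assert decode(frame2, dec_table) == req2
assert frame2 == {":path": "/style.css"}   # a near-empty header frame
```

The second request shrinks to a single changed header, which is exactly the "requests popping out of a few bytes" effect described above.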
> 
> I still don't like the whack-a-mole changing indexes and the need to copy static entries.  But once implemented it does not feel too bad, and the fact that it prevents some encoding optimisations is not a huge deal, as you get huge gains just by having common headers in the ref set.
> 
> Once you accept that the big gains come from differential encoding, you are accepting a shared state table between streams.  The moment you have that, then either you have to limit header sizes to a single frame, OR exclude headers from flow control (because it is just too difficult to allow partially sent headers to block with the current style of shared header table), OR have multiple shared tables of some kind.
> 
> As the decision was made some time ago to have a single shared state table, then the best approach is as Martin has said, to just treat Continuations as part of one big uber-frame and hope that nobody important sends headers so big that you have to reject them.
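The "uber-frame" treatment amounts to buffering fragments until the block is complete and only then touching the shared table. A minimal sketch, with a simplified frame representation (the flag constant is borrowed from HTTP/2 for illustration; the tuple format is mine):

```python
# Sketch of treating HEADERS + CONTINUATION as one logical frame: buffer
# every fragment until a frame carries END_HEADERS, then hand the whole
# concatenated block to the (stateful) header decoder atomically.

END_HEADERS = 0x4   # flag value borrowed from HTTP/2 for illustration

def reassemble(frames):
    """Concatenate fragments until a frame carries END_HEADERS."""
    block = b""
    for flags, fragment in frames:
        block += fragment
        if flags & END_HEADERS:
            return block
    raise ValueError("header block never terminated")

frames = [(0, b"part-one|"), (0, b"part-two|"), (END_HEADERS, b"part-three")]
assert reassemble(frames) == b"part-one|part-two|part-three"
```

Because the shared compression state is only updated once the whole block is in hand, no other stream can observe a half-applied table update.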
> 
> Note also that I do accept the requirement for good compression of headers, as it is good to pack 80+ requests into a single slow start window.  This does kind of duplicate the solution of push, because if push works, then there is no need to have 80+ requests do a round trip before data can be sent.  But I guess there will always be deployment modes where a single response from one server can provoke 80+ simultaneous requests on a CDN, so both mechanisms are called for.
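A quick back-of-the-envelope check of the 80+ figure, under assumed numbers (a common initial congestion window of 10 segments of ~1460 bytes, and ~30 bytes per compressed GET; both are round assumptions, not measurements):

```python
# Rough arithmetic behind "pack 80+ requests into a single slow start
# window".  All three inputs are assumed round numbers for illustration.

init_cwnd_segments = 10          # common initial congestion window
bytes_per_segment = 1460         # typical TCP segment payload
compressed_request_bytes = 30    # assumed size of a compressed GET

window = init_cwnd_segments * bytes_per_segment          # 14600 bytes
requests_per_window = window // compressed_request_bytes
assert requests_per_window >= 80   # comfortably more than 80 requests
```

Even with generous padding per request, the compressed form leaves plenty of headroom in the first window.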
> 
> So working code has made me join the let's-suck-it-and-see camp.  Almost all of the technical uglies do logically flow from the decision to have a single shared state table (and the ones that don't are not huge problems).  I believe that if this protocol is not found acceptable by wider review/usage, then we will have to rewind all the way back to considering multiple shared state tables or similar, to avoid ending up at the same place.

Thanks for the feedback.

I think people would be interested in your thoughts on priority dependencies too, if you’re able...

Cheers,

P.S. Now that you’re here, my argument for a .au meeting is getting stronger...


Mark Nottingham   http://www.mnot.net/

Received on Friday, 13 June 2014 14:27:40 UTC