http2 Working Code

From: Greg Wilkins <gregw@intalio.com>
Date: Fri, 13 Jun 2014 14:42:11 +0200
Message-ID: <CAH_y2NF5YOAgU8BYOPOc2k-JBtouFHtJowGxqZOunwHq7eiuiQ@mail.gmail.com>
To: HTTP Working Group <ietf-http-wg@w3.org>
So Jetty has had a working http2 implementation for less than 48 hours and
I'm already feeling the effect of working code, inasmuch as some of my
vocal opposition to parts of the hpack/http2 design is melting away. When
asked "should we make provision for optionally flow controlling headers?",
my response was: nah, it is too difficult to do without radical change, so
we'll just have to reject connections that send big headers.

However, I'm not 100% sure if my acceptance is really for good technical
reasons, or just because of some kind of Stockholm syndrome. But anyway,
for those that +1'd some of my recent criticisms, here is an explanation of
why I've capitulated :)

When you see hpack at work it is pretty good. The Huffman coding is OK,
with about a 50% saving on average, but it is the differential header
encoding that produces the great savings: you can see whole new requests
pop out of header frames that are only a few bytes long, even out of empty
header frames. Plus it is kind of fun to play with different encoding
strategies.
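A toy sketch (deliberately not the real hpack wire format) of why the differential/indexed encoding does so well: once a header has been sent as a literal and entered into the table, a repeat of the same request collapses to a handful of small index references. `ToyEncoder` and its entry tuples are made up here for illustration.

```python
# Toy model of HPACK-style indexed ("differential") header encoding.
# Not the real wire format -- just shows why a repeated request can
# encode to a few bytes: every header becomes a small index reference.

class ToyEncoder:
    def __init__(self):
        self.table = {}       # (name, value) -> index
        self.next_index = 1

    def encode(self, headers):
        out = []
        for name, value in headers:
            key = (name, value)
            if key in self.table:
                # Already in the table: emit one small integer.
                out.append(("indexed", self.table[key]))
            else:
                # First sighting: full literal goes on the wire.
                out.append(("literal", name, value))
                self.table[key] = self.next_index
                self.next_index += 1
        return out

enc = ToyEncoder()
req = [(":method", "GET"), (":path", "/style.css"),
       ("user-agent", "jetty/9"), ("accept", "*/*")]

first = enc.encode(req)    # all literals: names and values in full
second = enc.encode(req)   # all tiny index references
```

The second encode is where the "whole new requests popping out of a few bytes" effect comes from: nothing but indices.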

I still don't like the whack-a-mole changing indexes and the need to copy
static entries. But once implemented it does not feel too bad, and the fact
that it prevents some encoding optimisations is not a huge deal, as you do
get huge gains just by having the common headers in the reference set.
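For anyone who hasn't implemented it, the index whack-a-mole looks roughly like this toy model: new entries go in at the front of the table, so every existing entry's index shifts on each insert. (`add` and `index_of` are hypothetical helpers, not an hpack API.)

```python
# Toy illustration of shifting indexes in a draft-HPACK-style dynamic
# table: the newest entry takes the lowest index, so an entry's index
# changes every time anything else is added.

table = []

def add(name, value):
    table.insert(0, (name, value))   # newest entry at the front

def index_of(name, value):
    return table.index((name, value))

add("host", "example.com")           # "host" is index 0 right now
add("accept", "*/*")                 # ...and index 1 after one more insert
```

An encoder that cached "host is index 0" would already be wrong; that is the whack-a-mole.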

Once you accept that the big gains are to be had by differential encoding,
then you are accepting a shared state table between streams. The moment you
have that, then either you have to limit header sizes to a single frame;
or exclude headers from flow control, because it is just too difficult to
let a partially sent header block stay blocked with the current style of
shared header table; or have multiple shared tables of some kind.
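A minimal sketch of why the shared table forces that choice: the decoder's table mutates as it decodes, so header blocks must be consumed whole and in exactly the order they were encoded - pausing one stream's half-sent block would stall (or corrupt) every other stream. (`ToyDecoder` and the entry tuples are illustrative, not real hpack.)

```python
# Toy model of a decoder with a table shared across all streams.
# Decoding order matters: a block that references an index can only be
# decoded after the block that created that table entry.

class ToyDecoder:
    def __init__(self):
        self.table = []   # dynamic table, shared by every stream

    def decode(self, block):
        headers = []
        for entry in block:
            if entry[0] == "indexed":
                headers.append(self.table[entry[1]])
            else:
                _, name, value = entry
                self.table.append((name, value))  # mutates shared state
                headers.append((name, value))
        return headers

block_a = [("literal", "cookie", "a=1")]  # stream A: creates entry 0
block_b = [("indexed", 0)]                # stream B: references entry 0

d = ToyDecoder()
d.decode(block_a)   # fine
d.decode(block_b)   # fine, in this order
# A fresh decoder given block_b first raises IndexError: B cannot be
# decoded until A's block has fully arrived and been processed.
```

Which is exactly why a flow-control-blocked, half-delivered header block cannot simply be set aside.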

As the decision was made some time ago to have a single shared state table,
the best approach is, as Martin has said, to just treat Continuations as
part of one big uber-frame and hope that nobody important sends headers so
big that you have to reject them.
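That uber-frame treatment can be sketched as: buffer the HEADERS payload plus any CONTINUATION payloads until END_HEADERS, then hand the lot to the decoder as one block, rejecting if it grows past a limit. (The frame tuples and the `MAX_HEADER_BLOCK` knob here are my assumptions, not anything from the spec.)

```python
# Sketch of "Continuations as one big uber-frame": accumulate fragments
# until END_HEADERS, enforcing a size cap, then decode as one block.

MAX_HEADER_BLOCK = 16 * 1024  # hypothetical server limit

def assemble_header_block(frames):
    """frames: iterable of (payload_bytes, end_headers_flag) tuples."""
    buf = bytearray()
    for payload, end_headers in frames:
        buf += payload
        if len(buf) > MAX_HEADER_BLOCK:
            # The "hope nobody important does this" case.
            raise ValueError("header block too large: reject connection")
        if end_headers:
            return bytes(buf)
    raise ValueError("stream ended before END_HEADERS")
```

The ugly part is that nothing else on the connection can make progress while the fragments are being collected, which is the price of the single shared table.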

Note also that I do accept the requirement for good compression of headers,
as it is good to pack 80+ requests into a single slow-start window. This
does somewhat duplicate the solution offered by push, because if push
works, then there is no need for 80+ requests to do a round trip before
data can be sent. But I guess there will always be deployment modes where a
single response from one server can provoke 80+ simultaneous requests on a
CDN - so both mechanisms are called for.
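As a back-of-envelope check on the 80+ figure (assuming an IW10 initial congestion window and a ~1460-byte MSS, which are my numbers, not anything from this thread):

```python
# Rough arithmetic, not a measurement: with IW10 and a 1460-byte MSS
# the first slow-start window is about 14.6KB, leaving roughly 180
# bytes per request if 80 are to fit -- comfortably within reach of
# hpack's indexed encoding plus per-frame overhead.

iw_segments = 10                       # assumed IW10 initial window
mss = 1460                             # assumed Ethernet-ish MSS
initial_window = iw_segments * mss     # 14,600 bytes

requests = 80
budget_per_request = initial_window // requests
print(initial_window, budget_per_request)
```

An uncompressed request line plus typical headers is several hundred bytes, so without hpack the same window holds far fewer requests.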

So working code has made me join the let's-suck-it-and-see camp. Almost all
of the technical uglies flow logically from the decision to have a single
shared state table (and the ones that don't are not huge problems). I
believe that if this protocol is not found acceptable by wider
review/usage, then we will have to rewind all the way back to considering
multiple shared state tables or similar, to avoid ending up in the same
place.


Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.
Received on Friday, 13 June 2014 12:42:42 UTC