Re: expected substantial and measurable improvements in WG charter

Hi Larry,

> On 11 Mar 2015, at 4:17 pm, Larry Masinter <> wrote:
> The working group seems to have come to the rough consensus that “HTTP/2 is good enough to ship”. And that’s fine.

"Seems to" is an understatement -- it's done and dusted, and the time to dispute that has truly past. Not saying that you're doing so, but it does appear that some folks might not understand the process.

> That’s not the same as consensus as to whether HTTP/2 meets the “It is expected that” list in the charter. In particular, the charter expects HTTP/2 to substantially and measurably improve end-user perceived latency in most cases. But it seems there is mostly agreement (not consensus to the contrary) that to substantially improve end-user perceived latency in “most cases”, you not only need HTTP/2 but also a good deal of mainly undisclosed magic. And that quite a few sites will see worse performance if they merely replace HTTP/1.1 with HTTP/2 (with the necessary shift to TLS).

That's not the agreement that I see at all. Most people with operational experience of the protocol have said that one can expect a 5-15% end-user perceived performance benefit "out of the box" with a reasonable implementation, and substantially more with some tweaking (e.g., removing spriting/inlining/sharding/concatenation, adjusting prioritisation algorithms and thinking about server push). Those numbers don't hold for every site on every network, but that's the nature of the Internet.
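To make the "out of the box" case concrete, here's roughly what the minimal server-side change looks like -- an illustrative nginx-style fragment only, with placeholder hostname and certificate paths; directive availability depends on your server software and version:

```nginx
server {
    # HTTP/2 is negotiated over TLS via ALPN; for servers that
    # support it, this one directive is the "out of the box" case --
    # no content or application changes required.
    listen 443 ssl http2;

    server_name example.com;                # placeholder hostname
    ssl_certificate     /path/to/cert.pem;  # placeholder paths
    ssl_certificate_key /path/to/key.pem;

    # The further gains come from unwinding HTTP/1.1 workarounds:
    # with h2's multiplexing, domain sharding, spriting, inlining
    # and concatenation are no longer needed and can actively hurt.
}
```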

> Perhaps the tight group developing HTTP/2 represents interests with knowledge of how to rearrange their servers to actually see substantial and measurable performance improvements, but the means are not clear, and might even be proprietary.

That sounds a lot like FUD, Larry.

> It is a disservice to ship HTTP/2 without clearly documenting what you have to do to actually get the substantial improvements expected.
> Now, you might argue that no work meets all of the expectations placed on it, and perhaps HTTP/2 isn’t quite as good as expected, but that’s how it always works. But the words in the charter need to have some purpose beyond a wish list:   Meet these expectations or explain when and why they cannot be met.
> The working group should focus on the task of documenting clearly the deployment steps needed to get the expected benefit. Without doing so, HTTP/2 isn’t complete.

No one has made that argument. 

As you and I have discussed before, there's a substantial community around Web Performance, and I fully expect it to step up and develop best practices, tooling, etc. around H2 in time. The Velocity conferences over the next few years should be interesting.

h2 has only recently become available in mainstream browsers, and server-side support is still in progress for the platforms that most people use (if folks haven't seen it yet, you may be interested in <>). Since most ops and perf people are by nature hands-on, most have only recently had a chance to get familiar with the practicalities of the protocol.

WRT Google and others who have existing deployment experience with SPDY and early H2 - in the discussions I've seen, they've been pretty open about the "tricks" they use to get decent perf out of the protocol; much of it has to do with TCP and TLS tuning (see <>). We could perhaps do more to collect h2-specific information together in one place -- but I'm not sure that the right place to put it is in an RFC. 
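For concreteness, the sort of TLS tuning people have described looks like this -- again an illustrative nginx-style sketch, not a recommendation; the right values depend on your stack and traffic:

```nginx
# Smaller TLS records cut time-to-first-byte on lossy or
# high-latency links, at some throughput cost (nginx's default
# buffer is 16k; 4k is a commonly discussed compromise).
ssl_buffer_size 4k;

# Session resumption avoids a full handshake on reconnection,
# which matters more once everything is behind TLS.
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

# TCP-level tuning (e.g., a larger initial congestion window via
# `ip route ... initcwnd`) happens at the OS level, not here.
```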

> And the working group should do this before taking on other topics proposed in the draft agenda — all of which seem to be out of scope for the current charter.

Our charter explicitly allows us to "define additional extensions to HTTP." I don't see how you get to "out of scope" from here.

> I see no point in slowing the publication  of HTTP/2 as Proposed Standard, I’m just calling for full disclosure.

That's an interesting phrase to use.


Mark Nottingham

Received on Thursday, 12 March 2015 03:05:22 UTC