
Re: Last Call: <draft-ietf-httpbis-http2-16.txt> (Hypertext Transfer Protocol version 2) to Proposed Standard

From: Jonathan Thackray <jthackray+http2@gmail.com>
Date: Thu, 8 Jan 2015 14:29:20 +0000
Message-Id: <6FEE1B65-52FF-4059-B398-2E61A6CA0306@gmail.com>
To: ietf-http-wg@w3.org
On Jan 7, 2015 7:24 AM, "Poul-Henning Kamp" <phk@phk.freebsd.dk> wrote:
> Nobody has demonstrated a HTTP/2.0 implementation that approached
> contemporary wire speeds. Faster? Not really.

Is anyone working on HTTP/2 benchmarking at the moment? It would be
good to have some actual numbers comparing HTTP/1.1 and HTTP/2
across a range of use cases.

AIUI, HTTP/2 can be faster in some cases, since it avoids the problem
of TCP retransmissions on a congested link due to domain-sharding
interacting poorly with the TCP slow start algorithm:

   https://insouciant.org/tech/network-congestion-and-web-browsing/
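A rough back-of-envelope sketch of the sharding problem described in that article (the initial-window and segment-size figures are typical values, assumed for illustration):

```python
# Back-of-envelope: why domain sharding interacts badly with TCP slow
# start. Six sharded connections each open with their own initial
# congestion window, so the aggregate burst into a congested link is
# several times what a single multiplexed connection would inject
# before any ACK clocking kicks in.
INITCWND = 10   # segments; typical initial window (RFC 6928)
MSS = 1460      # bytes per segment; typical Ethernet-path MSS

one_connection_burst = INITCWND * MSS        # 14600 bytes in flight
six_shards_burst = 6 * INITCWND * MSS        # 87600 bytes in flight

print(one_connection_burst)   # 14600
print(six_shards_burst)       # 87600
```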

And whilst time to send the actual data might not be quicker, time
to render can be improved by prioritising sending HTML, JS and CSS
files before image data:

   https://nghttp2.org/blog/2014/11/16/visualization-of-http-slash-2-priority/
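The prioritisation idea can be sketched as a simple ordering of queued responses; the type weights here are purely illustrative, not HTTP/2's actual dependency/weight scheme:

```python
# Toy sketch: deliver render-blocking resources (HTML, CSS, JS) before
# image data rather than first-come-first-served. The numeric weights
# are made up for illustration.
PRIORITY = {"html": 0, "css": 1, "js": 2, "image": 3}

responses = [("logo.png", "image"), ("app.js", "js"),
             ("index.html", "html"), ("style.css", "css")]

send_order = [name for name, kind in
              sorted(responses, key=lambda r: PRIORITY[r[1]])]

print(send_order)   # ['index.html', 'style.css', 'app.js', 'logo.png']
```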

Historically, I've seen some websites (inadvisedly) using large (4KB)
cookies with ~100 resources on a webpage. This results in 400KB of
"uploads" from the browser, i.e. sending the same HTTP headers
repeatedly for each GET request, which unsurprisingly results in
a slow browsing experience.

Yes, websites shouldn't be using such massive cookies (or perhaps
cookies in general), but at least HTTP/2 fixes the issue of repeatedly
sending duplicate HTTP header data in both directions.
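The fix can be sketched with a toy dynamic table, loosely in the spirit of HPACK (this is not the real HPACK wire format; the encoding below is invented for illustration):

```python
# Toy sketch of header indexing: the first time a header field is sent
# it crosses the wire in full and is added to a shared dynamic table;
# on later requests the same field is sent as a small integer index,
# so a repeated 4KB cookie costs only a few bytes.

def encode(headers, table):
    """Encode (name, value) pairs against a shared dynamic table."""
    out = []
    for field in headers:
        if field in table:
            out.append(("indexed", table.index(field)))  # tiny on the wire
        else:
            table.append(field)
            out.append(("literal", field))               # full bytes, once
    return out

table = []
req1 = [("cookie", "x" * 4096), (":method", "GET")]
req2 = [("cookie", "x" * 4096), (":method", "GET")]

first = encode(req1, table)    # carries the 4KB cookie literally
second = encode(req2, table)   # sends only indices

print(second)   # [('indexed', 0), ('indexed', 1)]
```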

> You would think that a protocol intended for tens of millions of computers
> would be the subject of some green scrutiny, but surprisingly—at
> least to me —I have not been able to find any evidence that the
> IETF considers environmental impact at all —ever.

It would have been good to convert the Date: header from ASCII to a
multi-byte binary integer, as discussed previously on this list, given
that you previously stated Varnish spends ~30% of its CPU usage
just processing Date: headers. I imagine other clients and servers
spend a large amount of CPU time parsing this ASCII format repeatedly.
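The parsing-cost difference is easy to see: the ASCII IMF-fixdate form must be scanned and its month name looked up on every request, whereas a binary encoding is a single fixed-width read. The big-endian 64-bit Unix-time encoding below is purely hypothetical; HTTP/2 specifies nothing of the sort:

```python
# Sketch: ASCII Date: parsing versus a hypothetical binary encoding.
import calendar
import struct
import time

ascii_date = "Thu, 08 Jan 2015 14:29:20 GMT"   # RFC 7231 IMF-fixdate
binary_date = struct.pack("!Q", 1420727360)    # same instant in 8 bytes

def parse_ascii(s):
    # Full text scan: weekday, month-name lookup, digit conversion.
    return calendar.timegm(time.strptime(s, "%a, %d %b %Y %H:%M:%S GMT"))

def parse_binary(b):
    # One fixed-width integer read; no text scanning at all.
    return struct.unpack("!Q", b)[0]

assert parse_ascii(ascii_date) == parse_binary(binary_date) == 1420727360
```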

> The IETF, obviously fearing irrelevance, hastily "discovered" that
> the HTTP/1.1 protocol needed an update, and tasked a working group
> with preparing it on an unrealistically short schedule.

Agreed. All four major browsers (Firefox, Chrome, IE and Safari) now
support SPDY/3.1, so I can understand the IETF wanting to avoid it
becoming a de facto non-RFC standard.

Is 2 years too short, though? It was 2013-01-21 when
draft-ietf-httpbis-http2-01.txt was first issued. How many more
years should be allocated?

> Yet, despite this, HTTP/2.0 will be SSL/TLS only, in at least three
> out of four of the major browsers, in order to force a particular
> political agenda. The same browsers, ironically, treat self-signed
> certificates as if they were mortally dangerous, despite the fact
> that they offer secrecy at trivial cost.

Agree with you here. I'm disappointed that HTTP/2 won't be supported
over cleartext TCP except in IE, AFAIK. What about my HP Laserjet
printer that has a web interface, and could generate a self-signed
certificate?

It can't run HTTP/2 over TCP unless a particular browser is used,
and a self-signed cert will trigger click-through warnings that will
put most regular users off. Small devices like this will never be
able to use HTTP/2, as far as I can see.
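For what it's worth, what a click-through amounts to can be sketched in a few lines of client code: the connection is still encrypted, only the peer's identity goes unverified. A deliberate (and risky) trade of authentication for mere secrecy, and the hostname is of course made up:

```python
# Sketch: opportunistic encryption against a device presenting a
# self-signed certificate, roughly what accepting a browser warning
# does. Real clients should pin or otherwise verify the certificate.
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False          # self-signed: no chain to validate a name against
ctx.verify_mode = ssl.CERT_NONE     # accept the device's own certificate

# Traffic over this context is still encrypted; only authentication is lost.
# with socket.create_connection(("printer.local", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="printer.local") as tls:
#         tls.sendall(b"GET / HTTP/1.1\r\nHost: printer.local\r\n\r\n")
```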
Received on Thursday, 8 January 2015 14:29:50 UTC
