Re: HTTP2 Expression of Interest

Hello Doug,

I have a few comments and questions below.

On Sun, Jul 15, 2012 at 03:38:35AM +0000, Doug Beaver wrote:
> 2.2 Transport layer encryption
> We feel strongly that HTTP/2.0 should require transport encryption,
> and we acknowledge that this position is potentially controversial.
> RFC 2616 likely will be at least 15 years old by the time HTTP/2.0 is
> ratified.  Comparing the Internet of today to the Internet of the late
> 1990s, two trends stand out:
>   * The sophistication and surface area of attacks have grown
>     dramatically.
>   * The Internet user community has grown steadily, from a niche
>     in 1999 to a third of the world's population in 2012.
> We can't forecast what the Web will look like in 10-15 years, but
> based on history we can assume that more and more personal information
> will be flowing between users and applications, and that the user
> population will continue to grow.
> Mandating transport layer encryption will make things harder for
> implementors such as ourselves, but in return it will offer greater
> privacy and safety to the billions of people who use the Web today and
> in the years to come.  We think this is a good thing.

I don't want to start the encryption debate in this thread, but since you
have a fairly balanced approach, I'd like to note that at the moment, almost
100% of user information theft happens on encryption-protected services,
whether the target is bank account credentials, webmail credentials or other
personal information.
The issue almost always comes from malware running on the PC, infecting the
browser and stealing the information at the human interface. However, users
feel safer because they see the SSL lock. And it's not always the browser:
there was a report of webmail information stolen from TLS traffic in a certain
country when a CA was compromised and new certs for a number of large sites
were issued.

Also, you said that it could make things harder for you, but did you
evaluate only the front-end access, or also the protocol used between your
load balancers and backend servers? I'm asking because there is a
difference between mandating the use of encryption in browsers and
designing the whole protocol around it. For instance, WebSocket makes the
masking flag mandatory on upstream traffic, but the protocol supports
omitting it between servers.
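To illustrate that asymmetry, here is a minimal sketch in Python (the function
name is mine, but the XOR operation is the one RFC 6455 specifies for
client-to-server frames; server-to-client frames simply omit it):

```python
import os

def mask_payload(payload: bytes, key: bytes) -> bytes:
    """XOR-mask a WebSocket payload with a 4-byte key (RFC 6455, section 5.3).
    The same function unmasks, since XOR is its own inverse."""
    return bytes(b ^ key[i % 4] for i, b in enumerate(payload))

# A client frame carries a fresh random key and a masked payload;
# a server frame between backend machines can skip this step entirely.
key = os.urandom(4)
masked = mask_payload(b"hello", key)
assert mask_payload(masked, key) == b"hello"  # round trip restores the data
```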

Basically, since all sensitive sites already make use of TLS, I don't think
we can make them safer by mandating TLS for them. However, mandating TLS
will make it harder to work on the backend; it will very often be a
counter-productive effort which increases costs a lot (cert management,
troubleshooting, etc.) with no added benefit.

> 2.5 Server push
> We provide real-time, user-to-user text messaging on multiple
> platforms via multiple protocols.  For HTTP clients, we use long
> polling and streamed, chunked responses (one chunk per message) as a
> lowest common denominator solution.  This solution works, but it moves
> a lot of protocol processing complexity into client-side JavaScript.
> We are interested in the development of a standardized server push
> mechanism to replace long polling in HTTP/2.0
> A subtle but important requirement for applications such as web-based
> chat is that data sent from the server must be pushed without delay.
> We would like to see the inclusion in HTTP/2.0 of a "no buffering"
> flag at either the message or the chunk level, to indicate to the
> recipient and any intermediaries that the flagged content should not
> be delayed for buffering or I/O-coalescing purposes.

I think that what you're describing here is precisely what WebSocket
offers, but I may be wrong, depending on your precise use cases. It
implicitly offers server push in the sense you're describing (push
of any data, not HTTP objects), and it automatically provides the
no-buffering flag: when HTTP gateways switch to WebSocket, they know this
is interactive traffic and stop buffering. I think your description
confirms the need to unify the transport layer to support both HTTP
and WS at the same time on the same connection.
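To make the buffering point concrete, here is a toy model (purely
illustrative, not any real gateway's code) of the decision an intermediary
makes: plain HTTP bodies may be coalesced into larger writes, while after a
WebSocket upgrade each frame is flushed immediately:

```python
def forward(chunks, upgraded_to_websocket: bool):
    """Toy model of an intermediary's buffering policy.
    Returns the list of writes the gateway would perform downstream."""
    out = []
    buf = b""
    for chunk in chunks:
        if upgraded_to_websocket:
            out.append(chunk)      # interactive traffic: flush every frame
        else:
            buf += chunk           # bulk traffic: coalesce small writes
            if len(buf) >= 4096:
                out.append(buf)
                buf = b""
    if buf:
        out.append(buf)            # final flush at end of message
    return out
```

A per-message or per-chunk "no buffering" flag in HTTP/2.0 would let a
gateway take the first branch without needing the connection to leave HTTP.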

> 3. Assessment of the HTTP/2.0 Proposals
> 3.3 Network-Friendly HTTP Upgrade
> We have not implemented Network-Friendly HTTP Upgrade, and we
> currently do not plan to implement it, due to the incompleteness of
> the specification and the lack of client implementations.

Note that its incompleteness was intentional: it was not meant to be a
complete proposal, but only to study alternative compression and upgrade
schemes.

> Assessment using our criteria:
>   * Multiplexing: supported
>   * Transport layer encryption: missing

Encryption only relies on the transport layer here so that we can still
support non-encrypted traffic between servers (think web services, etc.).

>   * Zero latency upgrade: missing

A solution for this is presented in paragraph 5 of the spec. Zero-latency
upgrade is mandatory in my opinion: we cannot afford to waste one round
trip on mobile terminals. As suggested there, the protocol also supports
zero-latency fallback, which is equally important in the many environments
which won't support HTTP/2 at the beginning: you don't want to waste a
round trip retrying over HTTP/1 when you see that the HTTP/2 connection
has failed.
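A quick back-of-the-envelope illustration of why this matters, assuming a
300 ms mobile round-trip time (the figure is illustrative, not a measurement):

```python
RTT_MS = 300  # assumed mobile round-trip time, for illustration only

# Explicit negotiation: one probe round trip, then the request itself.
probe_then_send = 2 * RTT_MS

# Speculative send: the first request goes out in the new framing at once,
# so the common (supported) case costs a single round trip; an in-band
# rejection lets the client retry as HTTP/1 without a fresh connection.
speculative_ok = 1 * RTT_MS
speculative_fallback = 2 * RTT_MS
```

In other words, speculative upgrade with in-band fallback never does worse
than explicit negotiation, and saves a full round trip whenever HTTP/2 is
supported.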

> Additional considerations:
> Network-Friendly HTTP Upgrade uses a Transport Header Frame to
> communicate headers that will be the same for every request on the
> connection.  While this is a good solution for the connection between
> a browser and a load balancer it does not work between the load
> balancer and an upstream web server, where requests from different
> clients may be multiplexed onto the same connection.

In recent work, we have noted that this offers minimal benefit: basically,
you'd have only the Via and Host headers, which is almost useless. However,
we also know that what flows between a load balancer and a server is much
less sensitive to latency; right now this is almost always 1 or 10 Gbps
connectivity through a single switch. Our recent experiments have shown
that we can get rid of this transport header classification without
compromising performance.

> The use of a registry for well-known header field names would allow
> for compact encoding of those names, but we foresee interoperability
> problems as new fields are added.  A client will not be able to use
> the assigned numeric code for a new field without knowing whether the
> server also knows about it.

It's just a matter of protocol versioning. We already have a registry of
HTTP headers and it works well enough. An HTTP version would mandate one
encoding and clients would respect it. Tentative new headers would be sent
in clear-text form until the connection's advertised version is high enough
to send them in encoded form. Also, this has apparently worked well for
WAP, and I think that we could restart from that encoding instead of
reinventing a fresh new one (maybe it needs a bit of refreshing, though).
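A sketch of how such versioned encoding could work (the registry contents,
codes and version numbers below are invented for illustration, not taken
from any spec):

```python
# Hypothetical registry: header name -> (numeric code, protocol version
# in which that code was assigned).
REGISTRY = {
    "host":            (0x01, 1),
    "user-agent":      (0x02, 1),
    "accept-encoding": (0x03, 1),
    "dnt":             (0x04, 2),   # assigned in a later protocol revision
}

def encode_header(name: str, negotiated_version: int):
    """Emit a compact code only when the negotiated version guarantees
    the peer knows it; otherwise fall back to the clear-text name.
    This is how versioning sidesteps the interoperability problem: a new
    header stays literal until both ends advertise a recent enough version."""
    entry = REGISTRY.get(name.lower())
    if entry is not None:
        code, since_version = entry
        if since_version <= negotiated_version:
            return ("code", code)
    return ("literal", name)
```

For example, "DNT" would be sent literally on a version-1 connection but as
its one-byte code on a version-2 connection, with no ambiguity on either side.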

Best regards,

Received on Sunday, 15 July 2012 06:35:51 UTC