HTTP2 Expression of Interest

This is Facebook's response to the call for expressions of interest
in HTTP/2.0: http://trac.tools.ietf.org/wg/httpbis/trac/wiki/Http2CfI

1. Introduction

Facebook's mission is to give people the power to share and make the
world more open and connected.  Our web and mobile applications are
used by well over 900 million people worldwide.

At Facebook, we serve HTTP/1.1 from a globally distributed
infrastructure that operates at large scale.  We are interested in
sharing our experiences and plan to actively participate in the
development of HTTP/2.0.

We are currently implementing SPDY/2, due to the availability of
browser support and the immediate gains we expect to reap.  Although
we have not run SPDY in production yet, our implementation is almost
complete and we feel qualified to comment on SPDY from the
implementor's perspective.  We are planning to deploy SPDY widely at
large scale and will share our deployment experiences as we gain them.

The remainder of this response presents a protocol-neutral summary of
what we need in the next generation of HTTP, followed by an assessment
of each of the three current HTTP/2.0 proposals.

2. Criteria for the Next Version of HTTP

In order to provide faster and more secure online services to our
users, the features we need in HTTP/2.0 are:

  * Multiplexing
  * Transport layer encryption
  * Zero-latency upgrade
  * Per-request flow control
  * Server push

2.1 Multiplexing

Like many large web companies, we have invested in content packaging
mechanisms to reduce the number of round trips required to download a
web page.  While this has worked reasonably well for us, we see two
problems:

  * Many of the best practices in web performance optimization - for
    example, image spriting and domain sharding - are workarounds for
    HTTP/1.1's lack of widely-usable pipelining.  The next version of
    HTTP should fix that.
  * The complexity of these workarounds has limited their adoption.
    We want the whole web to be faster, not just our own site.

Thus we recently rebuilt our internal HTTP framework to support
multiplexing of many independent requests per connection, and we plan
to use this framework to support SPDY and the eventual HTTP/2.0
standard.
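
For illustration, a minimal sketch of the framing idea behind request
multiplexing -- many independent streams interleaved on one connection,
each identified by a stream ID.  The frame layout below is hypothetical
and is neither our framework's nor SPDY's wire format:

    import struct

    def encode_frame(stream_id, payload):
        # Hypothetical framing: 4-byte stream ID, 4-byte length, payload.
        # Frames from independent streams can be interleaved freely.
        return struct.pack("!II", stream_id, len(payload)) + payload

    def decode_frames(data):
        # Reassemble (stream_id, payload) pairs from the byte stream.
        offset = 0
        while offset + 8 <= len(data):
            stream_id, length = struct.unpack_from("!II", data, offset)
            offset += 8
            yield stream_id, data[offset:offset + length]
            offset += length

    # Two requests share one connection without one blocking the other.
    wire = encode_frame(1, b"GET /home") + encode_frame(3, b"GET /style.css")
    for stream_id, payload in decode_frames(wire):
        print(stream_id, payload)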

2.2 Transport layer encryption

We feel strongly that HTTP/2.0 should require transport encryption,
and we acknowledge that this position is potentially controversial.

RFC 2616 will likely be at least 15 years old by the time HTTP/2.0 is
ratified.  Comparing the Internet of today to the Internet of the late
1990s, two trends stand out:

  * The sophistication and surface area of attacks have grown
    dramatically.
  * The Internet user community has grown steadily, from a niche
    in 1999 to a third of the world's population in 2012.

We can't forecast what the Web will look like in 10-15 years, but
based on history we can assume that more and more personal information
will be flowing between users and applications, and that the user
population will continue to grow.

Mandating transport layer encryption will make things harder for
implementors such as ourselves, but in return it will offer greater
privacy and safety to the billions of people who use the Web today and
in the years to come.  We think this is a good thing.

At present, TLS is the pragmatic choice for encrypting the transport
due to its widespread implementation in the existing Web
infrastructure. We do not see the need to mandate TLS itself; if there
is an improved protocol in the future that supports both
authentication and encryption, that would be fine to use as well.

Regarding our deployment experience, we have deployed TLS at a large
scale using both hardware and software load balancers. We have found
that modern software-based TLS implementations running on commodity
CPUs are fast enough to handle heavy HTTPS traffic load without
needing to resort to dedicated cryptographic hardware. We serve all of
our HTTPS traffic using software running on commodity hardware.

2.3 Zero-latency upgrade

Some of the current HTTP/2.0 proposals use the HTTP/1.1 Upgrade header
to negotiate the use of HTTP/2.0.  We prefer the TLS NPN extension,
because it allows the immediate use of HTTP/2.0 on a newly established
TLS connection without an additional network round trip for the
upgrade.
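
As an illustration, a client-side sketch using Python's ssl module,
which exposes NPN via set_npn_protocols (whether NPN is available
depends on the underlying OpenSSL build; the host name and protocol
tokens below are examples):

    import socket
    import ssl

    # Advertise the protocols we speak; the choice is settled inside the
    # TLS handshake itself, so no round trip is spent on an Upgrade.
    ctx = ssl.create_default_context()
    ctx.set_npn_protocols(["spdy/2", "http/1.1"])

    with socket.create_connection(("www.example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="www.example.com") as tls:
            # The application protocol is known once the handshake returns.
            print("negotiated:", tls.selected_npn_protocol())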

2.4 Per-request flow control

HTTP proxying is an inherent part of our large, distributed
infrastructure.  The ability to multiplex HTTP streams from many
clients into a shared upstream transport is good for performance,
especially if there is a high network latency between the proxy and
the upstream server.  But different clients will produce and consume
data at different rates, so it is important to have per-stream flow
control.
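
A minimal sketch of the bookkeeping we have in mind, in the spirit of
SPDY/3's per-stream windows (the initial window size and names below
are illustrative):

    class StreamWindow:
        # Send-side flow control window for one multiplexed stream.
        def __init__(self, initial=64 * 1024):
            self.available = initial

        def consume(self, wanted):
            # Called before sending data; returns how much may go out now.
            sendable = min(wanted, self.available)
            self.available -= sendable
            return sendable

        def update(self, delta):
            # Called when the peer grants more window (e.g. WINDOW_UPDATE).
            self.available += delta

    # A slow client's stream stalls without stalling a fast client's
    # stream that shares the same upstream connection.
    slow, fast = StreamWindow(), StreamWindow()
    print(slow.consume(100 * 1024))  # capped at 65536 until more window arrives
    print(fast.consume(16 * 1024))   # unaffected: 16384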

2.5 Server push

We provide real-time, user-to-user text messaging on multiple
platforms via multiple protocols.  For HTTP clients, we use long
polling and streamed, chunked responses (one chunk per message) as a
lowest common denominator solution.  This solution works, but it moves
a lot of protocol processing complexity into client-side JavaScript.
We are interested in the development of a standardized server push
mechanism to replace long polling in HTTP/2.0.
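
For context, a sketch of the chunked-response technique described
above: each message is written as its own HTTP/1.1 chunk on a
long-lived response, and client-side JavaScript must parse the
resulting stream (the message bodies are made up):

    def chunk(message):
        # HTTP/1.1 chunked transfer encoding: hex length, CRLF, data, CRLF.
        return b"%x\r\n%s\r\n" % (len(message), message)

    # One chunk per chat message on a long-lived streamed response; a
    # final zero-length chunk (b"0\r\n\r\n") would end the response.
    body = chunk(b'{"from":"alice","text":"hi"}') \
         + chunk(b'{"from":"bob","text":"hello"}')
    print(body.decode())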

A subtle but important requirement for applications such as web-based
chat is that data sent from the server must be pushed without delay.
We would like to see the inclusion in HTTP/2.0 of a "no buffering"
flag at either the message or the chunk level, to indicate to the
recipient and any intermediaries that the flagged content should not
be delayed for buffering or I/O-coalescing purposes.
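
A hypothetical sketch of how an intermediary might honor such a flag
(the flag name and bit value are ours, not from any current proposal):

    NO_BUFFERING = 0x1  # hypothetical per-frame flag bit

    def relay(frame_flags, payload, downstream):
        # Intermediaries normally coalesce small writes, but a frame
        # marked "no buffering" is flushed toward the client at once.
        downstream.write(payload)
        if frame_flags & NO_BUFFERING:
            downstream.flush()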

3. Assessment of the HTTP/2.0 Proposals

3.1 SPDY

We are implementing SPDY and plan to deploy it widely in two roles:
speaking HTTP directly to users, and enabling faster communication
between geographically distant web servers on our network. Of the
three proposals, we believe it is the best basis for further work due
to the variety of client and server implementations, its proven usage
at large scale, and its full support for our HTTP/2.0 criteria.

Assessment using our criteria:

  * Multiplexing: supported
  * Transport layer encryption: SPDY does not currently require an
    encrypted transport, but current client implementations run it
    over TLS.
  * Zero latency upgrade: TLS NPN -- not required by the current
    SPDY draft, but used by current implementations -- allows the
    negotiation of SPDY or HTTP/1.1 with no extra network round
    trips.
  * Per-request flow control: supported in SPDY/3
  * Server push: supported

Additional considerations:

Of the three HTTP/2.0 proposals, SPDY currently is the one with the
largest user base, due to its inclusion in Firefox 13 and Chrome.

SPDY's header compression is a good, general-purpose solution, and
gzip is a good starting point, but we would prefer to see a more
lightweight compression algorithm for the HTTP/2.0 standard.
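
To illustrate why a shared compression context helps, a sketch using
plain zlib (SPDY additionally seeds the compressor with a predefined
dictionary, which this sketch omits; the header values are examples):

    import zlib

    compressor = zlib.compressobj()
    headers = (b"host: www.example.com\r\n"
               b"user-agent: ExampleBrowser/1.0\r\n"
               b"accept: */*\r\n")

    # Because the compressor state persists across requests on a
    # connection, the second, nearly identical header block compresses
    # far better than the first.
    first = compressor.compress(headers) + compressor.flush(zlib.Z_SYNC_FLUSH)
    second = compressor.compress(headers) + compressor.flush(zlib.Z_SYNC_FLUSH)
    print(len(headers), len(first), len(second))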

3.2 HTTP Speed+Mobility

We have not implemented HTTP Speed+Mobility, and we currently do not
plan to implement it. There is no sizable deployment of either clients
or servers, and it is missing features we feel are required.

Assessment using our criteria:

  * Multiplexing: supported
  * Transport layer encryption: missing
  * Zero latency upgrade: missing
  * Per-request flow control: supported
  * Server push: missing, but highlighted as a recommended area
    for additional development

Additional considerations:

HTTP Speed+Mobility's dependence on the HTTP Upgrade header is a
problem for us because it adds an additional network round trip in a
very common use case: loading several small, static resources from a
CDN.  Section 1.4 of the HTTP Speed+Mobility proposal notes the need
to tunnel the WebSockets stream over TLS when there is an
"incompatible proxy" (i.e. a proxy not known to support HTTP/2.0)
between the client and server.  We agree, and therefore the following
comparison uses TLS:

  * SPDY: 4 x RTT minimum elapsed time to fetch N resources on a
    new connection (without TLS session resumption or False Start):
    TCP handshake, plus 2 RTT for the TLS handshake with NPN, plus
    1 RTT to fetch the resources.
  * HTTP Speed+Mobility: 5 x RTT minimum elapsed time: TCP handshake,
    2 RTT for TLS handshake, 1 RTT to fetch the first resource via
    HTTP/1.1 with "Upgrade: websocket," and 1 RTT to fetch the
    remaining N-1 resources.

3.3 Network-Friendly HTTP Upgrade

We have not implemented Network-Friendly HTTP Upgrade, and we
currently do not plan to implement it, due to the incompleteness of
the specification and the lack of client implementations.

Assessment using our criteria:

  * Multiplexing: supported
  * Transport layer encryption: missing
  * Zero latency upgrade: missing
  * Per-request flow control: missing; Section 2 suggests that
    this is a TBD item
  * Server push: missing

Additional considerations:

Network-Friendly HTTP Upgrade uses a Transport Header Frame to
communicate headers that will be the same for every request on the
connection.  While this is a good solution for the connection between
a browser and a load balancer, it does not work between the load
balancer and an upstream web server, where requests from different
clients may be multiplexed onto the same connection.
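
A small illustration of the problem (the structures below are ours,
not the proposal's wire format):

    # Browser-to-load-balancer: every request on the connection really
    # does share the same client headers, so hoisting them into a
    # per-connection Transport Header Frame works well.
    browser_conn = {
        "connection_headers": {"user-agent": "ExampleBrowser/1.0"},
        "requests": ["GET /home", "GET /style.css"],
    }

    # Load-balancer-to-upstream: requests from different clients are
    # interleaved, so no single set of headers holds for the whole
    # connection and each request must carry its own copies again.
    upstream_conn = {
        "connection_headers": {},
        "requests": [
            ("client-a", "GET /home", {"user-agent": "ExampleBrowser/1.0"}),
            ("client-b", "GET /home", {"user-agent": "OtherBrowser/2.0"}),
        ],
    }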

The use of a registry for well-known header field names would allow
for compact encoding of those names, but we foresee interoperability
problems as new fields are added.  A client will not be able to use
the assigned numeric code for a new field without knowing whether the
server also knows about it.

4. Summary

We at Facebook are enthusiastic about the potential for an HTTP/2.0
standard that will deliver enhanced speed and safety for Web users.

Of the three proposals, we recommend the use of SPDY as the basis for
development of the HTTP/2.0 specification, but feel that the
requirement for a secure transport must be added. We plan to continue
developing and optimizing our HTTP, TLS, and SPDY implementations and
are deploying them on a global scale. We look forward to sharing our
experiences with the community.
