HTTP/2.0 Expression of Interest: Akamai

Akamai has deployed a pervasive, highly distributed cloud
optimization platform with over 105,000 servers in 78 countries across
more than 1,000 networks, and delivers between 15% and 30% of all HTTP
traffic.

Akamai is the leading cloud platform for helping enterprises provide
secure, high-performing user experiences on any device, anywhere. At
the core of the company's solutions is the Akamai Intelligent
Platform™, providing extensive reach coupled with reliability,
security, visibility, and expertise. Akamai removes the complexities
of connecting the increasingly mobile world, supporting 24/7 consumer
demand, and enabling enterprises to securely leverage the cloud.

At Akamai we are thrilled about the opportunity to participate in
developing HTTP/2.0 and to share our experience from 14 years of
serving massive loads of HTTP/1.0 and HTTP/1.1 traffic, as well as the
perspective of being a surrogate for a large portion of the world's
web traffic.

Of the submitted proposals, Akamai has so far implemented SPDY draft 2.
The reasons for implementing SPDY are twofold: to test a new proposal
aimed at addressing performance needs, and to benefit from its wide
adoption and support by a significant number of clients/browsers,
which enables real-world testing and evaluation of the protocol.

Criteria for HTTP/2.0: When considering our objectives for HTTP/2.0,
we believe the chosen proposal should address and improve on the
following aspects, which we see as key for today's Internet:

Performance: Help make HTTP and web applications faster, addressing
existing protocol limitations.

Efficiency: Improve network resource utilization as well as client and
server efficiency: faster protocol handling and elimination of
redundant or obsolete features. Specifically, when taking it to scale
(see below), we need to ensure that the protocol can be handled and
parsed efficiently at large numbers of requests per server.

Security: The protocol should improve security and make it easy to
use. Security does not necessarily mean forced encryption for all
traffic, but the protocol should provide the means for an end user to
validate that the server is trusted, and that the content is indeed
served from the requested server.

Scale and serviceability: The proposal should also take into
consideration the existing network infrastructure used to serve large
sites, as well as overall web platforms: content served from
potentially thousands of servers in multiple locations, proxy servers
and multi-tiered architectures, and servers serving hundreds or
thousands of different hosts/domains.

Specifically, we believe that the protocol should include the
following technologies, which help achieve the above goals:

1. Multiplexing requests on a single connection: Given that multiple
hosts can be served by the same server, we should also support
multiplexing requests for multiple hosts on the same single
connection, while ensuring the security and privacy of the served
hostnames/domains (see the connection-reuse sketch after this list).
This is especially important for mobile devices, where we would like
to reuse a single connection for as many domains as possible. This may
also call for additional requirements such as certificate push and DNS
push over HTTP.

2. Prioritization and flow control of requests within a single
connection

3. Server push

4. Header compression and optimization: Eliminate per-request
redundancy and parsing overhead, and better structure some headers for
improved parsing and header handling (see the header-compression
sketch after this list).

5. Content integrity: For both full and partial objects. TCP payload
corruption happens, and not all of it is caught by the TCP checksum
(see the integrity sketch after this list).
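
To make point 1 concrete, the following rough Python sketch (not part
of any proposal, and using hypothetical hostnames) shows one way a
client could decide whether a second hostname may safely reuse an
existing TLS connection: the certificate already presented for the
first host must also name the second one.

    import socket
    import ssl

    def san_dns_names(host, port=443):
        # Connect over TLS and collect the DNS names listed in the
        # server certificate's subjectAltName extension.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return [v for (k, v) in cert.get('subjectAltName', ()) if k == 'DNS']

    def may_coalesce(existing_host, candidate_host, port=443):
        # The candidate host may reuse the existing connection only if the
        # certificate for existing_host also covers candidate_host, either
        # exactly or via a single-label wildcard.
        for name in san_dns_names(existing_host, port):
            if name == candidate_host:
                return True
            if name.startswith('*.') and \
               candidate_host.split('.', 1)[-1] == name[2:]:
                return True
        return False

    # e.g. may_coalesce('www.example.com', 'images.example.com')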
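
On point 4, the gain from eliminating per-request header redundancy
can be illustrated with a small Python sketch that keeps a single zlib
compression context for the whole connection, roughly as SPDY does;
the header block and its values are made up for illustration.

    import zlib

    headers = (
        b"GET /assets/style.css HTTP/1.1\r\n"
        b"Host: www.example.com\r\n"
        b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
        b"Accept: text/html,application/xhtml+xml\r\n"
        b"Accept-Encoding: gzip, deflate\r\n"
        b"Cookie: sessionid=abcdef0123456789\r\n"
        b"\r\n"
    )

    # One compression context is kept for the whole connection, so later
    # header blocks are encoded as references to what was already sent.
    comp = zlib.compressobj()
    first = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)
    repeat = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)

    print(len(headers), len(first), len(repeat))
    # The repeated, near-identical header block shrinks to a small
    # fraction of its original size, and that saving recurs per request.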
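
On point 5, one possible shape for integrity protection of both full
and partial objects is a digest over the whole payload plus digests
over fixed-size ranges, so that a byte-range response can be checked
on its own. The Python sketch below is only an illustration, not a
proposed wire format.

    import hashlib

    def object_digests(payload, chunk_size=64 * 1024):
        # Digest the full object, and each fixed-size range of it, so that
        # a partial (byte-range) response can be validated independently.
        whole = hashlib.sha256(payload).hexdigest()
        ranges = [
            hashlib.sha256(payload[off:off + chunk_size]).hexdigest()
            for off in range(0, len(payload), chunk_size)
        ]
        return whole, ranges

    # A receiver holding the expected digests can detect payload
    # corruption that slipped past the 16-bit TCP checksum.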


Aside from the techniques called out in the different proposals, there
are some additional requirements we think are critical in order to
take things to scale:

1. TLS should not be a requirement: Though requiring TLS certainly
greatly simplifies protocol adoption while generally raising the
security level, it would put an unnecessary and significant burden on
infrastructure. Computing costs for symmetric encryption tend to be
between 10% and 20% over non-encrypted traffic. This additional cost
is potentially a blocker to adoption of a protocol that requires all
traffic to go over TLS, as medium-to-large server deployments would
need to scale accordingly.

2. Requiring SNI support (or equivalent): Whether or not TLS itself is
a requirement, SNI should be called out as a requirement whenever TLS
is used. Without SNI, offering TLS for a domain requires a dedicated
IP address; given the limited supply of IPv4 addresses, serving
multiple certificates on a single IP address becomes a necessity for
scaling SSL (see the SNI sketch after this list). We believe
supporting SNI should be a requirement, and should not be optional.

3. Easy detection of the Host header, for efficient load balancing and
service handling.

4. Controlling server mapping and optimizing other layers, such as
certificate handling and DNS/host mapping for end users; specifically,
pushing DNS data and certificates to user agents. In a world where
request multiplexing is supported and long-lived connections are the
standard, mechanisms for better handling connections at the server
level are required: for instance, gracefully handing over a connection
to an alternate server, for load management or to ensure better
performance and reduced connection latency. Traditionally these would
be handled by DNS or other protocols, but we believe this control
should be added directly between the server and client.
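
To illustrate point 2 above: with SNI, a single listening address can
present a different certificate for each requested hostname. The
sketch below uses Python's ssl module; the hostnames and certificate
file names are hypothetical.

    import ssl

    # One context per hostname, all served from the same IP address/port.
    contexts = {}
    for name, pem in (("www.example.com", "example-com.pem"),
                      ("shop.example.net", "example-net.pem")):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(pem)
        contexts[name] = ctx

    listener_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    listener_ctx.load_cert_chain("example-com.pem")  # fallback certificate

    def pick_certificate(ssl_socket, server_name, original_context):
        # The server_name field of the ClientHello (SNI) tells the server
        # which certificate the client expects, before the handshake
        # completes.
        if server_name in contexts:
            ssl_socket.context = contexts[server_name]

    listener_ctx.sni_callback = pick_certificate
    # listener_ctx would then wrap the accepting socket; without SNI,
    # only a single certificate could be offered on this address.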

Evaluation of the existing alternatives: So far we have implemented
SPDY, as clients are publicly available at scale. We haven't
implemented other proposals, as there was no widely deployed client
implementation that we could test with.
