
Fwd: SPDY Feedback

From: Mark Nottingham <mnot@mnot.net>
Date: Tue, 17 Jul 2012 17:47:32 +1000
To: HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <09BEE96E-A450-4ACF-B87F-A5B3B20E4468@mnot.net>
FYI


Begin forwarded message:
> From: Kent Alstad <kent.alstad@strangeloopnetworks.com>
> Subject: SPDY Feedback
> Date: 17 July 2012 5:39:12 PM AEST
> To: "mnot@mnot.net" <mnot@mnot.net>
> 

> It’s clear that over the years, HTTP/1.1 has come to be a less-than-optimal protocol for the modern web.  Its core inefficiencies are now well understood and include the lack of true request multiplexing, a poor concurrency model, poor use of the transport layer, and serialized messaging, among others.  HTTP/2.0 must be able to solve these problems.  As such, a model that utilizes a single long-lived TCP connection with support for pipelining, out-of-order requests/responses, response interleaving to overcome head-of-line blocking, and efficient use of the transport (e.g. elimination of redundant headers, header compression, etc) is desired.  These and other similar issues are core problems that the next version of the protocol must address.  At the same time, the proliferation of HTTP/1.1 has left us with a huge global infrastructure of clients, servers, and intermediary proxies (CDNs, load balancers, security devices, and other forward and reverse surrogates) and the next version of the protocol must enable a smooth transition for this infrastructure.  Maintaining the core semantics of HTTP is one way to ensure smooth integration with existing infrastructure.
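[Editor's illustration] The interleaving idea above can be made concrete with a toy sketch. This is hypothetical framing, not SPDY's (or any proposed) wire format: tagging each frame with a stream ID lets multiple responses share one TCP connection, and the receiver reassembles each stream regardless of arrival order, so a slow response no longer blocks a fast one.

```python
from dataclasses import dataclass

# Toy framing model (hypothetical, not SPDY's wire format): each frame
# carries a stream ID, so responses for several requests can share one
# TCP connection and arrive interleaved rather than strictly in order.
@dataclass
class Frame:
    stream_id: int
    payload: bytes
    fin: bool = False  # True on the last frame of a stream

def demux(frames):
    # Reassemble each stream's body independently of arrival order.
    streams = {}
    for f in frames:
        streams.setdefault(f.stream_id, bytearray()).extend(f.payload)
    return {sid: bytes(body) for sid, body in streams.items()}

# A slow response (stream 1) no longer blocks a fast one (stream 3):
wire = [
    Frame(1, b"<html>"),
    Frame(3, b"body{color:red}", fin=True),
    Frame(1, b"...</html>", fin=True),
]
print(demux(wire))  # both streams arrive intact despite interleaving
```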
> 
> Our experience with proposed HTTP/2.0 frameworks has been limited to SPDY.  However, the lessons learned from SPDY can hopefully inform decisions as we move forward with HTTP/2.0.  SPDY has shown great promise and success through the following:
> 
> - using a single, long-lived TCP connection that strives to reach the connection’s maximum bandwidth potential
> - providing a framework for true out-of-order pipelining with response interleaving, which together allow non-serialized communication between two endpoints with fewer opportunities for head-of-line blocking over the single connection
> - compressing headers to save bandwidth, with the ability to avoid header redundancy across multiple requests/responses
> - Google’s backing of the protocol and Chrome’s support for it, which have allowed quicker collection of real-world data; the promise of new browsers and services supporting SPDY also helps its deployment
> - the promise of server push, which would allow a server to proactively send responses to a client
> - request prioritization, provided the client is well behaved
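[Editor's illustration] The header-compression point in the list above can be sketched with the standard `zlib` module. This is not SPDY's exact wire format (SPDY additionally seeds zlib with a predefined dictionary); it only shows the mechanism: one compression context stays open for the life of the connection, so headers repeated on later requests compress down to back-references into earlier frames. The header names and values here are made up.

```python
import zlib

# One zlib stream per connection: later header blocks can reference
# bytes already sent in earlier ones, eliminating cross-request redundancy.
compressor = zlib.compressobj()

def compress_header_block(headers: str) -> bytes:
    data = compressor.compress(headers.encode())
    # Z_SYNC_FLUSH emits a complete, decodable frame while keeping
    # the shared compression context alive for the next block.
    data += compressor.flush(zlib.Z_SYNC_FLUSH)
    return data

common = (
    "host: www.example.com\r\n"
    "user-agent: Mozilla/5.0 (compatible; ExampleBrowser/1.0)\r\n"
    "accept-encoding: gzip, deflate\r\n"
)
first = compress_header_block("path: /index.html\r\n" + common)
second = compress_header_block("path: /style.css\r\n" + common)
print(len(first), len(second))  # the second frame is far smaller
```

A receiver holds a matching `zlib.decompressobj()` for the connection and feeds it each frame in order, recovering the full header text.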
> 
> At the same time, SPDY deployment presented various challenges that we hope will be addressed within the HTTP/2.0 framework.  These include:
> 
> - As a device often positioned behind reverse proxies and load balancers, the Strangeloop Site Optimizer was frequently unable to terminate client TCP connections (CDNs, load balancers, security devices, and other such reverse proxies were usually doing the terminating).  This severely limited the deployments in which SPDY could be used.  SPDY’s use of SSL as a transport didn’t help, since these proxies and load balancers often expect plain HTTP over the SSL transport.  HTTP/2.0 should strive to make these types of deployments (where not all infrastructure is compliant) easier.
> - The requirement to always use TLS-secured connections with SPDY creates severe troubleshooting barriers.  SPDY makes a strong case for its “everything secure” mandate, but we cannot ignore the needs of network operators and those tasked with troubleshooting an already complex infrastructure.  An always-secure HTTP/2.0 framework would bring massive, and arguably unnecessary, challenges to these disciplines.  Security is likely best left out of HTTP/2.0’s scope, allowing the protocol to operate over either clear text or TLS, as HTTP does today.  The onus would then fall back to content providers and website owners, who would have a choice over the security model governing their content.
> - As the Speed+Mobility spec argues, it is conceivable that server push could fall outside the scope of the protocol.  Though incredibly useful at its core, the challenge has been, and continues to be, determining the meta-information necessary to decide what to push.  Although this leaves room for innovation, the protocol itself should strive to provide a flexible framework for these types of capabilities.  It’s unclear what those capabilities are at this time, but a model that, for example, allows servers to communicate to clients the same way clients communicate to servers (with bidirectional feedback) would allow more flexibility, since both endpoints could actively participate in determining what content should proactively be sent to a client.
> 
> SPDY is a successful protocol with field-proven methods that solve some of HTTP’s largest current issues.  It has shown tangible benefits in real-world deployments, proving that it’s time for a new generation of HTTP to arrive.  The lessons learned from SPDY so far should greatly inform upcoming decisions regarding HTTP/2.0.  Coupling SPDY’s successful techniques with a model that provides consistent semantics, compatibility with existing infrastructure, and ease of troubleshooting will surely help us build a great next version of HTTP and ensure its future success.

--
Mark Nottingham   http://www.mnot.net/
Received on Tuesday, 17 July 2012 07:48:03 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Tuesday, 17 July 2012 07:48:13 GMT