
should tools like wget implement HTTP 2.0?

From: <bizzbyster@gmail.com>
Date: Sun, 3 Nov 2013 08:59:46 -0800
Message-Id: <E3BDDC87-2BD4-4135-9E6B-9F400B5EBD70@gmail.com>
To: HTTP Working Group <ietf-http-wg@w3.org>

The introduction, pasted below, is a nice summary of the benefits of HTTP 2.0. But it made me realize there is nothing in it for either uploads or large file downloads. Reading through the rest of the spec, I still couldn't find any benefits for simple file transfers. So, a couple of questions:

Is there any reason why HTTP file transfer clients like curl and wget should ever implement 2.0?

More radically, since there is no benefit for uploads, should HTTP 2.0 even support the upload verbs?

Or, is the argument that 2.0 flow control is less effective when non-2.0 TCP connections between the browser and the content server compete with the 2.0 connection (connections it cannot account for in its flow control dance), and that uploads and large file downloads should therefore use HTTP 2.0 too? That seems like a small benefit, and it only kicks in when the user is doing both web browsing (transferring lots of small files efficiently with 2.0) and transferring large files.
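To make the flow-control question concrete, here's a toy sketch (class and method names are mine, not from the spec) of HTTP/2-style credit accounting: a sender may only emit DATA up to the smaller of the per-stream window and the shared connection window, which is what lets a receiver throttle one large transfer without stalling small interleaved requests on the same connection.

```python
# Toy model of HTTP/2-style flow control. Names are illustrative;
# only the two-level window idea comes from the draft.

class FlowControlledConnection:
    DEFAULT_WINDOW = 65_535  # initial window size in the draft spec

    def __init__(self):
        self.connection_window = self.DEFAULT_WINDOW
        self.stream_windows = {}

    def open_stream(self, stream_id):
        self.stream_windows[stream_id] = self.DEFAULT_WINDOW

    def sendable(self, stream_id, want):
        # DATA we may send now: capped by both the stream and connection windows.
        return min(want, self.stream_windows[stream_id], self.connection_window)

    def send(self, stream_id, n):
        allowed = self.sendable(stream_id, n)
        self.stream_windows[stream_id] -= allowed
        self.connection_window -= allowed
        return allowed

    def window_update(self, stream_id, increment):
        # Receiver grants more credit (a WINDOW_UPDATE frame);
        # stream 0 refills the connection-level window.
        if stream_id == 0:
            self.connection_window += increment
        else:
            self.stream_windows[stream_id] += increment

conn = FlowControlledConnection()
conn.open_stream(1)  # a large file download
conn.open_stream(3)  # a small page asset
print(conn.send(1, 100_000))     # 65535: capped by the stream window
print(conn.sendable(3, 10_000))  # 0: the big download drained the connection window
conn.window_update(0, 65_535)    # receiver refills the connection window
print(conn.sendable(3, 10_000))  # 10000: the small request can proceed again
```

Note how the large download exhausts the shared connection window and the small request stalls until the receiver refills it; a competing non-2.0 TCP connection is invisible to this accounting, which is the scenario in the question above.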

Thanks,
 
Peter


1.  Introduction

   The Hypertext Transfer Protocol (HTTP) is a wildly successful
   protocol.  However, the HTTP/1.1 message format ([HTTP-p1], Section
   3) is optimized for implementation simplicity and accessibility, not
   application performance.  As such it has several characteristics that
   have a negative overall effect on application performance.

   In particular, HTTP/1.0 only allows one request to be outstanding at
   a time on a given connection.  HTTP/1.1 pipelining only partially
   addressed request concurrency and suffers from head-of-line blocking.
   Therefore, clients that need to make many requests typically use
   multiple connections to a server in order to reduce latency.
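The head-of-line blocking the paragraph above describes can be sketched with a toy timeline comparison (not a network simulation, and it deliberately ignores bandwidth sharing): pipelined HTTP/1.1 responses must come back in request order, so a slow first response delays everything behind it, while interleaved streams let each response finish on its own schedule.

```python
# Toy illustration of head-of-line blocking. Durations are arbitrary
# units; the multiplexed case is idealized (no bandwidth contention).

def pipelined_finish_times(durations):
    # HTTP/1.1 pipelining: responses are serialized in request order.
    finish, t = [], 0
    for d in durations:
        t += d
        finish.append(t)
    return finish

def multiplexed_finish_times(durations):
    # Idealized interleaving: no response waits behind another.
    return durations[:]

slow_first = [10, 1, 1]  # one slow resource ahead of two fast ones
print(pipelined_finish_times(slow_first))    # [10, 11, 12]
print(multiplexed_finish_times(slow_first))  # [10, 1, 1]
```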

   Furthermore, HTTP/1.1 header fields are often repetitive and verbose,
   which, in addition to generating more or larger network packets, can
   cause the small initial TCP congestion window to quickly fill.  This
   can result in excessive latency when multiple requests are made on a
   single new TCP connection.
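Some back-of-the-envelope arithmetic (the header size and request count are my assumptions, not figures from the draft) shows how repeated request headers can fill the initial congestion window before any response data flows:

```python
# Assumed, typical numbers -- not from the spec.
header_bytes = 700       # rough size of one browser request's headers
requests = 30            # rough number of assets on one page
init_cwnd_segments = 10  # common initial congestion window (RFC 6928)
mss = 1460               # typical TCP maximum segment size

total_header_bytes = header_bytes * requests
init_cwnd_bytes = init_cwnd_segments * mss

print(total_header_bytes)                    # 21000
print(init_cwnd_bytes)                       # 14600
print(total_header_bytes > init_cwnd_bytes)  # True: headers alone overflow it
```

Under these assumptions the headers alone exceed what the congestion window allows in the first round trip, forcing extra round trips before the page's requests are even fully sent.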

   This document addresses these issues by defining an optimized mapping
   of HTTP's semantics to an underlying connection.  Specifically, it
   allows interleaving of request and response messages on the same
   connection and uses an efficient coding for HTTP header fields.  It
   also allows prioritization of requests, letting more important
   requests complete more quickly, further improving performance.

   The resulting protocol is designed to be more friendly to the
   network, because fewer TCP connections can be used, in comparison to
   HTTP/1.x.  This means less competition with other flows, and longer-
   lived connections, which in turn leads to better utilization of
   available network capacity.

   Finally, this encapsulation also enables more scalable processing of
   messages through use of binary message framing.
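The binary framing the last paragraph mentions boils down to a fixed 9-octet frame header in the draft: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a 31-bit stream identifier (high bit reserved). A minimal sketch of packing and parsing it:

```python
import struct

def pack_frame_header(length, frame_type, flags, stream_id):
    # ">I" yields 4 big-endian bytes; dropping the first gives the 24-bit length.
    return (struct.pack(">I", length)[1:]
            + struct.pack(">BBI", frame_type, flags, stream_id & 0x7FFFFFFF))

def unpack_frame_header(header):
    length = int.from_bytes(header[0:3], "big")
    frame_type, flags, stream_id = struct.unpack(">BBI", header[3:9])
    return length, frame_type, flags, stream_id & 0x7FFFFFFF

hdr = pack_frame_header(16384, 0x0, 0x1, 5)  # DATA frame, END_STREAM, stream 5
print(len(hdr))                  # 9
print(unpack_frame_header(hdr))  # (16384, 0, 1, 5)
```

Because every frame starts with this fixed-size header, a receiver knows exactly how many bytes to read next without scanning for delimiters, which is what makes processing "more scalable" than parsing HTTP/1.x's text format.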
Received on Sunday, 3 November 2013 17:00:12 UTC
