
Re: Support for gzip at the server #424 (Consensus Call)

From: Roy T. Fielding <fielding@gbiv.com>
Date: Tue, 18 Mar 2014 14:01:03 -0700
Message-Id: <D88941AC-A7CD-4645-9989-518754393CD6@gbiv.com>
To: HTTP Working Group <ietf-http-wg@w3.org>

It might help to understand that chunked requests are universally
supported by servers but not universally supported by resources.

In particular, it is impossible to implement unlimited chunked
request bodies via a CGI script because that gateway protocol
requires that the content length be set in an environment variable
before the command is invoked. I think the same limitation exists in
Servlets, because that API was largely a copy of CGI. In contrast, it is
trivial to implement request streaming with an Apache module, since
the server's protocol filter handles the chunks automatically
if told to do so.  It is also easy to implement limited request
buffers for a legacy back-end, configurable on a per-resource basis,
provided that sufficient protections against denial-of-service attacks
are in place.
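To make the CGI limitation concrete, here is a minimal Python sketch (mine, not from the message) of the gateway contract: CGI (RFC 3875) hands the script the body length in the CONTENT_LENGTH meta-variable, set before the script runs, so a chunked request with no known length cannot be passed through without the gateway first buffering the entire body. The function and demo values are illustrative only.

```python
import io


def read_cgi_body(environ, stdin):
    """Read a CGI request body the way RFC 3875 requires.

    CONTENT_LENGTH must already be set in the environment when the
    script is invoked, which is exactly why an unlimited chunked
    (unknown-length) request body cannot reach a CGI script as a
    stream.
    """
    length = environ.get("CONTENT_LENGTH")
    if length is None:
        # A chunked request carries no Content-Length; the gateway
        # would have to buffer the whole body first just to compute
        # this value before launching the script.
        raise ValueError("CONTENT_LENGTH not set; chunked body unsupported")
    return stdin.read(int(length))


# Demo: the gateway has already buffered the body and set its length.
body = b"field=value"
env = {"CONTENT_LENGTH": str(len(body))}
print(read_cgi_body(env, io.BytesIO(body)))
```

An Apache module, by contrast, reads the body bucket by bucket as chunks arrive, so no length needs to be known up front.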

It would be a terrible mistake to limit HTTP/2 to the worst
of old implementations.  That is the opposite of HTTP's design
for flexible extensibility.  There are hundreds (if not thousands) of
implementations of HTTP/1.1 that have no problem whatsoever with
compression, chunked encoding, or any of the other features of HTTP.
That is because the people installing them control the network in which
those features are enabled, and can remove any products that get them
wrong. HTTP/2 should focus on making features self-descriptive,
rather than inventing limitations on use.

Received on Tuesday, 18 March 2014 21:01:20 UTC
