
Re: Rechartering HTTPbis

From: Willy Tarreau <w@1wt.eu>
Date: Sat, 28 Jan 2012 00:58:49 +0100
To: patrick mcmanus <pmcmanus@mozilla.com>
Cc: "'HTTP Working Group'" <ietf-http-wg@w3.org>
Message-ID: <20120127235849.GA22945@1wt.eu>
Hi Pat,

On Fri, Jan 27, 2012 at 05:42:31PM -0500, patrick mcmanus wrote:
> On 1/27/2012 4:46 PM, David Morris wrote:
> >I talked with a hardware vendor yesterday who had implemented an HTTP 
> >server and client (and other stuff) in 128KB of ram in a special 
> >device. Adding mandatory zlib support would kill his product. 
> and mandatory TLS will probably bury it :) (*)
> 
> but I don't think extreme ram constraint can be a serious requirement 
> for a next generation web transport protocol.

I agree on this precise point, but that is not counting devices that
have to deal with huge numbers of connections.

For instance, haproxy requires around 40 GB of RAM to handle one million
concurrent connections on Linux, and that is with 8 kB buffers and a
finely tuned network stack. 8 kB buffers are already quite small for
today's web, but as you can guess, people running with such large
numbers of sessions only deal with small request/response patterns and
will probably migrate to WebSocket. Having to handle zlib in such usages
is totally useless: it will bring no noticeable network benefit on such
traffic, and having to decompress the stream to parse it on a machine
which is already almost packet-bound will require tremendous amounts of
CPU power and will probably add unneeded latency. To get back to the
haproxy example, right now it is able to process up to 2 million tiny
requests per second on a Core2 at 3 GHz (this is cheating, using
pipelining etc). The parser is optimized to avoid reading the same byte
twice. Having to deal with the complexity of inflating the stream just
to get the headers will significantly impact the request rate, for a
gain that will be minor if we already manage to reduce header size and
count.
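As a rough back-of-the-envelope check of the figures above (the 8 kB
buffers and ~40 GB total come from the example; the per-connection
overhead beyond the two buffers is my assumption, not a measured
haproxy value):

```python
# Rough sketch of per-connection memory for a proxy at 1M connections.
# BUFFER matches the 8 kB figure above; OVERHEAD (kernel sockets,
# session bookkeeping) is an illustrative assumption.

BUFFER = 8 * 1024                  # one 8 kB buffer per direction
OVERHEAD = 24 * 1024               # assumed per-session kernel + state cost

per_connection = 2 * BUFFER + OVERHEAD       # 40 kB per session
total_gb = per_connection * 1_000_000 / 1e9  # one million concurrent sessions

print(per_connection, round(total_gb, 1))    # → 40960 41.0
```

Under those assumptions the total lands right around the 40 GB
mentioned above, which is why shaving per-connection state matters far
more here than shaving bytes on the wire.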

I agree this is another extreme case, but we're trying to make HTTP
scalable and cheaper on resources. And there are product vendors who
already manage to support much higher connection counts because they
have access to much lower layers and optimize their resources even more.
On the other hand, the added complexity of handling a compressed stream
will surely affect their ability to sustain the same performance levels.

> All the folks running computers in their pockets with 4000x that memory 
> deserve an experience with always on encryption and compression to deal 
> with the high latency mobile world we're looking at.  Just one vendor, 
> Apple, shipped 93 million such devices last year .

But here we're comparing apples and oranges (no pun intended). My opinion
is that any CPU will be able to handle tens of connections at a time.
Given David's numbers, a CPU equipped with 1 MB of RAM would be able to
handle ten connections. The issue really is on the server side and, more
importantly, the intermediary side. We need these components to scale if
we want to serve more users.

> Making security optional in HTTP/1 has been a disaster (firesheep and 
> cookies anyone?).

But making it mandatory will be even worse in my opinion. This will result
in new "dummy" implementations aimed at people who don't need the feature.
We'll see NULL ciphers again and things like that. This will also easily
result in users getting even more accustomed to clicking "I understand the
risks" when presented with a security warning. Firesheep is just a joke
right now; I'm more concerned by malware running inside browsers, which is
unaffected by encryption and authentication.

> Making compression optional isn't a security disaster 
> of course, but it has generally been a giant performance opportunity 
> lost.

For web browsing, yes, but right now HTTP is used for many other things.
One of my customers uses curl to upload compressed log files every night
over HTTP. Added compression here would not bring anything at all.
Similarly, most downloads are already compressed files, images, etc.
Compression really is useful for many things, but not for all, and it
comes with a cost.
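The point is easy to demonstrate with the standard zlib module; the log
line is made up, and random bytes stand in for an already-compressed
upload like the nightly log files mentioned above:

```python
# Sketch: compressing already-compressed (high-entropy) data is a net
# loss, while plain text compresses well. os.urandom() simulates a
# pre-gzipped payload, which deflate cannot shrink further.
import os
import zlib

log_text = b"Jan 27 23:58:49 proxy haproxy[1]: connect ok\n" * 100
already_compressed = os.urandom(4096)   # stands in for a .gz upload

print(len(zlib.compress(log_text)))            # far smaller than the input
print(len(zlib.compress(already_compressed)))  # slightly *larger* than 4096
```

Deflate falls back to stored blocks on incompressible input, so the
second call pays header and framing overhead for zero gain, which is
exactly the cost a mandatory-compression rule would impose on such
uploads.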

> It was all reasonable at the time, but this is a new time. There 
> are tail cases where each approach doesn't buy you much, but being able 
> to rely on the property (and thus that it won't be unapplied in an 
> important case out of convenience, cheapness, whatever) is far more 
> important.

In fact, I like the idea of having compression, encryption, etc. well
defined in the protocol, but negotiated between the parties. This way it's
possible to continue to use HTTP for things other than pure browsing
without the impact of unused or unwanted features.
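A minimal sketch of what such negotiation already looks like in
HTTP/1.1 via Accept-Encoding (the server-side preference order and the
function itself are illustrative, not any particular implementation):

```python
# Sketch of per-request negotiation: compression is defined by the
# protocol (Accept-Encoding / Content-Encoding) but either side may
# decline it without breaking interoperability.
def choose_encoding(accept_encoding: str) -> str:
    """Pick a Content-Encoding the client advertised, or none at all."""
    offered = {token.split(";")[0].strip()
               for token in accept_encoding.split(",")}
    for encoding in ("gzip", "deflate"):   # assumed server preference order
        if encoding in offered:
            return encoding
    return "identity"                      # no shared codec: send as-is

print(choose_encoding("gzip, deflate;q=0.5"))  # a browser gets compression
print(choose_encoding(""))                      # a log uploader opts out
```

A client that would gain nothing from compression simply omits the
header, and neither side pays for the feature.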

Regards,
Willy
Received on Friday, 27 January 2012 23:59:17 GMT
