
Re: Review: http://www.ietf.org/id/draft-mbelshe-httpbis-spdy-00.txt

From: Willy Tarreau <w@1wt.eu>
Date: Wed, 29 Feb 2012 08:24:49 +0100
To: patrick mcmanus <pmcmanus@mozilla.com>
Cc: ietf-http-wg@w3.org
Message-ID: <20120229072449.GC32187@1wt.eu>
Hi Patrick,

On Tue, Feb 28, 2012 at 09:04:35PM -0500, patrick mcmanus wrote:
> The spdy compression scheme has proven itself very valuable in firefox 
> testing. Just like Mike has seen - we see 90% header size reductions, 
> including on cookies, because they are so repetitive between requests 
> even if the individual cookies don't compress well. Exclusively 
> prescribing maps for well-known values doesn't scale well to the value 
> side of the n/v pairs, and it biases strongly in favor of the status 
> quo, making it too difficult to deploy things like DNT in the future. 
> I'm totally open to other formats as well - but this has proven itself 
> to me with experience, and right now I trust experience first, out of 
> context data second, and good ideas third.
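The deduplication effect Patrick describes comes from running one compression context across the whole stream, so the second request's headers mostly become back-references to the first. A minimal sketch with zlib (header values here are invented for illustration, not taken from the thread):

```python
import zlib

# Two nearly identical request header blocks, as a browser would send
# on consecutive requests over the same connection.
headers = (
    "GET /page1 HTTP/1.1\r\nHost: example.com\r\n"
    "User-Agent: Mozilla/5.0\r\nCookie: sid=abc123; theme=dark\r\n\r\n"
)
headers2 = headers.replace("/page1", "/page2")

# One compressor shared across the stream, flushed after each request:
comp = zlib.compressobj()
first = comp.compress(headers.encode()) + comp.flush(zlib.Z_SYNC_FLUSH)
second = comp.compress(headers2.encode()) + comp.flush(zlib.Z_SYNC_FLUSH)

print(len(headers.encode()), len(first))    # first request: modest savings
print(len(headers2.encode()), len(second))  # repeat: shrinks dramatically
```

The individual cookie value compresses poorly on its own; it is the repetition between requests that the shared stream exploits.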

I get your point, but as I said to Mike, saving space on *existing* HTTP
implementations with compression might have been the only way of getting
to that point. But when redesigning HTTP we should address the issues that
caused compression to be introduced in the first place.

For instance, we currently lack session-oriented headers: over a
multiplexed connection, we should be able to send requests from
various agents at the same time and isolate them in "channels". We
could then send a number of headers only once per channel. Cookies
certainly are session-oriented. User-Agent too. Accept headers too.
Host maybe. By doing so, we could simply state that session headers
are valid for all the requests of a given channel in a connection.
Instead of sending these headers to the compression lib and relying
on it to deduplicate them, you would just send them once with the
proper scope.
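A rough sketch of this per-channel idea (all names and frame shapes are invented here for illustration; nothing below comes from a spec):

```python
# Hypothetical model: each channel on a multiplexed connection declares
# its session-scoped headers once, and individual requests on that
# channel carry only what varies per request.

session_headers = {  # sent once when channel 3 is opened
    "host": "example.com",
    "user-agent": "Mozilla/5.0",
    "accept": "text/html",
    "cookie": "sid=abc123",
}

requests = [  # per-request frames on channel 3: only the varying parts
    {"channel": 3, "method": "GET", "path": "/index.html"},
    {"channel": 3, "method": "GET", "path": "/style.css"},
]

def effective_headers(req, channels):
    """Merge the channel's session headers with the request's own fields."""
    merged = dict(channels[req["channel"]])
    merged.update({k: v for k, v in req.items() if k != "channel"})
    return merged

channels = {3: session_headers}
full = effective_headers(requests[1], channels)
assert full["cookie"] == "sid=abc123"  # inherited from the channel
assert full["path"] == "/style.css"    # carried by the request itself
```

The receiver reconstructs the full header set per request, but the cookie and other session headers cross the wire only once per channel instead of once per request.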

> The DoS implications don't bother me. HTTP headers always have an 
> implementation size limit anyhow - the fact that you can detect that 
> limit has been hit with fewer bytes on the wire is kind of a mixed 
> result (i.e. it isn't all bad or good).

The problem is that in order to detect you have reached the limit, you
first have to process up to the limit. And it costs the sender nothing to
push you to that limit. Fighting DDoS is a matter of strength balance:
"how can I maximize your pain at minimal cost to myself?"

> For anyone that hasn't read some of the other posts on this topic, the 
> compression here is a huge win on the request side because of CWND 
> interaction. spdy multiplexes requests and can therefore, especially 
> when a browser needs to get all of the subresources identified on the 
> page, create a little bit of a flood of simultaneous requests. If each 
> request is 500 bytes (i.e. no cookies) and IW=4, that's just 12 
> requests that can really be sent simultaneously. 90% compression means 
> you can send 100+, which is a lot more realistic for being able to 
> capture a page full of resources.

I totally agree with these points. That's why I've been suggesting for a
long time that we work on dramatically reducing request size.
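The arithmetic behind Patrick's numbers can be checked in a few lines (a back-of-the-envelope sketch; the segment size and window are assumptions, not values from the thread):

```python
# How many requests fit in TCP's first flight, before any ACKs return?
MSS = 1460          # typical TCP segment payload, assumed here
IW = 4              # initial congestion window, in segments (IW=4)
request_size = 500  # bytes of headers per request, no cookies

window = MSS * IW                      # bytes sendable in the first flight
uncompressed = window // request_size  # roughly the dozen quoted above
compressed = window // (request_size // 10)  # with 90% header compression

print(uncompressed, compressed)
```

So without compression the first flight carries on the order of a dozen requests, while 90% compression pushes it past a hundred, enough to cover a typical page's subresources in one round trip.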

> but honestly that's not far past 
> current usage patterns, and cookies of course put pressure on here too. 
> So it's important to count bytes and maximize savings. (pipelines have 
> this challenge too and it's the primary reason a pipeline of double 
> digit depth doesn't really add much incremental value.)

Agreed, but please, let's try to address the issue at its root instead
of just papering over it. What SPDY does might be optimal for HTTP/1.1, but
here we're talking about how we'd like HTTP/2.0 to address HTTP/1.1's issues.
Let's focus on this first. At least, thanks to Mike's work and your feedback,
we *know* there is something to do about repeated cookies, for instance
(e.g. per-session headers).

Received on Wednesday, 29 February 2012 07:25:21 UTC
