RE: In Defense of Header Compression

Patrick, 

I apologize for not seeing this earlier -- this is great data!

I completely agree that if you have very small entity bodies then naturally the header size will matter for getting more requests into the pipe faster. However, in the tests that we have done with actual data, the size of the entity bodies was large enough that the impact was minimal. This is especially the case if you also do bundling/minification, as it naturally leads to large entities.

In other words, it is hard to beat the performance characteristic of the request you *don't* have to make :)

I completely agree that whatever transformation we do has to be lossless -- it's primarily the *mechanism* I am interested in. In particular, I care about how easily the result can be inspected, which is why I am somewhat wary of stream-based compression algorithms. I would much rather see some form of tokenization.
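
Very roughly, the kind of thing I have in mind looks like the sketch below (Python; the token table and framing are purely hypothetical, not a proposal): well-known header names become one-byte tokens, everything else is a length-prefixed literal, and a receiver or intermediary can read a header block without keeping any compression state.

    # Hypothetical token table -- illustration only, not a proposal.
    STATIC = {"method": 0x01, "scheme": 0x02, "host": 0x03,
              "path": 0x04, "accept": 0x05, "user-agent": 0x06,
              "cookie": 0x07}
    REVERSE = {v: k for k, v in STATIC.items()}

    def encode(headers):
        out = bytearray()
        for name, value in headers:
            tok = STATIC.get(name.lower())
            if tok is not None:
                out.append(tok)                 # known name -> one byte
            else:
                raw = name.encode()
                out += bytes([0x00, len(raw)])  # literal-name marker + length
                out += raw
            val = value.encode()
            out += len(val).to_bytes(2, "big") + val
        return bytes(out)

    def decode(buf):
        headers, i = [], 0
        while i < len(buf):
            if buf[i] == 0x00:                  # literal name
                nlen = buf[i + 1]
                name = buf[i + 2:i + 2 + nlen].decode()
                i += 2 + nlen
            else:                               # tokenized name
                name = REVERSE[buf[i]]
                i += 1
            vlen = int.from_bytes(buf[i:i + 2], "big")
            headers.append((name, buf[i + 2:i + 2 + vlen].decode()))
            i += 2 + vlen
        return headers

The point is just that the transformation is trivially reversible and any hop can decode a single header block in isolation, with no shared compression context.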

That said, we are gathering a bunch more data as we speak on the various ways of sending fewer headers -- I hope to be able to send out more data soon.

Thanks,

Henrik

-----Original Message-----
From: Patrick McManus [mailto:pmcmanus@mozilla.com] 
Sent: Wednesday, August 08, 2012 13:24
To: Henrik Frystyk Nielsen
Cc: ietf-http-wg@w3.org Group
Subject: In Defense of Header Compression

On Thu, 2012-08-02 at 20:53 +0000, Henrik Frystyk Nielsen wrote:
> I also have trouble with the use of compression over the headers [..
> mcmanus deletes argument #1 just for clarity of reply..]
> 
> 2) Further, it is unclear whether there is any noticeable performance 
> gain from doing so. The only headers that today are open-ended are 
> really User-Agent and Set-Cookie/Cookie pairs. In all our data where 
> we don't include these headers we see no gain from using header 
> compression whatsoever as long as you are conservative in how many 
> headers you choose to include.

Hi Henrik, Thanks for the report - but this really doesn't mesh with my experience. I kept it in my inbox until I had time to do some research to back up the value of header compression before piping up (I'm referring to compression generically - not a particular scheme for doing so).

In one sense, we must be talking about different contexts. It's obvious that if you have very small objects the headers are going to dominate transfer time. They do this not so much through serialization delay as through the interaction with CWND - smaller headers let you get more requests upstream in the same CWND and form a deeper pipeline (whether that is serial http/1 style or spdy-muxxed style isn't important). Once you get enough requests upstream that the server can fill the BDP it doesn't much matter, but until you reach that point it matters a lot and it turns into significant time.
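
As a rough illustration of the CWND point (back-of-envelope only; I'm assuming a 1460-byte MSS and ignoring TCP/IP framing):

    # How many requests fit in an IW=3 first flight, assuming a 1460-byte
    # MSS and ignoring TCP/IP framing overhead. Rough numbers only.
    MSS, IW = 1460, 3
    budget = IW * MSS
    for req_bytes in (500, 1500):   # ~500B bare request vs ~1500B with a big cookie
        print(budget // req_bytes, "requests of", req_bytes, "bytes fit in", budget)

That works out to roughly 8 small requests versus 2 cookie-laden ones before the client stalls for a full RTT waiting on ACKs, which is exactly the pipeline-depth effect I mean.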

This is especially problematic when responses are smaller than requests.
The poster child for this is a 304 response to a request with a big cookie. 

But it's also a problem where clients have less aggressive TCP parameters than the server they are talking with - which is pretty typical. A devops environment might be tweaked up with IW=10 and set to not revert to slow start after short idle periods, but a desktop is almost certainly going to be running vanilla IW=3 for many years. That exacerbates the problem of the client not being able to send requests fast enough for the server to have something to respond to at its full sending rate.
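
To put rough numbers on that (again a simplification: classic slow start with the congestion window doubling each RTT, 1460-byte MSS, no losses or delayed-ACK effects), here is how many RTTs it takes just to push a burst of request bytes upstream:

    # Rough model: RTTs of cwnd growth needed to emit a burst of request
    # bytes. Classic slow start (cwnd doubles per RTT), 1460-byte MSS.
    def rtts_to_send(total_bytes, iw, mss=1460):
        segs_needed = -(-total_bytes // mss)   # ceiling division
        cwnd, sent, rtts = iw, 0, 0
        while sent < segs_needed:
            sent += cwnd
            cwnd *= 2
            rtts += 1
        return rtts

    burst = 86 * 1477                          # the cookie-laden reload in the test below
    print(rtts_to_send(burst, iw=3))           # ~5 RTTs just to emit the requests
    print(rtts_to_send(burst, iw=10))          # ~4 RTTs from a tuned IW=10 sender

The absolute numbers are not the point - the point is that an un-tuned client burns extra round trips before the server even has everything to respond to.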

I put together some tests to illustrate it. I formed an HTTP/1-style pipeline of 86 requests that all return 304 (i.e. a reload of one page).
I tested both without a cookie and with a modest ~1000 byte cookie. You can look at the packet captures if you want to see the transactions, but they are terribly vanilla - requests are 477 bytes (plus the cookie, when present), and responses are 307 bytes. Both client and server are running IW=3, my timings exclude the handshake, and I measured at four latencies.
Timings are in ms.

Test    50ms    100ms    200ms    300ms
1         52      102      202      302
2         52      102      202      302
3        358      808     1401     2108
4        256      506     1006     1542

Test 1 is with the cookie and zlib compression.
Test 2 is without the cookie but with zlib.
Test 3 is with the cookie and no zlib.
Test 4 is without the cookie and no zlib.
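
For a sense of why zlib does so well on the cookie case (illustrative only - this is not the exact byte stream from my capture; the hostname, cookie and header set below are made up), a single deflate stream per connection with a sync flush after each request compresses the repeated headers almost to nothing after the first one:

    import zlib

    # Illustrative only: repeated, cookie-laden requests through one
    # persistent deflate stream, flushed per request so the receiver can
    # decode each one immediately.
    cookie = "session=" + "x" * 990
    template = ("GET /img/%d.png HTTP/1.1\r\n"
                "Host: example.com\r\n"
                "User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
                "Accept: */*\r\n"
                "Cookie: %s\r\n\r\n")
    comp = zlib.compressobj()
    total_in = total_out = 0
    for i in range(86):
        raw = (template % (i, cookie)).encode()
        out = comp.compress(raw) + comp.flush(zlib.Z_SYNC_FLUSH)
        total_in += len(raw)
        total_out += len(out)
    print(total_in, "bytes of headers ->", total_out, "bytes on the wire")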

It's pretty clear that the TCP transfer here (as it almost always is for http) is dominated by the round trips spent growing CWND. No surprise there - and no surprise that reducing the amount of data to be sent means fewer penalties absorbed. The magnitude of it is a little surprising though - if you sit behind a 300ms link this page takes 2.1 seconds to reload without compression vs ~0.3 seconds with it.
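
As a quick sanity check on those numbers (rough - it ignores the response direction and delayed ACKs entirely):

    # 300ms column from the table above.
    rtt = 0.300
    print(2.108 / rtt)   # ~7 RTTs measured without compression
    print(0.302 / rtt)   # ~1 RTT measured with compression
    # ~88 MSS of uncompressed requests needs ~5 RTTs of IW=3 slow start
    # before the last responses can even be generated, so ~7 RTTs total is
    # in the right ballpark; compressed, everything fits the first flight.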

The packet captures for these are at
https://www.ducksong.com/misc/pcaps-for-zlib/


Strategies that lossily mess with which headers to transfer are going to create backwards-compatibility problems and probably won't get support.
You should see the mess we caused just trying to drop Accept-Charset from our request headers - sure, nobody was actually doing charset negotiation with it, but some very big names were using it for fingerprinting and stuff started to break :(. Something like User-Agent is a couple of orders of magnitude more intimidating.

So +1 for some form of lossless encoding of the headers.

-Patrick

Received on Wednesday, 15 August 2012 11:37:38 UTC