Re: Significantly reducing headers footprint

Hi Mike,

On Mon, Jun 11, 2012 at 07:32:41AM -0700, Mike Belshe wrote:
> This is good work, Willy.

Thanks.

> Any perf results on how much this will impact the user?  Given the stateful
> nature of gzip already in use, I'm betting this has almost no impact for
> most users?

No, I don't have numbers. All I can say is that on one core of my Core2 at
3 GHz, I could compress around 70k requests per second with the PoC code, so
the CPU cost even at 100 req/s will be extremely low. Also, the compression
ratio was quite high (12.7x, 92%) even with the currently limited feature
set, so I'm quite confident that the impact on upstream traffic will be a
significant gain. For instance, the original 132 kB of requests represented
around 90 MSS, which means a significant number of RTTs. The resulting 10 kB
fit in 7 MSS, which can be sent at once with the default INITCWND of 10. I'd
be pleased if I could get my hands on large amounts of reassembled request
streams; that's something much harder to obtain than I initially believed.
And since I'm not a browser developer, I wouldn't really know where to start
to build a PoC with a real browser.
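
To give an idea of the effect (this is an illustrative sketch, not my PoC
code, and the request text and counts are made up), a single stateful
deflate context shared across requests already shows the same shape of
result, and the byte-to-segment arithmetic is easy to check:

```python
import math
import zlib

# 100 similar requests, as a browser would emit on one connection.
requests = [
    b"GET /page%d HTTP/1.1\r\nHost: example.org\r\n"
    b"User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.5)\r\n"
    b"Accept: text/html,application/xhtml+xml;q=0.9,*/*;q=0.8\r\n\r\n" % i
    for i in range(100)
]

# One deflate context for the whole connection, flushed per request so
# each header block can be delivered as soon as it is produced.
c = zlib.compressobj(9)
compressed = b"".join(c.compress(r) + c.flush(zlib.Z_SYNC_FLUSH)
                      for r in requests)

raw = sum(len(r) for r in requests)
ratio = raw / len(compressed)

# How many TCP segments do the two forms need with a 1460-byte MSS?
MSS = 1460
raw_segments = math.ceil(raw / MSS)
compressed_segments = math.ceil(len(compressed) / MSS)
```

The point is only that the compressed form fits in far fewer segments,
hence fewer RTTs before the whole request stream is on the wire.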

> There is a tradeoff; completely custom compression will introduce more
> interop issues.  Registries of "well known headers" are notoriously painful
> to maintain and keep versioned.

I agree. But some headers are clearly protocol elements. Basically,
everything that is described in the spec could have its own number. We have
a syntax for If-Modified-Since; we can have a number for it too.
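
As a rough sketch of what I mean (the codes and wire format below are
entirely hypothetical, just to show the shape), a spec-defined header name
shrinks to one byte while unknown names still pass through as text:

```python
# Hypothetical numeric codes for spec-defined header names.
# Code 0 is reserved to mean "literal name follows".
REGISTRY = {
    "accept": 1,
    "accept-encoding": 2,
    "if-modified-since": 3,
    "user-agent": 4,
}

def encode_header(name: str, value: bytes) -> bytes:
    """Emit <code><len><value> for known names,
    <0><namelen><name><len><value> otherwise (values < 256 bytes)."""
    code = REGISTRY.get(name.lower())
    if code is not None:
        return bytes([code, len(value)]) + value
    raw = name.encode("ascii")
    return bytes([0, len(raw)]) + raw + bytes([len(value)]) + value

wire = encode_header("If-Modified-Since", b"Sat, 29 Oct 1994 19:43:31 GMT")
# 2 bytes of framing instead of the 19-byte header name plus ": ".
```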

> >  - User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; fr;
> > rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12
> >    => Well, this one is only sent once over the connection, but we could
> >       reduce this further by using a registery of known vendors/products
> >       and incite vendors to emit just a few bytes (vendor/product/version).
> 
> I don't think the compressor should be learning about vendor-specific
> information.  This gives advantages to certain browser incumbents and is
> unfair to startups.  We absolutely MUST NOT give advantages to the current
> popular browsers.

I'm with you on this, but I was not suggesting that the compressor itself be
aware of the numbers (I probably wasn't clear on this, it was late). I'd
rather have vendors register IDs and choose to advertise them instead of the
current text. A 1.1 -> 2.0 gateway would just pass along the text form above,
as my PoC code did. For instance, the UA above could be advertised by the
browser as 0x0002:0x0306:0xC (just 5 bytes). There could be an experimental
range, as there is for USB/PCI/Ethernet IDs, so that newcomers are not
disadvantaged.
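
The encoding itself is trivial; here is a sketch with completely made-up ID
assignments (there is of course no such registry today):

```python
import struct

# Hypothetical registered IDs: vendor 0x0002, product 0x0306
# (think "Firefox 3.6"), minor version 12 (0xC). Illustrative only.
def encode_ua(vendor: int, product: int, version: int) -> bytes:
    """Pack vendor/product as big-endian 16-bit, version as 8-bit."""
    return struct.pack(">HHB", vendor, product, version)

ua = encode_ua(0x0002, 0x0306, 0xC)  # 5 bytes on the wire
```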

> >    => With better request reordering, we could have this :
> >
> >       11 Accept: */*
> >      109 Accept: image/png,image/*;q=0.8,*/*;q=0.5
> >        4 Accept: text/css,*/*;q=0.1
> >        3 Accept:
> > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> >
> 
> As long as the browser uses the same accept header from request to request
> (which it generally does), this compresses to almost zero after the first
> header block.

Indeed, this was more a note about things that can be improved upstream,
where the requests originate.
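
Your point is easy to verify, by the way. A small sketch (illustrative, not
measured against a real browser) with a single deflate context shows that a
verbatim-repeated Accept header costs almost nothing after the first block:

```python
import zlib

accept = b"Accept: image/png,image/*;q=0.8,*/*;q=0.5\r\n"

# One compression context for the connection, one flush per header block.
c = zlib.compressobj(9)
sizes = []
for _ in range(10):
    out = c.compress(accept) + c.flush(zlib.Z_SYNC_FLUSH)
    sizes.append(len(out))

# sizes[0] carries the literal text; the later entries are just
# back-references into the compression window plus flush overhead.
```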

> >  - Cache-Control: max-age=0
> >    => I suspect the user hit the Refresh button, this was present in about
> >       half the requests. Anyway, this raises the question of the length it
> >       requires for something which is just a boolean here ("ignore cache").
> >       Probably that a client has very few Cache-Control header values to
> >       send, and that reducing this to a smaller set would be beneficial.
> >
> 
> Trying to change the motivation or semantics of headers is a large
> endeavor....  Not sure if the compression of the bits is the right
> motivation for doing so.

The compression clearly does not gain much from this. It's more that I
noticed it in the compressed stream, and it made me realize that improving
semantics in 2.0 could improve reliability and interoperability. I've seen
users send "maxage=0" or "max-age:0" many times, neither of which is the
correct form. What they really want is simply to ignore cached contents.
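
To make the idea concrete (this enumeration is entirely hypothetical, not a
proposal), the handful of Cache-Control values a client actually sends could
be carried as a tiny code instead of free-form text, which would also kill
the "maxage=0" class of errors at the source:

```python
# Hypothetical 1-byte codes for the client-side Cache-Control values.
CLIENT_CACHE_CONTROL = {
    0: None,          # header absent
    1: "max-age=0",   # "ignore cached contents" (the Refresh case)
    2: "no-cache",
    3: "no-store",
}
ENCODE = {v: k for k, v in CLIENT_CACHE_CONTROL.items()}

code = ENCODE["max-age=0"]  # one byte (or a couple of bits) on the wire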

> >  - Cookie: xtvrn=$OaiJty$; xtan327981=c; xtant327981=c; has_js=c;
> > __utma=KBjWnx24Q.7qFKqmB7v.i0JDH91L_R.0kU2W1uL49.JM4KtFLV0b.C;
> > __utmc=Rae9ZgQHz;
> > __utmz=NRSZOcCWV.d5MlK5RJsi.-.f.N8J73w=S1SLuT_j0m.O8|VsIxwE=(jHw58obb)|r9SgsT=WQfZe8jr|pFSZGH=/@/qwDyMw3I;
> > __gads=td=ASP_D5ml4Ebevrej:R=pvxltafqZK:x=E4FUn3YiNldW3rhxzX6YlCptZp8zF-b5qc;
> > _chartbeat2=oQvb8k_G9tduhauf.LqOukjnlaaE7K.uDBaR79E1WT4t.Kr9L_lIrOtruE8;
> > __qca=LC9oiRpFSWShYlxUtD37GJ2k8AL; __utmb=vG8UMEjrz.Qf.At.pXD61lUeHZ;
> > pm8196_1=c; pm8194_1=c
> >
> >    => amazingly, this one compresses extremely well with the above scheme,
> >       because additions are performed at the end so consecutive cookies
> > keep
> >       a lot in common, and changes are not too frequent. However, given the
> >       omnipresent usage of cookies, I was wondering why we should not
> > create
> >       a new entity of its own for the cookies instead of abusing the Cookie
> >       header. It would make it a lot easier for both ends to find what they
> >       need. For instance, a load balancer just needs to find a server name
> >       in the thing above. What a waste of on-wire bits and of CPU cycles !
> >
> > BTW, binary encoding would probably also help addressing a request I often
> > hear in banking environments : the need to sign/encrypt/compress only
> > certain
> > headers or cookies. Right now when people do this, they have to
> > base64-encode
> > the result, which is another transformation at both ends and inflates the
> > data. If we make provisions in the protocol for announcing encrypted or
> > compressed headers using 2-3 bits, it might become more usable. I'm not
> > convinced it provides any benefit between a browser and an origin server
> > though. So maybe it will remain application-specific and the transport
> > just has to make it easier to emit 8-bit data in header field values.
> >
> 
> Happens all the time, yes.  Just make sure that HTTP2 -> HTTP1.1 definition
> is preserved so that gateways still work.

In fact I'd say that we have to make provisions for those gateways to
reliably encode bytes that cannot be represented in 1.1, and to support the
reverse conversion. This is a bit tricky; it might look a bit like what
happened with quoted-printable text in mail, but it can certainly be done
much more simply.
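
Just to illustrate the kind of thing I mean (the escape syntax below is
purely hypothetical, not a proposal), a quoted-printable-like scheme is
enough to round-trip arbitrary bytes through a 1.1 field value:

```python
# Bytes a 1.1 field value can carry as-is; '%' is reserved as the escape.
SAFE = set(range(0x20, 0x7F)) - {ord("%")}

def to_11(value: bytes) -> str:
    """Escape a 2.0 binary field value into a 1.1-safe string."""
    return "".join(chr(b) if b in SAFE else "%%%02X" % b for b in value)

def from_11(value: str) -> bytes:
    """Reverse conversion at the 1.1 -> 2.0 gateway."""
    out, i = bytearray(), 0
    while i < len(value):
        if value[i] == "%":
            out.append(int(value[i + 1:i + 3], 16))
            i += 3
        else:
            out.append(ord(value[i]))
            i += 1
    return bytes(out)

blob = bytes(range(256))  # every possible byte value round-trips
```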

> > Has anyone any opinion on the subject above ? Or ideas about other things
> > that terribly clobber the upstream pipe and that should be fixed in 2.0 ?
> >
> > I hope I'll soon find some time to update our draft to reflect recent
> > updates
> > and findings.
> >
> 
> Again, I think we could spend a lot of time debating the compressor.  And
> with one more registry or one more semantic header change from HTTP, there
> will always be one more bit to compress out.  But these are, IMHO, already
> diminishing returns for performance.  I hope we'll all focus on the more
> important parts of the protocol (flow control, security, 1.x to 2.x
> upgrades, etc) than compression.

Totally agreed. In fact I feel a bit frustrated to be working on this because
I know there are many other aspects. But I wanted to make sure we could
squeeze enough bytes out of the stream that impacts the end user, in a way
that is cheaper for intermediaries to process than gzip.

On the other hand, I'm perfectly fine with the way you handle streams and
flow control in SPDY, which is another reason why I have no motivation to
work on that part. Upgrades and gatewaying are other very important points
that still need some work.

Thanks for your comments, Mike !

Willy

Received on Monday, 11 June 2012 16:59:37 UTC