- From: James M Snell <jasnell@gmail.com>
- Date: Fri, 12 Jul 2013 09:53:06 -0700
- To: Roberto Peon <grmocg@gmail.com>
- Cc: Jeff Pinner <jpinner@twitter.com>, Mike Belshe <mike@belshe.com>, Amos Jeffries <squid3@treenet.co.nz>, httpbis mailing list <ietf-http-wg@w3.org>
Most of the feedback that I've seen around header compression can be summarized as: Sending less data on the wire... Good. Less latency... Good. More efficiency... Good. Less visibility in the protocol... Bad. Complicated implementation... Bad. Introducing statefulness in the protocol... Bad.

Where I come down on this is simple: Yes, we need a way of encoding headers more efficiently in HTTP/2 relative to HTTP/1... however, as the mantra states, Perfect is the Enemy of Good. The header compression algorithm as it currently stands is *technically* a very good approach, but that doesn't mean it's the right approach for right now.

The problems we see in the verbosity and repetitiveness of HTTP request and response headers have more to do with how those headers are defined and used than with how those headers are encoded on the wire. There are reasonable application-level approaches we could be exploring to reduce redundancy and increase efficiency that would not require complex new mechanisms and requirements to be added to the base framing protocol. For instance, I have demonstrated many times that by simply changing the way we define header values, we can make significant reductions in header transmission size.

From everything that I can see so far, Header Compression is completely orthogonal to the basic operation of the new HTTP/2 framing layer. It is not critical for the new framing layer to be successful. Yes, it has its benefits. Yes, it makes things more efficient. Yes, it has security advantages relative to other compression options. But no, it is not the Simplest Thing That Could Possibly Work, and no, it's not technically required in order to make HTTP/2 successful... and so far, the majority of the public reaction seems to be "Whoa! WTF is with all the new complexity! Dial it back a bit please!"

It's good that people are starting to get implementation experience with what's been written up so far. It's good to start getting interop testing going. For me, going through my own implementation of the current header compression mechanism has simply reinforced what I had already suspected and feared: it's a lot of very well designed and functional new complexity that has some benefit but is not strictly required to get the job done... and, what's worse, it makes us jump through some additional design hoops (e.g. the routing question) that would otherwise be fairly simple to address.

That all said, I'm definitely willing to be convinced otherwise. Let's see what happens in Germany next month, but let's definitely keep an open mind towards alternative, less complicated (albeit possibly less efficient) approaches that do not incur such a large New Complexity Tax.

- James

On Fri, Jul 12, 2013 at 9:36 AM, Roberto Peon <grmocg@gmail.com> wrote:
> This was the first thing I experimented with. :)
> It either requires two different state size settings, or it makes state size
> management .... interesting....
> Having a single table made much more sense and was less complicated,
> especially for proxies.
>
> -=R
>
> On Fri, Jul 12, 2013 at 8:22 AM, Jeff Pinner <jpinner@twitter.com> wrote:
>>
>> On Fri, Jul 12, 2013 at 2:11 AM, Mike Belshe <mike@belshe.com> wrote:
>>>
>>> I'm also in favor of removing the compressor completely.
>>
>> So the compressor buys us the ability to share headers between streams and
>> possibly to reduce the size of the headers via some sort of encoding
>> (whether it's typed encodings, or huffman compressed strings, or varint
>> lengths, etc).
>> So a dumb proposal:
>>
>> A HEADERS frame consists of encoded name-value pairs, let's say varint
>> length followed by UTF-8 bytes of the string (we can argue over compressed
>> strings, types, etc. later, but basically no indexing into shared state).
>>
>> Sending a HEADERS frame on Stream-ID 0 creates a set of headers that gets
>> saved and added to the HEADERS frame that opens any streams after it is
>> sent. Sending a new HEADERS frame on Stream-ID 0 overwrites the previous
>> frame.
>>
>> This allows us to share Cookies, User-Agent, Host, etc. between requests,
>> but wouldn't allow for any response header sharing. It would allow us to
>> share headers for pushed responses since those are streams opened by the
>> server.
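(For illustration only: a minimal sketch of the wire format Jeff describes above — each name and value as a varint length followed by UTF-8 bytes, with a replaceable shared header set on Stream-ID 0. The LEB128-style varint, helper names, and example header values here are assumptions, not part of any draft.)

    # Minimal sketch of the proposal quoted above: a HEADERS frame body is just
    # a sequence of (name, value) pairs, each encoded as a varint length followed
    # by the UTF-8 bytes of the string. No indexing, no shared compression state.
    # The little-endian base-128 varint is an assumption; the proposal leaves it open.

    def encode_varint(n: int) -> bytes:
        """Encode a non-negative integer as a little-endian base-128 varint."""
        out = bytearray()
        while True:
            byte = n & 0x7F
            n >>= 7
            if n:
                out.append(byte | 0x80)  # high bit set: more bytes follow
            else:
                out.append(byte)
                return bytes(out)

    def encode_string(s: str) -> bytes:
        """Encode a string as varint length + UTF-8 bytes."""
        data = s.encode("utf-8")
        return encode_varint(len(data)) + data

    def encode_header_block(headers):
        """Encode a list of (name, value) pairs with no cross-frame state."""
        return b"".join(encode_string(name) + encode_string(value)
                        for name, value in headers)

    # Per the proposal, a HEADERS frame on Stream-ID 0 would establish a shared
    # header set that is implicitly added to every stream opened afterwards;
    # sending a new frame on stream 0 simply replaces the previous set.
    shared_set = encode_header_block([("user-agent", "ExampleClient/1.0"),
                                      ("host", "example.com")])
    request = encode_header_block([(":method", "GET"), (":path", "/")])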
Received on Friday, 12 July 2013 16:53:53 UTC