- From: Nico Williams <nico@cryptonector.com>
- Date: Thu, 17 Jan 2013 16:16:55 -0600
- To: James M Snell <jasnell@gmail.com>
- Cc: Roberto Peon <grmocg@gmail.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
On Thu, Jan 17, 2013 at 3:44 PM, James M Snell <jasnell@gmail.com> wrote:
> We certainly cannot come up with optimized binary encodings for everything
> but we can get a good ways down the road optimizing the parts we do know
> about. We've already seen, for instance, that date headers can be optimized
> significantly; and the separation of individual cookie crumbs allows us to
> keep from having to resend the entire cookie whenever just one small part
> changes. I'm certain there are other optimizations we can make without
> feeling like we have to find encodings for everything.

The only way cookie compression can work is by having connection state (sketched below). But often the whole point of cookies is to not store state on the server but on the client. The more state we associate with connections, the more pressure there will be to close connections sooner, and then we'll have to establish new connections, build new compression state, and have it torn down all over again. TCP Fast Open can help with reconnect overhead, but that's about it.

We need to do more than measure compression ratios. We need to measure state size and performance impact on fully-loaded middleboxes. We need to measure the full impact of compression on the user experience. A fabulous compression ratio might nonetheless spell doom for the user experience, and thence for the protocol. If we measure the wrong things we risk failure for the new protocol, and we may not get to try again for a long time.

Also, with respect to some of those items we cannot encode minimally (cookies, URIs, ...): their size is really in the hands of the entities that create them -- let *them* worry about compression. That might create some pressure toward shorter, less-meaningful URIs, but... we're already there anyway.

Nico
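PS: To make both points concrete, two quick sketches in Python. These are hypothetical framings I made up for illustration, not any proposed wire format. First, James's date point: an HTTP-date such as "Thu, 17 Jan 2013 22:17:18 GMT" costs 29 bytes as text but fits in 4 as a binary timestamp, and the encoding needs no connection state at all:

    # Hypothetical: pack an HTTP-date into a 4-byte unsigned Unix timestamp.
    import calendar
    import struct
    from email.utils import parsedate

    http_date = "Thu, 17 Jan 2013 22:17:18 GMT"   # 29 bytes as text
    ts = calendar.timegm(parsedate(http_date))    # seconds since the epoch
    wire = struct.pack("!I", ts)                  # 4 bytes, big-endian
    assert len(wire) == 4

    # Decoding is stateless -- no per-connection table needed for this one.
    (decoded,) = struct.unpack("!I", wire)
    assert decoded == ts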
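Second, the crumb point, which is where my state worry comes in: resending only the crumbs that changed requires both ends to keep a table for the life of the connection. A minimal sketch (names like CrumbTable are invented for illustration; crumb deletion is ignored for brevity):

    # Hypothetical sketch: delta-encoding cookie crumbs against a
    # connection-scoped table. Not a real API.
    class CrumbTable:
        """State each endpoint must hold, per connection."""
        def __init__(self):
            self.crumbs = {}  # crumb name -> last value seen this connection

        def encode(self, cookie):
            # Sender: emit only crumbs whose value changed since last time.
            delta = {k: v for k, v in cookie.items()
                     if self.crumbs.get(k) != v}
            self.crumbs.update(delta)
            return delta

        def decode(self, delta):
            # Receiver: merge the delta back into the full cookie.
            self.crumbs.update(delta)
            return dict(self.crumbs)

    sender, receiver = CrumbTable(), CrumbTable()

    cookie = {"session": "abc123", "prefs": "dark", "csrf": "t0k3n"}
    assert receiver.decode(sender.encode(cookie)) == cookie  # first send: all

    cookie["csrf"] = "t0k3n2"               # one crumb changes...
    delta = sender.encode(cookie)
    assert delta == {"csrf": "t0k3n2"}      # ...so only it is resent
    assert receiver.decode(delta) == cookie

    # Close the connection and both tables vanish; the next connection pays
    # the full cost again. That, not the ratio, is what I want measured.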
Received on Thursday, 17 January 2013 22:17:18 UTC