- From: J Ross Nicoll <jrn@jrn.me.uk>
- Date: Tue, 14 Jul 2009 12:06:16 +0100
- To: Jim Gettys <jg@freedesktop.org>
- Cc: Julian Reschke <julian.reschke@gmx.de>, HTTP Working Group <ietf-http-wg@w3.org>
Jim Gettys wrote:
> Those of you with memory of my role in HTTP may find the following
> comments surprising, but bear with me.
>
> Doing something like this has to pass a test, in my mind:
> o that it be shown to be significantly more compact than HTTP +
> deflate style compression (probably with a pre-defined dictionary, and
> canonicalization of cases of strings to minimize the size of the
> dictionary). Ad-hoc binary compression systems are often/commonly no
> more efficient than what gzip style compressors can do. (to remind the
> audience, deflate is gzip without some preamble information; and IIRC,
> it can be used with a predefined dictionary, so you don't have to
> transmit said dictionary first when you know the material being
> compressed).

I'd also want it shown that the saving is even worth having. It seems a tiny saving in terms of bandwidth used, even on a low-speed network.

The specification talks about memory footprint as well ("Binary compression: HTTP headers are compressed into a binary format to save bandwidth and buffer space"), but I still find it hard to believe that anything for which 100-200 bytes (if that) makes a difference will be able to do anything sensible with the content.
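(Aside, not part of the original mail: the deflate-with-preset-dictionary baseline Gettys describes can be sketched with Python's zlib bindings. The sample request head and the dictionary contents below are hypothetical stand-ins; a real scheme would standardize the dictionary so both ends hold it in advance and never transmit it.)

```python
import zlib

# A hypothetical, typical HTTP request head to compress.
request = (b"GET /index.html HTTP/1.1\r\n"
           b"Host: www.example.com\r\n"
           b"User-Agent: ExampleBrowser/1.0\r\n"
           b"Accept: text/html,application/xhtml+xml\r\n"
           b"Accept-Encoding: gzip, deflate\r\n"
           b"Connection: keep-alive\r\n\r\n")

# Hypothetical pre-defined dictionary of strings commonly seen in headers.
zdict = (b"GET POST HTTP/1.1\r\nHost: User-Agent: Accept: Accept-Encoding: "
         b"gzip, deflate Connection: keep-alive text/html")

plain = zlib.compressobj(9)
primed = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, 9,
                          zlib.Z_DEFAULT_STRATEGY, zdict)

plain_out = plain.compress(request) + plain.flush()
primed_out = primed.compress(request) + primed.flush()

print("uncompressed: ", len(request), "bytes")
print("deflate:      ", len(plain_out), "bytes")
print("deflate+dict: ", len(primed_out), "bytes")

# The receiver must be constructed with the same dictionary to inflate.
inflater = zlib.decompressobj(zlib.MAX_WBITS, zdict)
assert inflater.decompress(primed_out) == request
```

Comparing the two output sizes gives a rough sense of whether a proposed binary header encoding actually beats the deflate-plus-dictionary baseline, and by how many bytes.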
Received on Tuesday, 14 July 2009 11:06:56 UTC