- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Fri, 5 Sep 2014 09:10:22 -0700
- To: Poul-Henning Kamp <phk@phk.freebsd.dk>
- Cc: Martin Nilsson <nilsson@opera.com>, HTTP Working Group <ietf-http-wg@w3.org>
On 5 September 2014 00:52, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:
> Also I don't recall ever seeing the actual process used to derive
> the huffman table documented, and suspect that it should be documented
> and reviewed before we finalize the huffman table.

It should be in the minutes of the NY interim.

Chrome deployed an experiment that performed HPACK compression on all requests and responses seen by their browser over a significant data set. The resulting counts for each octet were collected.

That was before the reference set was removed, so the results will be dominated slightly more by header fields that change frequently, and there will be instances of '\0' (used at that time to ensure that the ordering of values was maintained) that are no longer valid. I can speculate that a new experiment along the same lines would shift the bias slightly in favour of base64 characters, but only a similar experiment will decide that.

(I'll also note that the results fortuitously produce sequences that don't have unique mappings from length to character, which reduces the information leakage from the resulting compressed length. The research Google sponsored indicates that this is minimal.)
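For readers unfamiliar with the process being described: given per-octet counts like the ones Chrome collected, the table is derived by the ordinary frequency-based Huffman construction, then laid out canonically. The sketch below illustrates that general technique only; the counts in it are made up for illustration and are not the Chrome data set, and nothing here reproduces the exact procedure used for the HPACK table.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code_lengths(freqs):
    """Return {symbol: code length} for a Huffman code built from counts.

    `freqs` maps symbols (e.g. octet values 0..255) to observed counts.
    Standard construction: repeatedly merge the two least-frequent
    subtrees; every symbol under a merge gets one bit deeper.
    """
    tie = count()  # unique tie-breaker so the heap never compares symbol tuples
    heap = [(f, next(tie), (sym,)) for sym, f in freqs.items()]
    heapq.heapify(heap)
    depth = {sym: 0 for sym in freqs}
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        merged = syms1 + syms2
        for sym in merged:
            depth[sym] += 1
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return depth

def canonical_codes(lengths):
    """Assign canonical codes as (code, nbits) pairs from code lengths:
    shorter codes first, ties broken by symbol value."""
    code, prev_len, out = 0, 0, {}
    for sym, nbits in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (nbits - prev_len)
        out[sym] = (code, nbits)
        code += 1
        prev_len = nbits
    return out

# Hypothetical counts standing in for the aggregated octet frequencies;
# the real data set is not reproduced here.
sample_counts = Counter({ord('e'): 700, ord('a'): 500, ord('/'): 300,
                         ord('0'): 250, ord(':'): 120, 0x00: 5})
codes = canonical_codes(huffman_code_lengths(sample_counts))
for sym, (code, nbits) in sorted(codes.items(), key=lambda kv: kv[1][1]):
    print(f"0x{sym:02x} -> {code:0{nbits}b}")
```

Note how several distinct symbols end up sharing the same code length; that is the property mentioned above that keeps the compressed length from uniquely identifying which characters were encoded.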
Received on Friday, 5 September 2014 16:10:51 UTC