- From: Martin Nilsson <nilsson@opera.com>
- Date: Tue, 10 Jun 2014 16:44:55 +0200
- To: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Regarding the process of validating the current HPACK Huffman codes against a large set of real headers, I think there is a risk that we'll paint ourselves into a corner dictated by how HTTP/1 looks. As pointed out, a lot of base64- or hex-encoded headers benefit greatly from Huffman encoding. However, if we can carry binary data, there is no point in text-encoding the data in the first place, and such headers are not something to train the code table for. These headers will change over time, because even if they are taken into consideration for the Huffman table, it is still more space efficient not to encode them at all. The code lengths for the characters of the other headers might suffer as a result, though.

/Martin Nilsson

--
Using Opera's revolutionary email client: http://www.opera.com/mail/
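A rough back-of-the-envelope comparison makes the space argument concrete. The per-character bit costs below are illustrative assumptions for a Huffman code trained on typical header text, not the actual HPACK table:

```python
import math

# Assumed average code lengths (bits per character) under a header-trained
# Huffman code -- illustrative values, not taken from the HPACK draft.
BITS_PER_BASE64_CHAR = 6.0
BITS_PER_HEX_CHAR = 5.5

def raw_binary_bits(n_bytes):
    # Carrying the value as opaque binary: 8 bits per byte, no text encoding.
    return 8 * n_bytes

def base64_huffman_bits(n_bytes):
    # base64 expands every 3 bytes into 4 characters, each then Huffman-coded.
    n_chars = 4 * math.ceil(n_bytes / 3)
    return n_chars * BITS_PER_BASE64_CHAR

def hex_huffman_bits(n_bytes):
    # hex uses 2 characters per byte, each then Huffman-coded.
    return 2 * n_bytes * BITS_PER_HEX_CHAR

for n in (16, 64, 256):
    print(n, raw_binary_bits(n), base64_huffman_bits(n), hex_huffman_bits(n))
```

Even with fairly short codes for the base64 alphabet, Huffman-coded base64 only roughly matches the 8 bits per byte of raw binary, and hex stays well above it; a header value that could be carried as binary gains nothing from having its text encoding represented in the code table.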
Received on Tuesday, 10 June 2014 14:45:30 UTC