- From: Osama Mazahir <OSAMAM@microsoft.com>
- Date: Fri, 25 Oct 2013 02:09:09 +0000
- To: HTTP Working Group <ietf-http-wg@w3.org>
Based on my understanding, the encoder and decoder have to use the exact same value for max header table size so that their indices match. So if the client advertises SETTINGS_HEADER_TABLE_SIZE=5MB, that means the client-side decoder is using a header table with a max size of 5MB, which in turn means the server-side encoder also has to use a header table with a max size of 5MB.

But if the server only wants to spend 4KB of encoder space, then after storing 4KB worth of header name/value pairs it can only reuse entries from that initially stored 4KB or emit literals (either as-is or Huffman encoded). So it seems that, to bound its memory usage, the server (in the above example) has to be choosy about what it adds to its encoder-side header table, because once something is added it cannot be removed (unless it decides to expand toward the full 5MB)?

Thanks,
--Osama.
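P.S. To make the scenario concrete, here is a rough sketch (in Python; the class and method names are made up for illustration, not from any draft) of an encoder that shares a 5MB max table size with the decoder but self-limits its own insertions to a 4KB budget. It uses the draft's entry-size accounting of name length + value length + 32 bytes of overhead. Once the budget is spent, it can only emit indices for entries it already stored, or plain literals:

    # Illustrative only -- not a real HPACK implementation.
    class HeaderTableEncoder:
        ENTRY_OVERHEAD = 32  # per-entry overhead from the draft's size accounting

        def __init__(self, advertised_max, budget):
            # budget is the encoder's self-imposed cap, at most the
            # decoder's advertised SETTINGS_HEADER_TABLE_SIZE.
            assert budget <= advertised_max
            self.budget = budget
            self.used = 0
            self.table = {}       # (name, value) -> index
            self.next_index = 0

        def encode(self, name, value):
            entry_size = len(name) + len(value) + self.ENTRY_OVERHEAD
            if (name, value) in self.table:
                # Entry stored earlier: reuse it by emitting an index.
                return ("indexed", self.table[(name, value)])
            if self.used + entry_size <= self.budget:
                # Still under the self-imposed budget: store and index it.
                self.table[(name, value)] = self.next_index
                self.next_index += 1
                self.used += entry_size
                return ("literal-with-indexing", name, value)
            # Budget exhausted: nothing ever gets evicted (the decoder's
            # 5MB table is nowhere near full), so emit a plain literal.
            return ("literal-without-indexing", name, value)

    enc = HeaderTableEncoder(advertised_max=5 * 2**20, budget=4 * 2**10)
    enc.encode("user-agent", "x" * 100)   # under budget: stored for reuse
    enc.encode("user-agent", "x" * 100)   # reused via index
    enc.encode("cookie", "y" * 5000)      # over budget: literal only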
Received on Friday, 25 October 2013 02:09:57 UTC