- From: Jason Greene <jason.greene@redhat.com>
- Date: Tue, 8 Jul 2014 17:09:40 -0500
- To: Martin Thomson <martin.thomson@gmail.com>
- Cc: "K.Morgan@iaea.org" <K.Morgan@iaea.org>, Greg Wilkins <gregw@intalio.com>, Mark Nottingham <mnot@mnot.net>, Roberto Peon <grmocg@gmail.com>, Poul-Henning Kamp <phk@phk.freebsd.dk>, John Graettinger <jgraettinger@chromium.org>, Mike Bishop <Michael.Bishop@microsoft.com>, HTTP Working Group <ietf-http-wg@w3.org>
On Jul 8, 2014, at 3:46 PM, Martin Thomson <martin.thomson@gmail.com> wrote:

> On 8 July 2014 12:52, <K.Morgan@iaea.org> wrote:
>> What does it matter if the uncompressed headers are 32M, or 32G for that matter? As soon as you reach the limit you are willing to commit in resources to the uncompressed headers, you'll stop and respond with 431.
>
> Correct. And I'm merely noting that having a setting does nothing to
> help you avoid having to follow that process. The setting - on its
> own - is therefore of marginal value. Mark was suggesting that we
> consider the setting in isolation, which is the point that I was
> addressing.

Just one minor caveat worth making. Technically a lot of this depends on the compute model of the server. It’s possible, for example, to take advantage of the HPACK referential model: N references to the header table do not necessarily mean N * entry size. It’s quite possible that a memory-optimized implementation would have a formula like (compressed size * sizeof(void *)) + header_size, or perhaps even compressed size + header_size, if it simply mirrored the compact HPACK structure in memory. Although, to your point, that’s not something that can be relied upon.
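To make that concrete, here is a rough sketch in C of what such a pointer-based representation might look like. The types and names are purely illustrative, not taken from any real implementation:

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* One entry in the HPACK header table: the name/value strings are
       stored once here, no matter how many times they are referenced. */
    struct table_entry {
        const char *name;
        const char *value;
    };

    /* A decoded header list that stores pointers into the table instead
       of copying each referenced entry. N indexed references therefore
       cost roughly N * sizeof(void *), not N * entry size. */
    struct header_list {
        const struct table_entry *refs[64]; /* fixed cap for the sketch */
        size_t count;
    };

    static void add_reference(struct header_list *list,
                              const struct table_entry *entry)
    {
        if (list->count < 64)
            list->refs[list->count++] = entry;
    }

    int main(void)
    {
        struct table_entry cookie = { "cookie", "a-large-repeated-value" };
        struct header_list decoded = { .count = 0 };

        /* Three indexed references to the same table entry... */
        for (int i = 0; i < 3; i++)
            add_reference(&decoded, &cookie);

        /* ...are accounted as three pointers plus ONE copy of the entry. */
        size_t entry_size = strlen(cookie.name) + strlen(cookie.value);
        size_t cost = decoded.count * sizeof(void *) + entry_size;
        printf("approx. %zu bytes (vs. %zu if each reference were a copy)\n",
               cost, decoded.count * entry_size);
        return 0;
    }

The point is just that in such a layout the per-reference cost is a pointer, while the entry itself is paid for once in the table, so the uncompressed size can substantially overstate the memory actually committed.

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat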
Received on Tuesday, 8 July 2014 22:10:25 UTC