- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Thu, 22 May 2014 05:53:54 -0700
- To: HTTP Working Group <ietf-http-wg@w3.org>
The discussion on header blocks and flow control raised the question of whether it is appropriate to allow an endpoint to limit the size of the header blocks it accepts.

Due to the design of HPACK, there are many cases where a server or intermediary is forced to buffer an entire header block. This is because critical routing information like :path, :authority and :scheme can be placed anywhere in a header block. More importantly, some of these fields are highly likely to be in the reference set, causing them to be emitted only at the end of the block. This means that processing a header block can require an essentially unbounded amount of state in many implementations. Being able to limit this commitment would be good.

However, it's not clear what an announced limit would enable at a sender. Conforming to a limit ultimately requires a sender to drop header fields, and it seems unlikely that a sender could safely do so just to satisfy an arbitrary limit. Header fields are there to express semantics, and it's difficult to know which fields are safe to drop without knowing application requirements in some detail. (Someone suggested that a block could be split, but that doesn't work due to the structure of the protocol.)

Thus, I conclude that changing the protocol is not advisable. Implementations that receive more header information than they are willing to tolerate can process any updates to the header table and then reset the corresponding stream (which for a PUSH_PROMISE would be the promised stream, of course). I will raise an editorial issue to note the DoS implications of excessively large header blocks and the above method for handling them.
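To make that handling concrete, here is a minimal sketch in Go using the golang.org/x/net/http2/hpack package (which postdates this message and implements the final HPACK, without a reference set). The headerLimiter type, the resetStream callback, and the choice of REFUSED_STREAM are illustrative assumptions, not anything from the drafts; the point is only that the whole block is still run through the decoder so the dynamic table stays in sync with the peer, and the affected stream (or, for PUSH_PROMISE, the promised stream) is then reset rather than the connection.

```go
// Sketch only: enforce a local limit on accepted header list size while
// keeping HPACK decoder state consistent, then reset the stream.
package main

import (
	"fmt"

	"golang.org/x/net/http2/hpack"
)

type headerLimiter struct {
	dec      *hpack.Decoder
	limit    uint32 // local policy: maximum accepted header list size
	size     uint32 // accumulated size of emitted fields
	exceeded bool
	fields   []hpack.HeaderField
}

func newHeaderLimiter(limit uint32) *headerLimiter {
	hl := &headerLimiter{limit: limit}
	hl.dec = hpack.NewDecoder(4096, hl.onField)
	return hl
}

func (hl *headerLimiter) onField(f hpack.HeaderField) {
	hl.size += f.Size()
	if hl.size > hl.limit {
		hl.exceeded = true
		// Stop collecting fields, but keep decoding so that any dynamic
		// table updates in the remainder of the block are still applied.
		hl.dec.SetEmitEnabled(false)
		return
	}
	hl.fields = append(hl.fields, f)
}

// processHeaderBlock feeds one complete header block (HEADERS or
// PUSH_PROMISE plus CONTINUATIONs) to the decoder. If the local limit is
// exceeded, it resets the given stream -- for PUSH_PROMISE, streamID is
// the promised stream -- instead of treating it as a connection error.
// resetStream is a hypothetical hook into the frame-writing layer.
func (hl *headerLimiter) processHeaderBlock(streamID uint32, fragments [][]byte,
	resetStream func(streamID, errorCode uint32)) ([]hpack.HeaderField, error) {

	// Reset per-block state; the decoder itself persists for the
	// lifetime of the connection because the dynamic table does.
	hl.size, hl.exceeded, hl.fields = 0, false, nil
	hl.dec.SetEmitEnabled(true)

	for _, frag := range fragments {
		if _, err := hl.dec.Write(frag); err != nil {
			return nil, err // malformed HPACK remains a connection error
		}
	}
	if err := hl.dec.Close(); err != nil {
		return nil, err
	}
	if hl.exceeded {
		const refusedStream = 0x7 // REFUSED_STREAM, one plausible choice
		resetStream(streamID, refusedStream)
		return nil, fmt.Errorf("header block on stream %d exceeds local limit", streamID)
	}
	return hl.fields, nil
}
```

The essential property is that the receiver never skips decoding: discarding the fields is cheap, but skipping the block would desynchronize the header table and break every subsequent block on the connection.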
Received on Thursday, 22 May 2014 12:54:22 UTC