Re: Limiting header block size

From: Roland Zink <roland@zinks.de>
Date: Thu, 22 May 2014 16:10:39 +0200
Message-ID: <537E055F.8010300@zinks.de>
To: Martin Thomson <martin.thomson@gmail.com>
CC: HTTP Working Group <ietf-http-wg@w3.org>

Hi Martin,

Looks good to me. My only small comment is that the client doesn't really 
know what it did wrong. Should it retry the request at a later time?

Regards,
Roland


On 22.05.2014 15:51, Martin Thomson wrote:
> On 22 May 2014 06:25, Roland Zink <roland@zinks.de> wrote:
>> The sender then shouldn't send the request, and should notify the user /
>> return an error message. If the receiver resets the stream, the result
>> won't be much different.
> This has the advantage of bringing the error forward.
>
> It has the disadvantage of being less flexible and introduces the need
> to specify exactly how to measure the limit.
>
> I'd say that in the common case, state commitment would have to be
> based on post-decompression header fields.  But any setting we define
> would need to be more deterministic and enforceable, so it would be
> easiest to express a limit based solely on the size of header block
> frames.
>
> That creates a mismatch that could be exploited.  The problem then for
> implementations is to choose what value to advertise.  In order to be
> perfectly safe from attack, the limit would have to be so small it
> would basically prevent any real messages from getting through.  Thus,
> implementations are basically required to implement header block
> discarding anyway.
>
> The only advantage of a setting then is to - in some cases - cause
> early detection of some errors, at the cost of more protocol
> machinery.
>
> So, I'm going to stick with my conclusion, and propose:
> https://github.com/http2/http2-spec/pull/482
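For context, the "header block discarding" behaviour Martin describes might be sketched as follows. This is only an illustration, not text from the proposal: the function name, the per-field overhead constant, and the accounting rule (name length + value length + fixed overhead) are assumptions here. The key point it shows is that a receiver must keep consuming decoded fields past the limit, so the shared compression state stays in sync, while dropping the excess.

```python
PER_FIELD_OVERHEAD = 32  # assumed fixed accounting overhead per header field


def decode_header_block(fields, max_header_list_size):
    """Sketch of limit enforcement with discarding.

    fields: iterable of (name, value) pairs as they come out of the
    header decompressor. Decoding continues even after the limit is
    exceeded (the dynamic table must still advance), but fields beyond
    the limit are discarded and the caller can reset the stream.
    Returns (kept_fields, exceeded).
    """
    kept = []
    size = 0
    exceeded = False
    for name, value in fields:
        size += len(name) + len(value) + PER_FIELD_OVERHEAD
        if size > max_header_list_size:
            exceeded = True  # keep iterating: state must stay consistent
            continue
        kept.append((name, value))
    return kept, exceeded
```

This also illustrates the mismatch discussed above: the limit here is measured on post-decompression fields, which a setting expressed in terms of header block frame sizes could not enforce deterministically.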
Received on Thursday, 22 May 2014 14:11:04 UTC