- From: Poul-Henning Kamp <phk@phk.freebsd.dk>
- Date: Wed, 16 Jul 2014 06:35:03 +0000
- To: Mark Nottingham <mnot@mnot.net>
- cc: HTTP Working Group <ietf-http-wg@w3.org>
In message <F81935AB-CDA5-493D-ACEF-C94313EC50C5@mnot.net>, Mark Nottingham writes:

>A lot of the discussion around
><https://github.com/http2/http2-spec/issues/551> is around having a hard
>limit for header block sizes in the protocol, and the resulting ways
>that helps and hurts.
>
>I wonder if we can make a small adjustment to ease some of the pain.
>Specifically, what if it were only advisory, and there were no default?
>
>I.e., instead of a setting with the semantic of "You MUST NOT send
>header blocks larger than <x>", what if it were "If you send header
>blocks larger than <x>, I'll very likely discard them (responses) /
>respond with a 431 (requests)"?

That works for me.

I still think the limit should apply to compressed headers, since that
is what memory allocation is required for.

Once the receiver has received the frame, it's trivial to do a
non-producing pass over it to calculate the uncompressed size, if this
is relevant to know before memory allocation happens.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
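A minimal sketch of the "non-producing pass" Kamp describes: walking the HPACK primitives of a received header block to tally the uncompressed size, without materializing any header strings. The code is illustrative, not from the message: the function names are invented, Huffman-coded strings are counted at their encoded length (a real pass would decode-and-discard to get the exact size), and indexed fields would need the static/dynamic tables to add the referenced entry's size, which is stubbed out here.

```c
#include <stdint.h>
#include <stdio.h>

/* Decode an HPACK prefix integer (RFC 7541, section 5.1). */
static int
hpack_int(const uint8_t **pp, const uint8_t *end, unsigned prefix,
    uint64_t *val)
{
    uint8_t b, mask = (uint8_t)((1u << prefix) - 1);
    unsigned shift = 0;

    if (*pp >= end)
        return (-1);
    *val = *(*pp)++ & mask;
    if (*val < mask)
        return (0);
    while (*pp < end) {
        b = *(*pp)++;
        *val += (uint64_t)(b & 0x7f) << shift;
        shift += 7;
        if (!(b & 0x80))
            return (0);
    }
    return (-1);
}

/* Skip one length-prefixed string, adding its length to *sz. */
static int
hpack_skip_string(const uint8_t **pp, const uint8_t *end, uint64_t *sz)
{
    uint64_t len;

    if (hpack_int(pp, end, 7, &len))    /* H bit + 7-bit length */
        return (-1);
    if ((uint64_t)(end - *pp) < len)
        return (-1);
    *pp += len;                         /* nothing is produced */
    *sz += len;
    return (0);
}

/* One pass over a header block; *sz gets the uncompressed size. */
static int
hpack_sizepass(const uint8_t *p, const uint8_t *end, uint64_t *sz)
{
    uint64_t idx;

    *sz = 0;
    while (p < end) {
        if (*p & 0x80) {                /* indexed field */
            if (hpack_int(&p, end, 7, &idx))
                return (-1);
            /* table lookup for name+value+32 stubbed out */
        } else if ((*p & 0xe0) == 0x20) { /* table size update */
            if (hpack_int(&p, end, 5, &idx))
                return (-1);
        } else {                        /* literal field */
            if (hpack_int(&p, end, (*p & 0x40) ? 6 : 4, &idx))
                return (-1);
            if (idx == 0 && hpack_skip_string(&p, end, sz))
                return (-1);            /* literal name */
            if (hpack_skip_string(&p, end, sz))
                return (-1);            /* literal value */
            *sz += 32;    /* entry overhead, RFC 7541 section 4.1 */
        }
    }
    return (0);
}

int
main(void)
{
    /* "foo: bar" as a literal without indexing, new name */
    static const uint8_t blk[] = {
        0x00, 0x03, 'f', 'o', 'o', 0x03, 'b', 'a', 'r' };
    uint64_t sz;

    if (hpack_sizepass(blk, blk + sizeof blk, &sz) == 0)
        printf("uncompressed size: %llu\n", (unsigned long long)sz);
    return (0);
}
```

On the example input this prints 38 (3 + 3 + 32), the per-entry accounting from RFC 7541. The point of the sketch is that the whole walk touches only the already-received compressed frame, which is why the limit can sensibly apply to the compressed size that governs allocation.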
Received on Wednesday, 16 July 2014 08:55:40 UTC