- From: Poul-Henning Kamp <phk@phk.freebsd.dk>
- Date: Fri, 11 Jul 2014 20:15:34 +0000
- To: Roberto Peon <grmocg@gmail.com>
- cc: Jason Greene <jason.greene@redhat.com>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>, Martin Thomson <martin.thomson@gmail.com>
In message <CAP+FsNdoESu1GyRwyU5GCQGXFxXaHNfi92d13K86gHxxwFYEJg@mail.gmail.com>, Roberto Peon writes:

>As I mentioned before, IIRC we've seen response headers as large as 12mb,
>at which point we said: OK, lets have a 2G limit (effectively infinite),
>because clearly we can't predict this.

So there are three questions we need to ask ourselves:

1. Should the protocol support this case ?
2. By default or by configuration ?
3. Who should suffer most ?

My answers are: Yes, configuration and sender.

Yes, because it is stupid to make a protocol with arbitrary limitations.

Configuration, because we should not force all HTTP/2.0 implementations to over-reserve memory on the off-chance that they ever see one of these requests.

Sender, because in a case like this in particular, it is important to give the receiver advance notice that exceptional memory management will be required.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
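[Editor's sketch, not part of the original message: one way to make the "configuration" answer concrete is for the receiver to advertise its configured header limit up front in a SETTINGS frame, as the eventual HTTP/2 spec (RFC 7540) does with SETTINGS_MAX_HEADER_LIST_SIZE (0x6). The frame layout and identifier below are taken from RFC 7540, not from this thread, and the helper name is hypothetical.]

    /*
     * Sketch: encode an HTTP/2 SETTINGS frame advertising a configured
     * header-list limit.  Frame layout per RFC 7540: 24-bit payload
     * length, 8-bit type, 8-bit flags, 32-bit stream id, then
     * 16-bit identifier / 32-bit value pairs.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define H2_FRAME_SETTINGS            0x04
    #define H2_SETTINGS_MAX_HEADER_LIST  0x06   /* SETTINGS_MAX_HEADER_LIST_SIZE */

    static size_t
    h2_settings_frame(uint8_t *buf, uint16_t id, uint32_t value)
    {
            size_t payload = 6;                     /* one id/value pair */

            buf[0] = 0;                             /* 24-bit payload length */
            buf[1] = 0;
            buf[2] = (uint8_t)payload;
            buf[3] = H2_FRAME_SETTINGS;             /* type */
            buf[4] = 0;                             /* flags (not an ACK) */
            buf[5] = buf[6] = buf[7] = buf[8] = 0;  /* stream 0: connection scope */
            buf[9]  = (uint8_t)(id >> 8);           /* setting identifier */
            buf[10] = (uint8_t)(id & 0xff);
            buf[11] = (uint8_t)(value >> 24);       /* setting value */
            buf[12] = (uint8_t)(value >> 16);
            buf[13] = (uint8_t)(value >> 8);
            buf[14] = (uint8_t)(value & 0xff);
            return (9 + payload);
    }

    int
    main(void)
    {
            uint8_t frame[32];
            /* Advertise a configured 16 KB limit rather than reserving 2 GB. */
            size_t len = h2_settings_frame(frame,
                H2_SETTINGS_MAX_HEADER_LIST, 16384);

            for (size_t i = 0; i < len; i++)
                    printf("%02x ", frame[i]);
            printf("\n");
            return (0);
    }

The point of the sketch is only that the limit is a per-deployment configuration knob announced before any headers flow, so neither side has to over-reserve memory by default.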
Received on Friday, 11 July 2014 20:15:57 UTC