- From: Poul-Henning Kamp <phk@phk.freebsd.dk>
- Date: Mon, 07 Jul 2014 20:47:36 +0000
- To: Roberto Peon <grmocg@gmail.com>
- cc: Johnny Graettinger <jgraettinger@chromium.org>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>
In message <CAP+FsNdRPnFZvhXnozFBaPSdUybD4GZqaP9j_6Cv_pvNs5d4fQ@mail.gmail.com>, Roberto Peon writes:

>> >The problem with doing so is that it precludes the possibility
>> >of streaming HPACK.
>>
>> While it may be an advantage (how big?) for a client to be able to
>> "stream HPACK", it is a major inconvenience to everybody else in the
>> HTTP network topology to have no idea how much data is incoming.
>
>By data you're talking about the size of the compressed headers.
>You're basing that argument about the amount of inconvenience on the
>assertion that one achieves higher efficiency by allocating memory based on
>what a possibly malicious client says should be the size.

Which is much better than the current draft's unbounded time/space
requirement for the server while the malicious CONTINUATION frames
trickle in.

>If we're following this argument to its logical conclusion, however, we
>should also be communicating the *uncompressed* size,

I've actually thought about proposing that.  The reasons I haven't are
threefold.

First, I suspect that most servers and proxies will not decompress more
than they absolutely need to.  If nobody cares about User-Agent, why
spend time de-Huffman'ing it?  Therefore they won't need to know the
size of the uncompressed headers.

Second, it would impose a buffering requirement on clients which know
through "out-of-band" means that they will not exceed the server's
limits.  Think of a lightbulb: it knows that its request will need
between 200 and 227 bytes, which is guaranteed to be acceptable, so
there is no (other) requirement to buffer it.

Third, it is perfectly conceivable that the client won't know the size
until it has composed and compressed the headers, so the number wouldn't
be available until the end.  To be useful to the server, it needs to be
up front (where space could be reserved before compression is started -
but that adds complexity, etc.).

If I'm wrong, it's a good candidate for one of the reserved bits later.

>> Given that the CPU/MEM performance bottleneck is everywhere but the
>> client there needs to be really good arguments for shifting the
>> inconvenience of finding the total length from client to everybody
>> else.
>>
>There is also the latency tradeoff, which for many usecases is the big
>deal, and keeps being ignored.

The only way client buffering of full HEADERS can introduce latency is
if the network bandwidth is approximately 10 times higher than the
bandwidth at which the client can produce the HEADERS.

Which of those many use cases would that be?

>Since proxies are clients, and often even more constrained than servers,
>requirements to buffer are potentially onerous, especially given that one
>is not required to do this for HTTP today.

You seem to forget that the proxies with the highest loads get the
number for free from the client, and can just pass it on.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
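A minimal sketch in C of the resource-bounding argument above, assuming a hypothetical server-side limit; the 16 KB figure and every identifier below (hb_check_declared, hb_feed_continuation, hb_state) are illustrative inventions, not taken from the HTTP/2 draft, HPACK, or any implementation:

    /*
     * Illustrative sketch only: contrasts an up-front declared compressed
     * size with HEADERS + CONTINUATION frames that arrive with no total.
     * All names and the limit are hypothetical.
     */
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define HB_MAX_COMPRESSED 16384        /* server's compressed-header limit */

    /*
     * If the total compressed size is announced up front, the accept/reject
     * decision is a single comparison before any frame payload is read.
     */
    static int
    hb_check_declared(uint32_t declared_len)
    {
        return (declared_len <= HB_MAX_COMPRESSED);
    }

    /*
     * Without an announced total, the server must keep per-stream state,
     * re-check on every CONTINUATION frame, and still needs a separate
     * timeout against a peer that merely trickles the frames in.
     */
    struct hb_state {
        size_t received;                   /* compressed bytes seen so far */
    };

    static int
    hb_feed_continuation(struct hb_state *st, size_t frame_len)
    {
        st->received += frame_len;
        return (st->received <= HB_MAX_COMPRESSED);  /* can only fail late */
    }

    int
    main(void)
    {
        struct hb_state st = { 0 };

        assert(hb_check_declared(300));               /* lightbulb-sized request */
        assert(!hb_check_declared(10 * 1024 * 1024)); /* rejected before buffering */

        /* the trickle case fails only after the limit has been buffered */
        while (hb_feed_continuation(&st, 1024))
            ;
        return (0);
    }

The declared-size path can refuse an oversized header block before allocating anything for it; the trickle path cannot fail until a limit's worth of frames has already been received, and still needs its own timer against a slow sender.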
Received on Monday, 7 July 2014 20:48:01 UTC