- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Thu, 22 May 2014 05:36:07 -0700
- To: Ilari Liusvaara <ilari.liusvaara@elisanet.fi>
- Cc: David Krauss <potswa@gmail.com>, Mark Nottingham <mnot@mnot.net>, Roberto Peon <grmocg@gmail.com>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>, Michael Sweet <msweet@apple.com>
On 22 May 2014 04:47, Ilari Liusvaara <ilari.liusvaara@elisanet.fi> wrote:
> Unfortunately, there are rather many possible limits, e.g.:
> - Maximum total uncompressed size
> - Maximum compressed size of an individual header backed by the stream
> - Maximum uncompressed size of an individual header backed by the stream
> - Maximum size of an individual header backed by the header table
> - <probably a fair number of limits I can't quickly come up with>
>
> Some of those may also be effectively infinity.

I imagine that we'd have to rely on the serialized size if we did
anything at all. That way it's trivial to enforce; an important
property, methinks.

>> You can't however reject a header block that you don't want. Not without
>> also dumping the connection. Common state being what it is.
>
> AFAIK, it is possible to corrupt just the stream state, not the connection
> state (just mark things as failed and stop actually emitting headers). Then,
> upon end of block, if things failed, try to deal with the corrupt stream.

Yes, you can process the updates to the header table, but dump the
output into the bit bucket and reset the stream.

> These kinds of cases are very nasty to handle in a streaming manner, because
> the webserver can't dispatch execution before almost the end of the block.

Good point. That's why a good implementation will put all the routing
fields at the start. But that's not possible if the routing fields are
pulled from the reference set, which I just realized is highly likely
for things like :scheme and :authority. Bleargh.
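
To make the "trivial to enforce" point concrete, here is a minimal
sketch in Go; maxHeaderBlockSize is a hypothetical local policy knob,
not anything the draft defines:

    // Sketch: enforce a limit on the serialized (compressed) size of a
    // header block, summed across its HEADERS/CONTINUATION fragments.
    // No decompression is needed, which is what makes it trivial.
    func checkBlockSize(fragments [][]byte, maxHeaderBlockSize int) bool {
        total := 0
        for _, f := range fragments {
            total += len(f)
            if total > maxHeaderBlockSize {
                return false // over budget; refuse before decoding further
            }
        }
        return true
    }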
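
And a sketch of the "decode but discard" handling above, also in Go;
hpackDecoder, headerField, and sendRstStream are hypothetical stand-ins
for illustration, not any real implementation's API:

    package main

    import "fmt"

    // Hypothetical stand-ins for illustration only.
    type headerField struct{ name, value string }

    type hpackDecoder struct {
        // dynamic header table, reference set, etc. would live here
    }

    // Decode walks the block, updating the shared compression state and
    // invoking emit for each decoded field. Body elided in this sketch.
    func (d *hpackDecoder) Decode(block []byte, emit func(headerField)) error {
        // ... real HPACK decoding would go here ...
        return nil
    }

    const refusedStream = 0x7 // REFUSED_STREAM error code

    func sendRstStream(streamID, code uint32) {
        fmt.Printf("RST_STREAM stream=%d code=%#x\n", streamID, code)
    }

    // rejectHeaderBlock handles a header block for a stream we refuse:
    // the header table updates must still be applied (they are connection
    // state), but the decoded fields go to the bit bucket and only the
    // stream, not the connection, gets reset.
    func rejectHeaderBlock(dec *hpackDecoder, streamID uint32, block []byte) error {
        err := dec.Decode(block, func(headerField) {
            // discard: keep the table in sync, emit nothing
        })
        if err != nil {
            // a malformed block corrupts shared state: connection error
            return err
        }
        sendRstStream(streamID, refusedStream)
        return nil
    }

    func main() {
        var dec hpackDecoder
        _ = rejectHeaderBlock(&dec, 1, nil)
    }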
Received on Thursday, 22 May 2014 12:36:37 UTC