
Re: Options for CONTINUATION-related issues

From: Jason Greene <jason.greene@redhat.com>
Date: Thu, 17 Jul 2014 13:23:35 -0500
Cc: Mark Nottingham <mnot@mnot.net>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <9C7CD176-F530-449A-B27F-82FB2A4C9E8E@redhat.com>
To: Roberto Peon <grmocg@gmail.com>
Yes, that was in my original email about the option (and was included in the reply):

> >> Cons:
> >> - As with all other length limited proposals, sender has to rewind encoder state to that of last sent frame.


I was just focusing on how the pros differ, though.

I didn’t add it to the wiki because I wasn’t sure there would be any interest in going this way. I can definitely do that, though, just in case.
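For concreteness, the rewind cost in that con can be sketched with a toy (decidedly non-HPACK) encoder: a length-limited sender has to snapshot its dynamic-table state before each frame so it can roll back if the block overflows mid-encode. All names here are illustrative, not from any real implementation.

```python
import copy

class ToyEncoder:
    def __init__(self):
        # Stand-in for the HPACK dynamic table; real HPACK state is richer.
        self.dynamic_table = []

    def checkpoint(self):
        # Deep-copying the table is the memory over-commitment Roberto
        # mentions: the sender holds two copies of encoder state per frame.
        return copy.deepcopy(self.dynamic_table)

    def rewind(self, snapshot):
        self.dynamic_table = snapshot

    def encode(self, name, value):
        # Encoding mutates shared state, which is why a failed frame
        # cannot simply be thrown away without a rewind.
        self.dynamic_table.insert(0, (name, value))
        return f"{name}: {value}\r\n".encode()

MAX_FRAME = 32  # pretend max_frame_size, tiny for the example

def encode_frame(encoder, headers):
    snap = encoder.checkpoint()
    payload = b""
    for name, value in headers:
        payload += encoder.encode(name, value)
        if len(payload) > MAX_FRAME:
            encoder.rewind(snap)  # roll back to the last sent frame's state
            return None
    return payload

enc = ToyEncoder()
frame = encode_frame(enc, [(":status", "200"), ("x-big", "y" * 100)])
# frame is None and enc.dynamic_table is back to [] -- the rewind worked,
# but only because the snapshot was paid for up front.
```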

On Jul 17, 2014, at 1:09 PM, Roberto Peon <grmocg@gmail.com> wrote:

> The list you have here didn't mention that any length requirement also requires the compressor to be able to rewind state, which is most decidedly non-trivial, and is a great way to require memory over-commitment.
> These are considered with pros/cons/requirements in the wiki link that Mark sent out a few days back-- it would be good to refer to that doc (and/or modify it), as it is likely to better capture all pros/cons.
> Certainly it is more reliable than my memory :)
> 
> -=R
> 
> 
> On Thu, Jul 17, 2014 at 10:04 AM, Jason Greene <jason.greene@redhat.com> wrote:
> 
> On Jul 16, 2014, at 11:44 PM, Mark Nottingham <mnot@mnot.net> wrote:
> 
> >
> > On 17 Jul 2014, at 1:42 am, Jason Greene <jason.greene@redhat.com> wrote:
> >
> >>
> >> On Jul 16, 2014, at 9:53 AM, Greg Wilkins <gregw@intalio.com> wrote:
> >>
> >>>
> >>> On 16 July 2014 17:08, Mark Nottingham <mnot@mnot.net> wrote:
> >>> Are there any other realistic (i.e., capable of achieving consensus, NOT just your favourite approach) options that we should be considering?
> >>>
> >>> hmmmm I am probably being unrealistic.... but let's tilt at this windmill
> >>>
> >>> c) Remove CONTINUATION from the specification, allow HEADERS to be fragmented and add a new setting that advises the maximum header set size (i.e., uncompressed) a peer is willing to receive (but might not imply PROTOCOL_ERROR or STREAM_ERROR on receipt).
> >>
> >> I have a fourth option to add to the mix which is based on all of the feedback I have seen.
> >>
> >> AFAICT there is only one limited use-case that can not be covered by the recent move to allow large frames. That is the ability to split a frame into chunks so that a 1:1 proxy doesn’t have to buffer.
> >>
> >> We could have a more narrow form of CONTINUATION:
> >>
> >> - The sum of all continuation frames can not exceed max_frame_size.
> >
> > The default of max_frame_size is 16k; that means that a server can't ever send more than 16k (or 32k, depending on whether you intend to include HEADERS in the max) of response headers without some adjustment by the browser…
> 
> Right. The thinking there is that 99.8% of headers are < 16K, so this is more than enough, and when you hit the 0.2% case you can just bump the frame size.
> 
> If you compare CONTINUATION to Large Frames, they are actually very similar.
> 
> They both:
> - Can send very large headers
> - Block other streams until completion
> - Support writing to the wire in chunks
> 
> Continuation adds:
> - 16MB -> Unlimited headers
> - Ability to write in chunks without knowing the total compressed length
> 
> The former is not really desirable and causes #551.
> The latter offers very limited value, in that a 1:1 proxy can re-encode the frame using different HPACK state and send the data across piecemeal.
> 
> So this option is simply saying we can address that particular use-case, if deemed important, by supporting fragmentation of the HEADERS frame. We don’t have to span frames to accomplish the need. By not spanning frames we solve #551, and also eliminate the DOS exposure concern which has been brought up against a header size limit solution.
> 
> >
> >
> >> Since this use case is about avoiding buffering, the sender may not know that the exact size has exceeded the max until the Nth continuation is sent. Therefore:
> >>
> >> - Add a discard flag on the last HEADER/CONTINUATION frame, which instructs the endpoint to discard the request and all buffered data.
> >
> > Why not just reset the stream?
> 
> The only benefit of not resetting the stream is notifying the server that a request wasn’t sent because it was too big, although I agree it’s of limited value. It was a concern brought up against size limits, that a server does not know when a request is too big. I personally disagree with the concern; I was just describing one way it could be handled.
> 
> >
> >
> >> Pros:
> >> - Addresses 551
> >> - There is only one max setting
> >> - Encourages reduction of gigantic headers
> >> - As a side-effect of the discard flag, it provides a client the ability to inform the server that a request was too large and would have generated a 431 (not really significant but it came up in a thread and worth mentioning)
> >>
> >> Cons:
> >> - As with all other length limited proposals, sender has to rewind encoder state to that of last sent frame.
> >
> > Cheers,
> >
> >
> > --
> > Mark Nottingham   https://www.mnot.net/
> >
> >
> >
> >
> 
> --
> Jason T. Greene
> WildFly Lead / JBoss EAP Platform Architect
> JBoss, a division of Red Hat
> 
> 
> 
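As a rough sketch of the narrowed CONTINUATION rule and discard flag proposed above (frame handling and names here are illustrative, not a real HTTP/2 implementation): the receiver sums the header-block fragments against the single max_frame_size setting, and a discard flag drops everything buffered so far.

```python
MAX_FRAME_SIZE = 16384  # SETTINGS_MAX_FRAME_SIZE default in HTTP/2

class HeaderBlockAssembler:
    """Buffers HEADERS + CONTINUATION fragments under the proposed rule
    that their combined payload may not exceed max_frame_size."""

    def __init__(self, max_frame_size=MAX_FRAME_SIZE):
        self.max = max_frame_size
        self.total = 0
        self.fragments = []

    def on_fragment(self, payload, end_headers=False, discard=False):
        if discard:
            # The proposed discard flag: the sender hit its own limit,
            # so drop the request and all buffered data.
            self.fragments.clear()
            self.total = 0
            return None
        self.total += len(payload)
        if self.total > self.max:
            # In a real stack this would be a stream error, per the rule
            # that the fragment sum cannot exceed max_frame_size.
            raise ValueError("header block exceeds max_frame_size")
        self.fragments.append(payload)
        if end_headers:
            return b"".join(self.fragments)
        return None
```

A usage sketch: `HeaderBlockAssembler(max_frame_size=10)` returns `b"abcdef"` after fragments `b"abc"` and `b"def"` (the second with `end_headers=True`), and raises once the running total crosses the limit.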

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
Received on Thursday, 17 July 2014 18:24:07 UTC
