Re: Options for CONTINUATION-related issues

On Jul 16, 2014, at 11:44 PM, Mark Nottingham <mnot@mnot.net> wrote:

> 
> On 17 Jul 2014, at 1:42 am, Jason Greene <jason.greene@redhat.com> wrote:
> 
>> 
>> On Jul 16, 2014, at 9:53 AM, Greg Wilkins <gregw@intalio.com> wrote:
>> 
>>> 
>>> On 16 July 2014 17:08, Mark Nottingham <mnot@mnot.net> wrote:
>>> Are there any other realistic (i.e., capable of achieving consensus, NOT just your favourite approach) options that we should be considering?
>>> 
>>> hmmmm I am probably being unrealistic.... but let's tilt at this windmill 
>>> 
>>> c) Remove CONTINUATION from the specification, allow HEADERS to be fragmented and add a new setting that advises the maximum header set size (i.e., uncompressed) a peer is willing to receive (but might not imply PROTOCOL_ERROR or STREAM_ERROR on receipt).
>> 
>> I have a fourth option to add to the mix which is based on all of the feedback I have seen.
>> 
>> AFAICT there is only one limited use-case that can not be covered by the recent move to allow large frames. That is the ability to split a frame into chunks so that a 1:1 proxy doesn’t have to buffer. 
>> 
>> We could have a more narrow form of CONTINUATION: 
>> 
>> - The sum of all continuation frames can not exceed max_frame_size.
> 
> The default of max_frame_size is 16k; that means that a server can't ever send more than 16k (or 32k, depending on whether you intend to include HEADERS in the max) of response headers without some adjustment by the browser…

Right. The thinking there is that 99.8% of headers are < 16K, so the default is more than enough, and for the remaining 0.2% you can just bump the frame size.
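For the “just bump the frame size” case, here is a minimal sketch of what the client would send. It assumes the setting identifier and 9-byte frame header that eventually landed in RFC 7540 (SETTINGS type 0x4, SETTINGS_MAX_FRAME_SIZE id 0x5); the wire format in the draft under discussion may differ.

package main

import (
	"encoding/binary"
	"fmt"
)

const (
	frameTypeSettings    = 0x4       // SETTINGS frame type (RFC 7540 value)
	settingsMaxFrameSize = 0x5       // SETTINGS_MAX_FRAME_SIZE identifier (RFC 7540 value)
	desiredMaxFrameSize  = 64 * 1024 // bump the 16K default to 64K
)

// settingsFrame builds the bytes of a SETTINGS frame carrying a single
// setting: a 9-byte header (24-bit length, type, flags, stream id) followed
// by one 16-bit identifier and 32-bit value.
func settingsFrame(id uint16, val uint32) []byte {
	payload := make([]byte, 6)
	binary.BigEndian.PutUint16(payload[0:2], id)
	binary.BigEndian.PutUint32(payload[2:6], val)

	hdr := make([]byte, 9)
	hdr[0] = byte(len(payload) >> 16)
	hdr[1] = byte(len(payload) >> 8)
	hdr[2] = byte(len(payload))
	hdr[3] = frameTypeSettings
	hdr[4] = 0                              // no flags
	binary.BigEndian.PutUint32(hdr[5:9], 0) // SETTINGS lives on stream 0

	return append(hdr, payload...)
}

func main() {
	fmt.Printf("% x\n", settingsFrame(settingsMaxFrameSize, desiredMaxFrameSize))
}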

If you compare CONTINUATION to Large Frames, they are actually very similar.

They both:
- Can send very large headers 
- Block other streams until completion
- Support writing to the wire in chunks

CONTINUATION adds:
- 16MB -> Unlimited headers
- Ability to write in chunks without knowing the total compressed length

The former is not really desirable and is the cause of #551.
The latter offers very limited value, in that a 1:1 proxy can re-encode the frame using different HPACK state and send the data across piecemeal.

So this option is simply saying that we can address that particular use case, if it is deemed important, by supporting fragmentation of the HEADERS frame. We don’t have to span frames to meet that need. By not spanning frames we solve #551, and we also eliminate the DoS exposure concern that has been raised against a header size limit solution.
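As a rough sketch of how a 1:1 proxy could forward a header block piecemeal under this scheme, without knowing the total size up front, something like the following. The fragment type and field names are invented for illustration; they are not from any draft.

package main

import (
	"errors"
	"fmt"
)

// fragment is a hypothetical piece of one fragmented HEADERS frame.
type fragment struct {
	payload    []byte
	endHeaders bool // set on the final piece
}

// headerWriter forwards HPACK output as it is produced, enforcing the
// proposed rule that the pieces of one HEADERS frame sum to at most
// maxFrameSize.
type headerWriter struct {
	maxFrameSize int
	written      int
	out          []fragment
}

var errTooLarge = errors.New("header block exceeds max frame size")

// write forwards one chunk of encoded headers. It fails once the running
// total would exceed the limit; see the discard discussion below for what
// the sender could do at that point.
func (w *headerWriter) write(chunk []byte) error {
	if w.written+len(chunk) > w.maxFrameSize {
		return errTooLarge
	}
	w.written += len(chunk)
	w.out = append(w.out, fragment{payload: chunk})
	return nil
}

// finish marks the last forwarded piece as the end of the header block.
func (w *headerWriter) finish() {
	if n := len(w.out); n > 0 {
		w.out[n-1].endHeaders = true
	}
}

func main() {
	w := &headerWriter{maxFrameSize: 16 * 1024}
	for i := 0; i < 3; i++ {
		if err := w.write(make([]byte, 4*1024)); err != nil {
			fmt.Println("would have to discard or bump the limit:", err)
			return
		}
	}
	w.finish()
	fmt.Printf("forwarded %d bytes in %d pieces\n", w.written, len(w.out))
}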

> 
> 
>> Since this use case is about avoiding buffering, the sender may not know that the size has exceeded the max until the Nth continuation is sent. Therefore:
>> 
>> - Add a discard flag on the last HEADER/CONTINUATION frame, which instructs the endpoint to discard the request and all buffered data.
> 
> Why not just reset the stream?

The only benefit of the discard flag over simply resetting the stream is that it notifies the server that a request wasn’t sent because it was too big, although I agree it’s of limited value. It was a concern raised against size limits that a server does not know when a request is too big. I personally disagree with the concern; I was just describing one way it could be handled.
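To make the trade-off concrete, here is a minimal receiver-side sketch; the flag names and bit values are invented for illustration and are not from any draft.

package main

import "fmt"

// Hypothetical flag bits for the fragmented-HEADERS proposal.
const (
	flagEndHeaders = 0x4
	flagDiscard    = 0x80
)

// headerAccumulator buffers pieces of one fragmented header block on the
// receiving side.
type headerAccumulator struct {
	buf []byte
}

// onFragment handles one received piece. With the proposed discard flag the
// receiver drops the partial request *and* knows the client gave up because
// the headers were too big (e.g. it could log or count a would-be 431); a
// plain RST_STREAM would discard the same state without that hint.
func (a *headerAccumulator) onFragment(payload []byte, flags uint8) {
	if flags&flagDiscard != 0 {
		a.buf = nil
		fmt.Println("request abandoned by sender: headers too large")
		return
	}
	a.buf = append(a.buf, payload...)
	if flags&flagEndHeaders != 0 {
		fmt.Printf("complete header block: %d bytes\n", len(a.buf))
		a.buf = nil
	}
}

func main() {
	var a headerAccumulator
	a.onFragment(make([]byte, 4096), 0)
	a.onFragment(nil, flagDiscard) // sender hit the limit mid-stream
}

Either way the buffered state is dropped; the only thing RST_STREAM loses is the “too big” hint.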

> 
> 
>> Pros:
>> - Addresses 551
>> - There is only one max setting
>> - Encourages reduction of gigantic headers
>> - As a side-effect of the discard flag, it provides a client the ability to inform the server that a request was too large and would have generated a 431 (not really significant but it came up in a thread and worth mentioning)
>> 
>> Cons:
>> - As with all other length limited proposals, sender has to rewind encoder state to that of last sent frame.
> 
> Cheers,
> 
> 
> --
> Mark Nottingham   https://www.mnot.net/
> 
> 
> 
> 

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat

Received on Thursday, 17 July 2014 17:05:08 UTC