Re: Cost analysis: (was: Getting to Consensus: CONTINUATION-related issues)

On 2014-07-19, at 2:11 PM, Greg Wilkins <gregw@intalio.com> wrote:

> 
> Roberto,
> 
> With regard to roll back, I see no difference between the burden of roll back in the sender vs the burden of enforced decoding in the receiver. I.e. we currently have the issue that when a header limit is breached, the receiver has to either continue to process anyway or discard the entire connection. By moving a compressed limit check to the sender, the choice is much the same: roll back or discard the entire connection.

There’s a middle path, which I alluded to in my A/B “vote” message.

Enforced decoding is not a burden on the receiver as long as it implements streaming. An ideal receiver (the quality of implementation essentially required by the current spec) can keep receiving and forwarding or discarding beyond its limit without committing extra memory or undue CPU.

If we can agree that GOAWAY on excessive headers is good enough for simple implementations, and that streaming is reasonable to implement for any application that really doesn’t want to send GOAWAY, then the hard limit should remain at the receiver, with voluntary self-limiting by senders.
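To make the streaming point concrete, here is a rough sketch (Java, written against a made-up HpackDecoder/HeaderCallback interface rather than any real library) of a receiver that keeps decoding fragments past its limit so the HPACK dynamic table stays in sync, discards the excess, and then rejects the stream. The 32-octet per-field overhead and the octet counting are assumptions for illustration only:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    final class StreamingHeaderReceiver {

        /** Hypothetical decoder: must see every fragment so the dynamic table stays in sync. */
        interface HpackDecoder {
            void decodeFragment(ByteBuffer fragment, HeaderCallback callback);
        }

        interface HeaderCallback {
            void onHeader(String name, String value);
        }

        interface StreamHandler {
            void reject431();                      // or a connection-level GOAWAY
            void dispatch(List<String[]> headers);
        }

        private final HpackDecoder decoder;
        private final long maxUncompressedBytes;   // this receiver's own policy
        private final List<String[]> headers = new ArrayList<>();
        private long decodedBytes;
        private boolean overLimit;

        StreamingHeaderReceiver(HpackDecoder decoder, long maxUncompressedBytes) {
            this.decoder = decoder;
            this.maxUncompressedBytes = maxUncompressedBytes;
        }

        /** Called for each HEADERS/CONTINUATION fragment on the stream. */
        void onFragment(ByteBuffer fragment) {
            // Keep decoding even when over the limit: HPACK state must still advance.
            decoder.decodeFragment(fragment, (name, value) -> {
                // length() as an octet count assumes ASCII fields; 32 is an assumed per-field overhead
                decodedBytes += name.length() + value.length() + 32;
                if (decodedBytes > maxUncompressedBytes) {
                    overLimit = true;
                    headers.clear();               // discard, stop buffering
                } else {
                    headers.add(new String[]{name, value});
                }
            });
        }

        /** Called once END_HEADERS has been seen. */
        void onHeadersComplete(StreamHandler stream) {
            if (overLimit) {
                stream.reject431();
            } else {
                stream.dispatch(headers);
            }
        }
    }

A truly simple implementation can skip all of this bookkeeping and just send GOAWAY when its limit trips, which is the other half of the bargain above.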

> The only way to efficiently handle a limit is to have it as a declared uncompressed limit enforced by the sender before encoding starts. Only then can failure be determined before committing to the entire encoding/sending/decoding process.

A proxy representing servers with different limits has to report the lowest common denominator. A client application may know better: that its particular server supports a higher limit. The best outcome requires sending the headers anyway and just seeing whether someone complains.

On the other hand, no evidence has been presented that a server requiring big-header requests has ever been proxyable in the first place.

Limiting compressed data is nonsense. Users and applications can’t really reason about compressed sizes. Now that we have added jumbo frames, pushing that sort of semantic up the abstraction stack is irresponsible. Uncompressed limits are what both application-level endpoints care about.
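For what it’s worth, a sender-side pre-flight against an uncompressed limit is also trivial: nothing has to be encoded before the decision is made. A sketch, assuming the peer has somehow advertised maxHeaderListBytes and counting each field as name + value plus a fixed 32-octet overhead (one plausible accounting, not something taken from the current draft):

    import java.util.List;

    /** Sender-side pre-flight check against an uncompressed header-list limit. */
    final class SenderLimitCheck {

        /**
         * maxHeaderListBytes is assumed to have been advertised by the peer
         * (or configured out of band); the 32-octet per-field overhead is an
         * assumption for illustration.
         */
        static boolean fitsUncompressedLimit(List<String[]> headerList, long maxHeaderListBytes) {
            long size = 0;
            for (String[] field : headerList) {
                // field[0] = name, field[1] = value; length() as octets assumes ASCII
                size += field[0].length() + field[1].length() + 32;
                if (size > maxHeaderListBytes) {
                    return false;  // refuse before any HPACK encoding has started
                }
            }
            return true;
        }
    }

Either end can run exactly this arithmetic; neither needs to touch HPACK to do it, which is precisely why the limit belongs at the uncompressed level.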

> Note that pretending that there is no limit by not declaring it does not solve the problem, as there will always be a limit (or a massive DoS vulnerability). Making the limit undeclared does not avoid the problem that encoding has already started.

A good server can set a different limit on each individual web app. The DoS potential of headers beyond such a limit is no greater than that of any other garbage that gets thrown in the bit-bucket. Kerberos is bringing the pain upon itself... but to the application software, big request headers are the same as any upload.
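The kind of per-application configuration I have in mind is nothing exotic; a sketch, with every name invented for illustration rather than taken from any existing server:

    import java.util.Map;

    /** Hypothetical per-application header limits; names invented for illustration. */
    final class HeaderLimits {

        private final Map<String, Long> limitsByContextPath;
        private final long defaultLimitBytes;

        HeaderLimits(Map<String, Long> limitsByContextPath, long defaultLimitBytes) {
            this.limitsByContextPath = limitsByContextPath;
            this.defaultLimitBytes = defaultLimitBytes;
        }

        long limitFor(String contextPath) {
            return limitsByContextPath.getOrDefault(contextPath, defaultLimitBytes);
        }
    }

    // e.g. a Kerberos-protected app gets 64 KiB while everything else stays at 8 KiB:
    //   new HeaderLimits(Map.of("/kerberos-app", 64 * 1024L), 8 * 1024L)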

> In hindsight, I think we should have separated the issue of how we send large headers from the issue of how we limit large headers, which are really orthogonal:
> 
> How do we transport large headers?:
> a) Large Frames
> b) Continuations
> c) Fragmented Headers frame
> 
> How do we limit the max header size?
> x) Expressed as a max compressed size (perhaps == a max frame size)
> y) Expressed as a max uncompressed size
> z) No declared limit (but receivers may apply a limit with 431 or GOAWAY)
> 
> I think any of the limits can be applied to any of the transports.
> 
> Mark - is it too late to reframe the consensus questions? Have you been able to see any clarity in the other thread?
> 
> For the record, my preferences are c, a, b & y, x, z, but I can live with all.

Excellent selection, sir.

Received on Saturday, 19 July 2014 07:31:59 UTC