Re: Cost analysis: (was: Getting to Consensus: CONTINUATION-related issues)

On Sat, Jul 19, 2014 at 10:28 AM, Jason Greene <jason.greene@redhat.com>
wrote:

> On Jul 19, 2014, at 2:31 AM, David Krauss <potswa@gmail.com> wrote:
>
> >
> > On 2014-07-19, at 2:11 PM, Greg Wilkins <gregw@intalio.com> wrote:
> >
> >>
> >> Roberto,
> >>
> >> With regard to roll back, I see no difference between the burden of
> roll back in the sender vs the burden of enforced decoding in the receiver.
> I.e., we currently have the issue that when a header limit is breached, the
> receiver has to either continue to process anyway or discard the entire
> connection. By moving a compressed limit check to the sender, the choice
> is much the same: roll back or discard the entire connection.
> >
> > There’s a middle path, which I alluded to in my A/B “vote” message.
> >
> > Enforced decoding is not a burden on the receiver as long as it
> implements streaming. An ideal receiver (the QOI essentially required by
> the current spec) can keep on receiving and forwarding/discarding beyond
> its limit without committing extra memory or undue CPU.
>
> I argue the opposite is true.
> If you look at a comparison of, say, a client that sends 1MB of compressed
> headers, with one intermediary, but with a 16KB frame limit:
>
> Streaming discard approach
> --------------------------
> - Client hpack encodes and transmits 64x16KB frames
> - Intermediary reencodes 64x16KB frames
> - Origin decodes and discards 64x16KB frames
>
> Simple Compressed Limit Client
> (treats compressed limit as uncompressed)
> -----------------------------------------
> - Client compares 1MB to 16KB and rejects the request with no copying,
> transmitting, or processing
>

How does the client know that 1MB cannot compress to 16KB? 1MB *can*
compress to 16KB.
The client must have compressed the headers to know whether or not they
would fit in 16KB.
Either that, or it is guessing, and guessing would hurt latency,
reliability, and determinism for the substantial number of false positives
it would force into being.
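
To make that concrete, here is a minimal Go sketch of the only reliable
check a sender can do: actually run the encoder. (The hpack package is
golang.org/x/net/http2/hpack; the header field and the 16KB figure are
made up for illustration.)

    package main

    import (
        "bytes"
        "fmt"
        "strings"

        "golang.org/x/net/http2/hpack"
    )

    // compressedSize HPACK-encodes the fields and returns the encoded
    // length. There is no shortcut: the result depends on the encoder's
    // dynamic-table state, Huffman coding, and field ordering.
    func compressedSize(fields []hpack.HeaderField) (int, error) {
        var buf bytes.Buffer
        enc := hpack.NewEncoder(&buf)
        for _, f := range fields {
            if err := enc.WriteField(f); err != nil {
                return 0, err
            }
        }
        return buf.Len(), nil
    }

    func main() {
        // 1MB of uncompressed header data may or may not fit under a
        // 16KB compressed limit; only encoding tells us which.
        fields := []hpack.HeaderField{
            {Name: "x-big", Value: strings.Repeat("a", 1<<20)},
        }
        n, _ := compressedSize(fields)
        fmt.Printf("compressed to %d bytes; fits 16KB: %v\n", n, n <= 16*1024)
    }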


> - Intermediary never sees a request, able to work on other workloads
>
> - Origin never sees a request, able to work on other workloads
>

Again, this is not guaranteed; it is only specified.


>
> Compression Efficient Client
> ----------------------------
> - Client compares 1MB to 16KB, and realizes it must copy the state table
> (4k extra temp mem)
> - Client processes until full (likely 32KB of data)
> - Intermediary never sees a request, able to work on other workloads
> - Origin never sees a request, able to work on other workloads
>
>
This leaves out the common case, in which the state table is copied but no
revert turns out to be needed. That is 4KB of copying for every request
where no copying was necessary. This is likely to be a substantial expense
in the common case.
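
For reference, a self-contained sketch of the copy-then-maybe-revert
pattern (plain Go stand-ins, not a real HPACK encoder; the byte counting
is a placeholder for real encoded output):

    package sketch

    // entry stands in for one HPACK dynamic-table entry.
    type entry struct{ name, value string }

    // table stands in for an encoder's dynamic table (4KB by default).
    type table struct{ entries []entry }

    // snapshot copies the whole table so encoding can be rolled back if
    // the output would overflow the peer's limit.
    func (t *table) snapshot() table {
        cp := make([]entry, len(t.entries))
        copy(cp, t.entries) // up to ~4KB copied on *every* request
        return table{entries: cp}
    }

    // encodeWithRollback sketches the "Compression Efficient Client":
    // copy the table, encode, revert only on overflow.
    func encodeWithRollback(t *table, fields []entry, limit int) bool {
        saved := t.snapshot()
        out := 0
        for _, f := range fields {
            t.entries = append(t.entries, f) // table mutates as we encode
            out += len(f.name) + len(f.value)
            if out > limit {
                *t = saved // rare case: roll back and reject
                return false
            }
        }
        return true // common case: the snapshot was pure overhead
    }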


> Reset on Overflow Client
> -------------------------
> - Client processes until overflow (likely 32KB of data)
> - Subsequent request needs a few bytes to reset the table
> - Intermediary never sees a request, able to work on other workloads
> - Origin never sees a request, able to work on other workloads
>

The intermediary would still likely see a request, at least if it were a
forward proxy, as it is very unlikely to know what the limit is when it has
only the request from the user and doesn't yet have a connection to the
origin.

Of the compressed-length options, this one is clearly the best, though it
still doesn't help with any of the determinism problems, e.g. where request
'B' gets in, but only if it was preceded by request 'A'; request 'B' on a
new connection would "mysteriously" fail. The determinism issue is really
quite nasty, imho, and is a strong argument against compressed-length
limits.
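
The determinism problem is easy to demonstrate. In the Go sketch below
(again golang.org/x/net/http2/hpack; the header name/value are made up),
the identical header block encodes to very different sizes depending on
whether an earlier request seeded the dynamic table:

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2/hpack"
    )

    // encodedLen HPACK-encodes fields on enc (whose dynamic table may
    // carry state from earlier requests) and returns the output length.
    func encodedLen(enc *hpack.Encoder, buf *bytes.Buffer,
        fields []hpack.HeaderField) int {
        buf.Reset()
        for _, f := range fields {
            enc.WriteField(f)
        }
        return buf.Len()
    }

    func main() {
        b := []hpack.HeaderField{
            {Name: "x-token", Value: "some-fairly-long-opaque-value"},
        }

        var cold bytes.Buffer
        lenCold := encodedLen(hpack.NewEncoder(&cold), &cold, b) // new conn

        var warm bytes.Buffer
        enc := hpack.NewEncoder(&warm)
        encodedLen(enc, &warm, b)            // "request A" seeds the table
        lenWarm := encodedLen(enc, &warm, b) // "request B" hits the table

        // A compressed limit between lenWarm and lenCold admits B after
        // A, but rejects the very same B on a fresh connection.
        fmt.Println(lenCold, lenWarm)
    }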


>
> The streaming discard approach has the highest overall cost in computation
> time for all parties. It also introduces latency, since all other streams
> must wait until the stream has completed. Finally, it consumes unnecessary
> network bandwidth.
>

In the common case (i.e., ~99.9% of the time), streaming potentially
reduces latency, since one need not wait for the entire set of headers to
be encoded before forwarding. In the hopefully rare case (or else the
protocol has some real interop problems) where the headers exceed the
recipient's limit, you're right, it can increase latency.
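
Roughly the streaming shape I have in mind (the channel-of-fragments
framing is hypothetical, not any real API):

    package sketch

    // forwardHeaderBlock relays each HEADERS/CONTINUATION fragment as
    // soon as it arrives, so downstream sees bytes before the full block
    // exists anywhere. Past the limit it keeps draining (to stay in HPACK
    // sync with the sender) but stops forwarding; a real proxy would also
    // reset the downstream stream at that point.
    func forwardHeaderBlock(in <-chan []byte, out chan<- []byte, limit int) {
        seen := 0
        for frag := range in {
            seen += len(frag)
            if seen > limit {
                continue // rare case: drain and discard
            }
            out <- frag // common case: forwarded with no buffering delay
        }
    }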


>
>
>
> > If we can agree that GOAWAY on excessive headers is good enough for
> simple implementations,
>
> Dropping the connection is somewhat tolerable for a client to origin
> topology. However it negatively impacts user experience. It’s problematic
> when you have intermediaries since a dropped connection potentially affects
> more traffic than that initiated by the user.
>
> > and streaming is reasonable to implement for any application that really
> doesn’t want to send GOAWAY, then the hard limit should remain at the
> receiver, with voluntary self-limiting by senders.
>
> Voluntary self-limiting does indeed help the problem because an
> intermediary can prevent relaying and the subsequent GOAWAY.
>
> >
> >> The only way to efficiently handle a limit is to have it as a declared
> uncompressed limit enforced by the sender before encoding starts. Only then
> can failure be determined before committing to the entire
> encoding/sending/decoding process.
> >
> > A proxy representing servers with different limits has to report the
> lowest common denominator.
>
> Not necessarily. A proxy could dynamically pick the highest (provided it's
> within tolerable levels) and discard traffic for lower-limited origins.
>
>
... and then the limit fails to offer any of its supposed savings.


> > A client application may know better that its particular server supports
> a higher limit. The best outcome requires sending the headers anyway and
> just seeing whether someone complains.
>
> I don’t follow your argument here. A receiver is always going to be the
> one to know what its limits are unless it reports incorrect values, which
> would be a bug.
>

This isn't true. A forward proxy must contact a server before it can know
what the server's limit is; thus the client cannot know what the limit for
that server would be until after it has sent the message.
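
A tiny sketch of that ordering problem (all the types and the 16KB value
are hypothetical stand-ins, not a real HTTP/2 API):

    package sketch

    // settings stands in for the origin's SETTINGS frame.
    type settings struct{ maxHeaderListSize int }

    // dialOrigin stands in for connecting to the origin; the origin's
    // advertised limit only exists after this hop has been made.
    func dialOrigin(authority string) settings {
        return settings{maxHeaderListSize: 16 * 1024}
    }

    // canForward shows the ordering: by the time the proxy learns the
    // origin's limit, it has already accepted the client's request.
    func canForward(authority string, headerSize int) bool {
        limit := dialOrigin(authority).maxHeaderListSize // learned too late
        return headerSize <= limit
    }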


>
> >
> > On the other hand, no evidence has been presented that a server
> requiring big-header requests has ever been proxyable in the first place.
>
> Well, there is the gigantic Kerberos ticket use case, and those are
> certainly proxyable today. It's hard to see how large headers are only
> appropriate across a single hop vs multiple hops.
>
> >
> > Limiting compressed data is nonsense. Users and applications can’t
> really reason about compressed sizes.
>
> Sure they can:
>
> https://github.com/http2/http2-spec/wiki/ContinuationProposals#dealing-with-compressed-limits
>
>
> > Having added jumbo frames, pushing that sort of semantic up the
> abstraction stack is irresponsible. Uncompressed limits are what both
> application-level endpoints care about.
>
> I think it's fair to say that looking at uncompressed values is simpler
> for a sender.
>
> >
> >> Note that pretending that there is no limit by not declaring it does
> not solve the problem, as there will always be a limit (or a massive DoS
> vulnerability). Making the limit undeclared does not avoid the problem that
> encoding has started.
> >
> > A good server can set a different limit on each individual web app. The
> DoS potential of headers beyond such a limit is no more than any other
> garbage that gets thrown in the bit-bucket. Kerberos is bringing the pain
> upon itself... but to the application software, big request headers are the
> same as any upload.
>
> You can’t really determine which app to send the request to until the
> headers are processed, and partial processing isn’t reliable since we don’t
> have ordering rules on common selectable data. So the limit makes the most
> sense at a higher level than the application. This is quite different than
> an upload which involves passing the request to the application before the
> upload data is fully consumed, and the application is in control of that
> processing.
>
>
This isn't necessarily true: once one has the headers one needs, one can
choose to make a connection.
For reverse proxies in particular, given the IP on which a set of headers
arrived, or a host indication via SNI, the intermediary can know to whom
the connection should be created without having received *any* of the
headers.

Even in the forward-proxy case, all it needs are the ':' pseudo-headers.
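
A sketch of how little is needed before routing can happen (hypothetical
helper; the pseudo-headers-before-regular-headers ordering is what the
draft specifies):

    package sketch

    // routeKey shows how early an intermediary can pick an origin: a
    // reverse proxy can route on the TLS SNI host (or the listening IP)
    // before any header arrives; a forward proxy needs only the ':'
    // pseudo-headers, which precede all regular header fields.
    func routeKey(sniHost string, pseudo map[string]string) string {
        if sniHost != "" {
            return sniHost // reverse proxy: routed pre-headers
        }
        return pseudo[":authority"] // forward proxy: first few fields
    }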

-=R

> Anyway, just to be clear, I am fine with both approaches. I am not arguing
> against the B proposal. I just wanted to address some of the concerns with
> the client impact of A.
>
> --
> Jason T. Greene
> WildFly Lead / JBoss EAP Platform Architect
> JBoss, a division of Red Hat
>
>

Received on Saturday, 19 July 2014 20:39:11 UTC