- From: Roberto Peon <grmocg@gmail.com>
- Date: Sat, 19 Jul 2014 20:23:35 -0700
- To: Jason Greene <jason.greene@redhat.com>
- Cc: David Krauss <potswa@gmail.com>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>, Mark Nottingham <mnot@mnot.net>
- Message-ID: <CAP+FsNdXtG7E9W6HOg=G8pR6ZfUVq0qJAwK56sty9C647KG3sw@mail.gmail.com>
On Sat, Jul 19, 2014 at 7:39 PM, Jason Greene <jason.greene@redhat.com> wrote:

> > On Jul 19, 2014, at 3:38 PM, Roberto Peon <grmocg@gmail.com> wrote:
> >
> > On Sat, Jul 19, 2014 at 10:28 AM, Jason Greene <jason.greene@redhat.com> wrote:
> >
> > How does the client know that 1MB cannot compress to 16KB? 1MB *can*
> > compress to 16KB. The client must have compressed the headers to know
> > whether or not they would become 16KB. Either that, or it is guessing,
> > and that would hurt latency, reliability, and determinism for the
> > substantial number of false positives it would force into being.
>
> My example was with 1MB of compressed state. However, the simple
> optimization I was referring to is that if the uncompressed state is <=
> the compressed limit (the 99.8% case), then the client knows it will fit
> and need not do anything special. If you have a 0.2% case, then you don't
> know, and need to use one of the various strategies for handling it.

Yup, you did say it was compressed... But how do you know that you have
1MB of compressed state without actually compressing?

> >> - Intermediary never sees a request, able to work on other workloads
> >> - Origin never sees a request, able to work on other workloads
> >
> >> Again, this is not guaranteed, it is only specified.
>
> Sure, intentionally bad actors can't be prevented. Having optimal rules
> for good players improves general handling and also makes it easier to
> detect bad actors.

But we've asserted that these rules won't be used in the common case, and
also that malicious entities won't respect them! If so, why are we using
these rules? Is the exception, where we're talking about non-malicious
clients sending large headers, such a big deal?

> >> Compression Efficient Client
> >> ——————————————
> >> - Client compares 1MB to 16KB, and realizes it must copy the state table (4k extra temp mem)
> >> - Client processes until full (likely 32KB of data)
> >> - Intermediary never sees a request, able to work on other workloads
> >> - Origin never sees a request, able to work on other workloads
> >
> >> This leaves out the common case, where the state table is copied and
> >> no revert was needed. That was 4k worth of copying for every request
> >> where no copying was necessary. This is likely to be a substantial
> >> expense in the common case.
>
> According to the data we have, the common case is < 16KB of
> *uncompressed* data, which has no additional overhead. In the case where
> you do have > 16KB of uncompressed data (the 0.2% realm), then yes,
> there is a measurable impact that is potentially wasted. From a memory
> perspective, assuming 16KB frames, it's up to 25% additional overhead.
> The compute time varies with the number of entries in the table, which I
> guess maxes out at 120, with all one-byte names and values.

Sure. A heuristic whereby a copy is made only when the uncompressed data
exceeds the limit will reduce overhead in the common case, and will likely
cause a connection reset, or a compression reset, when it does occur. Of
course, since one had to actually compress the headers to figure out when
one has exceeded the limit, it still doesn't reduce CPU much for the
sender. Smart implementations may be able to dump the compressor state
when this happens. None of that changes the fact that sometimes a request
will get through because it compressed to less than the limit (thanks to
previous requests priming the compression state), and sometimes it won't.
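To make the trade-off concrete, here is a toy sketch (in Go; the encoder
here is a stand-in for real HPACK state, and all names are illustrative,
not from any spec or library) of the two-step strategy being discussed:
skip the snapshot when the uncompressed size already fits the limit, and
otherwise copy the table first and revert on overflow:

```go
package main

import "fmt"

// Toy stand-in for an HPACK-style encoder. Real HPACK (RFC 7541) is far
// more involved; this only illustrates the control flow under discussion.
type encoder struct {
	dynTable []string // insertion-ordered dynamic-table entries
}

// encode mutates the dynamic table and returns a stand-in "compressed"
// size (here simply the total header bytes).
func (e *encoder) encode(headers []string) int {
	size := 0
	for _, h := range headers {
		e.dynTable = append(e.dynTable, h)
		size += len(h)
	}
	return size
}

// tryEncode is the two-step heuristic: if the uncompressed size already
// fits the peer's limit (the ~99.8% case per the numbers above), encode
// with no snapshot. Otherwise copy the table first (the ~4k copy), and
// restore it if the output exceeds the limit, so the shared compression
// state stays consistent with what the peer will actually apply.
func (e *encoder) tryEncode(headers []string, limit int) (size int, ok bool) {
	uncompressed := 0
	for _, h := range headers {
		uncompressed += len(h)
	}

	var snapshot []string
	if uncompressed > limit { // rare path: the block might not fit
		snapshot = append([]string(nil), e.dynTable...)
	}

	size = e.encode(headers)
	if size > limit {
		e.dynTable = snapshot // revert; nothing is sent on the wire
		return 0, false
	}
	return size, true
}

func main() {
	e := &encoder{}
	if n, ok := e.tryEncode([]string{":method: GET", ":path: /"}, 16384); ok {
		fmt.Printf("sent %d bytes; table now has %d entries\n", n, len(e.dynTable))
	}
}
```

Note where the costs land: the copy happens only in the rare > 16KB case,
and the revert keeps the table consistent; but the sender still pays the
CPU to compress before learning whether the block fits.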
> >> The streaming discard approach has the highest overall cost in
> >> computation time for all parties. It also introduces latency, since
> >> all other streams must wait until the stream has completed. Finally,
> >> it consumes unnecessary network bandwidth.
> >
> > In the common case (i.e. ~99.9% of the time), streaming potentially
> > reduces latency, since one need not wait for the entire set of headers
> > to be encoded before forwarding. In the hopefully rare case (or else
> > the protocol has some real interop problems) where the headers exceed
> > the recipient's limit, you're right, it can increase latency.
>
> Right, in the case where you can fragment the request smaller than the
> frame size it improves latency, and this is definitely a disadvantage to A.

> >> A proxy representing servers with different limits has to report the
> >> lowest common denominator.
> >
> >> Not necessarily. A proxy could dynamically pick the highest (provided
> >> it's within tolerable levels) and discard traffic for lower-limited
> >> origins.
> >
> > ... and then the limit fails to offer any supposed savings.
>
> It offers savings up to the limit you set (tolerable levels). So, as an
> example, if you have one endpoint that accepts 20K and the other 16K,
> you only have a 4K inefficiency. That's better than no limit.

Proxies have three options:

1) Configure an effectively infinite limit and drop requests internally - this allows the most requests to reach the endpoints.
2) Configure some arbitrary limit - this implies some requests would fail to go to endpoints that would happily accept them.
3) Do something 'smart', i.e. configure different limits based on a priori knowledge - since the proxy can't know, prior to the receipt of a request, the place to which the request was headed, this ends up being the same problem all over again. The proxy will reject things which the endpoint would have accepted.

Basically, it doesn't seem like it fosters good interop (*especially*
given that it is non-deterministic).

> >>> A client application may know better that its particular server
> >>> supports a higher limit. The best outcome requires sending the
> >>> headers anyway and just seeing whether someone complains.
> >
> >> I don't follow your argument here. A receiver is always going to be
> >> the one to know what its limits are, unless it reports incorrect
> >> values, which would be a bug.
> >
> > This isn't true. A forward proxy must contact a server before it can
> > know what the server's limit is, thus the client cannot know what the
> > limit for that server would be until after it has sent the message.
>
> Well, a forward proxy is going to know its limit, which very well could
> be less than the origin's. That is the same today with H1, it's just
> that the limit isn't communicated. Although I think I understand David's
> argument now, which is that the spec-implied default could lead to
> proxies being more restrictive than they were in the past.

Yup. Also, I suspect many proxies have had a limit on request sizes but
not response sizes...
-=R

> > This isn't necessarily true-- once one has the headers one needs, one
> > can choose to make a connection. For reverse proxies in particular, a
> > connection arriving on a particular IP, or with a particular host
> > indication via SNI, tells the intermediary to whom the connection
> > should be created without its having received *any* of the headers.
> >
> > Even in the forward-proxy case, all it needs are the ':' headers.
> Ah yes, that's true; there are cases where factors other than headers
> allow selection. I recall discussion of sorting the ':' headers up
> front, but don't recall the status of that. This is a good reason to do
> that.
>
> --
> Jason T. Greene
> WildFly Lead / JBoss EAP Platform Architect
> JBoss, a division of Red Hat
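For what it's worth, the backend-selection point is easy to sketch (Go
again; the routing table, addresses, and function names are made up for
illustration): a proxy that sees the ':' pseudo-headers first can pick its
upstream before the rest of the header block, however large, has arrived:

```go
package main

import (
	"fmt"
	"strings"
)

// pickBackend chooses an upstream from nothing but the ':' pseudo-headers
// (here just :authority). Everything else in the header block, however
// large, can still be in flight while the proxy connects.
func pickBackend(pseudo, routes map[string]string) (string, bool) {
	backend, ok := routes[strings.ToLower(pseudo[":authority"])]
	return backend, ok
}

func main() {
	routes := map[string]string{
		"example.com": "10.0.0.1:8443", // illustrative addresses only
		"other.test":  "10.0.0.2:8443",
	}
	// Only the pseudo-headers have been decoded so far.
	pseudo := map[string]string{
		":method":    "GET",
		":path":      "/",
		":authority": "example.com",
	}
	if backend, ok := pickBackend(pseudo, routes); ok {
		fmt.Println("connect to", backend)
	}
}
```

Sorting the ':' headers to the front of the block is what makes this work
even when the remaining headers blow past someone's limit.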
Received on Sunday, 20 July 2014 03:24:02 UTC