- From: Jason Greene <jason.greene@redhat.com>
- Date: Sat, 19 Jul 2014 23:33:58 -0500
- To: Roberto Peon <grmocg@gmail.com>
- Cc: David Krauss <potswa@gmail.com>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>, Mark Nottingham <mnot@mnot.net>
On Jul 19, 2014, at 10:23 PM, Roberto Peon <grmocg@gmail.com> wrote:

> On Sat, Jul 19, 2014 at 7:39 PM, Jason Greene <jason.greene@redhat.com> wrote:
>
>> On Jul 19, 2014, at 3:38 PM, Roberto Peon <grmocg@gmail.com> wrote:
>>
>>> On Sat, Jul 19, 2014 at 10:28 AM, Jason Greene <jason.greene@redhat.com> wrote:
>>>
>>>> How does the client know that 1MB cannot compress to 16KB?
>>>
>>> 1MB *can* compress to 16KB.
>>> The client must have compressed the header to know if it would or would not become 16KB.
>>> Either that, or it is guessing, and that would hurt latency, reliability, and determinism for the substantial number of false positives it would force into being.
>>
>> My example was with 1MB of compressed state. However, the simple optimization I was referring to is that if the uncompressed state is <= the compressed limit (the 99.8% case), then the client knows it will fit and need not do anything special. If you are in the 0.2% case, then you don't know, and need to use one of the various strategies for handling it.
>
> Yup, you did say it was compressed...
> But how do you know that you have 1MB of compressed state without actually compressing?

You definitely don't know. My example was just a rebuttal to David's suggestion that a discard on an exceeded limit is more efficient than the extra work a client has to do to drop the request.

If you are making the point that a hop can have a limit that the compressed limit greatly exceeds, then yes, I agree that is a problem with compressed limits, and a good argument for uncompressed limits. I still argue, though, that a compressed limit achieves its goal of optimizing the wire and the multiplexing. It allows for better compression efficiency at the cost of not catching memory-model limits of the actual peer. I honestly don't care whether the limit is uncompressed or compressed, because they both contribute to the same result.
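A minimal sketch of the sender-side strategy being discussed, with hypothetical names and a toy encoder standing in for a real HPACK implementation: when the uncompressed header list is already within the peer's limit (the ~99.8% case), encoding can proceed unconditionally; only in the rare oversized case must the sender snapshot its dynamic table so it can roll back if the compressed output still exceeds the limit.

```python
# Hypothetical sketch; ToyEncoder stands in for a real HPACK encoder.
import copy

LIMIT = 16 * 1024  # peer-advertised limit, in bytes


class ToyEncoder:
    """Stand-in for an HPACK encoder; real encoders keep a dynamic table."""

    def __init__(self):
        self.table = []  # dynamic table: newest (name, value) entries first

    def encode(self, headers):
        out = bytearray()
        for name, value in headers:
            self.table.insert(0, (name, value))  # toy: index every field
            # toy wire format: 4-byte length prefixes, no Huffman coding
            out += len(name).to_bytes(4, "big") + name
            out += len(value).to_bytes(4, "big") + value
        return bytes(out)


def uncompressed_size(headers):
    # HPACK-style accounting: name + value + 32 bytes of overhead per entry
    return sum(len(n) + len(v) + 32 for n, v in headers)


def try_send(headers, encoder):
    if uncompressed_size(headers) <= LIMIT:
        # Common case: guaranteed to fit, no special handling needed.
        return encoder.encode(headers)
    # Rare case: must actually encode to find out. Snapshot the dynamic
    # table first so a failed attempt does not poison shared state.
    snapshot = copy.deepcopy(encoder.table)
    block = encoder.encode(headers)
    if len(block) > LIMIT:
        encoder.table = snapshot  # revert; the request is dropped locally
        return None
    return block
```

The snapshot/revert pair is the "extra work" the thread debates: it is pure overhead whenever the oversized encode would have fit anyway.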
>>>> - Intermediary never sees a request, able to work on other workloads
>>>> - Origin never sees a request, able to work on other workloads
>>>
>>> Again, this is not guaranteed, it is only specified.
>>
>> Sure, intentionally bad actors can't be prevented. Having optimal rules for good players improves general handling and also makes it easier to detect bad actors.
>
> But we've asserted that these rules won't be used in the common case, and also that malicious entities won't respect them!
> If so, why are we using these rules? Is the exception where we're talking about non-malicious clients sending large headers such a big deal?

That's a fair question. I think a good protocol should expect greedy yet compliant actors to pay a price so that good actors continue to receive a good quality of service. While it won't prevent a DoS, if a server sees these limits being exceeded, it can choose to take countermeasures more quickly than without them.

>>>> Compression Efficient Client
>>>> ——————————————
>>>> - Client compares 1MB to 16KB, and realizes it must copy the state table (4k extra temp mem)
>>>> - Client processes until full (likely 32KB of data)
>>>> - Intermediary never sees a request, able to work on other workloads
>>>> - Origin never sees a request, able to work on other workloads
>>>
>>> This leaves out the common case when the state table is copied and there was no revert needed. That was 4k worth of copying for every request where no copying was necessary. This is likely to be a substantial expense in the common case.
>>
>> According to the data we have, the common case is < 16KB of *uncompressed* data, which has no additional overhead. In the case where you do have > 16KB of uncompressed data, once we are in the 0.2% realm, then yes, there is a measurable impact that is potentially wasted. From a memory perspective, assuming 16KB frames, it's up to 25% additional overhead.
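A quick check of the arithmetic behind the 4k-copy and 25% figures above, and of the dynamic table's maximum entry count, assuming HPACK's default 4096-byte dynamic table, 32 bytes of per-entry accounting overhead (RFC 7541), and the 16KB frame size assumed in the thread:

```python
TABLE_SIZE = 4096      # default HPACK dynamic table size, bytes
FRAME_SIZE = 16384     # 16KB frame size assumed above, bytes
ENTRY_OVERHEAD = 32    # fixed per-entry overhead in HPACK size accounting

# Copying the whole state table against a 16KB frame: up to 25% extra memory.
print(TABLE_SIZE / FRAME_SIZE)                  # 0.25

# The smallest possible entry is a 1-byte name plus a 1-byte value plus the
# 32-byte overhead, so the table holds at most 4096 // 34 = 120 entries.
print(TABLE_SIZE // (1 + 1 + ENTRY_OVERHEAD))   # 120
```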
>> The compute time varies with the number of entries in the table, which I guess maxes out at 120 entries with all one-byte names and values.
>
> Sure, a heuristic whereby a copy is made only when the uncompressed data exceeds the limit will reduce overhead in the common case, and will likely cause a connection reset, or a compression reset, when it occurs. Of course, since one had to actually compress the headers to figure out when one has exceeded the limit, it still doesn't reduce CPU much for the sender. Smart implementations may be able to dump the compressor state when this happens.
>
> None of that changes that sometimes a request will get through because it compressed to less than the limit (because of previous requests priming the compression state), and that sometimes it won't.

Hmm, I don't follow your reply. If the sender has 15KB of uncompressed headers to send, it can reliably generate a < 16KB compressed header and does not need to handle a rollback. So it's only when you have > 16KB that this extra work kicks in. If the next hop sets a higher limit, then the same would apply to that limit.

>>>> Not necessarily. A proxy could dynamically pick the highest (provided it's within tolerable levels) and discard traffic for lower-limited origins.
>>>
>>> ... and then the limit fails to offer any supposed savings.
>>
>> It offers savings up to the limit you set (tolerable levels). So as an example, if you have one endpoint that accepts 20K and the other 16K, you only have a 4K inefficiency. That's better than no limit.
>
> Proxies have three options:
> 1) Configure an effectively infinite limit and drop requests internally
>    - This allows the most requests to reach the endpoints.
> 2) Configure some arbitrary limit
>    - This implies some requests would fail to reach endpoints that would happily accept them.
> 3) Do something 'smart', i.e.
> configure different limits based on a priori knowledge
>    - Since the proxy can't know, prior to the receipt of a request, the place to which the request was headed, this ends up being the same problem all over again. The proxy will reject things which the endpoint would have accepted.
>
> Basically, it doesn't seem like it fosters good interop (*especially* given that it is non-deterministic).

It depends on the type of the proxy. If it's a reverse proxy, it has the ability to preconfigure. If it's a forward proxy, it can have a user-configurable limit which specifies the max it's willing to tolerate, and advertise lower values as it acquires knowledge.

>>>>> A client application may know better that its particular server supports a higher limit. The best outcome requires sending the headers anyway and just seeing whether someone complains.
>>>>
>>>> I don't follow your argument here. A receiver is always going to be the one to know what its limits are unless it reports incorrect values, which would be a bug.
>>>
>>> This isn't true. A forward proxy must contact a server before it can know what the server's limit is, thus the client cannot know what the limit for that server would be until after it has sent the message.
>>
>> Well, a forward proxy is going to know its limit, which very well could be less than the origin's. That is the same today with H1; it's just that the limit isn't communicated. Although I think I understand David's argument now, which is that the spec-implied default could lead to proxies being more restrictive than they were in the past.
>
> Yup. Also, I suspect many proxies have had a limit on request sizes but not response sizes…

Yeah, that could very well be true.

> -=R
>
> This isn't necessarily true-- once one has the headers one needs, one can choose to make a connection.
> For reverse proxies in particular, on the receipt of a connection on a particular IP, or with a particular host indication via SNI, the intermediary can know to whom the connection should be created without having received *any* of the headers.
>
> Even in the forward-proxy case, all it needs are the ':' headers.

Ah yes, that's true; there are cases where factors other than headers allow selection. I recall discussion of sorting ':' headers up, but don't recall the status of that. This is a good reason to do that.

>> --
>> Jason T. Greene
>> WildFly Lead / JBoss EAP Platform Architect
>> JBoss, a division of Red Hat

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
Received on Sunday, 20 July 2014 04:34:33 UTC