Re: HTTP router point-of-view concerns

However, if we made it possible to have multiple compression contexts
on a single HTTP/2 connection, that would make it work... only the
requests that belong to the same compression context would be blocked.
There would be a compression context for each user of the proxy, and the
upstream connection would multiplex HEADERS frames belonging to these
compression contexts. The proxy would have to maintain all those
compression contexts anyway... except, of course, if it negotiates a
stateless compression strategy with the clients.

This would make things more complicated (a new compression-context id
for every HEADERS frame, limiting the number of compression contexts,
etc.), so I don't think it's a very good solution, but it could maybe
generate some fresh ideas about this problem.
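
A rough sketch of what the proxy side of this could look like (the
per-user keying, the frame field, and the class names below are purely
illustrative, and zlib merely stands in for whatever header compressor
the spec ends up with):

    # Illustrative only: a proxy keeping one compression context per user.
    import zlib
    from dataclasses import dataclass

    @dataclass
    class ContextFrame:
        context_id: int   # tells the upstream which context decodes this
        payload: bytes    # compressed header block

    class UpstreamMux:
        def __init__(self):
            self.contexts = {}  # context_id -> per-user compressor state

        def send_headers(self, context_id, headers):
            # headers: iterable of (name, value) pairs.
            # A slow or stalled client only blocks streams that share this
            # user's context; the other contexts keep flowing.
            ctx = self.contexts.setdefault(context_id, zlib.compressobj())
            raw = "".join(f"{k}: {v}\r\n" for k, v in headers).encode()
            payload = ctx.compress(raw) + ctx.flush(zlib.Z_SYNC_FLUSH)
            return ContextFrame(context_id, payload)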


2013/7/12 Gábor Molnár <gabor.molnar@sch.bme.hu>

> I've been thinking about the benefits of using a streaming decompressor and
> compressor here to quickly extract the routing information. But in the end,
> I don't think it would solve the problem, at least when using a stateful
> compression algorithm.
>
> Let's assume we have a streaming decompressor that emits headers as soon
> as it decompresses them, and always emits the headers needed to make the
> routing decision first (the headers starting with a colon). This would make
> routing very quick. Let's also suppose that the header block format is
> optimized to support this behaviour. The proxy would start streaming the
> headers to a compressor, which could forward them immediately on the
> upstream connection, without waiting for the whole header set to be decoded.
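>
> As a rough sketch of what that loop could look like (the decoder
> interface and all names here are invented for illustration, not taken
> from any draft):
>
>     # "decoded" is assumed to be an iterator of (name, value) pairs
>     # emitted by a streaming decompressor, with the ":" routing headers
>     # guaranteed by the block format to come first.
>     def proxy_header_block(decoded, pick_upstream):
>         routing, upstream = {}, None
>         for name, value in decoded:
>             if upstream is None and name.startswith(":"):
>                 routing[name] = value          # still collecting routing info
>                 continue
>             if upstream is None:
>                 # First non-":" header: all routing headers have been seen,
>                 # so the routing decision can be made immediately.
>                 upstream = pick_upstream(routing)
>                 for k, v in routing.items():
>                     upstream.send_header(k, v)
>             upstream.send_header(name, value)  # stream the rest through
>         return upstream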
>
> The problem is that there's a DoS vector here. While the HEADERS frames
> are being sent on the upstream connection, nothing else may be sent there.
> Now suppose a client is very slow: for example, it waits one second
> between sending the first and the second (last) HEADERS frame to the proxy.
> During this time (which can be arbitrarily long), the proxy cannot send
> anything on its upstream connection (and it cannot open a new connection,
> as that is forbidden in the current spec), so it is basically blocked.
>
>
> 2013/7/12 Mike Belshe <mike@belshe.com>
>
>> I'm also in favor of removing the compressor completely.
>>
>> The reason is that we've seen the "negotiable compression protocol"
>> movie before.  At this point, it's negotiable, and therefore just a big
>> time sink.  In the end, it will not be deployed because some player will
>> have a bug, forcing everyone else to turn it off to avoid the hassle.
>> I apologize if this sounds pessimistic - but history shows that this is
>> a likely result.
>>
>> If we can't agree on mandatory compression (which was thrown out long
>> ago), why not just add session state?  Cookies & User-Agents can be set
>> as state which applies across all streams, and be done with it.  It's
>> mandatory, it's super simple, it fixes the single biggest
>> redundant-bandwidth problem elegantly, and in the end, I think this is
>> basically what pkh and christian are asking for under different names.
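>>
>> As a rough illustration only (the names here are invented, not taken
>> from any proposal):
>>
>>     # "Session state" sketch: headers set once per connection and
>>     # implicitly applied to every stream on that connection.
>>     class Session:
>>         def __init__(self):
>>             self.state = {}              # e.g. cookie, user-agent
>>
>>         def set_state(self, name, value):
>>             self.state[name] = value     # sent once, applies to all streams
>>
>>         def request_headers(self, per_request):
>>             merged = dict(self.state)    # connection-wide state first...
>>             merged.update(per_request)   # ...then per-request headers win
>>             return merged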
>>
>> We'll probably save a lot of time debating and end up with a protocol
>> which is almost as good as a true compressor.
>> Mike
>>
>>
>>
>> On Fri, Jul 12, 2013 at 1:26 AM, Amos Jeffries <squid3@treenet.co.nz> wrote:
>>
>>> On 12/07/2013 7:35 a.m., Roberto Peon wrote:
>>>
>>>> I think it is perfectly reasonable for an intermediary to set the
>>>> compression size to zero if it wishes.
>>>>
>>>> Market forces will (in the long term) pick the correct strategy for
>>>> this -- if the compression is effective at reducing latency, and people
>>>> care about latency reductions, then eventually intermediaries might
>>>> evolve to use it.
>>>> If it is ineffective at reducing latency, or if reduced latency is not
>>>> actually desirable, then intermediaries would not use it.
>>>>
>>>>
>>>> The DoS vector you're talking about is not a DoS vector if the
>>>> intermediary resets all streams before the change-of-state-size comes into
>>>> effect.
>>>>
>>>
>>> If you mean RST_STREAM on all the initial streams which use a larger
>>> compression size, then what you are doing is adding an RTT penalty to all
>>> those requests, over and above what HTTP/1 already suffers on a normal
>>> transaction. This is not a useful way forward (it wastes packets, RTT and
>>> stream IDs), and resolving it means making decompression with the default
>>> state size mandatory for all recipients. Which brings us full circle to
>>> the problem of having a default >0 in the dynamic part of the state
>>> tables.
>>>
>>>
>>>
>>>> When the state size is 0, one should be able to use some kinds of
>>>> 'indexed' representations, so long as those representations refer only to
>>>> items in the static tables. Why do you believe that this would use more or
>>>> less CPU? (It should use less CPU and less memory...)
>>>>
>>>
>>> I did not mention CPU, only the bandwidth amplification effects that
>>> agents disabling compression would incur and would need to consider
>>> carefully.
>>>
>>> Personally I would like to see a 127-entry mandatory static table in the
>>> spec itself, tied to the "2.0" version, with a 127-entry optional dynamic
>>> table indicated by the high bit of the byte code. A capacity byte size
>>> for the dynamic table would be sent each way, with senders forbidden to
>>> add new entries to the dynamic table until they hold the value from both
>>> ends of the connection. The agreed value would be the minimum of the two
>>> ends' capacities.
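>>>
>>> A rough decoding sketch of that byte code (the reserved code point and
>>> all names here are assumptions for illustration only):
>>>
>>>     # High bit selects the optional dynamic table; the low 7 bits index
>>>     # a 127-entry table (leaving one code point, here 127, presumably
>>>     # reserved, e.g. for literal/unindexed headers).
>>>     STATIC_TABLE = [("header-%d" % i, "") for i in range(127)]  # placeholder
>>>
>>>     def decode_index(byte, dynamic_table):
>>>         idx = byte & 0x7F
>>>         if idx == 127:
>>>             raise ValueError("reserved code point (assumed escape)")
>>>         if byte & 0x80:                  # high bit set -> dynamic table
>>>             return dynamic_table[idx]
>>>         return STATIC_TABLE[idx]         # high bit clear -> static table
>>>
>>>     def agreed_dynamic_capacity(ours, theirs):
>>>         # Neither side may add dynamic entries until it holds the
>>>         # capacity value from both ends; the agreed value is the minimum.
>>>         return min(ours, theirs)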
>>>
>>> Amos
>>>
>>>
>>
>

Received on Friday, 12 July 2013 10:22:13 UTC