Re: HTTP router point-of-view concerns

Correct, I mean to say that if you can't deal with 4k of state in the
first RT, then you RST those requests, causing them to suffer one RT of
latency.

Personally, I think one should be able to deal with that state for the first
RT, especially since you're going to have more than that in general in the IO
buffers, kernel buffers, etc.
But, anyway, assuming you're under DoS attack, there are multiple options:

1) send a new settings frame with the size you want, and RST everything
until that becomes effective.
2) we implement James' proposal of a goaway-and-come-back after sending the
settings, where the settings are effective on the next connection
3) If we kept the persistent settings on the client, the first time the
client spoke to the intermediary, it would learn and have appropriate
settings in the future.
4) reject HTTP/2 (which uses more state in exchange for lower latency) in
favor of HTTP/1.0, which keeps less persistent state for the first RT.

5) assuming we did the DNS thing, the client would already have the correct
setting, and there'd be no additional latency.
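
For concreteness, option 1 amounts to emitting two frame types. The sketch
below uses the final RFC 7540 framing layout, setting identifier, and error
code (the drafts under discussion at the time differed in detail), so treat
it as illustrative only:

```python
import struct

# HTTP/2 frame header: 24-bit length, 8-bit type, 8-bit flags, 31-bit
# stream id (RFC 7540 layout, used here purely for illustration).
def frame(ftype, flags, stream_id, payload):
    header = struct.pack(">I", len(payload))[1:]  # low 3 bytes = length
    header += struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF)
    return header + payload

SETTINGS, RST_STREAM = 0x4, 0x3
SETTINGS_HEADER_TABLE_SIZE = 0x1
REFUSED_STREAM = 0x7

def settings_zero_table():
    # Advertise a 0-byte header table; SETTINGS always travels on stream 0.
    payload = struct.pack(">HI", SETTINGS_HEADER_TABLE_SIZE, 0)
    return frame(SETTINGS, 0x0, 0, payload)

def refuse(stream_id):
    # RST any stream opened before the new table size takes effect.
    return frame(RST_STREAM, 0x0, stream_id, struct.pack(">I", REFUSED_STREAM))
```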

I'm confused by the complaints about the extra latency of any of the
solutions above, however.
Do we care about latency or not?
Arguments that complain that we have to hold state for 1 RT, and that we
want to eliminate all state make me think that latency is viewed as a
distant-second consideration.
Is latency a prime consideration, as indicated in the charter, or not?

(And you can call it session caching or whatever; it is still just state on
the other side, and it's all the same idea.)
-=R


On Fri, Jul 12, 2013 at 1:26 AM, Amos Jeffries <squid3@treenet.co.nz> wrote:

> On 12/07/2013 7:35 a.m., Roberto Peon wrote:
>
>> I think it is perfectly reasonable for an intermediary to set the
>> compression size to zero if it wishes.
>>
>> Market forces will (in the long-term) pick the correct strategy for
>> this-- assuming the compression is effective at reducing latency, and that
>> people care about latency reductions, then eventually intermediaries might
>> evolve to use it.
>> If it is ineffective at reducing latency, or if reduced latency is not
>> actually desirable, then intermediaries would not use it.
>>
>>
>> The DoS vector you're talking about is not a DoS vector if the
>> intermediary resets all streams before the change-of-state-size comes into
>> effect.
>>
>
> If you mean RST_STREAM on all the initial streams which use a larger
> compression size, then what you are doing is adding an RTT penalty to all
> those requests over and beyond what HTTP/1 already suffers on a normal
> transaction. This is not a useful way forward (it wastes packets, RTT and
> stream IDs), and resolving it means making decompression with the default
> state size mandatory for all recipients. Which brings us full circle to the
> problem of having a default >0 in the dynamic part of the state tables.
>
>
>
>  When the state size is 0, one should be able to use some kinds of
>> 'indexed' representations, so long as those representations refer only to
>> items in the static tables. Why do you believe that this would use more or
>> less CPU? (It should use less CPU and less memory...)
>>
>
> I did not mention CPU. Only the bandwidth amplification effects that
> agents disabling compression would incur and need to consider carefully.
>
> Personally I would like to see a 127-entry mandatory static table in the
> spec itself, tied to the "2.0" version, with a 127-entry optional dynamic
> table indicated by the high bit of the byte code. The dynamic table's
> capacity in bytes would be sent each way, with senders forbidden to add new
> entries to the dynamic table until they hold the value from both ends of
> the connection, the agreed value being the minimum of the two ends'
> capacities.
>
> Amos
>
>
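
Amos's 127+127 split above would make index decoding a one-byte operation.
A hypothetical decoder (a sketch of the proposal, not anything from an
actual draft) might look like:

```python
def decode_index(code, static_table, dynamic_table):
    # High bit set -> optional dynamic table; clear -> mandatory static
    # table. The low 7 bits index a 127-entry table. (Hypothetical sketch
    # of Amos's proposal; any escape code for literals is ignored here.)
    table = dynamic_table if code & 0x80 else static_table
    return table[code & 0x7F]
```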

Received on Friday, 12 July 2013 16:32:13 UTC