Re: HTTP router point-of-view concerns

On stream ID exhaustion:
Ok, so if you do 10k requests/second, you'll have to make a new connection
every ~30 hours. Is this really problematic?
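To sanity-check that figure (assuming, as in the draft, that client-initiated streams burn only the odd half of the 31-bit stream-ID space, so roughly 2^30 usable IDs per connection):

```python
# Client-initiated streams use only odd IDs, so roughly half of the
# 31-bit stream-ID space is usable per connection.
usable_ids = 2**31 // 2        # ~1.07 billion streams
rate = 10_000                  # requests per second
seconds = usable_ids / rate
print(seconds / 3600)          # ~29.8 hours before IDs run out
```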

On persistent settings/goaway-and-come-back:
Correct. If your load-balancing arrangement is flapping and almost never
puts connections back where they were ~100ms before, then these solutions
will not work reliably.
If you're flapping like that, though, I think you have bigger problems :)

On rejecting HTTP/2:
Thus far, HTTP/2 trades a constant-sized userspace buffer for reduced
latency and reduced kernel-space memory use.
If that isn't acceptable, then the only options are to hobble HTTP/2 by
trading latency for memory, or to allow HTTP/2 to trade constant amounts of
memory for latency and do an HTTP/1 to HTTP/2 gateway.

On DNS reliability:
If DNS is unreliable for some small portion of the time... who cares? If it
screws up much of the time, then I agree it is not an adequate mitigation.

Thus far, we have two kinds of deployments in mind for HTTP/2: on the
internet, and within a private network.
For the former, most people seem to be saying that they'll deploy clients
that use ALPN to perform the negotiation.
In such a case, any proxy must be explicit, and thus there is a chance to
do the DNS resolution.
Within a private network people seem to be saying that they'd negotiate
unencrypted over port 80, in which case the server performs the upgrade and
can have already stated that it wants the compression state to be zero
sized before the client sends any information.
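In the upgrade case, that server-side advertisement is just a SETTINGS frame carrying a header-table size of zero. A minimal sketch of what that looks like on the wire, using the framing that ended up in the published HTTP/2 spec (the drafts under discussion here differed in detail):

```python
# SETTINGS_HEADER_TABLE_SIZE (id 0x1) set to 0 disables the dynamic table.
# Frame layout per the published HTTP/2 framing: 24-bit length, 8-bit type,
# 8-bit flags, 31-bit stream id, then 6-octet (id, value) setting entries.
SETTINGS_HEADER_TABLE_SIZE = 0x1
payload = SETTINGS_HEADER_TABLE_SIZE.to_bytes(2, "big") + (0).to_bytes(4, "big")
frame = (
    len(payload).to_bytes(3, "big")  # 24-bit payload length (6 octets)
    + b"\x04"                        # type: SETTINGS
    + b"\x00"                        # flags: none
    + (0).to_bytes(4, "big")         # stream 0: connection-level frame
    + payload
)
print(frame.hex())  # 000006040000000000000100000000
```

Fifteen octets total, sent before the client has emitted any header block.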

Thus, the only place where there is any potential for problems is with
explicitly configured proxies, or with actual endpoints where a DNS lookup
for that entity is likely.

In any case, we're talking about a dynamic table size of 4k by default,
nowhere near the large amount you had to debug in the router (I feel your
pain!).
In the intercepting-home-router case, btw, the RTTs involved are so small
that doing an RST for one RTT costs very little. Unlike HTTP/1, we do have a
mechanism that causes the browser to try again!
In such cases, then, trading latency for memory makes sense because the
cost in latency is so tiny.


-=R


On Fri, Jul 12, 2013 at 11:03 AM, Amos Jeffries <squid3@treenet.co.nz> wrote:

> On 13/07/2013 4:31 a.m., Roberto Peon wrote:
>
>> Correct, I mean to say that, if you can't deal with 4k of state in the
>> first RT then you RST those requests, causing them to suffer one RT of
>> latency.
>>
>> Personally, I think one should be able to deal with state for the first
>> RT, especially since you're generally going to have more than that in
>> the IO buffers, kernel buffers, etc.
>> But, anyway, assuming you're under DoS attack, there are multiple options:
>>
>> 1) send a new settings frame with the size you want, and RST everything
>> 'till that becomes effective.
>>
>
> -0. RST wastes Stream IDs. We only have a limited 31-bit resource there.
> It will be exhausted easily enough on long-running or high-throughput
> connections. Every RST is one step closer to exhaustion. They are perhaps
> not so much the enemy as RTT is, but still an enemy.


>
>  2) we implement James' proposal of a goaway-and-come-back after sending
>> the settings, where the settings are effective on the next connection
>>
>
> -1. We have absolutely zero confidence that the followup will be to the
> same half of the planet as the first connection, let alone the same server.
>
>
>  3) If we kept the persistent settings on the client, the first time the
>> client spoke to the intermediary, it would learn and have appropriate
>> settings in the future.
>>
>
> -1. same reasons as for (2) above.
>
>
>  4) reject HTTP/2 (which uses more state in exchange for lower latency) in
>> preference for HTTP/1.0, which will put less data as persistent state for
>> the first RT.
>>
>
> -1. Counter to WG primary goals of rolling out HTTP/2.
>
>
>  5) assuming we did the DNS thing, the client would already have the
>> correct setting, and there'd be no additional latency.
>>
>
> -1. For all the reasons discussed earlier about DNS failures. There is
> zero confidence that the next-hop server is the one DNS was mentioning. In
> fact from the router/middleware viewpoint this thread is about there is
> nearly 100% confidence that the DNS is *not* about the next-hop.
>
>
>  I'm confused by the complaints about the extra latency of any of the
>> solutions above, however.
>> Do we care about latency or not?
>>
>
> Latency? Who mentioned latency? This is all about implicit security
> vulnerabilities/considerations and potential frame routing problems in the
> compression design. Latency is miles away from all that.
>
>
>
>  Arguments that complain that we have to hold state for 1 RT, and that we
>> want to eliminate all state make me think that latency is viewed as a
>> distant-second consideration.
>> Is latency a prime consideration, as indicated in the charter, or not?
>>
>
> Let's put this the other way. A home router with 512KB baked into the
> drivers for the HTTP stack is responsible for routing traffic between ~9
> devices (2 family members with a phone, tablet, and laptop each, an old
> house PC, a games console, and a digital TV - maybe more, but that is a
> fairly accurate description of my non-tech friends' household).
> Someone is watching TV, with ~256KB of streaming state in the stack; both
> family members have their phones on 24/7, logged into their favourite
> social media site with a combined 184KB of state in the stack. Then
> someone opens a connection to website X and logs in with 12x 7KB of
> Cookie data on the first request headers.
>  --- this is a real-world HTTP/1 situation; I only just finished
> debugging the device crash it caused yesterday. What happens in HTTP/2 if
> the device is unable to specify a max-72KB dynamic table size to the
> client? Or even to advertise new, lower dynamic table sizes to the
> existing clients fast enough not to block the new client?
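For reference, the arithmetic behind the 72KB ceiling in that scenario, using the figures given above:

```python
stack_budget = 512          # KB baked into the driver's HTTP stack
streaming = 256             # KB of streaming-TV state
social = 184                # KB combined social-media state on the phones
remaining = stack_budget - streaming - social
print(remaining)            # 72 KB left for the new connection

cookies = 12 * 7            # KB of Cookie headers on the first request
print(cookies > remaining)  # True: the new client alone overruns the budget
```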
>
> Naturally I am worried. About the setup and whether it is negotiation or
> mandatory initial state, etc.
>
> Amos
>
>
>> (And you can call it session caching or whatever, it is still just state
>> on the other side and is all the same idea).
>> -=R
>>
>>
>> On Fri, Jul 12, 2013 at 1:26 AM, Amos Jeffries <squid3@treenet.co.nz>
>> wrote:
>>
>>     On 12/07/2013 7:35 a.m., Roberto Peon wrote:
>>
>>         I think it is perfectly reasonable for an intermediary to set
>>         the compression size to zero if it wishes.
>>
>>         Market forces will (in the long-term) pick the correct
>>         strategy for this-- assuming the compression is effective at
>>         reducing latency, and that people care about latency
>>         reductions, then eventually intermediaries might evolve to use it.
>>         If it is ineffective at reducing latency, or if reduced
>>         latency is not actually desirable, then intermediaries would
>>         not use it.
>>
>>
>>         The DoS vector you're talking about is not a DoS vector if the
>>         intermediary resets all streams before the
>>         change-of-state-size comes into effect.
>>
>>
>>     If you mean RST_STREAM on all the initial streams which use a
>>     larger compression size, then what you are doing is adding an RTT
>>     penalty to all those requests over and beyond what HTTP/1 already
>>     suffers on a normal transaction. This is not a useful way forward
>>     (it wastes packets, RTT, and stream IDs), and resolving it means
>>     making decompression with the default state size mandatory for
>>     all recipients. Which brings us full circle to the problem of
>>     having a default >0 in the dynamic part of the state tables.
>>
>>
>>
>>         When the state size is 0, one should be able to use some kinds
>>         of 'indexed' representations, so long as those representations
>>         refer only to items in the static tables. Why do you believe
>>         that this would use more or less CPU? (It should use less CPU
>>         and less memory...)
>>
>>
>>     I did not mention CPU. Only the bandwidth amplification effects
>>     that agents disabling compression would incur and need to consider
>>     carefully.
>>
>>     Personally I would like to see a 127-entry mandatory static table
>>     in the spec itself, tied to the "2.0" version, with a 127-entry
>>     optional dynamic table indicated by the high bit of the byte code.
>>     A dynamic-table capacity in bytes would be sent each way, with
>>     senders forbidden to add new entries to the dynamic table until
>>     they hold the value from both ends of the connection, the agreed
>>     value being the minimum of the two ends' capacities.
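That negotiation rule could be sketched roughly as follows (class and method names are purely illustrative, not from any draft):

```python
class DynTableNegotiator:
    """Illustrative sketch of the proposed capacity handshake: neither
    side may add dynamic entries until both capacities are known, and
    the agreed size is the minimum of the two."""

    def __init__(self, local_capacity_bytes):
        self.local = local_capacity_bytes
        self.remote = None          # unknown until the peer announces it

    def on_remote_capacity(self, remote_capacity_bytes):
        self.remote = remote_capacity_bytes

    @property
    def agreed(self):
        if self.remote is None:
            return None             # sender must not add entries yet
        return min(self.local, self.remote)

n = DynTableNegotiator(4096)
print(n.agreed)                     # None: capacity not yet negotiated
n.on_remote_capacity(1024)
print(n.agreed)                     # 1024: the smaller capacity wins
```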
>>
>>     Amos
>>
>>
>>
>
>

Received on Friday, 12 July 2013 19:43:25 UTC