- From: Amos Jeffries <squid3@treenet.co.nz>
- Date: Sat, 13 Jul 2013 06:03:35 +1200
- To: ietf-http-wg@w3.org
On 13/07/2013 4:31 a.m., Roberto Peon wrote:
> Correct, I mean to say that, if you can't deal with 4k of state in the
> first RT then you RST those requests, causing them to suffer one RT of
> latency.
>
> Personally, I think one should be able to deal with state for the
> first RT, especially you're going to have more than that in general in
> the IO buffers, kernel buffers, etc.
> But, anyway, assuming you're under DoS attack, there are multiple options:
>
> 1) send a new settings frame with the size you want, and RST
> everything 'till that becomes effective.

-0. RST wastes Stream IDs. We only have a limited 31-bit resource there.
It will be exhausted easily enough on long-running or high-throughput
connections. Every RST is one step closer to exhaustion. They are not so
much the enemy as RTT perhaps, but still an enemy.

> 2) we implement James' proposal of a goaway-and-come-back after
> sending the settings, where the settings are effective on the next
> connection

-1. We have absolutely zero confidence that the follow-up connection will
go to the same half of the planet as the first one, let alone the same
server.

> 3) If we kept the persistent settings on the client, the first time
> the client spoke to the intermediary, it would learn and have
> appropriate settings in the future.

-1. Same reasons as for (2) above.

> 4) reject HTTP/2 (which uses more state in exchange for lower latency)
> in preference for HTTP/1.0, which will put less data as persistent
> state for the first RT.

-1. Counter to the WG's primary goal of rolling out HTTP/2.

> 5) assuming we did the DNS thing, the client would already have the
> correct setting, and there'd be no additional latency.

-1. For all the reasons discussed earlier about DNS failures. There is
zero confidence that the next-hop server is the one the DNS record was
referring to. In fact, from the router/middleware viewpoint this thread
is about, there is nearly 100% confidence that the DNS record is *not*
about the next hop.

> I'm confused the complaining about extra latency of any of the
> solutions above, however.
> Do we care about latency or not?

Latency? Who mentioned latency? This is all about implicit security
vulnerabilities/considerations and potential frame routing problems in
the compression design. Latency is miles away from all that.

> Arguments that complain that we have to hold state for 1 RT, and that
> we want to eliminate all state make me think that latency is viewed as
> a distant-second consideration.
> Is latency a prime consideration, as indicated in the charter, or not?

Let's put this the other way. Consider a home router with 512KB baked
into the drivers for the HTTP stack responsible for routing traffic
between ~9 devices (2 family members with a phone, tablet, and laptop
each, an old house PC system, a games console, and a digital TV - maybe
more, but that is a fairly accurate description of my non-tech friend's
household). Someone is watching TV, with ~256KB of streaming state in
the stack; both family members have their phones on 24/7, logged into
their favourite social media site with a combined 184KB of state in the
stack. Then someone opens a connection to website X and logs in, with
12x 7KB of Cookie data on the first request headers --- this is a
real-world HTTP/1 situation whose device crash I only just finished
debugging yesterday.

What happens in HTTP/2 if the device is unable to specify a max-72KB
dynamic table size to the client? Or even to advertise new, lower
dynamic table sizes to the existing clients fast enough not to block
the new client?
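To put rough numbers on that worry, here is a minimal sketch of the budget
arithmetic, assuming the figures above; the constants and names are
illustrative only and not taken from any real driver:

/* Sketch of the router's budgeting problem, using the example figures
 * above. Everything here is an illustrative assumption, not an API. */
#include <stdio.h>
#include <stddef.h>

#define HTTP_STACK_BUDGET  (512u * 1024u) /* bytes baked into the drivers   */
#define STREAMING_STATE    (256u * 1024u) /* the TV stream's state          */
#define SOCIAL_MEDIA_STATE (184u * 1024u) /* the two phones' combined state */

/* Bytes left over for a new client's header-compression dynamic table. */
static size_t spare_for_new_client(void)
{
    size_t used = STREAMING_STATE + SOCIAL_MEDIA_STATE;
    return used >= HTTP_STACK_BUDGET ? 0u : HTTP_STACK_BUDGET - used;
}

int main(void)
{
    const size_t first_request = 12u * 7u * 1024u; /* 12x 7KB of Cookie data */
    size_t spare = spare_for_new_client();

    printf("spare for new client: %zuKB (max dynamic table it can offer)\n",
           spare / 1024u);
    printf("first request headers: %zuKB\n", first_request / 1024u);

    if (first_request > spare)
        printf("cannot absorb the request without first shrinking the "
               "existing clients' tables\n");
    return 0;
}

With those figures the spare budget comes out at 72KB while the first
request alone carries ~84KB of Cookie headers, so the new client cannot
even be buffered until the tables already granted to the existing clients
have been shrunk.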
Naturally I am worried about the setup, and about whether it is
negotiation or mandatory initial state, etc.

Amos

> (And you can call it session caching or whatever, it is still just
> state on the other side and is all the same idea).
> -=R
>
>
> On Fri, Jul 12, 2013 at 1:26 AM, Amos Jeffries <squid3@treenet.co.nz
> <mailto:squid3@treenet.co.nz>> wrote:
>
>     On 12/07/2013 7:35 a.m., Roberto Peon wrote:
>
>         I think it is perfectly reasonable for an intermediary to set
>         the compression size to zero if it wishes.
>
>         Market forces will (in the long-term) pick the correct
>         strategy for this-- assuming the compression is effective at
>         reducing latency, and that people care about latency
>         reductions, then eventually intermediaries might evolve to use it.
>         If it is ineffective at reducing latency, or if reduced
>         latency is not actually desirable, then intermediaries would
>         not use it.
>
>         The DoS vector you're talking about is not a DoS vector if the
>         intermediary resets all streams before the
>         change-of-state-size comes into effect.
>
>     If you means RST_STREAM on all the initial streams which use a
>     larger compression size then what you are doing is adding an RTT
>     penalty to all those requests over and beyond what HTTP/1 suffers
>     from already on a normal transaction. This is not a useful way
>     forward (wastes packets, RTT and stream IDs) and resolving it is
>     to make decompression with the default state size mandatory for
>     all recipients. Which brings us full circle on the problem of
>     having a default >0 in the dynamic part of the state tables.
>
>         When the state size is 0, one should be able to use some kinds
>         of 'indexed' representations, so long as those representations
>         refer only to items in the static tables. Why do you believe
>         that this would use more or less CPU? (It should use less CPU
>         and less memory...)
>
>     I did not mention CPU. Only the bandwidth amplification effects
>     that agents disabling compression would incur and need to consider
>     carefully.
>
>     Personally I would like to see a 127 entry mandatory static table
>     in the spec itself and tied to the "2.0" version with a 127 entry
>     optional dynamic table indicated by the high-end bit of the byte
>     code. With a capacity byte size for dynamic table sent each way
>     and senders forbidden to add new entries to the dynamic table
>     until they hold the value from both ends of the connection. Agreed
>     value being the minimum of both ends capacities.
>
>     Amos
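For the 127-entry static/dynamic split quoted at the end above, the
byte-code selection and the capacity handshake could be sketched roughly
as follows (the struct, field names and exact index numbering are
assumptions for illustration, not a worked-out wire format):

/* Rough sketch of the quoted proposal: a 127-entry mandatory static
 * table, a 127-entry optional dynamic table selected by the high bit of
 * a one-byte code, and a dynamic-table capacity announced by each end,
 * with the agreed capacity being the minimum of the two. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define TABLE_SLOTS 127u

/* High bit of the byte code selects the dynamic table; low 7 bits give
 * an index into the chosen TABLE_SLOTS-entry table. */
static bool    uses_dynamic_table(uint8_t code) { return (code & 0x80u) != 0u; }
static uint8_t entry_index(uint8_t code)        { return (uint8_t)(code & 0x7Fu); }

struct dyntable_caps {
    bool     local_sent;      /* we have announced our capacity           */
    bool     remote_received; /* the peer has announced its capacity      */
    uint32_t local_bytes;     /* capacity we can afford to hold, in bytes */
    uint32_t remote_bytes;    /* capacity the peer says it can hold       */
};

/* Senders may not add new dynamic entries until they hold the capacity
 * value from both ends of the connection. */
static bool may_add_dynamic_entry(const struct dyntable_caps *c)
{
    return c->local_sent && c->remote_received;
}

/* The agreed capacity is the minimum of the two ends' capacities. */
static uint32_t agreed_capacity(const struct dyntable_caps *c)
{
    return c->local_bytes < c->remote_bytes ? c->local_bytes : c->remote_bytes;
}

int main(void)
{
    uint8_t code = 0x85u;  /* high bit set: dynamic table, entry 5 */
    struct dyntable_caps caps = { .local_sent = true, .remote_received = false,
                                  .local_bytes = 72u * 1024u, .remote_bytes = 0u };

    printf("dynamic? %d, index %u\n", uses_dynamic_table(code), entry_index(code));
    printf("may add before peer announces: %d\n", may_add_dynamic_entry(&caps));

    caps.remote_received = true;          /* peer announces, say, 4KB */
    caps.remote_bytes = 4u * 1024u;
    printf("may add now: %d, agreed: %u bytes\n",
           may_add_dynamic_entry(&caps), (unsigned)agreed_capacity(&caps));
    return 0;
}

The point of taking the minimum is that neither end can ever be obliged
to hold more dynamic state than it has said it can afford.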
Received on Friday, 12 July 2013 18:04:04 UTC