- From: Simpson, Robby (GE Energy Management) <robby.simpson@ge.com>
- Date: Tue, 16 Apr 2013 18:27:53 +0000
- To: Martin Thomson <martin.thomson@gmail.com>
- CC: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Another attempt

>> On 4/16/13 12:03 PM, "Roberto Peon" <grmocg@gmail.com> wrote:
>>
>> Unfortunately, most flows with http/1 are short (and bursty) flows, so
>> it is incorrect to say that they have reached steady-state w.r.t.
>> congestion control. Many resource fetches complete in between one and
>> three rtts, then much of the time the connection(s) sit idle for
>> extended periods of time (seconds to minutes).

Good point. Most of my work with HTTP/1 (and TCP) is with long-lived flows, and I live in the long tail. My gut tells me you are correct when it comes to traditional web usage.

>> I'm hoping Will will chime in here with data soon, but the distribution
>> of single-connection cwnds as measured on traffic in the wild shows us
>> that using multiple connections puts more packets on the wire as a
>> result of init-cwnd (and thus not subject to congestion control, ouch)
>> than a single stable-state, heavily used and reused connection such as
>> Spdy or HTTP/2 would allow.

Wouldn't Spdy or HTTP/2 still face the issues regarding steady-state then? Part of my concern is that we may, once again, create an HTTP that unfairly dominates traffic due to lots of bursty flows.

<snip>

>> I agree that solving the problem for http/2 alone won't fix it for
>> everything. On the other hand, we also need to act swiftly to solve
>> problems on timescales that matter to users, and http is a fine venue
>> for that. The fact that http/2 could do this doesn't stop us from coming
>> up with an opaque blob that *any* latency-sensitive application protocol
>> could use to communicate with the transport layer.

As someone stated earlier, I fear the opposite to be true. This may end up slowing down HTTP/2 and not be swift at all.
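
[Editor's illustration, not part of the original thread.] To make the init-cwnd point concrete, here is a rough back-of-envelope sketch in Python. It assumes an IW10-style initial window (RFC 6928) and six parallel connections per host, both illustrative numbers rather than measured values; the point is only that the uncontrolled initial burst scales with the number of connections, while a single reused connection sends one initial window and is thereafter governed by congestion control.

    # Back-of-envelope sketch: segments an origin can inject before any
    # ACK-driven congestion feedback, multiple HTTP/1.x connections versus
    # one reused HTTP/2/SPDY connection. Numbers are illustrative assumptions.

    INIT_CWND_SEGMENTS = 10   # assumed initial congestion window (IW10)
    PARALLEL_CONNECTIONS = 6  # assumed per-host connection limit in browsers
    MSS_BYTES = 1460          # typical maximum segment size

    def initial_burst(connections, init_cwnd=INIT_CWND_SEGMENTS):
        # Segments sent before any ACK can shrink or grow the window.
        return connections * init_cwnd

    http1_burst = initial_burst(PARALLEL_CONNECTIONS)  # 6 * 10 = 60 segments
    http2_burst = initial_burst(1)                     # 1 * 10 = 10 segments

    print("HTTP/1.x, %d connections: %d segments (~%d KiB) uncontrolled"
          % (PARALLEL_CONNECTIONS, http1_burst, http1_burst * MSS_BYTES // 1024))
    print("HTTP/2, 1 connection: %d segments (~%d KiB) uncontrolled"
          % (http2_burst, http2_burst * MSS_BYTES // 1024))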
Received on Tuesday, 16 April 2013 18:28:37 UTC