On Tue, Apr 16, 2013 at 10:58 AM, Simpson, Robby (GE Energy Management) <robby.simpson@ge.com> wrote:
> On 4/16/13 12:03 PM, "Roberto Peon" <grmocg@gmail.com> wrote:
>
> Unfortunately, most flows with http/1 are short (and bursty) flows, so it
> is incorrect to say that they have reached steady-state w.r.t. congestion
> control. Many resource fetches complete in one to three RTTs, and much of
> the time the connection(s) then sit idle for extended periods (seconds to
> minutes).
>
> Good point. Most of my work with HTTP/1 (and TCP) is with long-lived
> flows, and I live in the long tail. My gut tells me you are correct when
> it comes to traditional web usage.
>
> I'm hoping Will will chime in here with data soon, but the distribution of
> single-connection cwnds, as measured on traffic in the wild, shows us that
> using multiple connections puts more packets on the wire as a result of
> init-cwnd (and thus not subject to congestion control, ouch) than a single
> steady-state, heavily used and reused connection such as SPDY or HTTP/2
> would allow.
>
> Wouldn't SPDY or HTTP/2 still face the issues regarding steady-state, then?
>
No-- by muxing the many separate requests onto one connection, we are far
more likely to reach steady-state before we stop sending bytes around.
Thus far we've seen that the single connection wins (so long as it is not
handicapped at startup) whenever packet loss is below ~1.5%, and it often
does so with a smaller aggregate window size and fewer packets. Part of
this is that the connection stays alive long enough to actually trigger
the fast rexmit path. Above ~1.5% loss, the swarm of connections (with
their crap duty cycle) wins out: a loss event strikes fewer than all of
the connections, so their aggregate CWND closes more slowly than it does
for the single connection, even though that connection spends a much
better proportion of its time actually sending/receiving packets.
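
To make the backoff-sharing effect concrete, here's a toy AIMD model. It's
a back-of-envelope sketch, not a real TCP simulator: it ignores slow start
and duty cycle, and the parameters are illustrative assumptions rather
than measured values.

import random

def aggregate_cwnd(num_conns, rtts=200, loss_rate=0.015, init_cwnd=10,
                   seed=42):
    """Toy AIMD: +1 segment per RTT, halve the window on a loss event."""
    rng = random.Random(seed)
    cwnds = [float(init_cwnd)] * num_conns
    for _ in range(rtts):
        for i in range(num_conns):
            # Chance that at least one of this window's packets is lost.
            if rng.random() < 1 - (1 - loss_rate) ** cwnds[i]:
                cwnds[i] = max(1.0, cwnds[i] / 2)  # multiplicative decrease
            else:
                cwnds[i] += 1                      # additive increase
    return sum(cwnds)

for n in (1, 6):
    print(n, "connection(s): aggregate cwnd ~", round(aggregate_cwnd(n), 1))

With six flows, a loss event halves one-sixth of the aggregate (a ~8%
haircut) instead of halving everything, which is exactly the backoff
circumvention described above.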
>
> Part of my concern is that we may, once again, create an HTTP that
> unfairly dominates traffic due to lots of bursty flows.
>
I understand. I worry about the same thing, just from the other end. If we
don't make a single connection competitive with many, we'll have people
using many, and we know that is a bad thing: it is a very effective way to
circumvent both the initial burst size limit AND loss backoff
:/
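
For the burst-size half of that, the arithmetic is simple. IW10, a
1460-byte MSS, and six connections per host are assumed typical values
here, not measurements:

# Packets (and bytes) put on the wire before any congestion feedback.
init_cwnd, mss = 10, 1460      # assumed: IW10, typical Ethernet MSS
for conns in (1, 6):
    packets = conns * init_cwnd
    print(f"{conns} conn(s): {packets} packets "
          f"(~{packets * mss // 1024} KB) sent blind")

Six connections get to blast ~85 KB before congestion control sees
anything, versus ~14 KB for one.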
>
> <snip>
>
> I agree that solving the problem for http/2 alone won't fix it for
> everything. On the other hand, we also need to act swiftly to solve
> problems on timescales that matter to users, and http is a fine venue for
> that. The fact that http/2 could do this doesn't stop us from coming up
> with an opaque blob that *any* latency-sensitive application protocol
> could use to communicate with the transport layer.
>
> As someone stated earlier, I fear the opposite is true. This may end
> up slowing down HTTP/2 and not be swift at all.
>
>
This is true of any starting condition. The question becomes: is past
experience from X time ago less wrong, on average, than starting with an
arbitrary constant?
-=R