W3C home > Mailing lists > Public > ietf-http-wg@w3.org > January to March 2012

RE: multiplexing -- don't do it

From: Peter L <bizzbyster@gmail.com>
Date: Fri, 30 Mar 2012 12:02:05 -0400
To: "'Steve Padgett'" <steve@wreck.net>, <ietf-http-wg@w3.org>
Message-ID: <02f301cd0e8e$74af8fe0$5e0eafa0$@gmail.com>
Hi Steve,

 

I think I replied to your server load point in the other thread.

 

In response to this point on prioritization and SPDY...

 

> SPDY multiplexes the streams with consideration to priority, so this
> situation wouldn't happen (afaik).  The high priority object (coming
> over a higher priority stream) would preempt the lower-priority
> in-flight object as soon as it was requested.
>
> I think this actually may be better than existing HTTP:
>
> 1) In existing http, if you have 5 low-priority sessions fetching
> objects and you need to fetch 1 high priority object, you can create a
> high-priority session to fetch it.  By default (assuming no tcp window
> size or other manipulation) that tcp connection gets 1/6th the
> bandwidth (~17%).
>
> 2) With SPDY, assuming the same situation -- the high priority session
> would pre-empt all low-priority streams, so the high-priority stream
> would be getting ~100% of the bandwidth.
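The fair-share arithmetic in these two points can be sketched in a few lines (an idealized model: it assumes TCP gives every connection an equal share of the link, which real congestion control only approximates):

```python
# Rough fair-share arithmetic for the two scenarios above (idealized:
# assumes TCP divides bandwidth evenly per connection).

def http_fair_share(low_priority_conns, high_priority_conns=1):
    """Share of link bandwidth the high-priority connection gets when
    each TCP connection receives an equal slice."""
    total = low_priority_conns + high_priority_conns
    return high_priority_conns / total

def spdy_share_under_preemption():
    """With strict priority scheduling on one multiplexed connection,
    the high-priority stream is sent first and gets the whole link."""
    return 1.0

print(f"HTTP, 5 low + 1 high: {http_fair_share(5):.1%}")   # ~16.7%
print(f"SPDY with preemption: {spdy_share_under_preemption():.0%}")
```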



SPDY sits on top of a single TCP connection, so the order in which it hands
data to the TCP stack is the order in which the receiver must read it on the
other side. This means that for in-flight data (which can easily be an entire
web page's worth on links with a high bandwidth-delay product), there is no
prioritization. For example, say SPDY receives a small low-priority object
from the back-end web server and pushes it to the TCP stack, then receives a
small high-priority object and pushes that to the TCP stack. Assuming the
objects are small, they will go out as two separate TCP packets -- low and
then high. If the first packet is dropped, then even if the second packet
makes it to the other side, it cannot be delivered to the browser until the
first packet is retransmitted and received at the user device. Moreover,
because multiplexing makes the traffic opaque to intermediary devices, layer 7
switches that perform differential shaping for web performance (JavaScript
before images, say) cannot enforce prioritization in the network when it is
congested. It would be much better to have the low- and high-priority objects
on two separate TCP connections, so that under congestion the switch can
still provide bandwidth for the high-priority object.
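This head-of-line-blocking scenario can be illustrated with a toy timing model (the 50 ms one-way delay and 200 ms retransmit timeout are hypothetical numbers chosen for illustration, not measurements of any real network):

```python
# Toy illustration of TCP head-of-line blocking (hypothetical timings:
# one-way delay 50 ms, retransmit completes 200 ms later; not a real
# TCP model).

ONE_WAY_MS = 50
RETRANSMIT_MS = 200

def delivery_times_single_connection(first_lost=True):
    """Two frames (low then high priority) on ONE TCP connection.
    TCP delivers bytes in order, so the high-priority frame cannot be
    handed to the application until the lost low-priority frame is
    retransmitted and received."""
    low_arrives = RETRANSMIT_MS + ONE_WAY_MS if first_lost else ONE_WAY_MS
    high_arrives_on_wire = ONE_WAY_MS   # the second packet was not lost
    # In-order delivery: the high frame sits in the receive buffer
    # until the low frame arrives.
    high_delivered = max(high_arrives_on_wire, low_arrives)
    return low_arrives, high_delivered

def delivery_times_two_connections(first_lost=True):
    """Same two objects on SEPARATE TCP connections: the loss on the
    low-priority connection does not delay the high-priority one."""
    low = RETRANSMIT_MS + ONE_WAY_MS if first_lost else ONE_WAY_MS
    high = ONE_WAY_MS
    return low, high

print(delivery_times_single_connection())  # (250, 250): high blocked by low
print(delivery_times_two_connections())    # (250, 50): high unaffected
```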

 

A multiplexer sitting on top of TCP can apply prioritization only while it is
processing two or more objects simultaneously, but since most web objects are
small this is relatively rare.
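One way to picture this constraint is a minimal priority-queue multiplexer sketch (an assumed design for illustration, not SPDY's actual implementation): priority can reorder frames only while they are still queued inside the multiplexer; once a frame has been written to the TCP socket, its position in the byte stream is fixed.

```python
import heapq

# Sketch of a priority-scheduling frame multiplexer (illustrative,
# not the real SPDY code). Priorities only matter for frames still
# queued; frames already handed to TCP keep their byte-stream order.

class Mux:
    def __init__(self):
        self._queue = []   # (priority, seq, frame); lower = higher priority
        self._seq = 0      # tie-breaker preserving FIFO within a priority
        self.sent = []     # frames already written to the socket: order fixed

    def enqueue(self, priority, frame):
        heapq.heappush(self._queue, (priority, self._seq, frame))
        self._seq += 1

    def flush(self):
        # Drain the queue strictly by priority (models writing to TCP).
        while self._queue:
            _, _, frame = heapq.heappop(self._queue)
            self.sent.append(frame)

mux = Mux()
mux.enqueue(3, "low-priority object")
mux.flush()                        # low object already handed to TCP...
mux.enqueue(0, "high-priority object")
mux.flush()                        # ...so high cannot overtake it
print(mux.sent)  # ['low-priority object', 'high-priority object']
```

Only when both objects are queued at the same moment does the priority field change anything, which is the "processing two objects simultaneously" case above.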

 

Thanks,

 

Peter

From: Steve Padgett [mailto:steve@wreck.net] 
Sent: Friday, March 30, 2012 11:31 AM
To: Peter L
Subject: Re: multiplexing -- don't do it

 

Sure, no problem.

On Mar 30, 2012 4:34 PM, "Peter L" <bizzbyster@gmail.com> wrote:

Hi Steve,

 

Do you mind if I reply to your email and CC the list?

 

Thanks,

 

Peter

On Fri, Mar 30, 2012 at 3:49 AM, Steve Padgett <steve@wreck.net> wrote:

On Fri, Mar 30, 2012 at 4:07 AM, Peter L <bizzbyster@gmail.com> wrote:
> I'm new to this list but have been studying web performance over high
> latency networks for many years and multiplexing seems to me like the
> wrong
> way to go. The main benefit of multiplexing is to work around the 6
> connections per domain limit but it reduces transparency on the network,
> decreases the granularity/modularity of load balancing

On sites that have enough traffic to need load balancing, I suspect
that the # of concurrent client connections is several orders of
magnitude higher than the # of web servers - so the load should still
be evenly distributed.  Plus, one of the primary bottlenecks in load
balancers is the number of connections (both concurrent and new-per-second),
so a 4x to 6x decrease there would likely save a lot of load balancer
resources.
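The connection-count arithmetic behind that 4x-6x figure can be sketched quickly (the client count is an illustrative assumption; the factor of 6 comes from the 6-connections-per-domain limit mentioned earlier in the thread):

```python
# Back-of-envelope for the load-balancer point above (illustrative
# numbers, not from any measurement).
clients = 100_000
conns_per_client_http = 6   # browsers' per-domain connection limit
conns_per_client_spdy = 1   # one multiplexed connection per client

http_conns = clients * conns_per_client_http
spdy_conns = clients * conns_per_client_spdy
print(http_conns // spdy_conns)  # 6x fewer connections at the balancer
```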


> and increases object
> processing latency in general on the back end as everything has to pass
> through the same multiplexer, and introduces its own intractable
> inefficiencies. In particular the handling of a low priority in flight
> object ahead of a high priority object when packet loss is present is a
> step
> backwards from what we have today for sites that get beyond the 6
> connections per domain limit via domain sharding.

SPDY multiplexes the streams with consideration to priority, so this
situation wouldn't happen (afaik).  The high priority object (coming
over a higher priority stream) would preempt the lower-priority
in-flight object as soon as it was requested.

I think this actually may be better than existing HTTP:

1) In existing http, if you have 5 low-priority sessions fetching
objects and you need to fetch 1 high priority object, you can create a
high-priority session to fetch it.  By default (assuming no tcp window
size or other manipulation) that tcp connection gets 1/6th the
bandwidth (~17%).

2) With SPDY, assuming the same situation -- the high priority session
would pre-empt all low-priority streams, so the high-priority stream
would be getting ~100% of the bandwidth.

I also agree with Brian on the additional issues that exist due to the
increasing the # of concurrent sessions...

Steve

 
Received on Friday, 30 March 2012 16:02:43 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 27 April 2012 06:51:57 GMT