Re: multiplexing -- don't do it

On Sat, Mar 31, 2012 at 3:25 PM, Peter L <bizzbyster@gmail.com> wrote:

> Multiplexing inefficiencies:
>
> I don't have data on this, but I can share my experience developing a web
> accelerator for satellite networks that does prefetching (similar to Server
> Push but prefetched objects are buffered in the modem as opposed to the
> browser itself) and has supported multiplexing similar to SPDY.
>
> Bad performance of high-priority objects (browser-requested JavaScript,
> say) when preceded by low-priority objects (images or prefetched objects)
> is a serious problem when objects are multiplexed, especially on congested
> networks. The ideal solution is for the optimizer to send in priority
> order, but also to mark the priority of packets and maintain the original
> streams, as well as header visibility, so that COTS networking equipment
> can be used to differentially shape the streams by priority.
>
>
If we are on a transport such as SCTP, we can do some of these things and
avoid some of the head-of-line (HOL) blocking cases that can occur when
multiplexing over TCP. I could see defining a mapping of HTTP/2.0 onto such
pre-multiplexed protocols, but Mark would know best whether that is
something we can do. It'd make sense to me, though you'd end up with two
different mappings to maintain.
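
To make the HOL point concrete, here's a toy sketch (the stream names and
loss pattern are invented for illustration) of what a single lost segment
does under the two ordering models:

    # Toy model, not a benchmark: frames from two streams share one pipe.
    # On a strictly ordered pipe (TCP-like), a lost segment stalls every
    # frame queued behind it; with per-stream ordering (SCTP-like), only
    # the stream that lost data waits for the retransmit.

    frames = [("img", 1), ("img", 2), ("js", 1), ("js", 2)]  # (stream, seq)
    lost = ("img", 2)  # pretend this segment is dropped once

    def delivered_before_retransmit(frames, lost, per_stream_order):
        out, blocked = [], set()
        for frame in frames:
            stream, _ = frame
            if frame == lost:
                # TCP blocks the whole connection; SCTP only this stream.
                blocked.add(stream if per_stream_order else "*")
                continue
            if "*" in blocked or stream in blocked:
                continue  # stuck behind the hole until retransmission
            out.append(frame)
        return out

    print(delivered_before_retransmit(frames, lost, per_stream_order=False))
    # [('img', 1)] -- high-priority JS stuck behind a lost image segment
    print(delivered_before_retransmit(frames, lost, per_stream_order=True))
    # [('img', 1), ('js', 1), ('js', 2)] -- the JS stream is unaffected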


> Hypothetical Example: Assuming SPDY's Server Push feature is widely
> adopted, ISPs will want to be able to treat prefetched data with a lower
> priority than interactive or user-requested data when their networks are
> congested.
>

Bleh! You have to consider the second-order consequences of doing things
like that. If you do, then server push is effectively optional, people
will just inline again, and the likely outcome is that you'll neither
have an effective cache nor decrease overall bandwidth. Anything
that forces people to inline has strongly negative effects on efficiency
and latency as compared to server push.
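
Back-of-the-envelope, with invented numbers, to show why forced inlining
hurts: a shared script that is inlined gets re-sent with every page view,
while a pushed (and cached) copy crosses the wire once.

    # Hypothetical numbers for illustration only.
    script_kb, page_views = 50, 10
    inlined_total = script_kb * page_views  # 500 KB: re-sent every view
    pushed_total = script_kb * 1            # 50 KB: cached after one push
    print(inlined_total, pushed_total)      # 500 50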

I am, however, all for the browser declaring what it will or will not
support w.r.t. server push. The client side's incentives align with the
user's, so putting the policy decision there makes sense, as long as it
doesn't incur an extra round trip for negotiation.
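
As a strawman (the frame type and field names below are entirely invented;
nothing like this is specified anywhere), the declaration could ride in the
client's first flight so that it costs no extra round trip:

    # Strawman sketch: the client piggybacks its push policy onto the
    # first bytes it sends, so the server learns the policy without a
    # negotiation round trip. The 0x04 type byte and the field names
    # are made up for this example.
    import json
    import socket

    def send_push_policy(sock: socket.socket) -> None:
        settings = {
            "accept_server_push": True,  # or False to opt out entirely
            "max_pushed_streams": 8,     # cap on concurrent pushes
        }
        payload = json.dumps(settings).encode("ascii")
        frame = bytes([0x04]) + len(payload).to_bytes(2, "big") + payload
        sock.sendall(frame)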


> They already have the gear in place to do this type of L7 differential
> shaping. It's good for end users and good for site owners, as it allows
> for good page load times even during peak busy hours. Do you agree that
> HTTP/2.0
> should not break this type of functionality?
>

Traffic shaping is an awesome technology, but the interests of the
middleware providers rarely align with the user's. Look at the manoeuvring
that AT&T is doing these days w.r.t. bandwidth on mobile devices, or the
debacle of putting the full set of HTTP over port 80 (even when the
endpoints agree, and are tested, to speak the full set), as examples.

I don't want to be in a world where they look at which site (as opposed
to which port) you're going to and potentially raise or lower the priority
of your packets because someone pays them more. Screw that!

In any case, do we believe that even an enlightened and altruistic
middleware provider can separate and distinguish the high-priority things
within the one protocol effectively? I think that alone is a difficult
problem... and it is thankfully probably out of the purview of the WG.
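
For what it's worth, about the only per-packet hook a sender has for
off-the-shelf shapers today is DSCP marking, and with everything
multiplexed over a single connection it is a blunt instrument. A rough
sketch:

    # Sketch: mark a socket's traffic with a DSCP class via IP_TOS
    # (works on Linux/BSD). With one multiplexed connection, the sender
    # can only re-mark the whole socket as the "current" stream changes,
    # which is coarse at best and meaningless once frames of different
    # priorities share a segment.
    import socket

    DSCP_AF21 = 0x12 << 2  # "low-latency data" class in the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_AF21)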

-=R


>
> Thanks,
>
> Peter
>
>
> On Mar 30, 2012, at 9:22 AM, Mike Belshe <mike@belshe.com> wrote:
>
>
>
> On Fri, Mar 30, 2012 at 4:07 AM, Peter L <bizzbyster@gmail.com> wrote:
>
>> I'm new to this list, but I have been studying web performance over
>> high-latency networks for many years, and multiplexing seems to me like
>> the wrong way to go. The main benefit of multiplexing is to work around
>> the limit of 6 connections per domain, but it reduces transparency on
>> the network, decreases the granularity/modularity of load balancing,
>> increases object-processing latency on the back end (everything has to
>> pass through the same multiplexer), and introduces its own intractable
>> inefficiencies.
>
>
> The CPU processing at the server is one thing we could optimize for.  Or
> we could optimize for users getting their pages faster.
>
> Data suggests that your claims of inefficiency are simply incorrect.  But
> if you have a benchmark to report upon, we could discuss that.
>
>
>
>
>> In particular, the handling of a low-priority in-flight object ahead of
>> a high-priority object when packet loss is present is a step backwards
>> from what we have today for sites that get beyond the limit of 6
>> connections per domain via domain sharding. Why not just introduce an
>> option in HTTP/2.0 that allows clients and servers to negotiate the
>> maximum number of concurrent connections per domain?
>
>
> As you can see from the data, websites are not having any trouble getting
> around the 6 connection limit already.
>
> We could do this, but it would do nothing to make pages load faster or be
> lighter weight on the network.
>
>
>
>> When web sites shard domains, aren't they essentially telling the browser
>> that they will happily accept lots more connections? I'm sure this
>> suggestion has long since been shot down, but browsing around on the web
>> I'm not finding it.
>>
>> As for header compression, again this is a trade-off between
>> transparency/multiple streams and bandwidth savings. But I'd think this
>> group could come up with ways to reduce the bytes in the protocol
>> (including cookies) without requiring the use of a single compression
>> history, which results in an order-sensitive multiplexed stream.
>>
>
> I'm not sure why you are opposed to compression.  We could reduce the
> bytes as well, and nobody is against that.
>
> What is "transparency on the wire"?  You mean an ascii protocol that you
> can read?  I don't think this is a very interesting goal, as most people
> don't look at the wire.  Further, if we make it a secure protocol, its a
> moot point, since the wire is clearly not human readable.
>
> mike
>
>
>
>
>
>
>>
>> Thanks,
>>
>> Peter
>>
>>
>> On Thu, Mar 29, 2012 at 9:26 AM, Mike Belshe <mike@belshe.com> wrote:
>>
>>> I thought the goal was to figure out HTTP/2.0; I hope that the goals of
>>> SPDY are in line with the goals of HTTP/2.0, and that ultimately SPDY just
>>> goes away.
>>>
>>> Mike
>>>
>>>
>>> On Thu, Mar 29, 2012 at 2:22 PM, Willy Tarreau <w@1wt.eu> wrote:
>>>
>>>> Hello,
>>>>
>>>> after seeing all the disagreements expressed on the list these past
>>>> days (including from me) about which features from SPDY we'd like to
>>>> make mandatory or not in HTTP, I'm thinking that part of the issue
>>>> comes from
>>>> the fact that there are a number of different usages of HTTP right now,
>>>> all of them fairly legitimate.
>>>>
>>>> First, I think that everyone here agrees that something needs to be done
>>>> to improve the end-user experience, especially on mobile networks. And
>>>> this is reflected by all proposals, including the http-ng draft from
>>>> 14 years ago!
>>>>
>>>> Second, the privacy issues are a mess because we try to address a social
>>>> problem by technical means. It's impossible to decide on a protocol if
>>>> we each just give examples of what we'd like to protect and of what
>>>> we'd prefer not to protect because protecting it is useless and
>>>> possibly counter-productive.
>>>>
>>>> And precisely, some of the disagreement comes from the fact that we're
>>>> trying to see these impacts on the infrastructure we know today, which
>>>> would obviously be a total breakage. As PHK said, a number of sites
>>>> will not want to afford crypto for privacy. I too know some sites that
>>>> would significantly increase their operating costs by doing so. But
>>>> what we're designing is not for now but for tomorrow.
>>>>
>>>> What I think is that, in any case, we need a smooth upgrade path from
>>>> the current HTTP/1.1 infrastructure to what will constitute the web of
>>>> tomorrow, without any big bang.
>>>>
>>>> SPDY specifically addresses issues observed between the browser and the
>>>> server-side infrastructure. Some of its mandatory features are probably
>>>> not desirable past the server-side frontend *right now* (basically
>>>> whatever addresses latency and privacy concerns). Still, it would be
>>>> too bad not to let the server-side infrastructure benefit from a good
>>>> face-lift by progressively migrating from 1.1 to 2.0.
>>>>
>>>> What does this mean? Simply that we have to consider HTTP/2.0 as a
>>>> subset of SPDY or that SPDY should be an add-on to HTTP. And that
>>>> makes a lot of sense. First, SPDY already is an optimized messaging
>>>> alternative to HTTP. It carries HTTP/1.1; it can just as well carry HTTP/2.0
>>>> since we're supposed to maintain compatible semantics.
>>>>
>>>> We could then get to a point where:
>>>>  - an http:// scheme indicates a connection to HTTP/1.x or 2.x server
>>>>  - an https:// scheme indicates a connection to HTTP/1.x or 2.x server
>>>>    via an SSL/TLS layer
>>>>  - a spdy:// scheme indicates a connection to HTTP/1.x or 2.x server
>>>>    via a SPDY layer
>>>>
>>>> With HTTP/2.0 upgradable from 1.1, this split is natural:
>>>>
>>>>        +----------------------------+
>>>>        |       Application          |
>>>>        +----+-----------------------+
>>>>        | WS |     HTTP/2.0          |
>>>>        +----+--------------+        |
>>>>        |      HTTP/1.1     |        |
>>>>        |         +-----+---+--------+
>>>>        |         | TLS | SPDY       |
>>>>        +---------+-----+------------+   server-side
>>>>            ^        ^        ^
>>>>            |        |        |
>>>>            |        |        |
>>>>            |        |        |
>>>>        +---------+-----+------------+  user-agent
>>>>        |         | TLS | SPDY       |
>>>>        |         +-----+-------+----+
>>>>        |  HTTP/1.1, 2.0        |    |
>>>>        +-------------------+---+    |
>>>>        |                   |   WS   |
>>>>        |  Applications     +--------+
>>>>        |                            |
>>>>        +----------------------------+
>>>>
>>>> The upgrade path would then be much easier:
>>>>
>>>>  1) have browsers, intermediaries and servers progressively
>>>>     adopt HTTP/2.0 and support a seamless upgrade
>>>>
>>>>  2) have browsers, some intermediaries and some servers
>>>>     progressively adopt SPDY for the front-line
>>>>
>>>>  3) have a lot of web sites offer URLs as spdy:// instead of http://,
>>>>     and implement mandatory redirects from http:// to spdy://, as a
>>>>     few sites are currently doing (e.g., Twitter)
>>>>
>>>>  4) have browsers at some point use spdy:// as the default scheme
>>>>     for any domain name typed in the URL bar.
>>>>
>>>>  5) have browsers at some point disable transparent support for the
>>>>     old http:// scheme by default (e.g., show a warning or require a
>>>>     settings tweak). This will probably be 10-20 years from now.
>>>>
>>>> Before we get to point 5, we'd have a number of sites running on the
>>>> new protocol, with an efficient HTTP/2.0 deployed at many places
>>>> including the back office, and with SPDY used by web browsers for
>>>> improved performance/privacy. That will not prevent specific agents
>>>> from still only using a simpler HTTP/2.0 for some uses.
>>>>
>>>> So I think that what we should do is to distinguish between what is
>>>> really desirable to have in HTTP and what is contentious. Everything
>>>> which increases costs or causes trouble for *some* use cases should
>>>> not be mandatory in HTTP but would be in the SPDY layer (as it is
>>>> today BTW).
>>>>
>>>> I think that the current SPDY+HTTP mix has shown that the two protocols
>>>> are complementary and can be efficient together. Still, we can
>>>> significantly improve HTTP to let both benefit from this, starting
>>>> with the back-office infrastructure where most of the requests lie.
>>>>
>>>> Willy
>>>>
>>>>
>>>>
>>>
>>
>

Received on Sunday, 1 April 2012 04:19:15 UTC