Re: multiplexing -- don't do it

On Mon, Apr 2, 2012 at 6:57 AM, Amos Jeffries <squid3@treenet.co.nz> wrote:

> On 1/04/2012 5:17 a.m., Adam Barth wrote:
>
>  On Sat, Mar 31, 2012 at 4:54 AM, Mark Nottingham wrote:
>>
>>> On 31/03/2012, at 1:11 PM, Mike Belshe wrote:
>>>
>>>  For the record - nobody wants to avoid using port 80 for new protocols.
>>>>  I'd love to!  There is no religious reason that we don't - it's just that
>>>> we know, for a fact, that we can't do it without subjecting a non-trivial
>>>> number of users to hangs, data corruption, and other errors.  You might
>>>> think it's ok for someone else's browser to throw reliability out the
>>>> window, but nobody at Microsoft, Google, or Mozilla has been willing to do
>>>> that…
>>>>
>>> Mike -
>>>
>>> I don't disagree on any specific point (as I think you know), but I
>>> would observe that the errors you're talking about can themselves be viewed
>>> as transient. I.e., just because they occur in experiments now, doesn't
>>> necessarily mean that they won't be fixed in the infrastructure in the
>>> future -- especially if they generate a lot of support calls, because they
>>> break a lot MORE things than they do now.
>>>
>>> Yes, there will be a period of pain, but I just wanted to highlight one
>>> of the potential differences between deploying a standard and a
>>> single-vendor effort.  It's true that we can't go too far here; if we
>>> specify a protocol that breaks horribly 50% of the time, it won't get
>>> traction. However, if we have a good base population and perhaps a good
>>> fallback story, we *can* change things.
>>>
>> That's not our experience as browser vendors.  If browsers offer an
>> HTTP/2.0 that has a bad user experience for 10% of users, then major
>> sites (e.g., Twitter) won't adopt it.  They don't want to punish their
>> users any more than we do.
>>
>> Worse, if they do adopt the new protocol, users who have trouble will
>> try another browser (e.g., one that doesn't support HTTP/2.0 such as
>> IE 9), observe that it works, and blame the first browser for being
>> buggy.  The net result is that we lose a user and no pressure is
>> exerted on the intermediaries who are causing the problem in the first
>> place.
>>
>> These are powerful market forces that can't really be ignored.
>>
>
> So the takeaway there is to pay attention to the intermediary people when
> they say something can't be implemented (or won't scale reasonably).
>

I agree we should pay attention to scalability - and we have.

Please don't disregard the fact that Google's servers switched to SPDY with
zero additional hardware (the Google servers are fully conformant HTTP/1.1
proxies with a lot more DoS logic than the average site's).  I know some
people think Google is some magical place where scalability defies physics
and is therefore not relevant, but that isn't true.  Google is just like
every other site, except much, much bigger.  If SPDY had caused a 10%
increase in server load, Google never could have shipped it.  Seriously, who
would roll out thousands of new machines for an experimental protocol?
Nobody.  How would we have convinced the executive team that "this will be
faster" if they were faced with a huge cap-ex bill?  Doesn't sound very
convincing, does it?  In my mind, we have already clearly proven that SPDY
scales just fine.

But I'm open to other data.  If you have a SPDY implementation and want to
comment on its effects on your server, let's hear it!  I'm not saying SPDY
is free.  But when you weigh the costs (like compression and framing)
against the benefits (like 6x fewer connections), there is no problem.
Could we still make improvements?  Of course.  But don't pretend that these
are the critical parts of SPDY.  These are the mice nuts.
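
To make that cost/benefit weighing concrete, here is a rough back-of-envelope
sketch.  Every constant in it (per-connection buffer cost, frame-size cap,
response sizes) is an illustrative assumption for the sake of the comparison,
not a measurement from any real deployment:

```python
# Rough cost/benefit sketch: six parallel HTTP/1.1 connections vs. one
# multiplexed SPDY connection. All constants are illustrative assumptions.

HTTP1_CONNECTIONS = 6        # common browser per-host connection limit
CONN_BUFFER_KB = 64          # assumed socket + app buffer cost per connection
FRAME_HEADER_BYTES = 8       # SPDY frames carry an 8-byte header
MAX_FRAME_BYTES = 16 * 1024  # assumed per-implementation data-frame cap
REQUESTS = 100
AVG_RESPONSE_KB = 20

# Cost side: buffers held open for each connection.
http1_memory_kb = HTTP1_CONNECTIONS * CONN_BUFFER_KB
multiplexed_memory_kb = 1 * CONN_BUFFER_KB

# Framing overhead on the multiplexed connection: one header per data frame.
frames_per_response = -(-AVG_RESPONSE_KB * 1024 // MAX_FRAME_BYTES)  # ceiling
framing_overhead_kb = REQUESTS * frames_per_response * FRAME_HEADER_BYTES / 1024

print(f"HTTP/1.1 buffers:    {http1_memory_kb} KB across 6 connections")
print(f"Multiplexed buffers: {multiplexed_memory_kb} KB, plus "
      f"{framing_overhead_kb:.2f} KB of framing for {REQUESTS} responses")
```

With these (hypothetical) numbers, the framing cost is a couple of kilobytes
against hundreds of kilobytes saved in per-connection state, which is the
shape of the trade-off being described.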

Mike



> With plenty of bias, I agree.
>
> AYJ
>
>

Received on Monday, 2 April 2012 14:29:00 UTC