Re: Backwards compatibility

On Mon, Apr 2, 2012 at 1:59 PM, Adrien W. de Croy <adrien@qbik.com> wrote:

>
> ------ Original Message ------
> From: "Mike Belshe" <mike@belshe.com>
> To: "Peter Lepeska" <bizzbyster@gmail.com>
> Cc: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
> Sent: 3/04/2012 6:04:16 a.m.
> Subject: Re: Backwards compatibility
>
>
>
> On Mon, Apr 2, 2012 at 10:56 AM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>
>> Big bites do seem to go down easier than lots of little ones. The problem
>> is that SPDY is eating *two* shit sandwiches, trying to make the web
>> both fast and secure, at the same time. This bite is more than most can
>> chew and so adoption will be much slower b/c of the SSL requirement, in my
>> opinion.
>
>
> It certainly doesn't make the transition happen faster, I agree with you
> on that front.
>
> But responsible content providers are already moving to SSL (twitter,
> facebook, google, etc) because they need to for user protection, data
> integrity, and legal reasons.  We, as protocol designers, need to be making
> secure communications much easier for everyone.  We have an opportunity to
> do this now which may never come up again.
>
>
> I think we will need to take an intermediate step in 1.1 land:
>
> adoption of proxy support for SSL without tunnelling, e.g.
>
> GET https:// ...
>

Sounds good to me!  But this needs to be combined with SSL to the proxy
itself; otherwise you're sending an end-to-end secure request in the
clear between the client & proxy.
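
To make that combination concrete, here is a minimal client-side sketch
(the proxy host, port, and origin names are placeholders, and it assumes a
proxy willing to accept absolute-form GET for https:// URLs over a TLS
connection to the proxy itself, rather than requiring CONNECT):

    import socket
    import ssl

    # Placeholder endpoints -- purely illustrative, not a real deployment.
    PROXY_HOST, PROXY_PORT = "proxy.example.net", 3129
    ORIGIN = "www.example.com"

    ctx = ssl.create_default_context()
    with socket.create_connection((PROXY_HOST, PROXY_PORT)) as raw:
        # TLS on the client-to-proxy hop, so the request below is not
        # sent in the clear between client & proxy.
        with ctx.wrap_socket(raw, server_hostname=PROXY_HOST) as tls:
            # Absolute-form request line ("GET https:// ..."), instead
            # of tunnelling with CONNECT.
            request = (
                "GET https://{0}/ HTTP/1.1\r\n"
                "Host: {0}\r\n"
                "Connection: close\r\n\r\n"
            ).format(ORIGIN)
            tls.sendall(request.encode("ascii"))
            print(tls.recv(4096).decode("latin-1", "replace"))

Presumably the proxy would then make its own TLS connection to the origin
on the client's behalf, which is exactly the trust trade-off under
discussion.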



> Even just Gmail, FB and search etc moving to SSL/TLS is creating more and
> more pressure for proxy vendors to implement MITM.  Do we really want to
> force it in that direction?
>

Exactly - I think you and I were writing the same thing in separate threads
at the same time :-)  Even without HTTP/2.0, the world is moving to SSL at
such a rate that proxy solutions are no longer able to provide adequate
protection for corporations.

Mike




>
> Adrien
>
>
>
>
> Mike
>
>
>
>>
>>  Peter
>>
>> On Mon, Apr 2, 2012 at 1:31 PM, Mark Watson <watsonm@netflix.com> wrote:
>>
>>>  All - the message exchange below was supposed to be on-list - my
>>> mistake, hitting reply instead of reply-all ...
>>>  On Apr 1, 2012, at 1:15 PM, Mike Belshe wrote:
>>>
>>>
>>>
>>> On Sat, Mar 31, 2012 at 9:47 AM, Mark Watson <watsonm@netflix.com> wrote:
>>>
>>>>  Mike, all,
>>>>
>>>> This thread has rather gone into the weeds and is missing the point of
>>>> my original comment.
>>>>
>>>> I did not intend a single throw-away paragraph to be a complete
>>>> technical
>>>> proposal.
>>>>
>>>> My point was that deploying a new protocol at scale is hard. Look at
>>>> IPv6. It's not even mainly a technical problem. There are HTTP1.x-specific
>>>> assumptions throughout the network - people have paid money to put them
>>>> there, so presumably they have goals which would be undermined if
>>>> large amounts of traffic moved to a new protocol.
>>>>
>>>> As long as the fraction of HTTP1.x-compatible traffic stays close to its
>>>> current value, you will not see deployment problems with new protocols. But
>>>> if you want to migrate large swathes of traffic to a new protocol, many
>>>> things have to be upgraded.
>>>>
>>>> Before embarking on this, then, we should have a very firm idea of the
>>>> expected gains. Which means comparing what can be achieved with a new
>>>> protocol to what can be achieved through simple extensions to the existing
>>>> one.
>>>>
>>>> It seems to me, superficially, that several of the proposed
>>>> enhancements could be done this way.
>>>>
>>>> It's true that there is a region where the difference between
>>>> 'extensions' and 'new protocol' is partly marketing. I'm not sure we should
>>>> go there. But it's also true there is a social engineering aspect to this
>>>> problem: people are often overly resistant to revolutionary changes and
>>>> prefer changes that appear evolutionary.
>>>>
>>>> Having said all the above, it may be sufficient that there is
>>>> single-RTT fallback to HTTP1.1 in the presence of HTTP1.1 intermediaries.
>>>>
>>>
>>> Heh - I think we're in more agreement than it might seem.
>>>
>>> We had a philosophy when designing spdy:  "If you're going to eat a shit
>>> sandwich, take big bites".
>>>
>>> What does that mean, you might ask?
>>>
>>> Prior to starting SPDY, we had tried all sorts of incremental changes to
>>> HTTP - header compressors, data compressors, bundling, multiplexing, etc
>>> etc.  Some of these could be done with very small semantic changes to HTTP.
>>>  But, each of those semantic changes meant that every existing HTTP
>>> implementation out there (browsers, servers, or proxies) had to be made
>>> aware of the change and deal with it appropriately...
>>>
>>> In the end, the shitty part of changing HTTP is that changing the
>>> infrastructure is a ton of work (this is what you're rightly pointing out).
>>>  We knew we had several significant changes to make to HTTP.   Rather than
>>> doing them incrementally, with each one needing to figure out how to
>>> change the infrastructure all over again, we decided taking one big bite was
>>> the preferred approach.  Solve all of these problems, but only change the
>>> infrastructure once.
>>>
>>> I hope this metaphor isn't too off color and that it demonstrates the
>>> point.
>>>
>>>
>>> MW: Sure. Fortunately I wasn't eating breakfast at the time ...
>>>
>>>
>>>
>>> Regarding interleaved vs non-interleaved streams:  It sure seems easier
>>> to do what you're proposing, but I suspect that your proposal won't work.
>>>  For example, how would you do a comet-style hanging-GET without
>>> interleaved streams?
>>>
>>>
>>> MW: I'm not familiar with exactly what that is, but I think the answer
>>> is use a separate connection.
>>>
>>>   This could be mitigated by opening up more parallel connections, but
>>> that is undesirable too.
>>>
>>>
>>> MW: I'm not really sure why. I can see that a parallel connections arms
>>> race is not a good idea - but we are all talking about things that reduce
>>> the need for parallel connections. Parallel connections are, also, a way to
>>> get a different overall congestion control behavior in a way that is
>>> reasonably safe.
>>>
>>>
>>> BTW - did you mean to reply to all?
>>>
>>>
>>> Yes, fixed.
>>>
>>>  Mike
>>>
>>>
>>>
>>>>
>>>> ...Mark
>>>>
>>>>
>>>> Sent from my iPhone
>>>>
>>>> On Mar 30, 2012, at 9:18 PM, "Mike Belshe" <mike@belshe.com> wrote:
>>>>
>>>>
>>>>
>>>>  On Sat, Mar 31, 2012 at 3:03 AM, Mark Watson <watsonm@netflix.com> wrote:
>>>>
>>>>>
>>>>>  On Mar 30, 2012, at 4:46 PM, Mike Belshe wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 30, 2012 at 6:53 PM, Mark Watson <watsonm@netflix.com> wrote:
>>>>>
>>>>>>
>>>>>>  On Mar 30, 2012, at 9:29 AM, William Chan (陈智昌) wrote:
>>>>>>
>>>>>> On Fri, Mar 30, 2012 at 6:13 PM, Mark Watson <watsonm@netflix.com> wrote:
>>>>>>
>>>>>>> All,
>>>>>>>
>>>>>>> I'd like to make a plea/request/suggestion that wherever possible
>>>>>>> new features be added incrementally to HTTP1.1, in a backwards compatible
>>>>>>> way, in preference to a "new protocol" approach. A "new protocol" is
>>>>>>> required only if it is not technically possible (or especially awkward) to
>>>>>>> add the feature in a backwards compatible way.
>>>>>>>
>>>>>>> The object should be to enable incremental implementation and
>>>>>>> deployment on a feature-by-feature basis, rather than all-or-nothing.
>>>>>>> HTTP1.1 has been rather successful and there is an immense quantity of code
>>>>>>> and systems - including intermediaries of various sorts - that work well
>>>>>>> with HTTP1.1. It should be possible to add features to that code and those
>>>>>>> systems without forklifting substantial amounts of it. It is better if
>>>>>>> intermediaries that do not support the new features cause fallback to
>>>>>>> HTTP1.1, rather than just blocking the new protocol. In
>>>>>>> particular, it should not cost a round trip to fall back to HTTP1.1. It is
>>>>>>> often lamented that the Internet is now the "port-80 network", but at least
>>>>>>> it is that.
>>>>>>>
>>>>>>
>>>>>> Don't forget port 443. And I agree, it should not cost a round trip
>>>>>> to fall back to HTTP/1.1.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Many of the features contemplated as solutions to the problems of
>>>>>>> HTTP1.1 can be implemented this way: avoiding head-of-line blocking of
>>>>>>> responses just requires a request id that is dropped by intermediaries that
>>>>>>> don't support it and echoed on responses. Request and response header
>>>>>>> compression can be negotiated - again with a request flag that is just
>>>>>>> dropped by unsupporting intermediaries. Pipelined requests could be
>>>>>>> canceled with a new method. These things are responsible for most of the
>>>>>>> speed improvements of SPDY, I believe.
>>>>>>>
>>>>>>
>>>>>> It's unclear to me how this would work. Are you suggesting waiting for an
>>>>>> HTTP request/response pair to figure out if the id gets echoed, before
>>>>>> trying to multiplex requests? Or would you rely on HTTP pipelining as a
>>>>>> fallback if the ids don't get echoed?
>>>>>>
>>>>>>
>>>>>> Send the requests (yes, pipelined). If they come back without ids,
>>>>>> then they are coming back in the order they were sent. If they come back
>>>>>> with ids, then that tells you which response is which.
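>>>>>>
>>>>>> As a rough illustration of that matching logic (the "Request-Id"
>>>>>> header name and the dict-style parsed headers are hypothetical,
>>>>>> just to show the idea):
>>>>>>
>>>>>>     from collections import deque
>>>>>>
>>>>>>     sent = deque()  # ids of pipelined requests, in the order sent
>>>>>>
>>>>>>     def match_response(response_headers):
>>>>>>         """Return the id of the request this response answers."""
>>>>>>         echoed = response_headers.get("Request-Id")
>>>>>>         if echoed is not None:
>>>>>>             # Id survived end-to-end: responses may be out of
>>>>>>             # order, so match by the echoed id.
>>>>>>             sent.remove(echoed)
>>>>>>             return echoed
>>>>>>         # Id was dropped somewhere: plain HTTP/1.1 pipelining rules
>>>>>>         # apply, so responses arrive in the order requests were sent.
>>>>>>         return sent.popleft()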
>>>>>>
>>>>>
>>>>> You can't do this until you've got confirmation that the server is
>>>>> going to give you an HTTP/1.1 response.  It could come back HTTP/1.0.
>>>>>
>>>>> So do we first have to do a 1.1 request successfully (with 1.1
>>>>> response) before we can ever attempt to do a pipelined upgrade?
>>>>>
>>>>>
>>>>> For each server, yes. Servers don't often get downgraded from 1.1 to
>>>>> 1.0, so you could cache that result for quite a while.
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>  The former incurs a large latency cost. The latter depends very
>>>>>> much on how deployable you consider pipelining to be on the overall internet.
>>>>>>
>>>>>>
>>>>>> It's certainly widely deployed in servers and non-transparent
>>>>>> proxies. Non-supporting non-transparent proxies are easily detected. Yes,
>>>>>> broken transparent proxies are a (small) problem, but you can also detect
>>>>>> these.
>>>>>>
>>>>>>  I am skeptical it is sufficiently deployable and we on Chromium are
>>>>>> gathering numbers to answer this question (http://crbug.com/110794).
>>>>>>
>>>>>>
>>>>>> Our internal figures suggest that more than 95% of users can
>>>>>> successfully use pipelining. That's an average. On some ISPs the figure is
>>>>>> much lower.
>>>>>>
>>>>>
>>>>> Do you have a research result to cite here?  Sounds interesting.  5%
>>>>> failures is pretty high.
>>>>>
>>>>>
>>>>> No, these are just internal figures right now. Yes, it does seem high;
>>>>> I've a feeling many of those are false negatives where we avoid pipelining
>>>>> unnecessarily.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>  Also, pipelining is clearly inferior to multiplexing.
>>>>>>
>>>>>>
>>>>>> Yes, but perhaps in practice not by much. To render a page you need
>>>>>> all the objects, so from a time-to-page-load perspective it makes no
>>>>>> difference how you multiplex them, as long as the link remains fully
>>>>>> utilized. To see some difference you need some notion of object importance
>>>>>> and some metric for 'page loaded except for the unimportant bits'. You send
>>>>>> the most important requests first. Even then it's not clear that
>>>>>> multiplexing within objects will perform significantly better than
>>>>>> object-by-object sending.
>>>>>>
>>>>>
>>>>>
>>>>> Don't forget that pipelining does *not* apply to all resources.  Even
>>>>> when pipelining works end-to-end, browsers need to take great care not to
>>>>> accidentally pipeline a critical resource behind a slow one (like a hanging
>>>>> GET).  This leads to browsers doing tricks like "only pipeline images
>>>>> together" or other subsets of pipelining.
>>>>>
>>>>>
>>>>> I was assuming you could avoid the head-of-line blocking with an
>>>>> extension that allows out-of-order responses.
>>>>>
>>>>>
>>>>> But when we consider pipelining as a fallback for SPDY, this all falls
>>>>> apart.  SPDY did not have these restrictions.  So now, SPDY would need to
>>>>> run in some sort of degraded mode governing which types of requests are
>>>>> pipelined, just so it can fall back to an HTTP/1.1 protocol that the server
>>>>> might not support (because it could be HTTP/1.0) or which the user might not
>>>>> support because he's one of the unlucky 5% (according to Mark's data) where
>>>>> pipelining just breaks altogether.
>>>>>
>>>>> All in all, we've now compounded 3 unique restrictions on the initial
>>>>> set of requests to work around past bugs, all in order to support use
>>>>> of the Upgrade header.
>>>>>
>>>>> Realistically, you're going to get one request on the upgrade, and
>>>>> you'll have to wait to open up the parallel requests.  This is a
>>>>> significant restriction of the Upgrade process - it requires a round trip
>>>>> before the real protocol can kick into full gear.
>>>>>
>>>>> This is highly annoying, but for initial web page loads it probably
>>>>> won't be a significant burden because the browser initially only has one
>>>>> URL.  For page reloads, or validations, or subsequent pages on reconnect,
>>>>> it will be a performance hit.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Interleaving within responses does require some kind of framing
>>>>>>> layer, but I'd like to learn why anything more complex than interleaving
>>>>>>> the existing chunked-transfer chunks is needed (this is also especially
>>>>>>> easy to undo).
>>>>>>>
>>>>>>
>>>>>> Sorry, I'm not sure I understand what you mean by interleaving
>>>>>> existing chunked-transfer chunks. Are these being interleaved across
>>>>>> different responses (that requires framing, right?).
>>>>>>
>>>>>>
>>>>>> Interleaving data from multiple responses requires some kind of
>>>>>> framing, yes. Chunked transfer encoding is a kind of framing that is
>>>>>> already supported by HTTP. Allowing chunks to be associated with different
>>>>>> responses would be a simple change. Maybe it feels like a hack? That was
>>>>>> my question: why isn't a small enhancement to the existing framing
>>>>>> sufficient?
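>>>>>>
>>>>>> For illustration only (the ";id=" chunk extension is hypothetical
>>>>>> and the response headers are elided), two responses interleaved on
>>>>>> one connection might look roughly like this on the wire:
>>>>>>
>>>>>>     # Each chunk carries a hypothetical "id" chunk extension so a
>>>>>>     # client that negotiated the feature can reassemble the two
>>>>>>     # interleaved response bodies.
>>>>>>     wire = (
>>>>>>         b"5;id=1\r\nHello\r\n"       # first chunk of response 1
>>>>>>         b"7;id=2\r\n<html>\n\r\n"    # first chunk of response 2
>>>>>>         b"6;id=1\r\n world\r\n"      # response 1 continues
>>>>>>         b"0;id=1\r\n\r\n"            # last chunk of response 1
>>>>>>         b"0;id=2\r\n\r\n"            # last chunk of response 2
>>>>>>     )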
>>>>>>
>>>>>>
>>>>> Even if you could hack it into a chunk, that's a real jumbled mess.
>>>>>  Why do you want to do this?  It doesn't give you backward compatibility in
>>>>> any way (existing browsers won't know what to do with these nonstandard
>>>>> chunks anyway); it's just a mess for the sake of a mess?
>>>>>
>>>>>
>>>>> So, your answer to my question is fairly clear ;-)
>>>>>
>>>>> It doesn't feel like such a 'mess' to me - we're talking about
>>>>> negotiating use of new protocol elements. They're only used if both ends
>>>>> support them so, yes, the only kind of backwards compatibility is that the
>>>>> use of framing is negotiated, rather than assumed from the start. My point
>>>>> was that you don't need a whole shim layer to do this, because HTTP already
>>>>> has framing. Perhaps it makes little difference, but it means you can
>>>>> develop and deploy functionality incrementally, rather than all-or-nothing.
>>>>>
>>>>
>>>> Your approach is just out-of-order pipelining, right?  It's not an
>>>> interleaved multiplexing system.  And you're right, you don't necessarily
>>>> need a full framing layer to support that.  (unless you want flow control,
>>>> which you probably do, but haven't considered yet)
>>>>
>>>> We can do a lot better than that, that's all.
>>>>
>>>> BTW - more than one implementor has come to me and said, "wow - spdy
>>>> framing was really easy to implement".  It's not like the framing layer is
>>>> a hard concept.
>>>>
>>>> I guess overall - I'm just not sure what your goals are.  You seem to
>>>> want it to look like HTTP even though it won't be HTTP and even though you
>>>> sacrificed a key part of the performance.  But what is the point of that?
>>>>  You're no longer trying to make it as fast as you can, so who is your
>>>> target market?
>>>>
>>>> Mike
>>>>
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>>
>>>>>>> Putting my question another way, what is the desired new feature
>>>>>>> that really *requires* that we break backwards compatibility with the
>>>>>>> extremely successful HTTP1.1?
>>>>>>>
>>>>>>
>>>>>> Multiplexing,
>>>>>>
>>>>>>
>>>>>> See my question above
>>>>>>
>>>>>>  header compression,
>>>>>>
>>>>>>
>>>>>> Easily negotiated: an indicator in the first request indicates that
>>>>>> the client supports it. If that indicator survives to the server, the
>>>>>> server can start compressing response headers right away. If the client
>>>>>> receives a compressed response it can start compressing future requests on
>>>>>> that connection. It's important that this indicator be one which is dropped
>>>>>> by intermediaries that don't support compression.
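>>>>>>
>>>>>> A minimal sketch of that negotiation from the client side (the
>>>>>> "Accept-Compressed-Headers" indicator name is hypothetical):
>>>>>>
>>>>>>     class HeaderCompressionState:
>>>>>>         """Per-connection negotiation state, client side."""
>>>>>>
>>>>>>         def __init__(self):
>>>>>>             # Always advertised; harmless if an intermediary drops it.
>>>>>>             self.request_indicator = {"Accept-Compressed-Headers": "1"}
>>>>>>             self.compress_requests = False
>>>>>>
>>>>>>         def on_response(self, response_was_compressed):
>>>>>>             # A compressed response proves the whole path understood
>>>>>>             # the indicator, so later requests on this connection can
>>>>>>             # compress their headers too.
>>>>>>             if response_was_compressed:
>>>>>>                 self.compress_requests = True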
>>>>>>
>>>>>>  prioritization.
>>>>>>
>>>>>>
>>>>>> I think you mean "re-prioritization". I can send requests in priority
>>>>>> order - what I can't do is change that order in response to user actions.
>>>>>> How big a deal is this, vs closing the connection and re-issuing
>>>>>> outstanding requests in the new order?
>>>>>>
>>>>>
>>>>> It's the difference between web pages rendering faster or slower.
>>>>>  Load up 100 image requests on your twitter page, and then fetch the images
>>>>> before the JS.  The page loads slower unless you lower the priority of the
>>>>> images.  But you still don't want to add the serialization delays that HTTP has.
>>>>>
>>>>> BTW - the effects of priorities have been measured, and you're welcome
>>>>> to use the existing benchmarking harness to verify yourself that these
>>>>> things are true in real code rather than just theory.  (see
>>>>> dev.chromium.org/spdy).  I wish I had published the tests when I did
>>>>> this long ago - spent a lot of time on it.
>>>>>
>>>>>
>>>>> Again, I don't think you need anything more than the basic ability
>>>>> to return responses out of order to get most of the gains.
>>>>>
>>>>
>>>>
>>>>
>>>>>  Send the requests in priority order and have the server return them
>>>>> in priority order, unless a response is not available in which case other
>>>>> responses can push ahead. The absence of interleaving within responses just
>>>>> reduces the granularity. Request the JS first, then the 100 images. With
>>>>> interleaving, if the JS is available half way through sending image 3, we
>>>>> can start sending the JS right there. Without interleaving you have to wait
>>>>> until the end of image 3.
>>>>>
>>>>> What you don't have is, as I said, "re-prioritization", where the
>>>>> client can change its mind about the priority order after sending the
>>>>> requests - you'd have to close the connection and send the requests again.
>>>>>
>>>>> Not perfect, but I feel you could get a good chunk of the gains, with
>>>>> out-of-order responses and negotiated compression.
>>>>>
>>>>> That's setting aside the significant advantages of small incremental changes
>>>>> to a well-understood, widely deployed, very successful protocol vs the
>>>>> invention and all-at-once deployment of a new one.
>>>>>
>>>>> …Mark
>>>>>
>>>>>
>>>>> Mike
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> …Mark
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> …Mark
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>

Received on Monday, 2 April 2012 21:09:25 UTC