Re[2]: multiplexing -- don't do it

  
The original argument was about whether it's worthwhile specifying 
features that are mandatory to implement, but optional to use.
  
This pipelining thing was given as an example.
  
Personally, whilst I think pipelining would more likely have been fixed 
had it been mandatory to use, it might then have been less likely to be 
included in the spec at all, since in some cases it's quite problematic 
and onerous.
  
So I think we need to get back to the original point is all... 
currently we're extrapolating from 1 data point :)
  
Actually I don't even have a strong feeling that it's worth trying to 
come up with a rule for whether optional-to-use things are bad or not.  
It needs to be case by case. I'm sure for every example where o-t-u 
was bad, others can be found where it was good.
  
Adrien
  

------ Original Message ------
From: "William Chan (陈智昌)" <willchan@chromium.org>
To: "Roy T. Fielding" <fielding@gbiv.com>
Cc: "Mike Belshe" <mike@belshe.com>;"ietf-http-wg@w3.org Group" 
<ietf-http-wg@w3.org>
Sent: 4/04/2012 11:46:29 a.m.
Subject: Re: multiplexing -- don't do it
>On Wed, Apr 4, 2012 at 12:23 AM, Roy T. Fielding <fielding@gbiv.com> wrote:
> On Mar 31, 2012, at 4:11 AM, Mike Belshe wrote:
> >On Sat, Mar 31, 2012 at 8:57 AM, Julian Reschke <julian.reschke@gmx.de> wrote:
> > On 2012-03-31 01:53, Mike Belshe wrote:
> >  ...  
> >  Before thinking this way we should look at how well other mandatory
> >  but optional to use features have turned out.
> >  
> >  One such example is pipelining.  Mandatory for a decade, but
> >  optional to use.  We still can't turn it on.
> >  ...
 >  
 >  But then many people have it turned on, and it seems to be on by 
 >  default in Safari mobile. Maybe the situation is much better than 
 >  you think.
>  
>  The data is overwhelming that it doesn't work.
  
  It works just fine.  The data shows only that a general-purpose
  browser, that doesn't even bother to report the nature of network
  protocol errors, encounters a small percentage of network problems
  that exceed its users' tolerance for failure conditions because its
  users have no control over their network.  That might indicate that
  the browser cannot deploy it, or it might indicate that there was a
  protocol bug on the browser that failed on edge cases (just like
  Netscape 1-3 had a buffer reading bug that would only trigger if the
  blank line CRLF occurred on a 256 byte buffer boundary).
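
A small illustrative sketch of that class of bug (in Python, and
emphatically not the actual Netscape code): a parser that only searches
for the end-of-headers marker inside each fixed-size read will miss it
whenever the blank-line CRLF straddles the read boundary.

    import io

    BUFSIZE = 256

    def end_of_headers_buggy(stream):
        """Search for the end of the header block one read at a time."""
        offset = 0
        while True:
            chunk = stream.read(BUFSIZE)
            if not chunk:
                return -1                     # never found -- the bug
            pos = chunk.find(b"\r\n\r\n")     # only looks *within* this read
            if pos != -1:
                return offset + pos + 4
            offset += len(chunk)              # state at the read edge is lost

    def end_of_headers_correct(stream):
        """Keep a running buffer so a marker split across reads is seen."""
        data = b""
        while True:
            chunk = stream.read(BUFSIZE)
            if not chunk:
                return -1
            data += chunk
            pos = data.find(b"\r\n\r\n")
            if pos != -1:
                return pos + 4

    # Craft headers so the final "\r\n\r\n" straddles the 256-byte boundary:
    # the prefix is 24 bytes and the padding 230, so the marker spans
    # bytes 254-257.
    prefix = b"HTTP/1.0 200 OK\r\nX-Pad: "
    headers = prefix + b"a" * (BUFSIZE - len(prefix) - 2) + b"\r\n\r\n"
    message = headers + b"hello"
    print(end_of_headers_buggy(io.BytesIO(message)))    # -1: boundary case missed
    print(end_of_headers_correct(io.BytesIO(message)))  # 258: parsed correctly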
 
 I'm starting to get data back, but not in a state that I'd reliably 
 release. That said, there are very clear indicators of intermediaries 
 causing problems, especially when the pipeline depth exceeds 3 
 requests.
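
 The obvious mitigation is to cap the depth; a rough sketch (the host,
 paths, and limit below are illustrative, not taken from any shipping
 client): pipeline requests on one connection, but never keep more than
 three outstanding, and drain the responses before sending the next batch.

    import socket

    MAX_PIPELINE_DEPTH = 3          # illustrative limit
    HOST = "example.com"            # hypothetical host
    PATHS = ["/a", "/b", "/c", "/d", "/e"]

    def read_header_block(reader):
        """Read one response's status line and headers, up to the blank line."""
        lines = []
        while True:
            line = reader.readline()
            if line in (b"\r\n", b"\n", b""):
                return b"".join(lines)
            lines.append(line)

    def pipelined_heads(host, paths, depth=MAX_PIPELINE_DEPTH):
        """HEAD each path over one connection, at most `depth` in flight.

        HEAD keeps the sketch simple: responses carry no body, so each
        one is just a header block."""
        statuses = []
        with socket.create_connection((host, 80)) as sock:
            reader = sock.makefile("rb")
            for i in range(0, len(paths), depth):
                batch = paths[i:i + depth]
                # Write the whole batch back-to-back -- the pipelined part...
                sock.sendall(b"".join(
                    b"HEAD %s HTTP/1.1\r\nHost: %s\r\n\r\n"
                    % (p.encode(), host.encode())
                    for p in batch))
                # ...then drain one response per request before sending more.
                for _ in batch:
                    statuses.append(
                        read_header_block(reader).split(b"\r\n", 1)[0])
        return statuses

    # print(pipelined_heads(HOST, PATHS))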
 
  
  It doesn't indicate anything about whether the feature works in HTTP
  for clients that are not browsers or for networks where the
  administrators do control their own deployment of intermediaries.
 
 Why would the case be different for HTTP clients that are not 
 browsers? Are you implying that browsers are more broken in terms of 
 pipelining support than other HTTP clients? Or simply that they 
 communicate over different networks?
 
 As for networks that control their own deployments of intermediaries, 
 are these entirely private networks? If you go over the public 
 internet at any point, I'd expect to encounter some form of 
 intermediary not controlled by administrators.
 
  
  >Data points:
  >a) chrome study showing connectivity results on port 80, 443, and 
  >61xxx for websockets showed >10% failures on port 80 for non HTTP 
  >protocols.
 >
 >Unrelated to pipelining.
 >
 >>b) no major browser has deployed pipelining.  It's not like we don't 
 >>want to.  We all want to!  Ask Patrick McManus for details - to 
 >>think this works is just wishful thinking. If all we had to do was 
 >>turn on pipelining 3 years ago, we would have done it.
>>
>>Major browsers care about all networks and all customers.  Most of
>>the clients on the Web are not major browsers.  Most of the systems
>>on the Web that use HTTP
>
>That's a fair point, but major browsers are probably in general more 
>important, due to the vast number of users. Is that widely disagreed? 
>Sorry if I'm blinded by my personal bias since I work on a browser. I 
>hope this naive claim won't come off too arrogant.
> 
 > pipelining deploy it within environments wherein they do control the
 > network and can rubbish the stupid intermediaries that fail to
 > implement it correctly.
 
 What are these environments? Are they private networks? In these 
 cases, is HTTP pipelining that big a win? Do these networks operate on 
 a global scale? Or are they more local? If local, I'd expect the RTTs 
 to be much lower, and pipelining to be less of a win.
 
 Also, I'm going to take the opportunity to ask a dumb question (sorry, 
 I lack your guys' experience with all the uses of HTTP). To what 
 extent do these other environments matter? If you don't run over the 
 public internet, instead running over private networks, can't you run 
 whatever protocol you want anyway? Is it more about saving time and 
 not having to write more code? Can they just use HTTP/1.1 and forget 
 HTTP/2.0?
 
  The rest can and do tolerate 5% failure rates because they actually
  report errors to the user and then the user fixes their own network
  problem.
  
  >For the record - nobody wants to avoid using port 80 for new 
  >protocols.  I'd love to!  There is no religious reason that we don't 
  >- it's just that we know, for a fact, that we can't do it without 
  >subjecting a non-trivial number of users to hangs, data corruption, 
  >and other errors.  You might think it's ok for someone else's browser 
  >to throw reliability out the window, but nobody at Microsoft, 
  >Google, or Mozilla has been willing to do that...
  >
  >As for mobile safari - I mentioned this in my talk the other day - 
  >it's a bit of a conundrum.  Android's browser (not chrome) also turns 
  >on pipelining.  But I know that neither Apple nor the Android team 
  >have produced data or analyzed the success or failures of 
  >pipelining.  Mobile browsing is downright awful (due to bad content, 
  >networking errors, and other things).  It could be that mobile 
  >networks have fewer interfering proxies, or it could be that these 
  >errors are just getting blamed on other mobile network glitches.  I 
  >honestly don't know.  I'd love to see data on the matter.
 >
 >Mobile networks use proxies that are owned by the mobile network.
 >That is why they can and do implement pipelining.
>
>It's not clear to me which intermediaries are causing the problems. 
>Your statement here seems to be predicated on the problematic 
>intermediaries being located closer to the client. Do we have any data 
>to support this?
> 
> 
> We have to realize that HTTP is used everywhere.  The problems you
> have encountered while writing a general-purpose browser are not the
> same problems that I encounter while writing a spider and a content
> management system, what Samsung encounters when writing a TV and a
> refrigerator, what Willy encounters while writing a proxy, etc.
> There is no universal set of features for HTTP.
> 
> I have seen dozens of systems over the years deploy products that are
> entirely dependent on chunked requests and never see a single problem
> with them because they are interacting with an Apache module that uses
> the chunked parser that I wrote.  They don't give a rat's ass about
> your experience with a general-purpose browser making use of general
> Internet access without any control over the intermediaries.  That is
> not a problem they share.
> 
> They still need a standard for HTTP that includes the features they use.
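
For anyone who has not had to emit one, the framing those products rely
on is simple; a minimal sketch (the URL and payload are made up, and this
is illustration, not any particular implementation):

    def chunked_body(chunks):
        """Encode an iterable of byte strings as an HTTP/1.1 chunked body."""
        out = bytearray()
        for chunk in chunks:
            if not chunk:
                continue                    # an empty chunk would end the body early
            out += b"%x\r\n" % len(chunk)   # chunk-size in hex, then CRLF
            out += chunk + b"\r\n"          # chunk data, then CRLF
        out += b"0\r\n\r\n"                 # zero-size chunk terminates the body
        return bytes(out)

    request = (
        b"POST /ingest HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
    ) + chunked_body([b"first part, ", b"second part"])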
 
 What features do they need beyond what's offered in HTTP/1.1? Or is 
 the assumption that we want to completely kill off HTTP/1.1? What 
 about Mike's point in his httpbis presentation that we may want 
 different protocols for the "backoffice" and the general internet?
  
  
  >Either way - until someone produces data to contradict the current 
  >major browser data - we need to stop dreaming that port 80 is viable 
  >for anything other than pure HTTP.  The data we have says its not.
 >
 >You must be thinking of some other thread.  An exchange over port 80
 >will either work or it will not -- the trick is to design the protocol
 >so that it can succeed, or fails in a safe and recognizable way.
>
>And falls back to HTTP/1.1 in a reasonably fast manner that does not 
>significantly degrade user experience.
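
A rough sketch of that shape, assuming an Upgrade-style offer on port 80
(the upgrade token and timeout below are placeholders, not anything
agreed): send a normal HTTP/1.1 request that also offers the upgrade,
switch on a prompt 101, and treat anything else -- including a stall --
as the plain HTTP/1.1 path, so the fallback costs the user nothing
noticeable.

    import socket

    UPGRADE_TOKEN = b"new-proto/1"   # placeholder token
    STALL_TIMEOUT = 2.0              # seconds; placeholder value, kept short

    def probe_upgrade(host, path=b"/"):
        """Return 'upgraded' or 'http/1.1'; a real client keeps the socket."""
        sock = socket.create_connection((host, 80), timeout=STALL_TIMEOUT)
        reader = sock.makefile("rb")
        sock.sendall(
            b"GET " + path + b" HTTP/1.1\r\n"
            b"Host: " + host.encode() + b"\r\n"
            b"Connection: Upgrade\r\n"
            b"Upgrade: " + UPGRADE_TOKEN + b"\r\n"
            b"\r\n")
        try:
            status_line = reader.readline()
        except socket.timeout:
            sock.close()            # stalled somewhere: a recognizable failure;
            return "http/1.1"       # retry without the Upgrade offer next time
        if status_line.startswith(b"HTTP/1.1 101"):
            return "upgraded"       # the new protocol takes over this connection
        return "http/1.1"           # already a normal 1.1 response; nothing lost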
> 
> 
> >Either produce new data or admit you don't know and trust what the 
> >browser developers are telling you.
 >
 >Hah!  That's a good one.
 >
 >Regardless, I consider some form of multiplexing to be a requirement
 >for whatever replaces HTTP/1.1, since there is no better reason to
 >replace HTTP/1.1 (tokenizing or compression are hardly worth the
 >bother given how quickly mobile is catching up to PCs).  I'd rather
 >just replace TCP; I expect that we'll need a protocol that can operate
 >over multiple mux and non-mux transports, because HTTP/1.1 works right
 >now over many more transports than just TCP and TLS.  But mux over TCP
 >is a reasonable start.
 >
 >....Roy
 >
>
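
To put Roy's closing "mux over TCP" point in concrete terms, a toy
framing sketch: tag each chunk with a stream id and a length so several
exchanges can interleave on one connection.  The layout below is invented
for illustration only; it is not SPDY's framing and not a proposal.

    import struct

    HEADER = struct.Struct("!IH")   # 4-byte stream id, 2-byte payload length

    def frame(stream_id, payload):
        return HEADER.pack(stream_id, len(payload)) + payload

    def deframe(buf):
        """Yield (stream_id, payload) pairs from a byte string of whole frames."""
        offset = 0
        while offset < len(buf):
            stream_id, length = HEADER.unpack_from(buf, offset)
            offset += HEADER.size
            yield stream_id, buf[offset:offset + length]
            offset += length

    # Two exchanges interleaved on one TCP byte stream:
    wire = (frame(1, b"GET /a HTTP/1.1\r\nHost: example.com\r\n\r\n")
            + frame(3, b"GET /b HTTP/1.1\r\nHost: example.com\r\n\r\n")
            + frame(1, b""))   # zero-length frame marks "stream 1 done" in this toy
    for sid, data in deframe(wire):
        print(sid, data[:12])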

Received on Wednesday, 4 April 2012 00:03:33 UTC