Re: multiplexing -- don't do it

Hi William,

On Wed, Apr 04, 2012 at 01:46:29AM +0200, William Chan (陈智昌) wrote:
> > It works just fine.  The data shows only that a general-purpose browser,
> > that doesn't even bother to report the nature of network protocol errors,
> > encounters a small percentage of network problems that exceed its users'
> > tolerance for failure conditions because its users have no control over
> > their network.  That might indicate that the browser cannot deploy it, or
> > it might indicate that there was a protocol bug on the browser that failed
> > on edge cases (just like Netscape 1-3 had a buffer reading bug that would
> > only trigger if the blank line CRLF occurred on a 256 byte buffer
> > boundary).
> >
> 
> I'm starting to get data back, but not in a state that I'd reliably
> release. That said, there are very clear indicators of intermediaries
> causing problems, especially when the pipeline depth exceeds 3 requests.

Personally I'm still thinking that if we only pipeline on http/2 and not
on http/1, we'll avoid all the associated risks. The real issue I can
imagine with broken intermediaries is related to those which still parse
requests in *packets* without reassembling them as a stream. Once your
requests exceed 1 packet (or the product's limit), you can run into
trouble. I've seen that on Alteon years ago (it was an excellent L3/L4 LB
but marketing made it L7 and that caused issues everywhere).
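
To give a rough idea of the orders of magnitude involved, here is a toy
computation (the request shape and the 1460-byte MSS are just assumptions
for the example):

  # Toy illustration: a pipelined batch of requests easily exceeds one TCP
  # segment, so a device which parses each *packet* as if it were a
  # complete message will see a request cut in the middle.  1460 is just a
  # typical Ethernet MSS, nothing more.
  MSS = 1460
  one_request = ("GET /some/fairly/long/path/img%03d.png HTTP/1.1\r\n"
                 "Host: www.example.com\r\n"
                 "User-Agent: something-quite-verbose/1.0\r\n"
                 "Accept: */*\r\n\r\n")
  batch = "".join(one_request % i for i in range(12))
  print(len(batch), "bytes for 12 pipelined requests")       # 1524 > 1460
  print("request #%d straddles the first segment boundary"
        % (MSS // len(one_request % 0) + 1))                 # request #12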

As we suggested in our draft, once the requests are reduced in size, it is
possible to send the next ones along with the first request in a header
field; this protects us against 1.1-only intermediaries which can't see
them.
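
To make the idea concrete, here is a sketch of the concept only; the
"Pipelined-Requests" header name and the base64 encoding are invented for
the example, they are not what the draft specifies:

  # Piggy-back the follow-up requests inside a header of the first request,
  # so that a 1.1-only intermediary forwards them as an opaque header
  # without ever seeing them as requests.
  import base64

  followups = [
      b"GET /style.css HTTP/1.1\r\nHost: www.example.com\r\n\r\n",
      b"GET /app.js HTTP/1.1\r\nHost: www.example.com\r\n\r\n",
  ]
  encoded = base64.b64encode(b"".join(followups)).decode("ascii")

  first_request = (
      "GET /index.html HTTP/1.1\r\n"
      "Host: www.example.com\r\n"
      f"Pipelined-Requests: {encoded}\r\n"   # hypothetical header name
      "\r\n"
  )
  print(first_request)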

> As for networks that control their own deployments of intermediaries, are
> these entirely private networks? If you go over the public internet at any
> point, I'd expect to encounter some form of intermediary not controlled by
> administrators.

We still need to keep in mind that broken intermediaries are deployed at two
places :
  - inside the users' ISP network (eg: caches, LB, compressors)
  - on the server side

When users complain that they can't access site X or Y, either they
complain to the ISP, which then has to find a quick fix (the last time I
experienced this was just two weeks ago), or the problem is on the server
side and the fix has to be applied there too, otherwise the site loses a
lot of visitors (I experienced this at the same time; the fight was about
whether it was the ISP or the site which was broken; the breakage was
unrelated to HTTP, it was TCP which failed due to the use of CGN).

What must absolutely be avoided are the random hangs that cannot be
diagnosed and the issues which cause random delays. We need a clean
fallback to a working behaviour.
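
Conceptually, something along these lines (a rough sketch only, no
response parsing and deliberately naive error handling):

  # Try the optimistic pipelined behaviour once, and on any timeout or
  # connection error replay the requests the boring way, one connection
  # per request.  Raw bytes only.
  import socket

  def _exchange(host, payload, timeout):
      # one connection: send everything, read until the server closes
      s = socket.create_connection((host, 80), timeout=timeout)
      try:
          s.sendall(payload)
          out = b""
          while True:
              chunk = s.recv(4096)
              if not chunk:
                  return out
              out += chunk
      finally:
          s.close()

  def _req(host, path, close=False):
      extra = b"Connection: close\r\n" if close else b""
      return b"GET %s HTTP/1.1\r\nHost: %s\r\n%s\r\n" % (
          path.encode(), host.encode(), extra)

  def fetch(host, paths, timeout=3.0):
      try:
          # optimistic path: all requests pipelined on a single connection,
          # "Connection: close" on the last one so the server ends the read
          batch = b"".join(_req(host, p, close=(i == len(paths) - 1))
                           for i, p in enumerate(paths))
          return _exchange(host, batch, timeout)
      except (socket.timeout, OSError):
          # a hang or a reset here is exactly the undiagnosable failure we
          # want to avoid, so fall back to the known-good behaviour
          return b"".join(_exchange(host, _req(host, p, close=True), timeout)
                          for p in paths)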

> > pipelining deploy it within environments wherein they do control the
> > network
> > and can rubbish the stupid intermediaries that fail to implement it
> > correctly.
> >
> 
> What are these environments? Are they private networks? In these cases, is
> HTTP pipelining that big a win? Do these networks operate on a global
> scale? Or are they more local? If local, I'd expect the RTTs to be much
> lower, and pipelining to be less of a win.

It depends. Some corporate users are forced to browse from their laptops
via their corporate VPN and proxies. When you run a VPN on top of 3G and
use a proxy inside it, you're exactly like a smartphone user (except that
you don't suffer from the DNS round trip).
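
Back-of-the-envelope, with made-up but plausible numbers, just to show why
pipelining still matters there:

  # Assume 300 ms of RTT through 3G + VPN + corporate proxy, and 20 small
  # requests on one connection.
  rtt = 0.300                     # seconds, assumed round-trip time
  requests = 20
  print("serial   : %.1f s of pure latency" % (requests * rtt))   # 6.0 s
  print("pipelined: %.1f s of pure latency" % (1 * rtt))          # 0.3 s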

> Also, I'm going to take the opportunity to ask a dumb question (sorry, I
> lack your guys' experience with all the uses of HTTP). To what extent do
> these other environments matter? If you don't run over the public internet,
> instead running over private networks, can't you run whatever protocol you
> want anyway? Is it more about saving time and not having to write more
> code? Can they just use HTTP/1.1 and forget HTTP/2.0?

In my opinion we must not fragment 1.1 and 2.0. If 2.0 cannot replace 1.1
everywhere, I will consider it a total failure. It's not acceptable to
design a new protocol about which some people will say "I won't use it
because of X or Y". I much prefer "I'm not compatible right now but am
planning to upgrade".

> It's not clear to me which intermediaries are causing the problems. Your
> statement here seems to be predicated on the problematic intermediaries
> being located closer to the client. Do we have any data to support this?

I don't have data either, but from what I have *observed*, the server side
tends to be a lot more careful about what it deploys, so that all of its
infrastructure is compatible with its own needs. ISPs tend to deploy
something which seems OK for most usages (and it's not easy for them to
catch all corner cases). So this practice tends to put more breakage at
the ISP's than at the content provider's. Also, when an ISP deploys a
transparent cache, the economics are much more involved than when a
content provider deploys a load balancer, so the willingness to accept
tradeoffs is easy to understand in the former case.

> What features do they need beyond what's offered in HTTP/1.1? Or is the
> assumption that we want to completely kill off HTTP/1.1? What about Mike's
> point in his httpbis presentation that we may want different protocols for
> the "backoffice" and the general internet?

I do think that what's in the backoffice must converge to 2.0 too, and
that what's outside may be a superset of this. Once again, this is a
personal opinion only. I would hate to tell users "hey, we released 2.0
and only browsers will be able to use it - if you have a web server that
speaks 2.0 and want to put a LB in front of it, you must degrade it to 1.1
first". That does not make much sense in my opinion.

> And falls back to HTTP/1.1 in a reasonably fast manner that does not
> significantly degrade user experience.

This is extremely important (it's the reason why I like the Upgrade
mechanism).
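
For reference, the principle is really simple; in this sketch the host and
the "HTTP/2.0" token are only placeholders:

  # The client asks in plain 1.1; a server which speaks the new protocol
  # answers "101 Switching Protocols", any other server simply answers the
  # request in 1.1 and nothing breaks.
  import socket

  HOST = "www.example.com"
  req = ("GET / HTTP/1.1\r\n"
         f"Host: {HOST}\r\n"
         "Connection: Upgrade\r\n"
         "Upgrade: HTTP/2.0\r\n"
         "\r\n").encode("ascii")

  s = socket.create_connection((HOST, 80))
  s.sendall(req)
  status_line = s.recv(4096).split(b"\r\n", 1)[0]
  if status_line.startswith(b"HTTP/1.1 101"):
      pass   # server agreed: continue on this connection in the new protocol
  else:
      pass   # server ignored the Upgrade: we already have a valid 1.1 answer
  s.close()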

Regards,
Willy
