Re: multiplexing -- don't do it

On Wed, 2012-04-04 at 07:02 +0000, Poul-Henning Kamp wrote:
> In message <20120404054903.GA13883@1wt.eu>, Willy Tarreau writes:
> 
> >> I'm starting to get data back, but not in a state that I'd reliably
> >> release. That said, there are very clear indicators of intermediaries
> >> causing problems, especially when the pipeline depth exceeds 3 requests.
> 
> I always thought that the problem in HTTP/1.x is that you can never
> quite be sure if there is an un-warranted entity coming after a GET,

It's not uncommon to have the consumer RST the whole TCP session when
asked to recv too far beyond the current request it is processing. For
some devices, "too far" appears to be defined as "any new packet". I
presume some variation of this is where Will's data point comes from.
(Often 3 uncompressed requests fit in 1 packet.)
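
To make that concrete, here is a rough sketch (my own illustration, not
anything from Will's test harness; the host and paths are placeholders)
of the pattern in question: three small pipelined GETs written in one
send(), so they normally share a single TCP segment.

import socket

HOST = "example.com"      # placeholder origin, not from this thread
reqs = b"".join(
    b"GET /%d HTTP/1.1\r\nHost: %s\r\n\r\n" % (i, HOST.encode())
    for i in range(1, 4)  # three small pipelined requests,
)                         # roughly 110 bytes total, well under one MSS

s = socket.create_connection((HOST, 80))
s.sendall(reqs)           # all three requests leave in a single write;
                          # a fragile intermediary may RST here rather
                          # than buffer the requests it hasn't reached
print(s.recv(65536).decode(errors="replace"))
s.close()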

That class of bug sounds absurd, but it's really a pretty common pattern.
As an example: hosts that fail TLS False Start (for which I understand
second hand that Chrome needs to keep a black-list) react badly because
there is TCP data queued while they are in a state where they expect
their peer to be quiet. Same pattern.

The lesson to me is that you want to define a tight set of functionality
that is reasonably testable up front - and that's what you can depend on
widely later. Using anything beyond that demands excessive levels of
pain, complexity, and cleverness.

(And all this pipelining talk as if it were equivalent to spdy mux is
kind of silly. Pipelining's intrinsic head-of-line blocking problems are
at least as bad an issue as the interop bugs.)
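
To illustrate the HOL point with a toy model (my own made-up numbers,
not data from this thread, and it ignores shared-bandwidth effects):
pipelined responses must come back in request order, so one slow
resource delays everything queued behind it, while an idealized mux lets
each stream finish on its own.

slow_first = [("/big-report", 5.0), ("/tiny.css", 0.1), ("/tiny.js", 0.1)]

def pipelined_completion_times(resources):
    # HTTP/1.1 pipelining: response i cannot start until response i-1
    # has fully finished, so delays accumulate down the queue.
    done, t = [], 0.0
    for path, cost in resources:
        t += cost
        done.append((path, t))
    return done

def multiplexed_completion_times(resources):
    # Idealized mux: each stream completes after its own cost,
    # independent of what happens to be queued ahead of it.
    return [(path, cost) for path, cost in resources]

print(pipelined_completion_times(slow_first))    # tiny.css done at 5.1s
print(multiplexed_completion_times(slow_first))  # tiny.css done at 0.1s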

-Patrick

Received on Wednesday, 4 April 2012 13:08:27 UTC