Re: What error code when concurrent stream limit is exceeded

On Mon, Nov 24, 2014 at 12:26 AM, Willy Tarreau <w@1wt.eu> wrote:

> On Sun, Nov 23, 2014 at 10:31:51PM -0800, Brad Fitzpatrick wrote:
> > On Sun, Nov 23, 2014 at 10:24 PM, Willy Tarreau <w@1wt.eu> wrote:
> >
> > > On Sun, Nov 23, 2014 at 08:07:45PM -0800, Roberto Peon wrote:
> > > > Yup, modulo the lack of knowledge on the first RT or so...
> > >
> > > There's another case where network latency adds some uncertainty :
> > > when clients abort some streams from time to time and consider they
> > > can immediately open a new one as a replacement (the stop button or
> > > Ctrl-F5). It is possible that for internal scheduling reasons or
> > > flow control in the client, the abort is sent after the new stream
> > > is presented, and that from time to time a new stream is rejected.
> > >
> >
> > That's just a bad client. The frames from the client to the server are
> > serialized over the one connection on which the limits apply.
> >
> > If the client is keeping its state under different locking than it orders
> > its serialized frames on the connection, it's going to have all sorts of a
> > bad time in any case.
>
> It just depends on the client architecture. If the client maintains a
> totally
> asynchronous set of streams in certain tasks (say a thread per stream) and
> relies on a lower layer to handle the connection, then enforces the stream
> limit in the upper layers, it is possible that the connection layer
> is not aware of it and does not necessarily serialize optimally.
>
> Note, I'm not saying that it's the way it should be done, I'm saying that
> protocols cannot always dictate the way software is implemented. Here it
> seems obvious that the limit should be enforced at the connection layer,
> except that I can easily imagine that when mapping some H/1 compatible
> products to H/2, things could differ quite a bit for some time.


I don't mean to be antagonistic or beat this issue to death, but I
disagree. A protocol may not dictate how software is implemented, but it
necessarily influences it a great deal, or at least it's allowed to impose
some constraints on whatever design is chosen.

To imagine an absurd example, an HTTP/2 client could just write random
bytes to the socket from random threads and hope to get an HTTP/2 response.
HTTP/2 requests are small enough that this strategy will eventually work.
It's not a good strategy, though.

Slightly less absurd is to have a "totally asynchronous set of streams in
certain tasks (say a thread per stream)" all speaking mostly-valid HTTP/2,
using a lower layer only to coordinate access to the socket, but not
coordinating actions such as "hey, this thread wants to open a stream...
am I allowed to?" That's still not a good strategy. It might look like
HTTP/2 at the frame level, and usually get lucky, but if it can't count
basic things and track what all its threads are doing, violating
MAX_CONCURRENT_STREAMS won't be its only problem.
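
For what it's worth, enforcing this at the connection layer is cheap. Here's
a rough sketch in Go of the idea (hypothetical types and names like conn,
openStream, maxStreams; not any real package): the same lock that hands out
stream IDs also owns the count of open streams, so a per-stream goroutine
can't get a HEADERS frame onto the wire that the bookkeeping hasn't approved.

// A rough, hypothetical sketch of connection-layer enforcement of the
// peer's SETTINGS_MAX_CONCURRENT_STREAMS. Not Go's real http2 package;
// frame writing is omitted and would be guarded by the same lock (or by
// a single writer goroutine fed from openStream).
package main

import (
	"errors"
	"fmt"
	"sync"
)

type conn struct {
	mu          sync.Mutex
	maxStreams  uint32 // peer's advertised SETTINGS_MAX_CONCURRENT_STREAMS
	openStreams uint32 // streams we have opened and not yet closed
	nextID      uint32 // next client-initiated (odd) stream ID
}

var errTooManyStreams = errors.New("would exceed peer's MAX_CONCURRENT_STREAMS")

// openStream reserves a stream slot and ID, or reports that the peer's
// limit would be exceeded. Because the count and the frame ordering are
// under the same lock, no HEADERS frame can race ahead of this check.
func (c *conn) openStream() (uint32, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.openStreams >= c.maxStreams {
		return 0, errTooManyStreams // caller queues, waits, or opens a new connection
	}
	c.openStreams++
	id := c.nextID
	c.nextID += 2
	return id, nil
}

// closeStream releases the slot once the stream ends or is reset.
func (c *conn) closeStream() {
	c.mu.Lock()
	c.openStreams--
	c.mu.Unlock()
}

func main() {
	c := &conn{maxStreams: 2, nextID: 1}
	for i := 0; i < 3; i++ {
		if id, err := c.openStream(); err != nil {
			fmt.Println("request", i, "blocked:", err)
		} else {
			fmt.Println("request", i, "got stream", id)
		}
	}
}

The point is only that the reservation and the write are serialized together;
whether the caller blocks, queues, or opens another connection when it hits
the limit is its own business.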

I'm not saying implementations can't pick their favorite architecture, but
whatever they pick, it still has to play by the rules. The protocol
shouldn't be modified to make it easier for people who choose the option of
writing random bytes.
