Re: Design Issue: Max Concurrent Streams Limit and Unidirectional Streams

I agree with that, although there's no current WebSocket API use case for
server initiated bidirectional streams. I could imagine it in the future
though.


On Wed, May 1, 2013 at 4:55 PM, Roberto Peon <grmocg@gmail.com> wrote:

> I still want to be able to support the WS API over HTTP/2. It would be
> tragic to have N+1 connections instead of 1 when 1 works better anyway...
>
> -=R
>
>
> On Wed, May 1, 2013 at 10:46 AM, William Chan (陈智昌)
> <willchan@chromium.org> wrote:
>
>> The only benefit to that is supporting non-HTTP/2 application layering
>> semantics, which are intended not to change from HTTP/1.X. So there's
>> currently no use in allowing the server to initiate streams with the
>> client=>server direction open.
>>
>> I see the current trend of our discussions tending towards eliminating
>> complexity and targeting HTTP/2 application layering semantics. If
>> another use case comes up that would require supporting server initiated
>> bidirectional streams, I think at that point it'd be worthwhile to
>> revisit how we do this.
>>
>> I'd like to hear from others if they disagree with my assessment of how
>> most people feel so far. FWIW, I personally would like us to support server
>> initiated bidirectional streams.
>>
>>
>> On Wed, May 1, 2013 at 2:26 PM, James M Snell <jasnell@gmail.com> wrote:
>>
>>> Why not just bring the UNIDIRECTIONAL flag back as a PUSH_PROMISE
>>> frame-specific flag? If a PUSH_PROMISE frame has the unidirectional
>>> flag set, the stream is automatically half-closed in the return
>>> direction. If the flag is unset, the promised stream remains half-open
>>> until the client half-closes or a RST_STREAM is sent.
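
A minimal sketch in Go of the rule being proposed, with made-up frame and
flag names (nothing here is taken from the draft text):

    package main

    import "fmt"

    // Hypothetical flag value and frame shape; the names below are
    // illustrative only and are not taken from any draft.
    const flagUnidirectional uint8 = 0x02

    type pushPromiseFrame struct {
        promisedStreamID uint32
        flags            uint8
    }

    // streamState records whether the client's sending half is closed.
    type streamState struct {
        clientHalfClosed bool // client=>server ("return") direction
    }

    // onPushPromise applies the proposed rule: if UNIDIRECTIONAL is set,
    // the promised stream starts half-closed in the return direction;
    // otherwise it stays open until the client half-closes or a
    // RST_STREAM is sent.
    func onPushPromise(f pushPromiseFrame) streamState {
        return streamState{clientHalfClosed: f.flags&flagUnidirectional != 0}
    }

    func main() {
        fmt.Printf("%+v\n", onPushPromise(pushPromiseFrame{promisedStreamID: 2, flags: flagUnidirectional}))
        fmt.Printf("%+v\n", onPushPromise(pushPromiseFrame{promisedStreamID: 4}))
    }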
>>>
>>> On Mon, Apr 29, 2013 at 2:44 PM, William Chan (陈智昌)
>>> <willchan@chromium.org> wrote:
>>> > Remember we originally *had* a flag for UNIDIRECTIONAL, which we
>>> > removed because it was redundant in the traditional HTTP use cases.
>>> >
>>> >
>>> > On Mon, Apr 29, 2013 at 6:39 PM, Roberto Peon <grmocg@gmail.com>
>>> > wrote:
>>> >>
>>> >> At worst, we burn a flag which states it is half-closed or
>>> >> unidirectional, or provide some other information which identifies
>>> >> the IANA port number for the overlaid protocol or something.
>>> >> Anyway, *shrug*.
>>> >> -=R
>>> >>
>>> >>
>>> >> On Mon, Apr 29, 2013 at 2:32 PM, William Chan (陈智昌)
>>> >> <willchan@chromium.org> wrote:
>>> >>>
>>> >>> On Mon, Apr 29, 2013 at 6:17 PM, James M Snell <jasnell@gmail.com>
>>> >>> wrote:
>>> >>>>
>>> >>>> +1 on this.  I like this approach.
>>> >>>>
>>> >>>> On Apr 29, 2013 2:15 PM, "Roberto Peon" <grmocg@gmail.com> wrote:
>>> >>>>>
>>> >>>>> I had thought to provide no explicit limit for PUSH_PROMISE, just
>>> >>>>> as there is no limit to the size of a webpage, or the number of
>>> >>>>> links upon it. The memory requirements for PUSH are similar or the
>>> >>>>> same (push should consume a single additional bit of overhead per
>>> >>>>> URL, when one considers that the URL should be parsed, enqueued,
>>> >>>>> etc.).
>>> >>>>> If the browser isn't implemented efficiently, or the server is for
>>> >>>>> some unknown reason being stupid and attempting to DoS the browser
>>> >>>>> with many resources that it will never use, then the client sends
>>> >>>>> RST_STREAM for the ones it doesn't want, and makes a request on
>>> >>>>> its own. All tidy.
>>> >>>
>>> >>>
>>> >>> I don't feel too strongly here. I do feel like this is more of an
>>> >>> edge case, possibly important for forward proxies (or reverse
>>> >>> proxies speaking to backends over a multiplexed channel like
>>> >>> HTTP/2). It doesn't really matter for my browser, so unless servers
>>> >>> chime in and say they'd prefer a limit, I'm fine with this.
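
A rough sketch in Go of the client-side cleanup described above, using
hypothetical helper names and an illustrative error code rather than any
real API:

    package main

    import "fmt"

    // refusedStream is an illustrative error code value, not taken from
    // the draft.
    const refusedStream uint32 = 0x7

    // sendRSTStream stands in for emitting a RST_STREAM frame on the
    // connection.
    func sendRSTStream(streamID, errorCode uint32) {
        fmt.Printf("RST_STREAM stream=%d code=0x%x\n", streamID, errorCode)
    }

    // handlePromise keeps promises the client can actually use and
    // refuses the rest, so a misbehaving or over-eager server stops
    // spending effort on them.
    func handlePromise(promisedID uint32, url string, wanted map[string]bool) {
        if wanted[url] {
            fmt.Printf("accepting pushed stream %d for %s\n", promisedID, url)
            return
        }
        sendRSTStream(promisedID, refusedStream)
    }

    func main() {
        wanted := map[string]bool{"/app.js": true}
        handlePromise(2, "/app.js", wanted)
        handlePromise(4, "/unwanted.css", wanted)
    }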
>>> >>>
>>> >>>>>
>>> >>>>> As for PUSH'd streams, the easiest solution is likely to assume
>>> >>>>> that the stream starts out in a half-closed state.
>>> >>>
>>> >>>
>>> >>> I looked into our earlier email threads and indeed this is what we
>>> >>> agreed on
>>> >>> (http://lists.w3.org/Archives/Public/ietf-http-wg/2013JanMar/1106.html).
>>> >>> I voiced some mild objection since if you view the HTTP/2 framing
>>> >>> layer as a transport for another application protocol, then
>>> >>> bidirectional server initiated streams might be nice. But in the
>>> >>> absence of any such protocol, this is a nice simplification.
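
As a sketch of this simplification (Go, with a toy stream type that does
not come from any real implementation): a promised stream would be created
already half-closed in the client=>server direction, so data from the
client on it is simply rejected.

    package main

    import (
        "errors"
        "fmt"
    )

    // stream is a toy model of per-stream state; field names are made up.
    type stream struct {
        id               uint32
        clientHalfClosed bool // client=>server direction
    }

    // newPushedStream creates a server-pushed stream that starts out
    // half-closed in the client=>server direction, per the agreement above.
    func newPushedStream(id uint32) *stream {
        return &stream{id: id, clientHalfClosed: true}
    }

    // sendFromClient is rejected on a pushed stream, since that half is
    // already closed.
    func (s *stream) sendFromClient(data []byte) error {
        if s.clientHalfClosed {
            return errors.New("stream is half-closed: client may not send data")
        }
        return nil
    }

    func main() {
        s := newPushedStream(2)
        fmt.Println(s.sendFromClient([]byte("hello")))
    }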
>>> >>>
>>> >>>>>
>>> >>>>> -=R
>>> >>>>>
>>> >>>>>
>>> >>>>> On Mon, Apr 29, 2013 at 12:33 PM, William Chan (陈智昌)
>>> >>>>> <willchan@chromium.org> wrote:
>>> >>>>>>
>>> >>>>>> On Mon, Apr 29, 2013 at 3:46 PM, James M Snell
>>> >>>>>> <jasnell@gmail.com> wrote:
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>> On Apr 29, 2013 11:36 AM, "William Chan (陈智昌)"
>>> >>>>>>> <willchan@chromium.org> wrote:
>>> >>>>>>> >
>>> >>>>>>> [snip]
>>> >>>>>>>
>>> >>>>>>> >
>>> >>>>>>> >
>>> >>>>>>> > Oops, forgot about that. See, the issue with that is now we've
>>> >>>>>>> > made PUSH_PROMISE as potentially expensive as a HEADERS frame,
>>> >>>>>>> > since it does more than just simple stream ID allocation. I
>>> >>>>>>> > guess it's not really a huge issue, since if it's used
>>> >>>>>>> > correctly (in the manner you described), then it shouldn't be
>>> >>>>>>> > too expensive. If clients attempt to abuse it, then servers
>>> >>>>>>> > should probably treat it in a similar manner as they treat
>>> >>>>>>> > people trying to abuse header compression in all other frames
>>> >>>>>>> > with the header block, and kill the connection accordingly.
>>> >>>>>>> >
>>> >>>>>>>
>>> >>>>>>> Not just "potentially" as expensive. As soon as we get a push
>>> >>>>>>> promise we need to allocate state and hold onto it for an
>>> >>>>>>> indefinite period of time. We do not yet know exactly when that
>>> >>>>>>> compression context can be let go because it has not yet been
>>> >>>>>>> bound to stream state. Do push streams all share the same
>>> >>>>>>> compression state? Do those share the same compression state as
>>> >>>>>>> the originating stream? The answers might be obvious but they
>>> >>>>>>> haven't yet been written down.
>>> >>>>>>
>>> >>>>>>
>>> >>>>>> I guess I don't see per-stream state as being that expensive.
>>> >>>>>> Compression contexts are fixed state on a per-connection basis,
>>> >>>>>> meaning that additional streams don't add to that state. The main
>>> >>>>>> cost, as I see it, is the decompressed headers. I said
>>> >>>>>> "potentially" since that basically only means the URL (unless
>>> >>>>>> there are other headers important for caching due to Vary), and
>>> >>>>>> additional headers can come in the HEADERS frame. Also,
>>> >>>>>> PUSH_PROMISE doesn't require allocating other state, like
>>> >>>>>> backend/DB connections, if you only want to be able to handle
>>> >>>>>> MAX_CONCURRENT_STREAMS of those backend connections in parallel.
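
A sketch in Go of the cost split being described (all names hypothetical):
a promise only pins the decompressed header fields, while the heavier
per-stream resources are allocated lazily when the promised stream actually
opens.

    package main

    import "fmt"

    // promiseRecord is all a PUSH_PROMISE needs to pin: the decompressed
    // header fields (essentially the URL, plus anything relevant for
    // caching/Vary).
    type promiseRecord struct {
        promisedStreamID uint32
        headers          map[string]string
    }

    // streamResources models the heavier per-stream state (e.g. a backend
    // or DB connection) that is only allocated once the promised stream
    // actually opens.
    type streamResources struct {
        backendConn string // placeholder for a real connection handle
    }

    // openStream is where the expensive allocation happens, bounded by
    // MAX_CONCURRENT_STREAMS rather than by outstanding promises.
    func openStream(p promiseRecord) streamResources {
        return streamResources{backendConn: "backend for " + p.headers[":path"]}
    }

    func main() {
        p := promiseRecord{promisedStreamID: 2, headers: map[string]string{":path": "/style.css"}}
        fmt.Printf("promise pinned: %d header fields\n", len(p.headers))
        fmt.Printf("%+v\n", openStream(p))
    }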
>>> >>>>>>
>>> >>>>>> If they're not specified, then we should specify them, but I've
>>> >>>>>> always understood the header compression contexts to be
>>> >>>>>> directional and to apply to all frames sending headers in a given
>>> >>>>>> direction. Therefore there should be two compression contexts in
>>> >>>>>> a connection, one for header blocks being sent and one for header
>>> >>>>>> blocks being received. If this is controversial, let's fork a
>>> >>>>>> thread and discuss it.
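
A sketch of that model in Go (types are illustrative only): one compression
context per direction on the connection, shared by every frame carrying a
header block in that direction, regardless of stream.

    package main

    import "fmt"

    // headerContext stands in for whatever stateful header compressor or
    // decompressor the spec defines; the point is only that there is one
    // per direction on the connection, not one per stream.
    type headerContext struct{ name string }

    func (c *headerContext) process(headers map[string]string) {
        fmt.Printf("%s context handled %d header fields\n", c.name, len(headers))
    }

    // connection holds exactly two contexts: one for header blocks we send
    // (HEADERS, PUSH_PROMISE, ...) and one for header blocks we receive.
    type connection struct {
        send    *headerContext
        receive *headerContext
    }

    func newConnection() *connection {
        return &connection{
            send:    &headerContext{name: "send"},
            receive: &headerContext{name: "receive"},
        }
    }

    func main() {
        c := newConnection()
        // Every outgoing header block on any stream shares c.send; every
        // incoming one shares c.receive.
        c.send.process(map[string]string{":path": "/"})
        c.receive.process(map[string]string{":status": "200"})
    }

Because the contexts are per-connection, adding more streams (or more
promises) does not add compression state; only the decompressed header
fields per stream do.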
>>> >>>>>>
>>> >>>>>>>
>>> >>>>>>> >>
>>> >>>>>>> >>
>>> >>>>>>> >> > As far as the potential problem above, the root problem is
>>> >>>>>>> >> > that when you have limits you can have hangs. We see this
>>> >>>>>>> >> > all the time today with browsers (it's the only reason
>>> >>>>>>> >> > people do domain sharding, so they can bypass limits). I'm
>>> >>>>>>> >> > not sure I see the value of introducing the new proposed
>>> >>>>>>> >> > limits. They don't solve the hangs, and I don't think the
>>> >>>>>>> >> > granularity addresses any of the costs in a finer-grained
>>> >>>>>>> >> > manner. I'd like to hear clarification on what costs the
>>> >>>>>>> >> > new proposed limits will address.
>>> >>>>>>> >>
>>> >>>>>>> >> I don't believe that the proposal improves the situation
>>> >>>>>>> >> enough (or at all) to justify the additional complexity.
>>> >>>>>>> >> That's something that you need to assess for yourself. This
>>> >>>>>>> >> proposal provides more granular control, but it doesn't
>>> >>>>>>> >> address the core problem, which is that you and I can only
>>> >>>>>>> >> observe each other's actions after some delay, which means
>>> >>>>>>> >> that we can't coordinate those actions perfectly. Nor can we
>>> >>>>>>> >> build a perfect model of the other upon which to observe and
>>> >>>>>>> >> act. The usual protocol issue.
>>> >>>>>>> >
>>> >>>>>>> >
>>> >>>>>>> > OK then. My proposal, though, is to add a new limit for
>>> >>>>>>> > PUSH_PROMISE frames, separate from the MAX_CONCURRENT_STREAMS
>>> >>>>>>> > limit, since PUSH_PROMISE exists as a promise to create a
>>> >>>>>>> > stream, explicitly so we don't have to count it toward the
>>> >>>>>>> > existing MAX_CONCURRENT_STREAMS limit (I searched the spec and
>>> >>>>>>> > this seems to be inadequately specced). Roberto and I
>>> >>>>>>> > discussed that before and may have written an email somewhere
>>> >>>>>>> > in spdy-dev@, but I don't think we've ever raised it here.
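
To make the proposal concrete, here is a sketch in Go; the maxPushPromises
limit is hypothetical and not in any draft:

    package main

    import (
        "errors"
        "fmt"
    )

    // limits are values the peer advertises; maxPushPromises is the
    // hypothetical new setting proposed here, not an actual SETTINGS value.
    type limits struct {
        maxConcurrentStreams uint32
        maxPushPromises      uint32
    }

    // counters track promised-but-unopened streams separately from streams
    // that count toward MAX_CONCURRENT_STREAMS.
    type counters struct {
        activeStreams   uint32 // open or half-closed streams
        promisedStreams uint32 // PUSH_PROMISE sent, stream not yet opened
    }

    func (c *counters) canPromise(l limits) error {
        if c.promisedStreams >= l.maxPushPromises {
            return errors.New("too many outstanding push promises")
        }
        return nil
    }

    // openPromised moves a promise into the active set, at which point the
    // usual MAX_CONCURRENT_STREAMS accounting applies.
    func (c *counters) openPromised(l limits) error {
        if c.activeStreams >= l.maxConcurrentStreams {
            return errors.New("MAX_CONCURRENT_STREAMS exceeded")
        }
        c.promisedStreams--
        c.activeStreams++
        return nil
    }

    func main() {
        l := limits{maxConcurrentStreams: 100, maxPushPromises: 1000}
        c := &counters{promisedStreams: 1}
        fmt.Println(c.canPromise(l), c.openPromised(l))
    }

The point is only that promises stay cheap placeholders until they are
promoted, so they can carry a much larger limit of their own (or none).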
>>> >>>>>>> >
>>> >>>>>>>
>>> >>>>>>> Well, there is an issue tracking it in the GitHub repo now, at
>>> >>>>>>> least. As currently defined in the spec, it definitely needs to
>>> >>>>>>> be addressed.
>>> >>>>>>
>>> >>>>>> Great. You guys are way better than I am about tracking all known
>>> >>>>>> issues. I just have it mapped fuzzily in my head :)
>>> >>>>>
>>> >>>>>
>>> >>>
>>> >>
>>> >
>>>
>>
>>
>

Received on Wednesday, 1 May 2013 19:59:12 UTC