
Re: Design Issue: Max Concurrent Streams Limit and Unidirectional Streams

From: James M Snell <jasnell@gmail.com>
Date: Mon, 29 Apr 2013 13:58:52 -0700
Message-ID: <CABP7RbdjRWxJeMZeeq_8Zknfe1VgLTqFf_X=4RbeCfnRUPfNtQ@mail.gmail.com>
To: ChanWilliam(陈智昌) <willchan@chromium.org>
Cc: ietf-http-wg@w3.org, Martin Thomson <martin.thomson@gmail.com>
On Apr 29, 2013 12:33 PM, "William Chan (陈智昌)" <willchan@chromium.org> wrote:
> I guess I don't see per-stream state as being that expensive. Compression
> contexts are a fixed state on a per-connection basis, meaning that
> additional streams don't add to that state. The main cost, as I see it, is
> the decompressed headers. I said potentially since that basically only
> means the URL (unless there are other headers important for caching due to
> Vary), and additional headers can come in the HEADERS frame. Also,
> PUSH_PROMISE doesn't require allocating other state, like backend/DB
> connections, if you only want to be able to handle
> (#MAX_CONCURRENT_STREAMS) of those backend connections in parallel.
> If they're not specified, then we should specify it, but I've always
> understood the header compression contexts to be directional and apply to
> all frames sending headers in a direction. Therefore there should be two
> compression contexts in a connection, one for header blocks being sent and
> one for header blocks being received. If this is controversial, let's fork
> a thread and discuss it.

I think we all have been working off that basic assumption but, again, it
hasn't been written down. I'll gladly take that as a to-do... Doing so would
help us evaluate the proposals on the table so far... in a separate thread,
tho.
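
The working assumption above — one compression context per direction, shared
by every header-bearing frame on the connection — can be sketched roughly like
this. The "compression" here is a toy shared-dictionary scheme invented for
the example, not the actual draft codec; the point is only that the state is
per-connection and per-direction, not per-stream:

```python
# Sketch of directional header-compression contexts (assumption under
# discussion in this thread, not spec text). The codec is a toy
# shared-dictionary scheme, not the real draft compressor.

class HeaderContext:
    """State shared by every header block sent in one direction."""
    def __init__(self):
        self.table = {}          # index -> (name, value) seen earlier
        self.next_index = 0

    def encode(self, headers):
        out = []
        for name, value in headers:
            # Reuse an index if this exact pair was encoded before.
            hit = next((i for i, p in self.table.items()
                        if p == (name, value)), None)
            if hit is not None:
                out.append(("indexed", hit))
            else:
                self.table[self.next_index] = (name, value)
                out.append(("literal", name, value))
                self.next_index += 1
        return out

class Connection:
    def __init__(self):
        self.send_ctx = HeaderContext()   # all header blocks we send
        self.recv_ctx = HeaderContext()   # all header blocks we receive

conn = Connection()
first = conn.send_ctx.encode([(":path", "/"), ("accept", "text/html")])
second = conn.send_ctx.encode([(":path", "/"), ("accept", "image/png")])
# The repeated (":path", "/") pair compresses on the second stream because
# both streams share the same send-direction context.
```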

Regarding the original issue for this thread: MAX_CONCURRENT_STREAMS, as
currently defined, is simply not workable with pushed streams because of
the half-closed issue. There are several ways to address the problem; we
just need to identify and pick one. I don't particularly care which one we
choose :-)
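
To make the half-closed issue concrete, here is a toy accounting sketch (the
limit value and state names are invented for the example, not spec text):
pushed streams start half-closed from the client's side, so if half-closed
streams still count toward MAX_CONCURRENT_STREAMS, server pushes can eat the
client's budget for opening its own requests:

```python
# Illustration of the concern, not spec behavior: half-closed (pushed)
# streams counted against MAX_CONCURRENT_STREAMS can starve the client.
# The limit value and state names are assumptions for this example.

MAX_CONCURRENT_STREAMS = 4

class Peer:
    def __init__(self):
        self.streams = {}    # stream_id -> state

    def active(self):
        # Count every stream that is not fully closed.
        return sum(1 for s in self.streams.values() if s != "closed")

    def can_open(self):
        return self.active() < MAX_CONCURRENT_STREAMS

client = Peer()
client.streams[1] = "open"                 # the client's one request
for sid in (2, 4, 6):
    client.streams[sid] = "half-closed"    # server-pushed streams

# The client is now blocked from opening a second request of its own,
# even though it only ever started one stream itself.
```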

- James

>> >>
>> >>
>> >> > As far as the potential problem above, the root problem is that
>> >> > when you have limits you can have hangs. We see this all the time
>> >> > today (it's the only reason people do domain sharding: so they can
>> >> > bypass limits). I'm not sure I see the value of introducing the new
>> >> > proposed limits. They don't solve the hangs, and I don't think the
>> >> > granularity addresses any of the costs in a finer grained manner.
>> >> > I'd like to hear clarification on the costs the new proposed limits
>> >> > will address.
>> >>
>> >> I don't believe that the proposal improves the situation enough (or at
>> >> all) to justify the additional complexity.  That's something that you
>> >> need to assess for yourself.  This proposal provides more granular
>> >> control, but it doesn't address the core problem, which is that you
>> >> and I can only observe each other's actions after some delay, which
>> >> means that we can't coordinate those actions perfectly. Nor can we
>> >> build a perfect model of the other upon which to observe and act.
>> >> The usual protocol issue.
>> >
>> >
>> > OK then. My proposal is to add a new limit for PUSH_PROMISE frames,
>> > separate from the MAX_CONCURRENT_STREAMS limit, since PUSH_PROMISE
>> > exists as a promise to create a stream, explicitly so we don't have to
>> > count it toward the existing MAX_CONCURRENT_STREAMS limit (I searched
>> > the spec and this seems to be inadequately specced). Roberto and I
>> > discussed this before and may have written an email somewhere in
>> > spdy-dev@, but I don't think we've ever raised it here.
>> >
>> Well, there is an issue tracking it in the GitHub repo now, at least.
>> As currently defined in the spec, it definitely needs to be addressed.
> Great. You guys are way better than I am at tracking all known issues.
> I just have it mapped fuzzily in my head :)
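
The separate-limit proposal quoted above might look roughly like this in a
server's push accounting. The setting name MAX_CONCURRENT_PROMISES is
hypothetical, invented for this sketch; the thread only proposes that some
such limit exist, distinct from MAX_CONCURRENT_STREAMS:

```python
# Sketch of the proposal: track promised-but-not-yet-opened pushes against
# a limit separate from MAX_CONCURRENT_STREAMS. MAX_CONCURRENT_PROMISES is
# a hypothetical name invented for this example.

MAX_CONCURRENT_STREAMS = 100
MAX_CONCURRENT_PROMISES = 10      # hypothetical new, separate limit

class Server:
    def __init__(self):
        self.open_streams = set()
        self.promised = set()     # PUSH_PROMISE sent, stream not yet opened

    def try_promise(self, stream_id):
        # A promise reserves a stream without counting it as concurrent.
        if len(self.promised) >= MAX_CONCURRENT_PROMISES:
            return False
        self.promised.add(stream_id)
        return True

    def open_promised(self, stream_id):
        # Only when the pushed response actually starts does the stream
        # count toward MAX_CONCURRENT_STREAMS.
        self.promised.discard(stream_id)
        self.open_streams.add(stream_id)

srv = Server()
for sid in range(2, 22, 2):       # even stream IDs, as for server pushes
    srv.try_promise(sid)          # all 10 promises fit under the limit
```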
Received on Monday, 29 April 2013 20:59:19 UTC
