Re: Design Issue: Max Concurrent Streams Limit and Unidirectional Streams

On Mon, Apr 29, 2013 at 3:20 PM, Martin Thomson <martin.thomson@gmail.com> wrote:

> On 29 April 2013 10:40, William Chan (陈智昌) <willchan@chromium.org> wrote:
> > I read this entire thread and don't really see the problem we want to
> > solve. Can someone clarify? Let me review the pain points as I
> > understand them:
> > * Stream data buffers are expensive - flow control solves this
>
> s/solves/mitigates/  - they aren't free
>

Fair enough.


>
> > * Stream headers are potentially expensive - MAX_CONCURRENT_STREAMS
> > mitigates this, although I'm not entirely convinced this is a complete
> > solution, especially when you can keep adding HEADERS for a stream
> > without bound (headers are problematic since you have to process them
> > due to the shared compression context).
>
> MAX_CONCURRENT_STREAMS also helps with the buffer size problem.
>

To a certain degree, yes, but what really bounds buffer memory is the
session flow-control window, or simply not calling read().
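
As an aside, here's a minimal sketch of that point (Python, hypothetical
names for illustration): the session window, not the stream limit, is what
bounds receive-buffer memory, since the sender can never have more
unconsumed DATA outstanding than the window allows, no matter how many
streams are open.

    class SessionWindow:
        """Toy model of session-level flow control bounding buffer memory."""

        def __init__(self, initial=65535):
            self.available = initial  # bytes the peer may still send
            self.buffered = 0         # bytes received but not yet read()

        def on_data(self, size):
            # A sender exceeding the window is a protocol error.
            if size > self.available:
                raise ConnectionError("FLOW_CONTROL_ERROR")
            self.available -= size
            self.buffered += size  # invariant: buffered <= initial, always

        def on_read(self, size):
            # The application consumed `size` bytes; returning window space
            # models sending a WINDOW_UPDATE. If read() is never called,
            # `available` hits zero and the sender must stop.
            self.buffered -= size
            self.available += size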


> > * Stream ids are cheap. They're ids and don't require much context.
> > Historically PUSH_PROMISEs were cheap, but now that they can carry header
> > blocks, we've regressed on that. I forget why we added the header blocks
> > into the PUSH_PROMISE, can someone remind me (better yet, link to the
> > appropriate email thread)?
>
> You have to signal what you intend to push to give the client a chance
> to reject it.  That's all.  So the only things that have to be in the
> promise are resource identification things (:scheme, :host, :path),
> and maybe (maybe) things that help identify cache (content-type, vary,
> cache-control).
>

Oops, forgot about that. See, the issue there is that we've now made
PUSH_PROMISE potentially as expensive as a HEADERS frame, since it does
more than simple stream id allocation. I guess it's not really a huge
issue, since if it's used correctly (in the manner you described), it
shouldn't be too expensive. If clients attempt to abuse it, then servers
should probably treat it the same way they treat attempts to abuse header
compression in any other frame carrying a header block, and kill the
connection accordingly.
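
To make the cost concrete, here's a toy sketch of the shared-context issue
(the wire format and helper names are invented, not from the draft):
because every header block on the connection feeds the same decompressor,
a PUSH_PROMISE's block must be processed in order even if the client
intends to refuse the push.

    import zlib

    # One shared decompression context per connection; skipping any header
    # block would desynchronize it for all subsequent frames.
    shared_ctx = zlib.decompressobj()

    def decode_header_block(block: bytes) -> dict:
        raw = shared_ctx.decompress(block).decode()
        # Invented wire format: "name: value" lines.
        return dict(line.split(": ", 1) for line in raw.splitlines())

    def on_push_promise(promised_stream_id: int, block: bytes, cache: set):
        # Decompression cannot be skipped, so receiving the promise costs
        # roughly as much as receiving a HEADERS frame.
        headers = decode_header_block(block)
        url = headers[":scheme"] + "://" + headers[":host"] + headers[":path"]
        if url in cache:
            # Stand-in for sending RST_STREAM with REFUSED_STREAM.
            print(f"refusing promised stream {promised_stream_id}")

The sender side would use a single zlib.compressobj() and flush each block
with Z_SYNC_FLUSH so that blocks remain individually decodable against the
shared context.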


>
> > As far as the potential problem above, the root problem is that when
> > you have limits you can have hangs. We see this all the time today with
> > browsers (it's the only reason people do domain sharding: so they can
> > bypass limits). I'm not sure I see the value of introducing the new
> > proposed limits. They don't solve the hangs, and I don't think the
> > granularity addresses any of the costs in a finer-grained manner. I'd
> > like to hear clarification on what costs the new proposed limits will
> > address.
>
> I don't believe that the proposal improves the situation enough (or at
> all) to justify the additional complexity.  That's something that you
> need to assess for yourself.  This proposal provides more granular
> control, but it doesn't address the core problem, which is that you
> and I can only observe each other's actions after some delay, which
> means that we can't coordinate those actions perfectly.  Nor can we
> build a perfect model of the other upon which to observe and act.
> The usual protocol issue.
>

OK then. My proposal, though, is to add a new limit for PUSH_PROMISE
frames, separate from the MAX_CONCURRENT_STREAMS limit, since a
PUSH_PROMISE exists as a promise to create a stream precisely so that it
doesn't have to count toward the existing MAX_CONCURRENT_STREAMS limit (I
searched the spec and this seems to be inadequately specced). Roberto and
I discussed this before and may have written an email about it somewhere
on spdy-dev@, but I don't think we've ever raised it here.
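
Roughly, the accounting I have in mind looks like this (the setting name
is hypothetical, nothing like it is in the draft yet): promises are
tracked against their own limit and only start counting toward
MAX_CONCURRENT_STREAMS once the promised stream actually opens.

    MAX_CONCURRENT_STREAMS = 100
    MAX_CONCURRENT_PROMISES = 100  # hypothetical new, separate setting

    open_streams = set()
    pending_promises = set()

    def on_push_promise(stream_id):
        # A promise reserves a stream id but is not yet a concurrent stream.
        if len(pending_promises) >= MAX_CONCURRENT_PROMISES:
            raise ConnectionError("too many unfulfilled promises")
        pending_promises.add(stream_id)

    def on_promised_stream_open(stream_id):
        # Only now does the stream count toward MAX_CONCURRENT_STREAMS.
        pending_promises.discard(stream_id)
        if len(open_streams) >= MAX_CONCURRENT_STREAMS:
            raise ConnectionError("stream limit exceeded")
        open_streams.add(stream_id)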
