Re: Flow control for server push

> On 3 May 2016, at 06:35, Martin Thomson <martin.thomson@gmail.com> wrote:
> 
> On 3 May 2016 at 05:06, Roy T. Fielding <fielding@gbiv.com> wrote:
>> Are we concerned about a server accidentally sending too many pushes,
>> or deliberately attacking the client via too many pushes?
> 
> It's the former, accidental overloading.  And it's not really an
> attack, just an infidelity.  We did a lot to provide feedback
> mechanisms where there was a risk of overload, and we missed this tiny
> corner case.

This does feel a bit like a problem we can’t do much about.

A misconfigured server can always generate more work than a client wants to do. There are plenty of non-flow-controlled frames that a poorly-written server can emit in bulk and that the client must expend resources processing. In fact, for very nearly any of the non-flow-controlled frames that exist today it’s pretty easy to conceive of a way to accidentally misuse them: a blizzard of WINDOW_UPDATE frames that each increment the flow-control window by one byte, for example, or a stream of SETTINGS frames that contain all the settings and shrink the HPACK dynamic table size by one byte each time.
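
To make concrete how cheap that first blizzard is to generate, here’s a rough Python sketch that builds the raw frames directly from the RFC 7540 framing layout (the helper is mine, not any real library’s API):

    import struct

    def window_update(stream_id, increment):
        # RFC 7540 frame header: 24-bit length, 8-bit type
        # (0x8 = WINDOW_UPDATE), 8-bit flags, 31-bit stream identifier.
        header = struct.pack(">I", 4)[1:] + bytes([0x08, 0x00]) \
            + struct.pack(">I", stream_id)
        # 4-octet payload: reserved bit plus a 31-bit window increment.
        return header + struct.pack(">I", increment)

    # The pathological case: grow the connection-level window (stream 0)
    # one octet at a time, for as long as the server feels like it.
    blizzard = b"".join(window_update(0, 1) for _ in range(10000))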

I think the point that hasn’t been made clear to me yet is: why is this problem more worthy of addressing at the protocol level than any of the others? If the client is feeling overwhelmed by pushes, it is free to do minimal processing of them: pass the header blocks through the HPACK decoder (which it must do anyway to keep the shared dynamic table in sync), maintain the minimal stream state needed to ensure that nothing untoward happens, and otherwise ignore them.
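
In code, that minimal-processing strategy looks something like the following sketch, using the Python hpack package for the decode (the callback shape is illustrative, not a real framing API):

    from hpack import Decoder

    decoder = Decoder()       # connection-level HPACK state, shared
    ignored_pushes = set()    # promised stream IDs we intend to drop

    def on_push_promise(promised_stream_id, header_block):
        # The decode is not optional: the HPACK dynamic table is shared
        # connection state, and skipping one header block would corrupt
        # every later decode on this connection.
        decoder.decode(header_block)
        # Beyond that, keep only enough state to know that any frames
        # arriving on the promised stream can be discarded.
        ignored_pushes.add(promised_stream_id)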

And if a client is really overwhelmed, it can send RST_STREAM on the parent stream, set SETTINGS_ENABLE_PUSH to 0, and then re-request the resource. That reduces the client to maintaining extremely minimal state for each pushed stream: pass the header block through the HPACK decoder and then RST_STREAM any pushed stream that arrives.
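
On the wire that escape hatch is just two frames plus a retry. A sketch, again building raw frames per RFC 7540 (helper and function names are mine):

    import struct

    def frame(ftype, flags, stream_id, payload):
        # Generic RFC 7540 frame: 24-bit length, type, flags, stream ID.
        return struct.pack(">I", len(payload))[1:] \
            + bytes([ftype, flags]) + struct.pack(">I", stream_id) + payload

    CANCEL = 0x8                 # RST_STREAM error code (RFC 7540, 7)
    SETTINGS_ENABLE_PUSH = 0x2   # RFC 7540, 6.5.2

    def refuse_pushes(parent_stream_id):
        # Reset the request whose pushes are overwhelming us...
        rst = frame(0x3, 0x0, parent_stream_id, struct.pack(">I", CANCEL))
        # ...and disable push for the remainder of the connection.
        off = frame(0x4, 0x0, 0, struct.pack(">HI", SETTINGS_ENABLE_PUSH, 0))
        return rst + off
        # The client then re-issues the original request on a new stream.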

More importantly though: if we did enshrine this at a protocol level, what would a client do in the case where a server ignores those protocol requirements? Is it going to be any different from the above?

I’m open to seeing an extension to the protocol for this, but it’s hard to see what we gain. There’s no protocol-level way to avoid the client processing these frames at least minimally, because it needs to keep the HPACK state in play. All we can do is provide a protocol-level way to tell a server “don’t send too many pushes at once”. I suppose that’s useful, but it doesn’t seem to need more than an extra SETTINGS field and a specification for what to do when a server misbehaves. We will then have to expect that servers will misbehave, and just as in HTTP/1.1, clients will need to decide what to do when they do.
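
For what it’s worth, on the wire that extra field would be nothing more exotic than one more 6-octet entry in a SETTINGS frame. A sketch with an entirely made-up identifier, since no such setting exists in RFC 7540 today:

    import struct

    # Hypothetical: one possible shape for the extra SETTINGS field,
    # capping the number of concurrent pushes at 10. The identifier is
    # a placeholder, not a registered value.
    SETTINGS_MAX_CONCURRENT_PUSHES = 0xf000
    payload = struct.pack(">HI", SETTINGS_MAX_CONCURRENT_PUSHES, 10)
    settings = struct.pack(">I", len(payload))[1:] + bytes([0x04, 0x00]) \
        + struct.pack(">I", 0) + payload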

Cory

Received on Tuesday, 3 May 2016 09:06:43 UTC