Re: Flow control for server push

On Mon, May 2, 2016 at 10:35 PM, Martin Thomson <martin.thomson@gmail.com>
wrote:

> On 3 May 2016 at 05:06, Roy T. Fielding <fielding@gbiv.com> wrote:
> > Are we concerned about a server accidentally sending too many pushes,
> > or deliberately attacking the client via too many pushes?
>
> It's the former, accidental overloading.  And it's not really an
> attack, just an infidelity.  We did a lot to provide feedback
> mechanisms where there was a risk of overload, and we missed this tiny
> corner case.
>
> As with the push policy stuff, the point is to avoid having a server
> send pushes when the client doesn't really want them.
>
> > Not (2) -- that's overkill.  (1) seems a shame given that it won't
> prevent
> > a server from sending pushes, and doesn't feel right given that a client
> > has no idea how many pushes it might need.
>
> I agree on both counts.  I haven't been able to contrive anything that
> works without ugly side-effects of one sort or other.  I'll probably
> write a draft with a few ideas in it and see what people think when
> confronted with specifics.
>
My experience with #2 and the general problem (e.g. a push channel over
http/*):
1. "Explicit" acks should be left to the application to be useful, since
network-level delivery doesn't guarantee any commit semantics (as in the RPC
case).
2. A limit on the number of pending "messages" (based on network-level
delivery acks) on the sender, plus suspension of delivery acks on the
receiver.
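Point 2 above can be sketched roughly as follows. This is only an
illustration under assumed names (`PushSender`, `max_pending`, etc. are
hypothetical, not from any spec or implementation): the sender counts
pushes as pending until a delivery ack arrives, and blocks once the
configured limit is reached; the receiver can therefore throttle the
sender simply by withholding acks.

```python
import threading


class PushSender:
    """Hypothetical sketch: a sender-side cap on pending pushes.

    A push is "pending" until the receiver's delivery ack arrives;
    once max_pending is reached, further pushes block until the
    window reopens.
    """

    def __init__(self, max_pending=4):
        self.max_pending = max_pending
        self.pending = 0
        self.can_send = threading.Condition()

    def send_push(self, message, transmit):
        # Block until delivery acks have drained the window.
        with self.can_send:
            while self.pending >= self.max_pending:
                self.can_send.wait()
            self.pending += 1
        transmit(message)

    def on_delivery_ack(self):
        # One message acknowledged; reopen the window by one slot.
        with self.can_send:
            self.pending -= 1
            self.can_send.notify()
```

A receiver that stops sending acks stalls the sender once `max_pending`
pushes are outstanding, which is the suspension behavior described in
point 2.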

In the absence of delivery acks (as in this web-push case), an explicit
flow-control message to the peer seems like a reasonable choice, i.e. one
that tells the peer what the actual problem is.

Received on Tuesday, 10 May 2016 17:35:01 UTC