Re: WebRTC and backpressure (how to stop reading?)

On 03/26/2014 02:51 PM, Nicholas Wilson wrote:
> On 26 March 2014 12:45, Harald Alvestrand <harald@alvestrand.no> wrote:
>> Is backpressure the right way to slow down the sender for reasons only known
>> to the application in an async-callback environment?
>>
>> From the design of the WebSockets API, I suspect that this was considered
>> and answered with "no" in that group, and we should avoid revisiting that
>> decision in this group.
> The speed at which an app can process data has traditionally been
> considered a valid use of backpressure. Backpressure is the "right"
> way to slow down the sender to the rate at which the receiver can
> receive *and process* data.
It is the way that comes naturally when you are programming to the
sockets model using TCP.
That doesn't mean it's the natural way for all contexts and all
protocols.

As Michael Tuexen mentioned, the multi-channel,
single-congestion-controller model of SCTP means that using this
mechanism with SCTP leads to head-of-line blocking: All channels become
unable to receive data because one channel has filled up the transport
layer's buffer space.

It's clearly a problem that needs a well-known solution, but I'm not
sure backpressure fits the bill here.
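On the send side, one knob the DataChannel API does expose is the
bufferedAmount attribute: a sender can watch it and pause when it grows
past a high-water mark, which at least keeps a single channel from
monopolising the shared transport buffer. A rough sketch, assuming only
a channel-like object with send() and bufferedAmount (the names
sendPaced and highWaterMark are illustrative, not from any spec):

```javascript
// Hypothetical sketch: pace sends on one data channel by watching
// channel.bufferedAmount, pausing while it exceeds a high-water mark.
// "channel" only needs { send(), bufferedAmount }, so a mock works too.
function sendPaced(channel, chunks, highWaterMark, onDone) {
  var i = 0;
  function pump() {
    // Send while there is data left and the buffer is below the mark.
    while (i < chunks.length && channel.bufferedAmount < highWaterMark) {
      channel.send(chunks[i++]);
    }
    if (i < chunks.length) {
      // Buffer is full; poll again shortly rather than blocking.
      setTimeout(pump, 10);
    } else if (onDone) {
      onDone();
    }
  }
  pump();
}
```

This is polling, so it is cruder than real backpressure, but it is
per-channel and so sidesteps the head-of-line problem above.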

>  In a traditional desktop application using the BSD
> sockets API, the application won't select for read on the fd unless
> it's ready for the next chunk, and that model is built into the
> sockets API: it's completely assumed that sending should eventually
> block if the receiving end can't keep up with the network stack's
> transmission rate.
>
> Imagine WebRTC is used to back up files on a peer, but without the peer
> being able to read the files, so they're encrypted using WebCrypto. In
> this use case, the device's bandwidth might be higher than its ability
> to decrypt and store the content. There has to be some way to slow
> down the sender to the device's maximum data handling rate, which
> could be lower than the throughput of the network. If the browser
> doesn't expose a suitable method, then authors will have to reinvent
> the wheel each time with their own application-level flow control on
> top of WebRTC, likely with lower throughput than the transport's
> existing congestion window would allow.
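If authors do end up rolling their own, such application-level flow
control tends to be credit-based: the receiver grants the sender
permission to transmit some number of bytes, and the sender stops when
its credit is exhausted. A minimal sketch of the sender-side
bookkeeping only (CreditSender and the control-message shape are
hypothetical, not from any spec):

```javascript
// Hypothetical credit-based flow control, sender side. The receiver
// would grant byte credits over the same channel (e.g. as a small JSON
// control message); the sender only transmits while it holds credit.
function CreditSender(sendFn) {
  this.credit = 0;      // bytes the receiver has authorised
  this.queue = [];      // chunks waiting for credit
  this.sendFn = sendFn; // e.g. function (chunk) { channel.send(chunk); }
}

// Queue a chunk and transmit whatever the current credit allows.
CreditSender.prototype.write = function (chunk) {
  this.queue.push(chunk);
  this.pump();
};

// Called when a credit-grant control message arrives from the receiver.
CreditSender.prototype.grant = function (bytes) {
  this.credit += bytes;
  this.pump();
};

// Drain the queue while the head chunk fits in the remaining credit.
CreditSender.prototype.pump = function () {
  while (this.queue.length > 0 && this.queue[0].length <= this.credit) {
    var chunk = this.queue.shift();
    this.credit -= chunk.length;
    this.sendFn(chunk);
  }
};
```

The receiver sizes its grants to its decrypt-and-store rate, which is
exactly the per-channel throttle the backup use case above needs.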
>
> My suspicion is that WebSockets came along just a bit too early: the
> relevant discussion might have happened before the current threading
> model for web workers was drafted. In that case, blocking the main
> JavaScript thread during the onmessage callback would have been OK.

Blocking the main JavaScript thread has been considered bad practice
ever since people started writing responsive code in JavaScript.

> But, now we're able to post the data to a worker thread for
> decrypting/other processing, and return from the onmessage handler
> immediately. Given the evolution of the threading model for workers, I
> think it probably is a good time to revisit the decision the
> WebSockets API made in omitting flow control methods.
>
> I'm only guessing about WebSockets though - I did raise it on their
> list a while back, but after the spec was effectively frozen.
>
> Best,
> Nicholas
>
> -----
> Nicholas Wilson: nicholas@nicholaswilson.me.uk
> Site and blog: www.nicholaswilson.me.uk
>
>
> On 26 March 2014 12:45, Harald Alvestrand <harald@alvestrand.no> wrote:
>> Larger question:
>>
>> Is backpressure the right way to slow down the sender for reasons only known
>> to the application in an async-callback environment?
>>
>> From the design of the WebSockets API, I suspect that this was considered
>> and answered with "no" in that group, and we should avoid revisiting that
>> decision in this group.
>>
>> I could be wrong.
>>


-- 
Surveillance is pervasive. Go Dark.

Received on Wednesday, 26 March 2014 14:27:58 UTC