
Re: WebRTC and backpressure (how to stop reading?)

From: Michael Tuexen <Michael.Tuexen@lurchi.franken.de>
Date: Wed, 26 Mar 2014 17:03:12 +0100
Cc: Harald Alvestrand <harald@alvestrand.no>, public-webrtc@w3.org
Message-Id: <D70FD60D-8C65-4837-8188-71DF219E35BC@lurchi.franken.de>
To: Nicholas Wilson <nicholas@nicholaswilson.me.uk>
On 26 Mar 2014, at 14:51, Nicholas Wilson <nicholas@nicholaswilson.me.uk> wrote:

> On 26 March 2014 12:45, Harald Alvestrand <harald@alvestrand.no> wrote:
>> Is backpressure the right way to slow down the sender for reasons only known
>> to the application in an async-callback environment?
>> 
>> From the design of the WebSockets API, I suspect that this was considered
>> and answered with "no" in that group, and we should avoid revisiting that
>> decision in this group.
> 
> Traditionally, the speed at which an app can process data was
> considered a valid use of backpressure. Backpressure is the "right"
> way to slow down the sender to the rate at which the receiver can
> receive *and process* data. In a traditional desktop application
> using the BSD sockets API, the application won't select for read on
> the fd unless it's ready for the next chunk, and that model is built
> into the sockets API: it's completely assumed that sending should
> eventually block if the receiving end can't even keep up with the
> network stack's transmission rate.
As far as I understand the JS API, the send call will not block; the
message will be buffered instead.
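[Editor's note: the non-blocking, buffering behaviour described above is what makes sender-side pacing possible today. Below is a minimal sketch of pacing on top of a non-blocking send() by watching the channel's bufferedAmount counter; the channel object, the trySend helper, and the 1 MiB high-water mark are all illustrative assumptions, not part of any specified API.]

```javascript
// Assumed threshold: stop queueing once 1 MiB of unsent data has piled up.
const HIGH_WATER_MARK = 1 << 20;

// Minimal stand-in for a data channel whose send() only buffers
// (a real RTCDataChannel drains bufferedAmount as data goes on the wire).
function makeMockChannel() {
  return {
    bufferedAmount: 0,
    send(chunk) { this.bufferedAmount += chunk.length; }
  };
}

// Queue a chunk only while the buffer is below the mark; return false
// to tell the caller to pause its producer and retry after a drain.
function trySend(channel, chunk) {
  if (channel.bufferedAmount >= HIGH_WATER_MARK) {
    return false; // backpressure signal to the application
  }
  channel.send(chunk);
  return true;
}
```

The point of the sketch is that the application, not the browser, has to poll or be notified about the buffer level; nothing here slows the remote sender down.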

Best regards
Michael
> 
> Imagine WebRTC is used to back up files on a peer, but without the peer
> being able to read the files, so they're encrypted using WebCrypto. In
> this use case, the device's bandwidth might be higher than its ability
> to decrypt and store the content. There has to be some way to slow
> down the sender to the device's maximum data handling rate, which
> could be lower than the throughput of the network. If the browser
> doesn't expose a suitable method, then authors will have to reinvent
> the wheel each time with their own application-level flow control on
> top of WebRTC, with the consequence also of lower throughput than
> using the existing congestion window.
> 
> My suspicion is that WebSockets came along just a bit too early: the
> relevant discussion might have happened before the current threading
> model for web workers was drafted. In that case, blocking the main
> JavaScript thread during the onmessage callback would have been OK.
> But, now we're able to post the data to a worker thread for
> decrypting/other processing, and return from the onmessage handler
> immediately. Given the evolution of the threading model for workers, I
> think it probably is a good time to revisit the decision the
> WebSockets API made in omitting flow control methods.
> 
> I'm only guessing about WebSockets though - I did raise it on their
> list a while back, but after the spec was effectively frozen.
> 
> Best,
> Nicholas
> 
> -----
> Nicholas Wilson: nicholas@nicholaswilson.me.uk
> Site and blog: www.nicholaswilson.me.uk
> 
> 
> On 26 March 2014 12:45, Harald Alvestrand <harald@alvestrand.no> wrote:
>> Larger question:
>> 
>> Is backpressure the right way to slow down the sender for reasons only known
>> to the application in an async-callback environment?
>> 
>> From the design of the WebSockets API, I suspect that this was considered
>> and answered with "no" in that group, and we should avoid revisiting that
>> decision in this group.
>> 
>> I could be wrong.
>> 
> 
> 
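[Editor's note: the application-level flow control Nicholas says authors would have to reinvent is typically credit-based: the receiver grants the sender a byte budget it is prepared to process, and the sender never exceeds the outstanding credit. A minimal sketch follows; the CreditSender class and its method names are illustrative assumptions, not part of the WebRTC or WebSockets APIs.]

```javascript
// Credit-based flow control layered on top of an unreliable-speed receiver.
// In a real app, sendFn would wrap channel.send() and grant() would be
// driven by small credit messages arriving from the peer.
class CreditSender {
  constructor(sendFn) {
    this.sendFn = sendFn; // actually transmits a chunk
    this.credit = 0;      // bytes the receiver has said it can handle
    this.queue = [];      // chunks waiting for credit
  }
  grant(bytes) {          // called when a credit message arrives
    this.credit += bytes;
    this.pump();
  }
  send(chunk) {           // called by the producer; never over-runs credit
    this.queue.push(chunk);
    this.pump();
  }
  pump() {                // transmit queued chunks covered by current credit
    while (this.queue.length && this.queue[0].length <= this.credit) {
      const chunk = this.queue.shift();
      this.credit -= chunk.length;
      this.sendFn(chunk);
    }
  }
}
```

This illustrates the thread's cost argument: the credit messages consume channel capacity and duplicate bookkeeping that the transport's own congestion window already performs.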
Received on Wednesday, 26 March 2014 16:03:38 UTC
