Re: data channel send flow control?

The trouble with polling timers for sending is that you don't know the 
rate at which data can flow. You could send 1 meg of data and assume 
"oh, I'll set a timer for 1 second because 1 meg usually takes 1 
second", but on a very fast network that 1 meg can be consumed almost 
instantly, while on a slow network it might take a good chunk of time. 
So with polling you never know the proper timeout to use. It can work, 
but it's not really ideal.
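
For example, polling the data channel's bufferedAmount attribute on a 
fixed timer ends up guessing twice (a rough sketch only; the 1 meg 
threshold and 100 ms interval below are arbitrary numbers, which is 
exactly the problem):

var dc = createDataChannel(...);
var chunks = [...];
function pollAndSend() {
  // Drain while the buffer looks "low enough" -- the threshold is a guess.
  while (chunks.length > 0 && dc.bufferedAmount < 1024 * 1024) {
    dc.send(chunks.shift());
  }
  if (chunks.length > 0) {
    // The interval is also a guess: too slow on a fast network,
    // needlessly busy on a slow one.
    setTimeout(pollAndSend, 100);
  }
}
pollAndSend();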

"onsent" would be better by me too as a Promise and might even be more 
ideal than just a generalized "onsendready" mechanism. I like it.

-Robin



> Peter Thatcher <pthatcher@google.com>
> April 28, 2014 at 1:26 PM
> How is your "onsendready" different from polling and then checking 
> whether the buffer is below a certain amount?  In other words, couldn't 
> you implement "onsendready" with some JS on top of what is already there?
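>
> (For illustration only -- a rough JS shim along those lines, assuming 
> the spec'd bufferedAmount attribute; the lowWaterMark value and the 
> 100 ms polling interval are made-up numbers:)
>
> function emulateOnSendReady(dc, lowWaterMark, onsendready) {
>   var armed = true;
>   var realSend = dc.send.bind(dc);
>   dc.send = function (data) {   // re-arm the event after every send()
>     realSend(data);
>     armed = true;
>   };
>   setInterval(function () {
>     if (armed && dc.bufferedAmount < lowWaterMark) {
>       armed = false;            // fire once, then wait for the next send()
>       onsendready();
>     }
>   }, 100);                      // the polling interval is still a guess
> }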
>
> If we were to change the API to make it more pleasant, I'd prefer 
> something where we get a callback when a message is sent (leaves the 
> buffer).  Something like this:
>
>
> void send(DOMString data, function onsent);
>
> Or:
>
> Promise<void> send(DOMString data);
>
>
> So you could do this:
>
> var dc = createDataChannel(...);
> var chunks = [...];
> function sendChunks() {
>   if (chunks.length === 0) return;  // stop once every chunk is sent
>   dc.send(chunks.shift(), sendChunks);
> }
> sendChunks();
>
> Or this:
>
> var dc = createDataChannel(...);
> var chunks = [...];
> function sendChunks() {
>   if (chunks.length === 0) return;  // stop once every chunk is sent
>   dc.send(chunks.shift()).then(sendChunks);
> }
> sendChunks();
>
>
>
> It would be useful for more than just flow control.
>
>
>
>
>
> Robin Raymond <robin@hookflash.com>
> April 24, 2014 at 10:25 PM
>
> I noticed the WebRTC 1.0 data channel API, which we are modeling 
> after, doesn't include a way to easily do application-level sending 
> flow control, likely because WebSockets doesn't have application-level 
> sending flow control either. Personally I think this is such an 
> important use case and such an easy thing to fix.
>
> If I want to stream a large file from peer to peer over a reliable 
> channel, currently I'd have to poll the send buffer size to see 
> whether there's room to add more data. A better, more 
> bandwidth-agnostic way would be an "onsendready(...)" event: the 
> sending engine fires it to indicate there's room for more data, and 
> only fires it again once the application has called the send(...) 
> method and there's room to send again. That would give applications a 
> much easier way to do flow control of streamed application-level data 
> without needing as much intelligence about bandwidth and polling to 
> maximize throughput. Plus, a "send ready" event is a pretty typical 
> mechanism and a well-understood paradigm.
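>
> (A hypothetical usage sketch, just to show the shape -- the event name 
> and semantics are only the proposal above, nothing here is spec'd:)
>
> var dc = createDataChannel(...);
> var chunks = [...];
> dc.onsendready = function () {
>   // Fires when the engine has room; fires again only after the next
>   // send() once room opens up again.
>   if (chunks.length > 0) {
>     dc.send(chunks.shift());
>   }
> };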
>
> I think this oversight should be addressed.
>
> -Robin
>

Received on Monday, 28 April 2014 17:47:59 UTC