Re: DataChannels in Workers

From: Feross Aboukhadijeh <feross@feross.org>
Date: Mon, 26 Oct 2015 23:40:03 +0000
Message-ID: <CA+nRABmGTHAY1upHZQf-qezqVysWVXdAcO9kh6wCoC1M9sitxg@mail.gmail.com>
To: Randell Jesup <randell-ietf@jesup.org>, public-webrtc@w3.org
I've been following this thread with excitement, as well! Here are the use
cases that would be supported by this proposal:

1. Peer-assisted delivery

DataChannel in WebWorkers would support the use case of "peer assisted
delivery" a la PeerCDN
<https://web.archive.org/web/20150512115957/https://peercdn.com/faq.html>,
Peer5 <https://www.peer5.com>, Greta <https://greta.io>, and StreamRoot
<http://www.streamroot.io/>. These solutions (and others like them) are
helping to make 4K video streaming affordable for service providers by
allowing peers to share some of the hosting burden. They also improve video
bitrate during peak hours when the link between a service provider (like
YouTube) and the ISP is saturated.

There's also WebTorrent <https://webtorrent.io>. WebTorrent connects web
users together to form a distributed, decentralized browser-to-browser
network for efficient file transfer.

What these use cases have in common is that they rely on content hashing
(SHA1 or SHA256) to verify content received from peers. Doing this on the
main thread, as is currently done, has an undue effect on page performance.
Moving the channel to a worker and only transferring data to the main
thread (via transferable objects) once it's been verified is the right
design.

2. Routing systems

DataChannels in workers (especially WebWorkers and SharedWorkers) would
allow one to construct and reuse a DHT (a decentralized/distributed lookup
service similar to a hash table) across tabs. This is useful for routing to
nodes in decentralized applications. For context, see the BitTorrent DHT
<http://www.bittorrent.org/beps/bep_0005.html>.
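
To make the lookup idea concrete, here is a toy Kademlia-style routing step
(ids are shortened to a few bits purely for illustration; the real BEP 5 DHT
uses 160-bit ids and iterates the query over the network):

```javascript
// Toy Kademlia-style routing (the metric used by the BitTorrent DHT, BEP 5):
// the "distance" between two ids is simply their XOR, read as an integer.
const xorDistance = (a, b) => a ^ b;

// Pick the k peers in our routing table closest to the target key; a real
// lookup would then query those peers and repeat until it converges.
function closestPeers(peerIds, key, k = 2) {
  return [...peerIds]
    .sort((x, y) => xorDistance(x, key) - xorDistance(y, key))
    .slice(0, k);
}
```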

Projects that are exploring the WebRTC routing space include WebDHT
<https://github.com/jhiesey/webdht>, WebRTC-Explorer
<https://www.npmjs.com/package/webrtc-explorer>, bitable
<https://github.com/hallettj/bitable>, and Intel MeshCentral
<https://software.intel.com/en-us/blogs/2015/03/18/meshcentral-experimental-webrtc-mesh>.
The IPFS project <https://ipfs.io/> is also building a web DHT for their
browser version. I also know that Substack
<https://github.com/substack> is currently building a WebRTC DHT for
WebTorrent.

In short, there's a ton of exciting work happening in this space. The thing
that all these efforts have in common is that they're building up a
non-trivial topology where each peer has dozens of connections to other
peers in the network. Using a SharedWorker would allow this topology to be
reused across instances of the same application, or even different
applications that agree to use the same routing system. This would save
tens of seconds, maybe minutes, of connection overhead when an application
is first opened.
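
A rough sketch of how a SharedWorker could own such a shared pool (all names
here are hypothetical, and `connect` stands in for whatever signaling and
ICE dance the application actually does):

```javascript
// shared-worker.js (hypothetical): one connection pool shared by every tab.
// Each tab connects via `new SharedWorker('shared-worker.js')` and asks the
// pool for a peer; the pool reuses an existing channel instead of paying a
// fresh signaling/ICE/DTLS handshake per tab.
const peers = new Map(); // peerId -> { channel, refCount }

function acquire(peerId, connect) {
  let entry = peers.get(peerId);
  if (!entry) {
    // First tab to ask pays the connection cost...
    entry = { channel: connect(peerId), refCount: 0 };
    peers.set(peerId, entry);
  }
  entry.refCount += 1; // ...later tabs reuse the live channel.
  return entry.channel;
}

function release(peerId) {
  const entry = peers.get(peerId);
  if (entry && --entry.refCount === 0) {
    entry.channel.close(); // last tab gone: tear the connection down
    peers.delete(peerId);
  }
}

// In a real SharedWorker, requests would arrive over MessagePorts:
// onconnect = (e) => { const port = e.ports[0]; port.onmessage = ...; };
```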

I'm in support of adding this to WebRTC 1.0 so that we don't stifle the
many innovations happening with the data channel.

Cheers,
Feross

On Thu, Oct 1, 2015 at 11:42 AM Randell Jesup <randell-ietf@jesup.org>
wrote:

> On 10/1/2015 7:50 AM, David Dias wrote:
>
> Hi all,
>
> I’ve been following this thread with great excitement! I know several
> projects for which having DataChannels in *WebWorkers* (!== ServiceWorkers,
> to avoid confusion) would turn a practical and fun demo into something
> that's actually usable by a lot of users.
>
> One of them would be webrtc-explorer (for ref:
> <http://blog.daviddias.me/2015/03/22/enter-webrtc-explorer>), where I found
> empirically that a ‘Store and Forward’ message network approach can drag
> the browser down and make all UI interactions unusable when the number of
> data events and writes is high. Projects like WebTorrent (webtorrent.io),
> PANDO (Scalable and Reliable Dynamic Array-based Computing on the Web, a
> new endeavour by Erick Lavoie) and IPFS (the browser version) can benefit
> greatly from it.
>
>
> Great to hear some more uses for this beyond games!  I agree this could
> really help make tools like this a lot easier to create without causing
> problems (such as blocking the main page content).  We hope to have an
> implementation of this soon, but don't have a target release date yet.
>
>
> Right now, there also isn’t a way to prioritise messages once we connect
> to more than one peer at the same time, which becomes critical in a
> single-threaded environment where a browser can have hundreds of Data
> Channels open.
>
>
> There are several things that can help here, including the SCTP ndata work
> that's ongoing.  Another is BufferedAmountLowThreshold (Firefox and Chrome
> both recently added support for it, but full support including data
> buffered in the stack requires some new features in the upstream SCTP
> library.  Currently both implementations only include data buffered outside
> of the SCTP stack).
>
>
> One thing I haven’t completely understood is: what are the actual
> implications, if any, of enabling the WebRTC API to be accessible from a
> WebWorker?
>
>
> For RTCDataChannels, the implications are pretty simple:
> * You can call channel.send()/channel.close() from the worker
> * Events for the channel will occur in the worker (onmessage/etc)
> * You have no access to the RTCPeerConnection (and thus can't create new
> channels from there)
> * New data channels created from the other side will appear in the main
> context (via ondatachannel/etc); you can immediately transfer them to the
> worker.
>
> Making peerconnection transferable or creating some sort of
> peerconnection-proxy-for-workers would be more involved.
> Allowing createDataChannel or ondatachannel to occur in the worker would
> be more complex, and it's unclear those are time-critical operations the
> way send() and message reception are.
>
>
> --
> Randell Jesup -- rjesup a t mozilla d o t com
> Please please please don't email randell-ietf@jesup.org!  Way too much spam
>
>
Received on Monday, 26 October 2015 23:40:44 UTC

This archive was generated by hypermail 2.3.1 : Monday, 23 October 2017 15:19:46 UTC