Re: Proposal for API for Data

On 3/15/2012 9:34 AM, Cullen Jennings wrote:
> I have had a proposal for the Data API for a while and have circulated it privately to some folks, but I had not sent it to the list because I wanted to keep the focus on the PeerConnection / JSEP stuff for a while. However, I think I probably need to send this out.

I wish I had seen this earlier.  We're pretty far along, which was why 
I was doing things like the IETF protocol and engaging people to help; 
the only reason I hadn't proposed a W3C API for it is that Adam was 
supposed to be working on an update.

> I think at the high level there is general agreement that we need to be able to send and receive data much like WebSockets does, we need to be able to create and destroy channels, and we need to be able to mark a channel for reliable or unreliable delivery. The API I'm proposing to do this is very simple.
>
> interface DataMediaStreamTrack : MediaStreamTrack
> {
>       attribute boolean reliable;

We'd need a few more attributes as well.

> }
> DataMediaStreamTrack implements WebSocket;

So these are tracks, which implies they live in a MediaStream.  Is there 
a separate MediaStream for all the data channels?  I assume you mean to 
ignore the requirements for synchronization of Tracks in a MediaStream, 
and to ignore things like muting of tracks.
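
To make those questions concrete, here's roughly how I read the proposal 
being used from JS (the constructor and the MediaStream wiring below are 
my guesses - nothing here is actually specified):

    // Hypothetical usage sketch of the proposed API - the constructor and
    // the way the track gets into a stream are assumptions on my part.
    var dataTrack = new DataMediaStreamTrack();    // hypothetical constructor
    dataTrack.reliable = false;

    var stream = new MediaStream([dataTrack]);     // its own stream? or alongside A/V?
    peerconnection.addStream(stream);

    // Since the track "implements WebSocket", presumably:
    dataTrack.send("hello");
    dataTrack.onmessage = function (evt) { console.log(evt.data); };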

> This ends up getting the functions to send and receive data, as well as report errors, from WebSocket.

There are issues here when you mix in unreliable channels - the 
WebSockets API doesn't give you any of the additional info you 
need on an unreliable channel (a sequence number, for example).
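
E.g. something along these lines would be needed on top of the plain 
WebSocket message event (the "sequence" and "partial" fields below are 
purely hypothetical - they don't exist anywhere today):

    // Hypothetical extension - NOT part of the WebSocket API.
    dataTrack.onmessage = function (evt) {
      // A plain WebSocket MessageEvent only carries evt.data; an
      // unreliable channel also needs at least something like:
      //   evt.sequence - sender's sequence number, to detect loss/reordering
      //   evt.partial  - whether this is an incomplete message
      console.log(evt.data);
    };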

> Any library that takes a WebSocket can take a DataMediaStream. It unifies creation of a new data stream with the creation of MediaStreams. So in the same way that a new video track is created, you can add a data track or remove one.

"Any library that takes a WebSocket" -> good.  This speaks to whatever 
object we have for a data channel, though there are issues (see above).
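
That property is worth keeping whatever object we end up with, i.e. 
roughly this (attachChat/showChatLine are just made-up example names):

    // Sketch: code written against the WebSocket interface could be handed
    // a data channel (or DataMediaStreamTrack) instead of a WebSocket.
    function attachChat(socketLike) {
      socketLike.onmessage = function (evt) { showChatLine(evt.data); };
      return function (text) { socketLike.send(text); };
    }

    var sendChat = attachChat(dataTrack);  // rather than attachChat(new WebSocket(url))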

Unified with creation of MediaStreams ->  to be honest, I'm not seeing 
how this is buying us a lot other than less spec verbiage.  Which is 
something, but not critical.  And there are complications when talking 
about duplicating MediaStreams, etc. - how do status, handlers, etc. 
ripple through?  This puts a complex transfer-oriented object all the 
way down at the track level.  You could define data tracks in 
MediaStreams, but those should be more abstract datagram pipes, just 
like the audio and video tracks are fairly abstract, and the 
manipulations on them have nothing to do with network traffic.
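
If we do have data tracks at all, I'd expect something closer to this 
(pure sketch - writeDatagram/ondatagram are names I just made up, nothing 
like this is specified anywhere):

    // Hypothetical "abstract datagram pipe" track - no socket semantics or
    // connection state on the track itself, just datagrams in and out,
    // the way audio/video tracks are abstract sources/sinks of media.
    dataTrack.writeDatagram(buffer);           // hypothetical
    dataTrack.ondatagram = function (evt) {    // hypothetical
      handleDatagram(evt.data);                // app-defined handler
    };
    // Reliability, ordering, flow control, etc. would live on the
    // channel/transport object, not on the track.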

> It is labeled in the same way as the other tracks and so on. It seems like it works well for data sources that are from the JS app, and it also can be used with data sources, such as a sensor or game controller, that may be more controlled by the browser in the same way a microphone or camera is more controlled by the browser. That will allow for better real-time performance for devices that produce a data stream instead of a media stream.

This is the strongest argument, but it would require more specification 
of how data is fed into a DataMediaStreamTrack by non-JS sources - and it 
should be compared to feeding data in from a JS worker.  Also, I see this 
usage as relatively special-purpose - I don't see a lot of hardware 
devices being directly hooked to WebRTC data channels except for some 
specialized telepresence/teleoperation type uses.  (You'd also have to 
have that data available to the UA in the first place, though not every 
UA is a browser.)
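
For comparison, feeding data in from a JS worker needs nothing 
track-shaped at all - roughly this (the "sensor.js" worker script and the 
dataChannel object are just placeholders):

    // Sketch: a worker producing sensor/game-controller data, with the main
    // thread forwarding it over whatever data-channel object we end up with.
    var worker = new Worker("sensor.js");
    worker.onmessage = function (evt) {
      dataChannel.send(evt.data);
    };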

But again, this speaks to an abstract datagram source, not to a complex 
WebSockets interface.

I *could* see (with more work) "datachannel.src = data_track1; 
datachannel.dest = data_track2;" to feed data from abstract datagram 
tracks into/out of a data channel (sketched below).  (And then you could 
extend WebSockets to do the same thing!)  But we'd need a clear reason to 
want to have data tracks.
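
Spelled out a bit more (again, none of this exists - it's just what that 
shorthand would mean):

    // Hypothetical: wire abstract data tracks into/out of a channel, so the
    // channel owns the network behavior and the tracks stay abstract pipes.
    datachannel.src  = data_track1;  // datagrams written to data_track1 go out on the channel
    datachannel.dest = data_track2;  // datagrams arriving on the channel come out of data_track2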

-- 
Randell Jesup
randell-ietf@jesup.org

Received on Thursday, 15 March 2012 17:55:12 UTC