Re: Proposal for API for Data

On 04/10/2012 04:48 AM, Randell Jesup wrote:
> On 4/9/2012 5:31 PM, Cullen Jennings wrote:
>> On Mar 15, 2012, at 10:13 AM, Harald Alvestrand wrote:
>>
>>> On 03/15/2012 02:34 PM, Cullen Jennings wrote:
>>>> I had a proposal for the Data API for awhile and have
>>>> circulated it privately to some folks but I had not sent it to
>>>> the list because I wanted to keep the focus on the
>>>> PeerConnection / JSEP stuff for awhile. However, I think I
>>>> probably need to send this out. I think at the high level there
>>>> is general agreement that we need to be able to send and receive
>>>> data much like WebSockets do, we need to be able to create
>>>> and destroy channels, and we need to be able to mark a channel
>>>> for reliable or non-reliable delivery. The API I'm proposing to
>>>> do this is very simple.
>>>>
>>>> interface DataMediaStreamTrack : MediaStreamTrack {
>>>>     attribute boolean reliable;
>>>> };
>>>> DataMediaStreamTrack implements WebSocket;
>>>>
>>>>
>>>> This ends up getting the functions to send and receive data, as
>>>> well as to report errors, from WebSocket. Any library that takes a
>>>> WebSocket can take a DataMediaStreamTrack. It unifies creation of a
>>>> new stream to be the same as MediaStreams. So in the same way
>>>> that a new video track is created, you can add a data track or
>>>> remove one. It is labeled in the same way as the other tracks,
>>>> and so on. It seems like it works well for data sources that come
>>>> from the JS app, and it can also be used with data sources, such
>>>> as a sensor or game controller, that may be more controlled by
>>>> the browser, in the same way a microphone or camera is more
>>>> controlled by the browser. That will allow for better real-time
>>>> performance of devices that produce a data stream instead
>>>> of a media stream.
>>>>
>>>> I'd like to discuss the pros / cons of such an interface on one
>>>> of the calls, but right now I am heads down on the JSEP
>>>> changes.
>>> At one point I argued something similar....
>>>
>>> the biggest worry I have with this approach is that it inherits a
>>> lot of stuff that doesn't seem to make sense, for instance:
>> I think my answer to all of this is more or less "same thing
>> as media". Something in MediaStream that does not work for data
>> probably will not work for DTMF either.
>>
>> My primary issue here is that I'd rather not have things be
>> greatly different if they don't need to be. If there is something
>> in MediaStreams that does not work for data, then I am worried
>> that it will be a problem for other media types that get done in
>> the future that are not audio or video.
>>
>> I do think, given that the primary goal of the Data interface is to
>> reduce latency, that we need to have a model that works when the
>> data is generated directly by a device and not from the JS app. Of
>> course we also all need to support JS app generation.
>
> I should note that this proposal is the first time I've heard
> mentioned the wish to hook up hardware (or non-JS sources) directly
> instead of via JS.  And of course this begs the question of data
> formats and where the source is (getUserNonMedia()? ;-)  I'm not
> saying it's bad to want to do this, but what are the use-cases, and
> in what way is the data standardized or otherwise definable?  (The
> app could transfer a data description in conjunction with the actual
> data, for example.)

I think that hooking up hardware to generate data is very similar to
getUserMedia - after all, the mike and the cam are really nothing more
than sensors or data sources.

My personal view is that this is something that we should look into, but 
I don't think we should do it right now. For these kinds of data 
sources the concept of streams and tracks probably fits quite well (as 
does perhaps the possibility to mute/disable tracks), but when the 
application generates the data I don't think it does.
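To make the shape of the discussion concrete, here is a sketch only: DataMediaStreamTrack is a proposal, not an implemented API, and the creation path for such a track is not specified anywhere in the thread. The main selling point quoted above - that anything written against the WebSocket interface would accept such a track unchanged - could look like this:

```javascript
// Hypothetical sketch of Cullen's proposed API; nothing here is an
// implemented browser interface. Because DataMediaStreamTrack
// "implements WebSocket", a helper written against the WebSocket
// interface should accept it without modification.
function attachLogger(socketLike) {
  // Works identically for a real WebSocket or a DataMediaStreamTrack.
  socketLike.onmessage = (event) => console.log("received:", event.data);
  return socketLike;
}

// Hypothetical usage, assuming some (unspecified) creation path adds a
// data track to a MediaStream the same way audio/video tracks are added:
//   const track = /* new data track, creation TBD in the proposal */;
//   track.reliable = false;   // proposed attribute: unreliable delivery
//   attachLogger(track);      // any WebSocket-taking library just works
//   track.send("hello");      // send() inherited from WebSocket
```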

Received on Tuesday, 10 April 2012 13:13:36 UTC