
Re: Proposal for API for Data

From: snandaku <snandaku@cisco.com>
Date: Mon, 09 Apr 2012 21:09:50 -0700
To: Randell Jesup <randell-ietf@jesup.org>, <public-webrtc@w3.org>
Message-ID: <CBA9009E.51BD%snandaku@cisco.com>


I am curious about a use-case where I want my RDP stream, or something like
it, sent over the data channel to show my remote desktop or a remote
application along with the video of the remote caller .... [chromoting + video
call]

For scenarios like this, mute, tracks, low-latency flow of data for screen
updates, and an implementation similar to the media flow make more sense ..
Keeping the semantics generic is the way to go, since at this point we don't
know all the different use-cases.

./S


On 4/9/12 7:48 PM, "Randell Jesup" <randell-ietf@jesup.org> wrote:

> On 4/9/2012 5:31 PM, Cullen Jennings wrote:
>> On Mar 15, 2012, at 10:13 AM, Harald Alvestrand wrote:
>> 
>>> On 03/15/2012 02:34 PM, Cullen Jennings wrote:
>>>> I had a proposal for the Data API for a while and have circulated it
>>>> privately to some folks, but I had not sent it to the list because I wanted
>>>> to keep the focus on the PeerConnection / JSEP stuff for a while. However, I
>>>> think I probably need to send this out. I think at the high level there is
>>>> general agreement that we need to be able to send and receive data much
>>>> like WebSockets does, we need to be able to create and destroy channels,
>>>> and we need to be able to mark a channel as reliable or non-reliable
>>>> delivery. The API I'm proposing to do this is very simple.
>>>> 
>>>> interface DataMediaStreamTrack : MediaStreamTrack {
>>>>       attribute boolean reliable;
>>>> };
>>>> DataMediaStreamTrack implements WebSocket;
>>>> 
>>>> 
>>>> This ends up getting the functions to send and receive data, as well as
>>>> to report errors, from WebSocket. Any library that takes a WebSocket can
>>>> take a DataMediaStreamTrack. It unifies creation of a new stream to be the
>>>> same as for MediaStreams. So in the same way that a new video track was
>>>> created, you can add a data track or remove one. It is labeled in the same
>>>> way as the other tracks, and so on. It seems like it works well for data
>>>> sources that come from the JS app, and it can also be used with data
>>>> sources, such as a sensor or game controller, that may be more controlled
>>>> by the browser, in the same way a microphone or camera is more controlled
>>>> by the browser. That will allow for better real-time performance of
>>>> devices that produce a data stream instead of a media stream.
>>>> 
>>>> I'd like to discuss the pros / cons of such an interface on one of the
>>>> calls, but right now I am heads down on the JSEP changes.
>>> At one point I argued something similar....
>>> 
>>> the biggest worry I have with this approach is that it inherits a lot of
>>> stuff that doesn't seem to make sense, for instance:
>> I think my answer to all of this is more or less "same thing as media".
>> Something in MediaStream that does not work for data probably will not work
>> for DTMF either.
>> 
>> My primary issue here is that I'd rather not have things be greatly
>> different if they don't need to be. If there is something in MediaStreams
>> that does not work for Data, then I am worried that it will be a problem
>> for other media types that get done in the future that are not audio or
>> video.
>> 
>> I do think that, given that the primary goal of the Data interface is to
>> reduce latency, we need to have a model that works when the data is
>> generated directly by a device and not by the JS app. Of course we also
>> need to support generation by the JS app.
> 
> I should note that this proposal is the first time I've heard mention of
> the wish to hook up hardware (or non-JS sources) directly instead of via
> JS.  And of course this begs the question of data formats and where the
> source is (getUserNonMedia()? ;-)  I'm not saying it's bad to want to do
> this, but what are the use-cases, and in what way is the data
> standardized or otherwise definable?  (The app could transfer a data
> description in conjunction with the actual data, for example.)
> 
>>> - "mute" functions from the MediaStreamTrack: What does it mean to "mute" a
>>> data channel?
>> same thing as with media - it is largely an indication that the received
>> data will be ignored
> 
> And the usecase for this is...?
> 
> Extending/sub-setting WebSockets has some advantages, though there isn't
> a 1-1 matchup.  We have designed the current API to be largely similar
> to WebSockets to ease use and learning curve.  Doing the "implements"
> bit would add a fair bit of API, some of which might confuse the issue
> because of the slightly iffy matchup.  I'll try to look more closely at
> both.
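
For concreteness, here is a rough, self-contained JavaScript mock of the shape
implied by the IDL quoted above. None of these classes are real browser APIs:
MediaStreamTrack is reduced to the few properties discussed in this thread, and
the WebSocket-style send/onmessage surface and "mute via enabled" behavior are
assumptions for illustration, not a definitive implementation.

```javascript
// Minimal stand-in for MediaStreamTrack: only the bits discussed in
// this thread (kind, label, and enabled as the "mute" being debated).
class MediaStreamTrack {
  constructor(kind, label) {
    this.kind = kind;
    this.label = label;
    this.enabled = true; // disabling a track is the hypothetical "mute"
  }
}

// The proposed data track: a MediaStreamTrack that also exposes a
// WebSocket-like send/onmessage surface ("implements WebSocket").
class DataMediaStreamTrack extends MediaStreamTrack {
  constructor(label, { reliable = true } = {}) {
    super("data", label);
    this.reliable = reliable; // reliable vs. non-reliable delivery
    this.onmessage = null;
  }

  // A real stack would hand data to the transport; this mock just
  // loops it back to the registered handler, honoring "mute".
  send(data) {
    if (!this.enabled) return;            // muted: data is ignored
    if (this.onmessage) this.onmessage({ data });
  }
}

// Usage: an unreliable channel for screen updates, as in the
// RDP-over-data-channel use-case at the top of this message.
const track = new DataMediaStreamTrack("screen-updates", { reliable: false });
const received = [];
track.onmessage = (e) => received.push(e.data);

track.send("frame-1");
track.enabled = false;   // "mute" the data track
track.send("frame-2");   // dropped while muted
track.enabled = true;
track.send("frame-3");
console.log(received);   // → ["frame-1", "frame-3"]
```

The point of the sketch is the unification Cullen describes: the object is
addable/removable and labeled like any other track, while code written against
a WebSocket-shaped send/onmessage interface can use it unchanged.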
Received on Tuesday, 10 April 2012 04:10:21 GMT
