Re: Alternative to the offer/answer mechanism

Hi Matthew,

     Common use-cases should be easy, advanced use-cases should be possible.

     We need an API that lies somewhere between WebRTC and CU-RTCWEB,
because it looks like the latter complicates common use-cases. If you
disagree, could you please reply with a complete test case demonstrating
what a common use-case would look like under CU-RTCWEB?
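
     For comparison, here is roughly what the common use-case looks
like today with the offer/answer API. This is only a sketch: the
answer side and vendor prefixes are omitted, the STUN server URL is a
placeholder, and signalingChannel stands in for whatever out-of-band
signaling the application uses.

function logError(error) { console.log(error); }

navigator.getUserMedia({ audio: true }, function(stream) {
    var pc = new RTCPeerConnection({
        iceServers: [{ url: 'stun:stun.example.org' }] // placeholder server
    });
    pc.addStream(stream);

    // Trickle each local candidate to the peer as ICE discovers it.
    pc.onicecandidate = function(event) {
        if (event.candidate)
            signalingChannel.send(JSON.stringify({ candidate: event.candidate }));
    };

    // Create the offer, apply it locally, then ship the opaque blob to the peer.
    pc.createOffer(function(offer) {
        pc.setLocalDescription(offer, function() {
            signalingChannel.send(JSON.stringify({ sdp: offer }));
        }, logError);
    }, logError);
}, logError);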

Thanks,
Gili

On 20/06/2013 1:11 PM, Matthew Kaufman (SKYPE) wrote:
> A few days ago I already showed use cases for why you want more than
> "the browser does ICE for you" (which is part of the motivation for
> the separation in the CU-RTCWEB proposal)... I'll repeat one of them
> here:
>
> I want to be able to open a connection between two browsers that uses 
> my private fiber network (via relays), but I also want a connectivity 
> test done to see if they have direct Internet connectivity should I 
> need to fall back to that.
>
> Having the browser choose the candidates all on its own makes it 
> difficult to intervene in the decision, and thus the (better) relayed 
> path would not be selected.
>
> Matthew Kaufman
>
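
     Matthew, understood. To make sure we are talking about the same
thing, here is roughly how I picture an application steering that
decision with an intermediate-level API. This is purely hypothetical:
IceTransport, gatherCandidates, addRemoteCandidate and the priority
field are invented names rather than anything from an existing draft,
and signalingChannel is again whatever signaling the application uses.

var transport = new IceTransport({ stun: 'stun:stun.example.org' }); // hypothetical class

transport.gatherCandidates(function(candidates) {
    candidates.forEach(function(c) {
        // Prefer relay candidates that traverse our private fiber network
        // (hypothetical check against the relay's address range)...
        if (c.type === 'relay' && c.relayAddress.indexOf('10.') === 0)
            c.priority = 0x7fffffff; // hypothetical knob to rank these pairs first
    });
    // ...but still advertise every candidate so the far end can run a
    // connectivity check and fall back to a direct Internet path if needed.
    signalingChannel.send(JSON.stringify({ candidates: candidates }));
});

signalingChannel.onmessage = function(message) {
    JSON.parse(message.data).candidates.forEach(function(c) {
        transport.addRemoteCandidate(c); // hypothetical method
    });
};

     The point being that the application, not the browser, ranks the
candidate pairs, while ICE still verifies every path.
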
> ------------------------------------------------------------------------
> *From:* cowwoc [cowwoc@bbs.darktech.org]
> *Sent:* Thursday, June 20, 2013 9:17 AM
> *To:* piranna@gmail.com
> *Cc:* Roman Shpount; public-webrtc@w3.org
> *Subject:* Re: Alternative to the offer/answer mechanism
>
>
>     ... a good point, especially now that I revisit their document
> with this in mind. Yes, that's the general idea, though they went much
> lower level than necessary in some places (replacing ICE connectivity
> with an API for opening ports... why bother?). The goal is to be able
> to specify all configuration using JavaScript, even though in most
> cases you'll never have to deal with anything more than:
>
> navigator.getUserMedia({ audio: true }, function(media) {
>      var track = media.audioTracks[0];
>      var description = new RealtimeMediaDescription(track);
>      var localRtStream = new LocalRealtimeMediaStream(track, description, realtimeTransport);
>      signalingChannel.send(description.toDictionary());
> });
>
>     (taken from their document)
>
> Gili
>
> On 20/06/2013 12:06 PM, piranna@gmail.com wrote:
>>
>> That reminds me of the Microsoft & Skype CU-WebRTC specification...
>>
>> On 20/06/2013 17:37, "Roman Shpount" <roman@telurix.com> wrote:
>>
>>     On Thu, Jun 20, 2013 at 11:04 AM, cowwoc
>>     <cowwoc@bbs.darktech.org> wrote:
>>
>>
>>             On that topic, isn't it reasonable to assume we could
>>         expose an alternative to offer/answer without going down to
>>         the low level found in the C++ classes you mentioned? Surely
>>         we should be able to come up with an intermediate-level
>>         interface that stands between offer/answer and low-level
>>         signaling?
>>
>>
>>     C++ does not necessarily mean low level. I was just pointing out
>>     that internally WebRTC is not based on offer/answer.
>>
>>     The API that I think would make sense is something that gives you
>>     the supported media types (audio and video) and the list of
>>     supported codecs, and lets you configure the receive payload types
>>     (i.e. which codec each payload type you expect to receive maps to
>>     and which parameters are associated with it) and the send payload
>>     type (which payload id you will use to send, which codec, and
>>     which encoding parameters you will use). Transports are separate.
>>     Media streams (audio and video sources) are separate. Essentially,
>>     instead of negotiating everything at once (i.e. send codecs,
>>     receive codecs, transports) using a single opaque blob, you
>>     control each logically independent component through a separate
>>     API call. All the same concepts, just not munged together.
>>     _____________
>>     Roman Shpount
>>
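
     Roman, that matches what I had in mind. For concreteness, I
picture it looking roughly like the sketch below; every class and
method name here is invented purely to illustrate the shape of "one
API call per logically independent component", not taken from any
proposal.

var transport = new RealtimeTransport({ stun: 'stun:stun.example.org' }); // hypothetical

// 1. Discover what the browser supports.
var audioCodecs = RealtimeMedia.getSupportedCodecs('audio'); // hypothetical query

// 2. Declare what we are willing to receive, one payload type at a time.
transport.addReceivePayloadType(111, { codec: 'opus', clockRate: 48000, channels: 2 });
transport.addReceivePayloadType(0,   { codec: 'PCMU', clockRate: 8000 });

// 3. Declare what we will send, chosen from whatever the far end advertised.
transport.setSendPayloadType(111, { codec: 'opus', clockRate: 48000, ptime: 20 });

// 4. Media sources stay separate: attach a track to the configured transport.
navigator.getUserMedia({ audio: true }, function(media) {
    transport.sendTrack(media.audioTracks[0]); // hypothetical method
});

     Each of those is an independent call, so the application decides
when any one piece changes instead of re-negotiating a single opaque
blob.
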
>

Received on Thursday, 20 June 2013 17:38:32 UTC