From: Stefan Hakansson LK <stefan.lk.hakansson@ericsson.com>
Date: Wed, 8 Aug 2012 14:43:48 +0200
To: public-webrtc@w3.org
On 08/08/2012 12:52 PM, Harald Alvestrand wrote:
> [Continuing discussion on list]
>
> I updated the bug in order to solicit views from the WG - would you
> prefer A, B or C?
>
> On 08/08/2012 12:23 PM, bugzilla@jessica.w3.org wrote:
>> https://www.w3.org/Bugs/Public/show_bug.cgi?id=18485
>>
>> Stefan Hakansson LK <stefan.lk.hakansson@ericsson.com> changed:
>>
>>            What    |Removed                     |Added
>> ----------------------------------------------------------------------------
>>                  CC|                            |stefan.lk.hakansson@ericsso
>>                    |                            |n.com
>>
>> --- Comment #2 from Stefan Hakansson LK <stefan.lk.hakansson@ericsson.com> 2012-08-08 10:23:07 UTC ---
>> (In reply to comment #1)
>>
>> As for alternatives B and C, I think that the tones should not be inserted
>> into the same MediaStream as the outgoing DTMF. The tones are to be used for
>> local feedback, and you would not want to play the rest of the outgoing audio
>> locally.
>>
>> I think we should go for alternative A initially.
>
> My thought was that if an app wants ringback of the tones, he does:
>
> incomingStream = pc.remoteStreams[0].audioTracks[0]
> outgoingStream = pc.remoteStreams[0].audioTracks[0]

I guess that should read:

outgoingStream = pc.localStreams[0].audioTracks[0]

> pc.sendDTMF(outgoingStream, "12345")
> pc.sendDTMF(incomingStream, "12345")
>
> the two should then play out at ~ the same time.
>
> I don't want to force there to be always ringback present - that's app
> dependent.

I agree, and this makes sense (a small sketch with that correction applied is
appended at the end of this mail).

Personally I don't have a strong opinion; we could go for A (a web author
wanting local feedback could easily accomplish that with an audio element and
some files with tones), B or C. We could even consider an alternative D (a
usage sketch is also appended at the end):

pc.canSendDTMF(MediaStreamTrack)
pc.sendDTMF(MediaStreamTrack, tones, duration, optional MediaStreamTrack)

where the tones are inserted into the audio of the second (optional)
MediaStreamTrack supplied.

> In case C, even this is possible:
>
> speakers = <whatever constructor we have for a synthetic media source>
> <add speakers to an audio tag>
> pc.sendDTMF(speakers, "12345")
>
> It means that behind the scenes, the "PC" implementation must be able to
> touch any audio stream in the system, but then again - streams aren't
> real physical items, they're control-surface abstractions anyway.

Agree.

>>> Based on discussion on the list, there seem to be 3 alternative descriptions
>>> of what sendDTMF actually does.
>>>
>>> I outline them below as text that can be inserted into the description.
>>>
>>> A)
>>> 1) If the track argument to sendDTMF is not an audio track connected to this
>>> PeerConnection on an outgoing SSRC where use of RFC 4733 DTMF has been
>>> negotiated, throw an <IllegalParameter> exception.
>>> 2) Send the tones using RFC 4733 signalling.
>>>
>>> B)
>>> 1) If the track argument to sendDTMF is not an audio track connected to this
>>> PeerConnection, throw an <IllegalParameter> exception.
>>> 2) If sendDTMF is connected to an outgoing SSRC where use of RFC 4733 is
>>> negotiated, send the tones using RFC 4733 and return.
>>> 3) Insert the corresponding tones into the media stream as if they were
>>> coming from the media source.
>>>
>>> C)
>>> 1) If sendDTMF is connected to an outgoing SSRC on this PeerConnection where
>>> use of RFC 4733 is negotiated, send the tones using RFC 4733 and return.
>>> 2) Insert the corresponding tones into the media stream as if they were
>>> coming from the media source.
>>>
>>> The difference between B) and C) is that B) insists on using the right PC to
>>> generate tones.
>
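To make Harald's ringback pattern concrete, here is a minimal sketch with the
localStreams correction applied. It assumes the API shape discussed in this
thread (pc.localStreams/pc.remoteStreams and a track-based pc.sendDTMF); none
of that is settled, so treat it as illustration only, not as what the spec
will say:

  // Hypothetical sketch based on the API shape discussed in this thread.
  // pc is an established PeerConnection with audio flowing in both directions.
  var outgoingTrack = pc.localStreams[0].audioTracks[0];   // audio we send to the peer
  var incomingTrack = pc.remoteStreams[0].audioTracks[0];  // audio we play out locally

  // As I read alternative C: on the outgoing track the tones go out as
  // RFC 4733 events; the incoming track has no negotiated outgoing RFC 4733
  // SSRC, so the tones are instead inserted into local playout, giving the
  // user audible feedback at roughly the same time.
  pc.sendDTMF(outgoingTrack, "12345");
  pc.sendDTMF(incomingTrack, "12345");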
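And a corresponding sketch of how alternative D might be used. The variable
names, the canSendDTMF gating and the per-tone duration in milliseconds are
just assumptions for illustration; only the argument list itself comes from
the proposal above:

  // Hypothetical sketch of alternative D as proposed above.
  var outgoingTrack = pc.localStreams[0].audioTracks[0];   // track carrying the RFC 4733 events
  var feedbackTrack = pc.remoteStreams[0].audioTracks[0];  // track that gets the feedback tones

  if (pc.canSendDTMF(outgoingTrack)) {
    // 100 is an assumed per-tone duration in milliseconds; the last, optional
    // argument names the track into which the tones are inserted for local
    // playout, so ringback only happens when the app asks for it.
    pc.sendDTMF(outgoingTrack, "12345", 100, feedbackTrack);
  }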
Received on Wednesday, 8 August 2012 12:44:20 UTC