

From: ᛏᚮᛘᛘᚤ <tommyw@google.com>
Date: Wed, 18 Jul 2012 15:19:17 +0200
Message-ID: <CALLKCfNhk1LPnMsE_52mHUAFj3PeRtaay+1fyVEwXLpia3zyGQ@mail.gmail.com>
To: public-webrtc@w3.org
I found two things that I would like clarification on while trying to
implement this.

Firstly, can someone clarify how the DTMF information should flow from one
PeerConnection to another? The current editors' draft implies that the data
should go as a string, since the other side will somehow be notified that
DTMF data has arrived, but this isn't written out explicitly.

Editor Note: It seems we would want a callback or event for incoming tones.
The proposal sent to the list had them played as audio to the speaker, but I
don't see how that is useful.
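To illustrate the editor note, here is a minimal sketch of what an event-based notification for incoming tones could look like. The names (`DtmfReceiver`, `ontone`, the event's `tone` and `duration` fields) are assumptions invented for this example, not anything from the draft; the class just simulates the receiving side of a PeerConnection.

```javascript
// Hypothetical sketch: incoming DTMF surfaced as an event/callback
// rather than mixed into the audio. All names here are assumptions.

// Minimal stand-in for the receiving side of a PeerConnection.
class DtmfReceiver {
  constructor() {
    this.ontone = null; // application-supplied callback
  }
  // Would be called by the implementation when a remote tone arrives.
  _deliverTone(tone, duration) {
    if (this.ontone) {
      this.ontone({ tone: tone, duration: duration });
    }
  }
}

const receiver = new DtmfReceiver();
const received = [];
receiver.ontone = function (evt) {
  received.push(evt.tone);
};

// Simulate the remote side sending "1" then "#".
receiver._deliverTone("1", 100);
receiver._deliverTone("#", 100);

console.log(received.join("")); // "1#"
```

With something like this, the string-vs-audio question below becomes concrete: a string-carrying channel with an arrival event is essentially what a data channel already provides.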

If the data should flow as a string, and not be mixed into the outgoing audio
track, then I argue that this API should be scrapped and the data channel
should be used for this kind of functionality. If the DTMF should be mixed
into the outgoing audio, then no notification should pop up on the other
side.
Secondly, where and how should the type AudioMediaStreamTrack be used? For
this to work, MediaStream's audioTracks attribute would need to be of a
corresponding type, AudioMediaStreamTrackList, but that implies that
LocalMediaStreams acquired from getUserMedia will need to contain
AudioMediaStreamTracks as well.

One simple solution to both these questions is to scrap the DTMF API and use
the data channel instead.
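A hedged sketch of what that alternative could look like: carry the DTMF digits over a data channel in a small app-defined message format. The format, the helper names, and the "dtmf" channel label are all assumptions for illustration, not anything specified; the valid-character set loosely follows the usual DTMF alphabet (0-9, A-D, *, #, and "," for a pause).

```javascript
// Assumed app-level message format for DTMF over a data channel.
const DTMF_CHARS = /^[0-9A-D#*,]+$/;

function encodeDtmf(tones) {
  if (!DTMF_CHARS.test(tones)) {
    throw new Error("invalid DTMF string: " + tones);
  }
  return JSON.stringify({ type: "dtmf", tones: tones });
}

function decodeDtmf(message) {
  const parsed = JSON.parse(message);
  if (parsed.type !== "dtmf") return null;
  return parsed.tones;
}

// In a real app these helpers would wrap a data channel, e.g.:
//   const channel = pc.createDataChannel("dtmf");
//   channel.send(encodeDtmf("1234#"));
//   channel.onmessage = (e) => handleTones(decodeDtmf(e.data));

console.log(decodeDtmf(encodeDtmf("1234#"))); // "1234#"
```

This gives the receiving side an explicit arrival notification for free (the channel's message event), which is exactly what the DTMF-as-string reading of the draft seems to require.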


Tommy Widenflycht, Senior Software Engineer
Google Sweden AB, Kungsbron 2, SE-11122 Stockholm, Sweden
Org. nr. 556656-6880
And yes, I have to include the above in every outgoing email according to
EU law.
Received on Wednesday, 18 July 2012 13:19:49 UTC
