W3C home > Mailing lists > Public > public-webrtc@w3.org > July 2012


From: Justin Uberti <juberti@google.com>
Date: Wed, 18 Jul 2012 11:10:04 -0700
Message-ID: <CAOJ7v-2kcfHhm4MefwTmMh-ThopF3JpKm=wzeZBSsWW97f_PPg@mail.gmail.com>
To: Tommy Widenflycht (ᛏᚮᛘᛘᚤ) <tommyw@google.com>
Cc: public-webrtc@w3.org

On Wed, Jul 18, 2012 at 6:19 AM, Tommy Widenflycht (ᛏᚮᛘᛘᚤ) <
tommyw@google.com> wrote:

> I found two things that I would like clarification on while trying to
> implement this.
> Firstly, can someone clarify how the DTMF information should flow from one
> PeerConnection to another? The current editor's draft implies that the data
> should go as a string, since the other side will somehow be notified that
> DTMF data has arrived, but this isn't written out explicitly.
> Editor Note: It seems we would want a callback or event for incoming
> tones. The proposal sent to the list had them played as audio to the
> speaker, but I don't see how that is useful.
> If the data should flow as a string, and not be mixed into the outgoing
> audio track, then I argue that this API should be scrapped and the data
> channel should be used for this kind of functionality. If the DTMF should
> be mixed into the outgoing audio, then no notification should pop up on
> the other side.

The data is mixed into the outgoing audio track as DTMF events. Handling of
incoming tones isn't important right now; apps that need to receive data
can use better mechanisms, as you point out.
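For context, "DTMF events" here means in-band RTP telephone-event packets (RFC 4733) mixed into the outgoing audio, driven by a tone string from the app. A minimal sketch of what sending such a string might look like, assuming the character set from the proposal (0-9, A-D, #, *, and ',' as a pause); `isValidDtmfToneString` and `sendTones` are illustrative helper names, not spec text, and the `insertDTMF`-style call on the sender reflects the shape the API later took rather than anything settled in this thread:

```javascript
// Valid DTMF characters per the proposal under discussion:
// digits 0-9, letters A-D (case-insensitive), '#', '*',
// and ',' as an inter-tone pause. Illustrative helper only.
function isValidDtmfToneString(tones) {
  return /^[0-9A-Da-d#*,]*$/.test(tones);
}

// Queue tones on a sender object if the string is well-formed.
// `sender.insertDTMF` is an assumption here; at the time of this
// thread the exact method name and home object were still in flux.
function sendTones(sender, tones) {
  if (!isValidDtmfToneString(tones)) {
    throw new Error("Invalid DTMF tone string: " + tones);
  }
  sender.insertDTMF(tones);
}
```

Note that nothing in this sketch touches the receiving side, which matches the position above: the tones simply become part of the outgoing audio.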

> Secondly, where and how should the type AudioMediaStreamTrack be used? For
> this to work, MediaStream's audioTracks would need to be of a corresponding
> type, AudioMediaStreamTrackList, but that implies that LocalMediaStreams
> acquired from getUserMedia will need to have AudioMediaStreamTracks as
> well.

I would expect that tracks of type audio would always be
AudioMediaStreamTracks, regardless of where they came from.
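The expectation above, that any track of kind "audio" is an AudioMediaStreamTrack regardless of whether it came from getUserMedia or a remote peer, amounts to a simple subtype relationship. A sketch in plain JavaScript, with the class names taken from the draft but the members otherwise assumed (the DTMF-related surface was exactly what this thread was still debating):

```javascript
// Illustrative only: mirrors the draft's naming, not an implementation.
class MediaStreamTrack {
  constructor(kind) { this.kind = kind; }
}

// Audio tracks carry the audio-specific capabilities. The insertDTMF
// member here is an assumption for illustration, not settled spec text.
class AudioMediaStreamTrack extends MediaStreamTrack {
  constructor() { super("audio"); }
  insertDTMF(tones) { /* mix RFC 4733 events into outgoing audio */ }
}

// Whether local or remote, kind === "audio" implies the richer type,
// so code can rely on the subtype without caring about the source.
const track = new AudioMediaStreamTrack();
console.log(track instanceof MediaStreamTrack); // true
console.log(track.kind); // "audio"
```

On this model, an AudioMediaStreamTrackList is just a list whose elements are statically known to be AudioMediaStreamTracks, which is why getUserMedia's LocalMediaStream would need to produce them too.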
Received on Wednesday, 18 July 2012 18:10:51 UTC
