- From: Stefan Håkansson LK <stefan.lk.hakansson@ericsson.com>
- Date: Tue, 29 Nov 2011 10:48:37 +0100
- To: public-webrtc@w3.org
On 11/29/2011 10:41 AM, Neil Stratford wrote:
> On 29/11/2011 08:48, Stefan Håkansson LK wrote:
>> In the mail referenced, sendDTMF is a method on MediaStreamTrack. I
>> think the method should apply to PeerConnection, because my
>> understanding is that the idea is to generate RTP packets according to
>> RFC 4733, not to insert tones into the audio. This means that "sendDTMF"
>> really has no meaning outside a PeerConnection.
>>
>> I understand that this means there are some other conditions that have
>> to be met:
>> * There must be an audio MediaStreamTrack in at least one of the
>> localStreams (that the DTMF RTP packets can share an SSRC with).
>> * If there are several outgoing audio RTP streams (having different
>> SSRCs), it must be possible to understand (control?) which SSRC
>> will be reused by DTMF.
>>
>> My very simple proposal for this would be that the DTMF RTP packets
>> share an SSRC with the first audio track of the first MediaStream
>> that has at least one audio track. If there is no such MediaStream in
>> localStreams, then "sendDTMF" will fail.
> It is important that it is possible to send DTMF without any request for
> microphone access if the call is purely to an informational IVR where
> the caller is never expected to speak but still needs to navigate that
> IVR. Similarly, there are cases where DTMF may be required but the call
> is video only, with no audio component.
>
> How should we handle these cases? Can we create a null audio track using
> the current API? How would you do it for a legacy client?

My understanding was that RFC 4733 requires an existing (audio) RTP stream (identified by SSRC) to insert its DTMF payloads into.

> Neil
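To make the discussion above concrete: RFC 4733 carries each DTMF digit as a small "telephone-event" RTP payload (event code, end bit, volume, duration) inserted into an existing audio RTP stream, sharing that stream's SSRC, rather than as an audible tone. A minimal sketch of packing that 4-byte payload (the surrounding RTP header and SSRC handling are omitted; function and constant names here are illustrative, not from any proposed API):

```python
import struct

# RFC 4733 event codes: digits 0-9 map to 0-9, then * # A B C D.
DTMF_EVENTS = {**{str(d): d for d in range(10)},
               "*": 10, "#": 11, "A": 12, "B": 13, "C": 14, "D": 15}

def encode_telephone_event(digit, duration, end=False, volume=10):
    """Pack one RFC 4733 telephone-event payload (4 bytes).

    duration is in RTP timestamp units (e.g. an 8000 Hz clock),
    volume is the attenuation in dBm0 (0-63), and end marks the
    final packet of the event.
    """
    event = DTMF_EVENTS[digit.upper()]
    # Second byte: E bit (0x80), R bit reserved as 0, 6-bit volume.
    byte2 = (0x80 if end else 0) | (volume & 0x3F)
    return struct.pack("!BBH", event, byte2, duration)

# Final packet for digit "5", 100 ms at an 8 kHz clock:
payload = encode_telephone_event("5", 800, end=True)
```

Because this payload rides inside an RTP stream identified by an SSRC, the question in the thread stands: without any audio track (and hence no audio RTP stream), there is nothing for these packets to share an SSRC with.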
Received on Tuesday, 29 November 2011 09:49:16 UTC