Re: Reviewing the Web Audio API (from webrtc)

On Wed, Apr 11, 2012 at 2:12 AM, Stefan Hakansson LK <
stefan.lk.hakansson@ericsson.com> wrote:

> On 04/10/2012 09:18 PM, Chris Rogers wrote:
>
>> Hi Stefan,
>>
>> On Fri, Mar 30, 2012 at 1:29 AM, Stefan Hakansson LK
>> <stefan.lk.hakansson@ericsson.com> wrote:
>>
>>    Hi Chris,
>>
>>    now I have had the time to read your mail as well!
>>
>>    I have a couple of follow up questions:
>>
>>    * for the time being, while no createMediaStreamSource or Destination
>>    methods are available, can the outline I described be used (i.e. would
>>    it work)? (1. Use MediaElementAudioSourceNode to get access to audio;
>>    2. process using Web Audio API methods; 3. Play using
>>    AudioDestinationNode)
>>
>>
>> I'm not sure I understand the question.  The MediaElementAudioSourceNode
>> is used to gain access to an <audio> or <video> element for streaming
>> file content and not to gain access to a MediaStream.  For WebRTC
>> purposes I believe something like createMediaStreamSource()
>> and createMediaStreamDestination() (or their track-based versions) will
>> be necessary.
>>
>
> I agree that createMediaStreamSource() and createMediaStreamDestination()
> should be used, but what I am trying to understand is whether there is a
> short path to use while waiting for them to be defined and implemented.
>
> And the possible short path I saw was to:
> 1. Play (the relevant audio tracks of) a MediaStream in an <audio>
> element, with that audio element muted so nothing is actually played
> through the speaker(s)
> 2. Use MediaElementAudioSourceNode from the Web Audio API spec to get hold
> of the audio samples played in that audio element
> 3. Do some processing (filtering, mixing, panning, whatever) using
> different methods in the Web Audio API
> 4. Play the processed audio using AudioDestinationNode
>

Stefan, thanks for clarifying.  I can only speak for Chrome, but this is
not possible given our current implementation.
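
For reference, here is a rough sketch of what that outline would look like
in code if it did work (attaching the MediaStream to the <audio> element
via createObjectURL() is an assumption on my part):

// Sketch of the short path outlined above; 'stream' is a MediaStream,
// e.g. obtained from getUserMedia() or from a remote peer.
var audioElement = document.createElement('audio');
audioElement.src = URL.createObjectURL(stream);  // 1. route the stream into an <audio> element
audioElement.muted = true;                       //    keep the element itself silent
audioElement.play();

var context = new webkitAudioContext();          // AudioContext in the spec
var source = context.createMediaElementSource(audioElement);  // 2. tap the element's audio

var filter = context.createBiquadFilter();       // 3. any processing graph; a filter as a stand-in
source.connect(filter);

filter.connect(context.destination);             // 4. play the processed audio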


>
> It can be noted (and this is really more of a matter for the WebRTC WG)
> that there is currently no way for the app to find out the level of an
> audio track. So to show an audio level meter in a UI, the app would
> either have to use an API from the Audio WG, or the WebRTC WG would have
> to add this to the definition of e.g. AudioMediaStreamTrack.
>
> I think being able to show such a meter is fundamental: in the same way
> that you can verify the video you're going to send using a self view,
> there needs to be a way for the user to verify that the browser is using
> a mike that works - and a meter is much better than playing back the
> local audio for obvious reasons.


Yes, I agree.  We hope to satisfy that use case in the Web Audio API, but
we'll need to implement createMediaStreamSource() in order to do that.
In terms of the Chrome implementation, I've spoken with Jan Linden and
we've made it a high priority to do this.
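
Once createMediaStreamSource() is specified and implemented, a level meter
should need nothing more than an analyser node. A minimal sketch, assuming
the method keeps its proposed name, that 'stream' is a MediaStream already
obtained, and that drawMeter() is a hypothetical UI callback:

var context = new webkitAudioContext();   // AudioContext in the spec
var source = context.createMediaStreamSource(stream);
var analyser = context.createAnalyser();
source.connect(analyser);                 // not connected to the destination, so nothing is played out

var data = new Uint8Array(analyser.frequencyBinCount);
function updateMeter() {
    analyser.getByteTimeDomainData(data); // current time-domain samples, centred on 128
    var peak = 0;
    for (var i = 0; i < data.length; i++) {
        peak = Math.max(peak, Math.abs(data[i] - 128) / 128);
    }
    drawMeter(peak);                      // update the UI meter with the current peak level
    setTimeout(updateMeter, 100);         // refresh roughly ten times per second
}
updateMeter();

That would cover the microphone check in the same way a self view covers
the camera, without any local audio playback.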


>
>
>
>>
>>    * perhaps createMediaStreamSource / Destination should work at the
>>    track level instead (as you seem to indicate as well); a MediaStream
>>    is really just a collection of tracks, and those can be audio or
>>    video tracks. If you work at the track level you can do processing
>>    that results in an audio track and combine that with a video track
>>    into a MediaStream
>>
>>
>> Yes, I think that, based on previous discussions we've had, we'll also
>> need track-based versions of createMediaStreamSource / Destination.
>>  Although perhaps we could have both.  For a simple use,
>> if createMediaStreamSource() were used, then it would grab the first
>> audio track from the stream and use that by default.  How does that
>> sound?  Often a MediaStream would contain only a single audio track.
>>
>
> That sounds reasonable. I think in many cases there will only be a single
> audio track.
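
To illustrate, the two flavours might look something like this (the
track-based name below is just a placeholder until the spec defines it):

// 'context' is an AudioContext (webkitAudioContext in Chrome),
// 'localStream' a MediaStream, e.g. from getUserMedia().

// Stream-based: grabs the first audio track in the stream by default.
var source = context.createMediaStreamSource(localStream);

// Track-based (placeholder name), for streams carrying several audio tracks:
var track = localStream.audioTracks[0];   // however a specific MediaStreamTrack is picked
var trackSource = context.createMediaStreamTrackSource(track);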
>
>
>
>>
>>    * what kind of schedule do you foresee for the things needed to
>>    integrate with MediaStreams (tracks)?
>>
>>
>> I think we're still in the planning stages for this, so can't give you a
>> good answer.
>>
>
> Ok. Thanks.
>
>
>> Regards,
>> Chris
>>
>>
>

Received on Thursday, 12 April 2012 19:40:18 UTC