Hi Stefan,
On Fri, Mar 30, 2012 at 1:29 AM, Stefan Hakansson LK <stefan.lk.hakansson@ericsson.com> wrote:
> Hi Chris,
>
> Now I have had the time to read your mail as well!
>
> I have a couple of follow up questions:
>
> * For the time being, while no createMediaStreamSource or Destination
> methods are available, would the outline I described work? (1. Use a
> MediaElementAudioSourceNode to get access to the audio; 2. process it
> using Web Audio API methods; 3. play it using an AudioDestinationNode.)
>
I'm not sure I understand the question. A MediaElementAudioSourceNode is
used to gain access to an <audio> or <video> element for streaming file
content, not to gain access to a MediaStream. For WebRTC purposes I
believe something like createMediaStreamSource()
and createMediaStreamDestination() (or their track-based versions) will be
necessary.
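To make the distinction concrete, here is a minimal sketch. The element
path exists today; the stream path is only the proposal under discussion,
so the createMediaStreamSource() name is an assumption:

  // Existing: gain access to an <audio>/<video> element's output.
  var ctx = new AudioContext();   // vendor-prefixed in current builds
  var el = document.querySelector('audio');
  ctx.createMediaElementSource(el).connect(ctx.destination);

  // Proposed (name is an assumption here): gain access to a live
  // MediaStream from getUserMedia instead.
  navigator.getUserMedia({ audio: true }, function (stream) {
    var source = ctx.createMediaStreamSource(stream);
    source.connect(ctx.destination);  // or into processing nodes first
  }, function (err) { console.error(err); });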
>
> * perhaps createMediaStreamSource / Destination should work on track level
> instead (as you seem to indicate as well); a MediaStream is really just a
> collection of tracks, and those can be audio or video tracks. If you work
> on track level you can do processing that results in an audio track and
> combine that with a video track into a MediaStream
>
Yes, based on previous discussions we've had, I think we'll need
track-based versions of createMediaStreamSource / Destination, although
perhaps we could have both. For simple use, createMediaStreamSource()
would grab the first audio track from the stream and use that by default,
since a MediaStream will often contain only a single audio track. How
does that sound?
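As a rough sketch of how that default could combine with the track-level
flow you describe (process the audio, then recombine it with the video),
assuming a createMediaStreamDestination() that exposes its output as a
stream, and assuming tracks can be assembled into a new MediaStream;
none of this is specified yet:

  var ctx = new AudioContext();

  navigator.getUserMedia({ audio: true, video: true }, function (stream) {
    // Stream-based variant: defaults to the first audio track.
    var source = ctx.createMediaStreamSource(stream);

    // ...whatever processing the app needs...
    var filter = ctx.createBiquadFilter();
    source.connect(filter);

    // Assumed: the destination node exposes its output as a stream.
    var dest = ctx.createMediaStreamDestination();
    filter.connect(dest);

    // Recombine the processed audio track with the untouched video
    // track (constructing a MediaStream from tracks is also assumed).
    var processed = new MediaStream(
      dest.stream.getAudioTracks().concat(stream.getVideoTracks()));
    // `processed` could then be sent over a PeerConnection.
  }, function (err) { console.error(err); });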
>
> * What kind of schedule do you foresee for the things needed to
> integrate with MediaStreams (tracks)?
>
I think we're still in the planning stages for this, so I can't give you
a good answer.
Regards,
Chris