Re: Rationalizing new/start/end/mute/unmute/enabled/disabled

On Tue, Apr 9, 2013 at 12:43 AM, Stefan Håkansson LK <
stefan.lk.hakansson@ericsson.com> wrote:

>
> All tracks that we can decode. So e.g. if you play a resource with a
>> video track in an <audio> element and capture that to a MediaStream, the
>> MediaStream contains the video track.
>>
>
> What if there are two video tracks? Only one of them is selected/played
> naturally, but in principle both could be decoded. (What I am saying is
> that we need to spec this up).


Definitely. Yes, I think we should decode them both.


>
>>     In principle I agree, being able to switch the source of a
>>     MediaStream(Track) would be natural to have (and needed for
>>     certain legacy interop cases).
>>
>>
>> We may not need to "switch the source of a MediaStreamTrack". There are
>> a few ways to expose an API to effectively switch audio sources. One approach
>> would be to create a MediaStreamTrack from the output of a Web Audio
>> AudioNode. Then Web Audio can be used to switch from one audio source to
>> another. Web Audio already specs this:
>> https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#MediaStreamAudioDestinationNode
>> although no-one's implemented it yet AFAIK. It would be easy for us to
>> implement.
>>
>
> That's right, I did not think about that possibility. What about video?


There is no comparable API for video on a standards track, but there should
be.

MediaStream Processing defined a ProcessedMediaStream which would take a
set of incoming MediaStreams or MediaStreamTracks, mix the audio tracks
together with script-defined processing, composite the video tracks
together using a fixed compositing model, and give you the output as a
single audio and/or video track. It also offered fine-grained scheduling of
when inputs would be added to or removed from the compositing mix, and had
the ability to pause incoming streams and do some timestamp-based
synchronization. I think we should bring back something like that. We can
drop the scripted audio processing since Web Audio covers that now.

A simple initial stab at that API would be to define ProcessedMediaStream
as a subclass of MediaStream which takes an additional dictionary argument
for the track-array constructor:
  Constructor (MediaStreamTrackArray tracks,
               ProcessedMediaStreamConfiguration config)
where ProcessedMediaStreamConfiguration would specify which kinds of tracks
should appear in the output, e.g. { video: true, audio: true }. The audio
track (if any) would be defined to be the mix of all the input audio tracks
(zero or more). The video track (if any) would be defined to be the
composition of zero or more input video tracks (defined to stretch all
video frames to the size of the largest video frame, or something like
that), in a defined order (e.g. the first track added to the stream is at
the bottom). Since most video tracks don't have an alpha channel, that
means the last video track added wins. (But we should add the ability to
make a VideoStreamTrack from an HTML canvas so we can have real-time
compositing of overlays onto video.)

Rob
-- 
“If you love those who love you, what credit is that to you? Even sinners
love those who love them. And if you do good to those who are good to you,
what credit is that to you? Even sinners do that.”

Received on Monday, 8 April 2013 22:16:40 UTC