
Re: Rationalizing new/start/end/mute/unmute/enabled/disabled

From: Stefan Håkansson LK <stefan.lk.hakansson@ericsson.com>
Date: Mon, 8 Apr 2013 14:43:54 +0200
Message-ID: <5162BB8A.1020701@ericsson.com>
To: robert@ocallahan.org
CC: "public-media-capture@w3.org" <public-media-capture@w3.org>
On 4/8/13 12:22 PM, Robert O'Callahan wrote:
> On Mon, Apr 8, 2013 at 9:47 PM, Stefan Håkansson LK
> <stefan.lk.hakansson@ericsson.com
> <mailto:stefan.lk.hakansson@ericsson.com>> wrote:
>     2. We should define how a saved stream (and perhaps other media
>     files) can be converted to a MediaStream. Using the media element is
>     one option, but would not meet the requirement of allowing the user
>     to fool the application - something we have discussed we should support.
> Fooling the application is only relevant for streams generated by
> getUserMedia. If an application wants to stream its own saved resource
> as a MediaStream, we don't need to let the user interfere with that.

I agree with this. I just wanted to point out that there is an additional 
use for file -> MediaStream conversion.

>     A question on video_element.captureStreamUntilEnded(): does it
>     capture only what is rendered, or also tracks that are not played?
> All tracks that we can decode. So e.g. if you play a resource with a
> video track in an <audio> element and capture that to a MediaStream, the
> MediaStream contains the video track.

What if there are two video tracks? Naturally only one of them is 
selected and played, but in principle both could be decoded. (My point 
is that we need to spec this out.)
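(For concreteness, the pattern we are discussing would look something 
like the sketch below; `captureStreamUntilEnded` is the proposed name, 
nothing here is settled, and the resource name is made up:)

```javascript
// Sketch only: captureStreamUntilEnded() is a proposal, not a shipped API.
// Play a saved resource and capture its decoded tracks as a MediaStream.
var video = document.createElement("video");
video.src = "saved-clip.webm"; // hypothetical resource with multiple tracks
video.play();

// Proposed behavior: the returned MediaStream contains every track that
// can be decoded, not just what is rendered. The open question is what
// happens with a second video track:
var stream = video.captureStreamUntilEnded();
console.log(stream.getVideoTracks().length); // 1 or 2? needs to be specced
```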

>     And for the case of multiple audio tracks: those are mixed by the
>     media element when played.
>     Will those individual tracks be present in the captured MediaStream,
>     or will there be just one audio track (representing the mixed audio)?
> We don't currently support decoding more than one audio track, but when
> we do I think we should represent those as separate tracks in the
> MediaStream. The enabled state of those tracks will need to follow the
> track selections of the media element --- we haven't thought about how
> to do that yet.

OK. Sounds reasonable.

>     How well have you specified it, is there text available that could
>     be used?
> There was some text in the MediaStream Processing draft. It's not very
> good though.


>     In principle I agree, being able to switch the source of a
>     MediaStream(Track) would be natural to have (and needed for
>     certain legacy interop cases).
> We may not need to "switch the source of a MediaStreamTrack". There are
> a few ways to expose an API that effectively switches audio sources. One approach
> would be to create a MediaStreamTrack from the output of a Web Audio
> AudioNode. Then Web Audio can be used to switch from one audio source to
> another. Web Audio already specs this:
> https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#MediaStreamAudioDestinationNode
> although no-one's implemented it yet AFAIK. It would be easy for us to
> implement.

That's right, I did not think about that possibility. What about video?
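(For the audio case, the approach you describe would amount to something 
like the following sketch, assuming an implementation of 
MediaStreamAudioDestinationNode and two already-obtained MediaStreams 
`streamA` and `streamB`:)

```javascript
// Sketch of switching audio sources via Web Audio, per the spec linked above.
var ctx = new AudioContext();
var destination = ctx.createMediaStreamDestination();

// Two sources; only one is connected to the destination at a time.
var sourceA = ctx.createMediaStreamSource(streamA); // e.g. from getUserMedia
var sourceB = ctx.createMediaStreamSource(streamB); // e.g. from a captured file

sourceA.connect(destination);

function switchToB() {
  sourceA.disconnect();
  sourceB.connect(destination);
}

// destination.stream is a MediaStream whose audio track follows whichever
// source is currently connected; it could be sent over a PeerConnection.
```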

>     (Here you could also have a MediaStream with two video tracks sent
>     to the other end, and switch at the target. Maybe not the most
>     natural way, but doable.)
> Trying to do all these effects at the target sounds clumsy, fragile and
> constraining.

I'd agree that it is not the most intuitive way to do it.
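(For completeness, switching at the target would be something like the 
sketch below, toggling `enabled` on the received tracks; workable, but 
as you say, not the natural way. `remoteStream` is assumed to be the 
received MediaStream carrying two video tracks:)

```javascript
// Sketch: the remote stream carries two video tracks; the receiver
// "switches" by enabling one track and disabling the other.
var tracks = remoteStream.getVideoTracks(); // assumes two tracks arrived

tracks[0].enabled = true;
tracks[1].enabled = false;

function switchVideo() {
  tracks[0].enabled = !tracks[0].enabled;
  tracks[1].enabled = !tracks[1].enabled;
}
```

Note that with this approach both video tracks are transmitted the whole 
time, which is part of what makes it clumsy.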

> Rob
> --
> “If you love those who love you, what credit is that to you? Even
> sinners love those who love them. And if you do good to those who are
> good to you, what credit is that to you? Even sinners do that.”
Received on Monday, 8 April 2013 12:44:21 UTC
