
MSP stalled streams, Was: Reviewing the Web Audio API (from webrtc)

From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Date: Thu, 29 Mar 2012 12:50:23 +0300
Message-ID: <CAJhzemUydkUS0wjEFkG9r7Ssq=04TpNtqJMCjxH0wWKZJ+e4Zg@mail.gmail.com>
To: robert@ocallahan.org
Cc: Chris Rogers <crogers@google.com>, "Wei, James" <james.wei@intel.com>, "public-audio@w3.org" <public-audio@w3.org>, public-webrtc@w3.org
Hey Rob!

I've been thinking about this discussion on synchronizing streams and
blocking processing along with them, and I have a suggestion for the
MediaStream Processing API. It's based on the idea that it's not
necessarily a good idea to prevent all processing when input streams are
blocked; for example, it might make for a better user experience to keep a
reverb tail going even while the input stream is stalled.

What about making the callbacks fire regardless of whether the input
streams are blocked, but having the event carry information about each
input stream's state, so that the developer can use her own judgement
about which effects, if any, to keep running:

partial interface MediaInputBuffer {
    readonly attribute MediaInputState state;
};

enum MediaInputState {
    "active", "paused", "drained", "blocked"
};

then you could do something like this:

self.onprocessmedia = function (e) {
    if (e.inputs[0].state !== "active") return;
    // Do something when the stream is active
};

This approach would also be useful for cases where the stream is drained.
As a simplified example, take a media file containing a gunshot, with no
silence padding at the end: the reverb tail would be cut off if the
callbacks stopped firing as soon as the stream drained.
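To make the drained case concrete, here's a minimal sketch (not part of the proposal) of how a handler could keep a simple feedback-delay "reverb" ringing after its input drains. The state names follow the enum above; `makeReverb`, the block size, and the flush-with-null convention are all hypothetical illustration, not anything the API defines:

```javascript
const BLOCK_SIZE = 128;   // assumed per-callback block size
const TAIL_FLOOR = 1e-4;  // stop processing once the tail is inaudible

// A toy feedback-delay "reverb": each call processes one block.
// Passing null for the input (e.g. when state is "drained") feeds
// silence in, but the delay line keeps ringing.
function makeReverb(delaySamples, feedback) {
  const delayLine = new Float32Array(delaySamples);
  let pos = 0;
  return function processBlock(input) {
    const out = new Float32Array(BLOCK_SIZE);
    for (let i = 0; i < BLOCK_SIZE; i++) {
      const dry = input ? input[i] : 0; // drained input contributes silence
      const wet = delayLine[pos];
      out[i] = dry + wet;
      delayLine[pos] = dry + wet * feedback; // recirculate the tail
      pos = (pos + 1) % delaySamples;
    }
    return out;
  };
}

// In an onprocessmedia handler one might then do something like:
//   if (e.inputs[0].state === "active")  out = reverb(e.inputs[0].samples);
//   else if (e.inputs[0].state === "drained") out = reverb(null); // flush tail

// Standalone demo: one block containing an impulse, then a drained block.
const reverb = makeReverb(32, 0.5);
const impulse = new Float32Array(BLOCK_SIZE);
impulse[0] = 1;
reverb(impulse);
const tail = reverb(null); // input drained, but the tail still rings
const tailPeak = Math.max(...tail.map(Math.abs));
const tailAlive = tailPeak > TAIL_FLOOR; // keep the callback running while true
```

With the current "stop on drain" behaviour the second call would never happen, which is exactly the gunshot-without-padding problem above.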

What do you think?

Cheers,
Jussi

On Thu, Mar 29, 2012 at 11:58 AM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Thu, Mar 29, 2012 at 6:32 PM, Chris Rogers <crogers@google.com> wrote:
>
>> None of the built-in Web Audio processing algorithms have any appreciable
>> latency which would perceptibly affect audio/video sync.
>
>
> OK, but there are processing algorithms that necessarily have significant
> latency, like this one:
> http://people.mozilla.org/~roc/stream-demos/video-with-extra-track-and-effect.html
>
>> We're talking about 3ms or less here. In terms of irritation, network
>> latency is of vastly more concern for WebRTC applications.
>
>
> That depends on the application. WebRTC APIs can be used for more than
> just interactive chat. For example, an application could pull an audio and
> video stream from some source, take a user's commentary in a stream from
> the microphone, mix them with a ducking effect, and stream the resulting
> audio and video out to a set of peers. The latency might be too high for
> interaction, but just fine for a "live broadcast".
>
>
> Rob
> --
> “You have heard that it was said, ‘Love your neighbor and hate your
> enemy.’ But I tell you, love your enemies and pray for those who persecute
> you, that you may be children of your Father in heaven. ... If you love
> those who love you, what reward will you get? Are not even the tax
> collectors doing that? And if you greet only your own people, what are you
> doing more than others?” [Matthew 5:43-47]
>
>
Received on Thursday, 29 March 2012 09:50:59 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Thursday, 29 March 2012 09:51:03 GMT