Re: Web Audio API spec review

I think Olli is talking about defining Web Audio on top of MediaStreams.
E.g. the spec could say that an AudioNode behaves just like a MediaStream,
but with certain extra features.

If we don't do that, then specifying all the interactions between
AudioNodes and MediaStreams (and any other media-stream-like abstractions
created later) could get a lot more difficult. For example, we still have
unresolved questions about maintaining synchronization in combinations of
AudioNodes and MediaStreams. Another case is cycles in the graph. Cycles of
AudioNodes are allowed; what about cycles of MediaStreams? What if someone
creates a cycle involving both AudioNodes and MediaStreams?
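To make the mixed-cycle question concrete, here is a minimal sketch (plain JavaScript; the "audio:"/"stream:" node names are hypothetical stand-ins for AudioNodes and MediaStreams, not real API objects) of detecting a cycle in a heterogeneous processing graph:

```javascript
// Minimal sketch: a processing graph whose nodes may be of different
// kinds ("audio:" or "stream:" prefixes here, standing in for AudioNodes
// and MediaStreams). A depth-first search reports any cycle, including
// cycles that mix both kinds of node.
function findCycle(edges) {
  const graph = new Map();
  for (const [from, to] of edges) {
    if (!graph.has(from)) graph.set(from, []);
    graph.get(from).push(to);
  }
  const WHITE = 0, GRAY = 1, BLACK = 2;
  const color = new Map();
  const stack = [];

  function visit(node) {
    color.set(node, GRAY);
    stack.push(node);
    for (const next of graph.get(node) || []) {
      const c = color.get(next) || WHITE;
      if (c === GRAY) {
        // Back edge found: return the cycle portion of the DFS stack.
        return stack.slice(stack.indexOf(next)).concat([next]);
      }
      if (c === WHITE) {
        const cycle = visit(next);
        if (cycle) return cycle;
      }
    }
    stack.pop();
    color.set(node, BLACK);
    return null;
  }

  for (const node of graph.keys()) {
    if ((color.get(node) || WHITE) === WHITE) {
      const cycle = visit(node);
      if (cycle) return cycle;
    }
  }
  return null;
}

// A mixed cycle: audio source -> stream processor -> audio gain -> back.
const edges = [
  ["audio:source", "stream:proc"],
  ["stream:proc", "audio:gain"],
  ["audio:gain", "stream:proc"],
];
console.log(findCycle(edges));
// ["stream:proc", "audio:gain", "stream:proc"]
```

Whether a spec built on shared definitions would reject such a cycle, allow it, or require a delay somewhere in it is exactly the kind of question that gets harder to answer when the two frameworks are defined independently.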

Another danger is that, in the future, features will be added to AudioNodes
and MediaStreams independently that cause problems when the frameworks are
mingled. This danger is mitigated if they share underlying definitions.

BTW, since I realized last week that Web Audio allows cycles in the graph,
I've been studying the interaction of cycles (with delay) with the features
of MediaStreams Processing (pausing of individual streams, buffering to
maintain synchronization in the presence of high-delay processing nodes,
synchronized cueing of arbitrary streams) using formal methods. Some
combinations of these features have unresolvable problems. I'll have more
to say when I've done some more analysis.
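For the curious: the reason cycles with delay are computable at all is that the delay makes each output sample depend only on samples produced earlier. A minimal sketch (plain JavaScript, not the Web Audio API itself) of such a feedback loop, as a simple feedback comb filter:

```javascript
// Minimal sketch (plain JavaScript, not the Web Audio API): a feedback
// comb filter, i.e. the cycle  input -> (+) -> output -> delay -> gain -> (+).
// The delay of `delaySamples` is what makes the cycle computable: each
// output sample depends only on output produced `delaySamples` steps earlier.
function feedbackComb(input, delaySamples, gain) {
  const out = new Array(input.length).fill(0);
  for (let n = 0; n < input.length; n++) {
    const delayed = n >= delaySamples ? out[n - delaySamples] : 0;
    out[n] = input[n] + gain * delayed;
  }
  return out;
}

// An impulse fed through the loop echoes every `delaySamples` samples,
// decaying by `gain` each trip around the cycle.
const impulse = [1, 0, 0, 0, 0, 0, 0, 0];
console.log(feedbackComb(impulse, 2, 0.5));
// [1, 0, 0.5, 0, 0.25, 0, 0.125, 0]
```

With a zero delay the loop would be an instantaneous fixed-point equation rather than a computation, which is why pausing or rebuffering individual streams inside such a cycle raises the kinds of problems mentioned above.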

Rob
-- 
“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others?” [Matthew 5:43-47]

Received on Monday, 21 May 2012 02:11:55 UTC