Re: Common goals and framework

One technique I have used in synthesizers I've written at Apple and
elsewhere is "voice dropping".  CPU usage is continuously monitored, and
certain notes are stopped when the usage becomes too high.  The trick is
deciding which notes to drop.  There are a few different strategies, such
as dropping the oldest notes or the quietest ones.  I also remember Chris
Marrin suggesting the possibility of adding a "priority" attribute to an
audio source to help with this decision.  The voice dropping would happen
automatically in the underlying implementation, but there are also other
opportunities for the JavaScript to set up appropriately scaled-back
versions of the signal chain if the device is a slower one.  The
JavaScript could monitor the performance load of individual AudioNodes
during processing and adapt, or check ahead of time by pre-profiling
performance or looking at the user agent.  An example here might be that
the JavaScript code would purposefully use a less demanding reverberation
effect if it were running on a mobile device.  Or, the reverberation
algorithm could automatically truncate (fade out) any impulse responses
which would overload the machine.  A combination of these approaches will
help with some of the scalability issues, but of course the underlying
implementation should be optimized per-platform as much as possible.
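
To make the voice-dropping idea a bit more concrete, here is a rough
sketch in JavaScript.  Everything in it is hypothetical (the function
name, the per-voice fields, the numbers), not a proposed API; it just
illustrates ranking candidates by priority, then loudness, then age, and
dropping only as many voices as needed to get back under the CPU budget.

    // Hypothetical sketch of a voice-dropping policy -- illustrative only,
    // not part of any proposed API.  Each active voice records its start
    // time, current amplitude, and an optional author-supplied priority.
    function chooseVoicesToDrop(voices, cpuUsage, cpuLimit) {
      if (cpuUsage <= cpuLimit)
        return [];                  // under budget: drop nothing

      // Rank candidates: lowest priority first, then quietest, then oldest.
      var candidates = voices.slice().sort(function (a, b) {
        if (a.priority !== b.priority) return a.priority - b.priority;
        if (a.amplitude !== b.amplitude) return a.amplitude - b.amplitude;
        return a.startTime - b.startTime;
      });

      // Assume each voice costs roughly the same, and drop just enough
      // of them to get back under the limit.
      var fractionOver = (cpuUsage - cpuLimit) / cpuUsage;
      var dropCount = Math.ceil(voices.length * fractionOver);
      return candidates.slice(0, dropCount);
    }

    // Example: 1.0 means 100% of the audio thread's budget.
    var activeVoices = [
      { startTime: 0.0, amplitude: 0.8, priority: 2, id: "pad" },
      { startTime: 1.5, amplitude: 0.1, priority: 1, id: "echo tail" },
      { startTime: 2.0, amplitude: 0.6, priority: 2, id: "lead" }
    ];
    var toDrop = chooseVoicesToDrop(activeVoices, 0.92, 0.75);
    // -> drops "echo tail" first (lowest priority, quietest), and the
    //    implementation would fade it out rather than cut it abruptly.

The same kind of check could drive the other adaptations mentioned above,
for example picking a cheaper reverberation algorithm when a pre-profiling
pass shows the device can't keep up with the full one.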

For desktop and mobile devices, Apple currently supports the OpenAL API,
which provides a similar but more limited set of audio features compared
to the system I'm proposing.  So some of these issues have been faced
before - it's not all new territory.

Chris

On Thu, Jun 17, 2010 at 10:00 AM, Chris Marrin <cmarrin@apple.com> wrote:

>
> On Jun 17, 2010, at 7:32 AM, Yves Raimond wrote:
>
> > On 17/06/10 15:19, David Singer wrote:
> >> My worry is that audio processing could easily be defined in such a
> way that it is a synchronous task which has to 'keep up or fail
> spectacularly'.  The trouble is that the CPU available both varies widely by
> device (as you list) and also, on many devices, varies widely over time (CPU
> competition).
> >>
> >> I believe that the tricky task is to design a system that degrades
> gracefully when not all the desired CPU is available.  Events (e.g.
> mouseMoved) do that by dropping the event frequency.  Animations/transitions
> do that by dropping the frame rate.  How will sound processing do that?
> >>
> >>
> > Another option (although maybe a bit radical) would be to go for a fully
> declarative language, and leave it to the client to do the best it can...
> Maybe similar to CSound?
>
>
> That's really what Chris' design is. In its current incarnation it is a
> graph constructed via API calls rather than having a declarative incarnation
> as XML elements. I believe Chris was going in the direction of exposing his
> nodes as Elements, but in our discussions with him we agreed that a
> declarative form was not useful, so a programmatic approach to building the
> graph was sufficient. But I might be misremembering somewhat.
>
> I think Dave's statements are very true and should be added to our list of
> design criteria:
>
> n+1) Design should gracefully degrade to allow audio processing under
> resource-constrained conditions without dropping audio frames.
>
> I think this criterion applies for either native or JavaScript processing.
> As Dave mentions there are fairly simple techniques for dealing with
> resource constraints with animation and video processing. It's much harder
> for audio because a dropped frame is extremely noticeable and unacceptable.
> Reducing sample rate while under load is an interesting alternative. But in
> that case we'd definitely need a filter chain model where native code could
> get involved at the connections between the filters to reduce the rates on
> the fly. Or something like that...
>
> -----
> ~Chris
> cmarrin@apple.com
>

Received on Thursday, 17 June 2010 18:38:27 UTC