Re: Common goals and framework

Sorry, correction: it should be "6) is [together with 1) & 2)] the simplest for ***content*** developers, ..."

	-- Chris G.


On 2010 Jun 17, at 12:30 PM, Chris Grigg wrote:

> Dave (Hi, Dave!) is right that avoiding breakup while at the same time spanning a wide range of device compute power is both essential and tricky.  Looking back into history, there are a number of classes of strategy that have been used before and that we could potentially apply here.
> 
> They include at least:
> 
> 1) Adaptive sample rate (when needed, dynamically slow it down to provide more processor cycles per audio sample)
> 
> 2) Adaptive quality (when needed, dynamically switch to simpler processing/synthesis algorithms to reduce number of processor cycles per audio sample)
> 
> 3) Voice prioritization (render as many of the most important voices as there is processor power to support, dynamically muting the rest)
> 
> 4) Adaptive content (when needed, content creator determines which blocks of voices don't get rendered)
> 
> 5) Content profiles (define more than one device capability layer; content developers must choose which profiles to statically support and get guaranteed performance within each profile)
> 
> 6) Do not adapt, instead pick a baseline and have all content developers write to that level
> 
> The choice is pretty fraught, because each of these strategies brings complications (some of them really significant), missed opportunities, or both.
> 
> To briefly characterize each one: 
> 
> 1) & 2) I don't know of any existing, widely deployed, mature music/audio engine implementations that do these; engines necessarily tend to be highly optimized, and that has precluded designing in that kind of parametric model. That's not to say that new implementations couldn't do it.
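> 
> To make 2) concrete, here is a minimal sketch in C. It is purely illustrative; names like render_full, render_cheap, and g_render_load are invented, not any real engine's API. The 1) variant would switch the engine's internal sample rate on the same trigger instead of the algorithm:
> 
>     #include <stddef.h>
> 
>     /* Illustrative only, not a shipping engine's API.  The host
>        measures what fraction of each callback period was spent
>        rendering and publishes it; the renderer trades quality for
>        cycles when headroom runs out, with hysteresis so it does
>        not flap between levels on every block. */
> 
>     typedef void (*render_fn)(float *out, size_t frames);
> 
>     static void render_full(float *out, size_t frames)
>     {   /* stand-in for, e.g., interpolated wavetable playback */
>         for (size_t i = 0; i < frames; i++) out[i] = 0.0f;
>     }
> 
>     static void render_cheap(float *out, size_t frames)
>     {   /* stand-in for, e.g., non-interpolated playback */
>         for (size_t i = 0; i < frames; i++) out[i] = 0.0f;
>     }
> 
>     static double g_render_load = 0.0;  /* 0.0 - 1.0, set per block */
> 
>     static render_fn choose_renderer(void)
>     {
>         static render_fn current = render_full;
>         if (g_render_load > 0.90)       current = render_cheap;
>         else if (g_render_load < 0.60)  current = render_full;
>         return current;
>     }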
> 
> 3) has a long and relatively successful history in game audio, but it does complicate content authoring somewhat; implementation is simplest when all voices have the same or similar structure, as opposed to a fully configurable graph.  
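> 
> A minimal allocator sketch for 3) (again C, again invented names, no shipping engine's code):
> 
>     #include <stdlib.h>
> 
>     /* Sort voices by author-assigned importance and keep only as
>        many as the current cycle budget allows, muting the rest. */
> 
>     typedef struct {
>         int priority;  /* higher = more important (melody over pad) */
>         int active;    /* set by the allocator each render block    */
>     } Voice;
> 
>     static int by_priority_desc(const void *a, const void *b)
>     {
>         return ((const Voice *)b)->priority - ((const Voice *)a)->priority;
>     }
> 
>     void prioritize_voices(Voice *v, int count, int budget)
>     {
>         qsort(v, count, sizeof *v, by_priority_desc);
>         for (int i = 0; i < count; i++)
>             v[i].active = (i < budget) ? 1 : 0;
>     }
> 
> Note that this only stays this simple because every Voice is assumed to cost about the same; with a fully configurable graph, "budget" stops being a single number.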
> 
> 4) is used (for example) in the Scalable Polyphony MIDI standard ("SP-MIDI") and the Mobile DLS synth engine, which give the content developer greater defense against bad voice-stealing artifacts; it works, but it also complicates content development.
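> 
> Roughly, the SP-MIDI mechanism looks like this (the data layout below is simplified for illustration and is not the actual wire format):
> 
>     /* The content carries a MIP (Maximum Instantaneous Polyphony)
>        table: channels listed in the author's priority order with
>        cumulative voice counts.  A device renders the longest prefix
>        its polyphony can cover and mutes the remaining channels. */
> 
>     typedef struct {
>         int channel;         /* MIDI channel number                */
>         int cumulative_mip;  /* voices needed through this channel */
>     } MipEntry;
> 
>     int channels_to_render(const MipEntry *mip, int entries,
>                            int device_polyphony)
>     {
>         int keep = 0;
>         while (keep < entries &&
>                mip[keep].cumulative_mip <= device_polyphony)
>             keep++;
>         return keep;  /* channels past this prefix are muted whole */
>     }
> 
> Because whole channels drop out in an order the author chose, playback degrades the way the author intended rather than by arbitrary voice stealing.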
> 
> 5) is sensitive to getting the profile definitions right, as this kind of slicing tends to lead to detrimental fragmentation; it also complicates content development.
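> 
> For illustration only (the numbers here are invented, not from any spec), a profile layer might pin down guarantees like this:
> 
>     /* Each profile fixes a floor of guaranteed resources that
>        content can be authored and tested against statically. */
> 
>     typedef struct {
>         const char *name;
>         int max_voices;      /* guaranteed simultaneous voices */
>         int sample_rate_hz;  /* guaranteed output rate         */
>         int has_reverb;      /* global send effects available? */
>     } Profile;
> 
>     static const Profile kProfiles[] = {
>         { "base",     16, 22050, 0 },
>         { "standard", 32, 44100, 1 },
>         { "extended", 64, 48000, 1 },
>     };
> 
> The fragmentation risk is visible right in the table: every row is another target that content has to be authored and tested against.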
> 
> 6) is [together with 1) & 2)] the simplest for ***content*** developers, but also the most limiting, since it's a lowest-common-denominator approach and therefore doesn't take advantage of more power when it's available.
> 
> There are probably more strategies worth reviewing here, but maybe this is a start.
> 
> 	-- Chris G.
> 

Received on Thursday, 17 June 2010 19:37:34 UTC