- From: Yves Raimond <yves.raimond@bbc.co.uk>
- Date: Thu, 17 Jun 2010 15:32:36 +0100
- To: David Singer <singer@apple.com>
- CC: public-xg-audio@w3.org
On 17/06/10 15:19, David Singer wrote:
> My worry is that audio processing could easily be defined in such a way that it is a synchronous task which has to 'keep up or fail spectacularly'. The trouble is that the CPU available both varies widely by device (as you list) and also, on many devices, varies widely over time (CPU competition).
>
> I believe that the tricky task is to design a system that degrades gracefully when not all the desired CPU is available. Events (e.g. mouseMoved) do that by dropping the event frequency. Animations/transitions do that by dropping the frame rate. How will sound processing do that?

Another option (although maybe a bit radical) would be to go for a fully declarative language, and leave it to the client to do the best it can... Maybe similar to CSound?

y
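[Editor's sketch: one way sound processing could "degrade gracefully", analogous to dropping frame rate, is to fall back to a cheaper processing tier when a render callback overruns its real-time budget, and recover quality when headroom returns. This is a minimal illustration, not a proposed API; all names (`makeAdaptiveRenderer`, the tier labels, `budgetMs`) are hypothetical.]

```javascript
// Hypothetical sketch: an audio render callback that degrades gracefully
// under CPU pressure by switching to a cheaper quality tier instead of
// glitching. All names here are illustrative, not from any proposed API.

const QUALITY_TIERS = ["full", "reduced", "minimal"];

function makeAdaptiveRenderer(processFns, budgetMs) {
  let tier = 0;       // index into QUALITY_TIERS; start at highest quality
  let goodRuns = 0;   // consecutive blocks rendered well under budget

  return function render(block) {
    const start = Date.now();
    const out = processFns[QUALITY_TIERS[tier]](block);
    const elapsed = Date.now() - start;

    if (elapsed > budgetMs && tier < QUALITY_TIERS.length - 1) {
      tier += 1;      // can't keep up: degrade to a cheaper tier
      goodRuns = 0;
    } else if (elapsed < budgetMs / 2 && tier > 0 && ++goodRuns > 16) {
      tier -= 1;      // sustained headroom: recover quality
      goodRuns = 0;
    }
    return { out, tier: QUALITY_TIERS[tier] };
  };
}
```

A declarative description of the signal graph (the CSound-like option above) would let the client make this trade-off itself, rather than forcing script authors to write adaptation logic like this by hand.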
Received on Thursday, 17 June 2010 14:33:01 UTC