RE: Standardizing audio "level 1" features first?

Thanks Chris for your reply!


> In short, the developer would have to manage many more low-level details
> than is currently necessary with the API.  But that doesn't mean that we
> sacrifice low-level control.  If the developer wants, multi-channel
> sources can be broken down into component channels with individual
> processing on each channel, etc.  But we don't *force* developers to work
> at that level.

I admit that for many developers the up/downmixing is very convenient. But
for those (like me) who design their audio applications “channel-based”, it
can be kind of annoying. Currently my applications have more
splitters/mergers than any other AudioNodes. I don’t want to complain about
that, just to point out that there are developers out there (at least one :-) )
who would prefer a channel-based concept instead of the bus concept with
implicit up/downmixing.
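
Just to make concrete what I mean by “channel-based”: below is roughly how a
simple stereo chain with individual processing per channel looks today. This
is only a sketch; the factory-method names may not match the current draft
exactly, and the source node is just a placeholder.

    // Sketch only: assumes a standard AudioContext (webkitAudioContext in
    // current builds) and a stereo source.
    var context = new AudioContext();
    var source = context.createBufferSource();        // stands in for any stereo source
    var splitter = context.createChannelSplitter(2);  // one output per channel
    var merger = context.createChannelMerger(2);      // one input per channel
    var leftFilter = context.createBiquadFilter();    // per-channel processing
    var rightFilter = context.createBiquadFilter();

    source.connect(splitter);
    splitter.connect(leftFilter, 0);     // channel 0 into its own chain
    splitter.connect(rightFilter, 1);    // channel 1 into its own chain
    leftFilter.connect(merger, 0, 0);    // back onto merger input 0
    rightFilter.connect(merger, 0, 1);   // back onto merger input 1
    merger.connect(context.destination);

With more channels and more per-channel stages, this splitter/merger
boilerplate quickly dominates the graph, which is all I meant above.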

I’ve never checked, but maybe the use of many mergers/splitters causes
performance problems in some applications. If somebody has measured this or
knows more about it, I would be very interested.


> Good luck with WebCL!  It *may* one day become a standard, but that
> doesn't appear to be the case anytime soon.  I haven't even seen prototypes
> of useful high-performance audio systems built with WebCL, and don't
> believe it will be a good fit for developing general purpose, high-quality
> and performant audio processing.

Why not? Because of the graphics card as the processing device, or more in
general?
I am trying to accelerate some signal processing routines in an application
built on web technology, so I thought WebCL might be worth a look.
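
To give an idea of what I mean by “signal processing routines”: it is mostly
block-based, data-parallel work like the naive FIR below, where every output
sample can be computed independently, which is why a GPU back end looked
attractive to me. (Just an illustration, not code from my application.)

    // Naive direct-form FIR over one block of samples (Float32Arrays).
    // Each output sample is independent, so this maps well to a GPU kernel.
    function firFilter(input, kernel) {
      var output = new Float32Array(input.length);
      for (var n = 0; n < input.length; n++) {
        var acc = 0;
        for (var k = 0; k < kernel.length && k <= n; k++) {
          acc += kernel[k] * input[n - k];
        }
        output[n] = acc;
      }
      return output;
    }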


Regards, 

Michael

Received on Monday, 13 February 2012 16:18:42 UTC