
Re: Standardizing audio "level 1" features first?

From: Chris Rogers <crogers@google.com>
Date: Mon, 13 Feb 2012 10:40:53 -0800
Message-ID: <CA+EzO0mVmGazNE2znSKZz+QjS6BHLfsERxGq1fR-7LzL26xvsQ@mail.gmail.com>
To: Michael Schöffler <michael.schoeffler@audiolabs-erlangen.de>
Cc: public-audio@w3.org
Hi Michael,

On Mon, Feb 13, 2012 at 8:17 AM, Michael Schöffler <
michael.schoeffler@audiolabs-erlangen.de> wrote:

> Thanks, Chris, for your reply!
>
> > In short, the developer would have to manage many more low-level details
> > than is currently necessary with the API.  But that doesn't mean that we
> > sacrifice low-level control.  If the developer wants, multi-channel
> > sources can be broken down into component channels with individual
> > processing on each channel, etc.  But we don't *force* developers to
> > work at that level.
>
> I admit that for many developers the up/downmixing is very convenient. But
> for those (like me) who design their audio applications “channel-based”,
> it can be kind of annoying. Currently, my applications have more
> splitters/mergers than any other AudioNodes. But I don’t want to complain
> about that, just saying that developers are out there (at least one :) )
> who would prefer a channel-based concept instead of the bus concept with
> implicit up/downmixing.
>

I tried to design the API for the 80% use case.  But I don't think that
dealing with channels as you want should be that difficult.  I think that
if you organize your audio assets so that they're delivered as mono files
(or mono synthesized streams), then much of the splitting/merging wouldn't
even be necessary.  I'd have to see your specific code to be able to offer
more detailed advice on how to streamline it.
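For reference, the implicit up/down-mixing under discussion follows fixed speaker rules. Here is a minimal sketch of the two most common cases, written on plain arrays rather than real AudioNodes (the function names are mine, not the API's):

```javascript
// Sketch of the implicit speaker mixing rules, on raw sample blocks.
// Mono -> stereo: the mono signal is copied to both output channels.
function upMixMonoToStereo(mono) {
  return [Float32Array.from(mono), Float32Array.from(mono)];
}

// Stereo -> mono: the two channels are averaged, 0.5 * (L + R).
function downMixStereoToMono(left, right) {
  const out = new Float32Array(left.length);
  for (let i = 0; i < left.length; i++) {
    out[i] = 0.5 * (left[i] + right[i]);
  }
  return out;
}
```

With mono assets, as suggested above, both rules reduce to trivial pass-throughs, which is why most of the explicit splitting/merging disappears.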


>
> I’ve never checked, but maybe the use of many mergers/splitters is causing
> performance problems in some applications. If somebody knows, I would be
> very interested.
>

The current implementation simply uses a memcpy(), which in many cases could
be optimized away by just passing pointers around.  Even without that
optimization, the memcpy() overhead is extremely tiny.
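As a rough sketch of what that copy amounts to (the names and constants here are illustrative, not taken from the actual implementation): each splitter output is just a block copy of one channel's samples for the current 128-frame render quantum, and that per-channel copy is the memcpy that could become a pointer hand-off.

```javascript
// Illustrative sketch, not the real engine code: splitting a multi-channel
// block into mono outputs is one block copy (the memcpy) per channel.
const RENDER_QUANTUM = 128; // frames processed per block

function splitQuantum(channels) {
  // channels: array of Float32Array, one per input channel.
  // Each splitter output is an independent mono copy of one channel;
  // an optimized engine could hand out the same buffer instead of copying.
  return channels.map((ch) => ch.slice(0, RENDER_QUANTUM));
}
```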



>
> > Good luck with WebCL!  It *may* one day become a standard, but that
> > doesn't appear to be the case anytime soon.  I haven't even seen
> > prototypes of useful high-performance audio systems built with WebCL,
> > and don't believe it will be a good fit for developing general-purpose,
> > high-quality and performant audio processing.
>
> Why not? Because of the graphics card as the processing device, or more in
> general?
> I'm trying to accelerate some signal-processing routines that use web
> technology, so I thought maybe it's worth a look.
>

Sorry, I really didn't mean to sound as negative as I did.  And I certainly
wouldn't discourage you from experimenting with the technology.  But we
have solutions that are practical, high-quality, efficient, and easy to use
now.  WebCL is not very far along yet, so it isn't an option for us right
now.  Even in the most advanced real-time desktop audio software, GPUs are
rarely used for audio work because they're generally not that practical and
don't deliver any benefit compared with using SIMD instructions and
multi-threading.

Chris




> Regards,
>
> Michael
>
Received on Monday, 13 February 2012 18:41:21 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 13 February 2012 18:41:25 GMT