
Re: TAG feedback on Web Audio

From: Srikumar Karaikudi Subramanian <srikumarks@gmail.com>
Date: Thu, 8 Aug 2013 07:31:03 +0530
Cc: Noah Mendelsohn <nrm@arcanedomain.com>, Jer Noble <jer.noble@apple.com>, "K. Gadd" <kg@luminance.org>, Chris Wilson <cwilso@google.com>, Marcus Geelnard <mage@opera.com>, Alex Russell <slightlyoff@google.com>, Anne van Kesteren <annevk@annevk.nl>, Olivier Thereaux <Olivier.Thereaux@bbc.co.uk>, "public-audio@w3.org" <public-audio@w3.org>, "www-tag@w3.org List" <www-tag@w3.org>
Message-Id: <C4A080CA-27F3-4D78-80BA-12B6793C5077@gmail.com>
To: robert@ocallahan.org
I did a quick test to see what's possible on my laptop (MacBook Air, 1.7 GHz Core i5).

https://gist.github.com/srikumarks/6180450

The C program forks off a child process, and the two keep sending one float32 buffer of a given size back and forth. The interesting thing that came up in my trial runs is that data throughput is severely affected by the buffer size, and much less (relatively) by whether the buffer is malloced fresh and filled for every send. Using a 128-sample buffer I got a throughput of around 45 MB/s, but with a 4096-sample buffer I got about 400 MB/s. Both measurements were done with a fresh malloc and fill for every send.

These numbers suggest that in the audio case the data throughput is not the bottleneck; the process-switching overhead is. However, even in the 128-sample case, more than 200 such mono streams can be sent back and forth. This number is relevant when you have N script nodes in a chain before hitting the audio destination node.

When considering 5.1-channel/48 kHz audio, each send/recv carries 768 samples (128 frames x 6 channels), and I again got about 150 such streams possible in such a chain. The data throughput in this case was about 160 MB/s.

(All throughput numbers are "pessimized" values. See gist for real figures. I did not exit any of my other running applications to run this test.)

-Kumar


On 8 Aug, 2013, at 3:14 AM, "Robert O'Callahan" <robert@ocallahan.org> wrote:

> On Thu, Aug 8, 2013 at 8:11 AM, Noah Mendelsohn <nrm@arcanedomain.com> wrote:
> Now ask questions like: how many bytes per second will be copied in aggressive usage scenarios for your API? Presumably the answer is much higher for video than for audio, and likely higher for multichannel audio (24 track mixing) than for simpler scenarios.
> 
> For this we need concrete, realistic test cases. We need people who are concerned about copying overhead to identify test cases that they're willing to draw conclusions from. (I.e., test cases where, if we demonstrate low overhead, they won't just turn around and say "OK I'll look for a better testcase" :-).)
> 
> Rob
Received on Thursday, 8 August 2013 02:01:42 UTC
