Re: Integer PCM sample formats to Web Audio API?

Thanks for the replies!

The SDL code I linked was only meant to serve as an example of what such a
JS-implemented API currently looks like. The low-level SDL audio doesn't
itself store much audio data in memory, since it mixes on the fly. The
large memory usage arises when the browser stores the data as Float32
internally after the audio buffers have been created, which currently
happens mostly with the OpenAL support library.
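
For context, here is a minimal sketch (my own illustrative helper, not any
particular library's code) of the conversion a support library has to do
today: incoming signed 16-bit PCM must be widened to Float32 before it can
be handed to an AudioBuffer, and that widening is exactly where the 2x
memory blow-up comes from.

```javascript
// Widen signed 16-bit PCM into the Float32 range [-1, 1) that
// AudioBuffer channel data requires. Every 2-byte sample becomes
// a 4-byte float, so the in-memory footprint doubles.
function int16ToFloat32(int16Samples) {
  const out = new Float32Array(int16Samples.length);
  for (let i = 0; i < int16Samples.length; i++) {
    out[i] = int16Samples[i] / 32768; // divide by 0x8000
  }
  return out;
}

// In a browser the result would then be copied into an AudioBuffer
// channel, e.g.:
//   audioBuffer.copyToChannel(int16ToFloat32(pcm), 0);
```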

2013/12/2 Marcus Geelnard <mage@opera.com>

> Hi again...
>
> Actually, when looking into it, the SDL audio layer is quite low level.
> It's completely callback-driven (no support for channels, mixing,
> interpolation, etc), which means that you're not really utilizing the full
> potential of the Web Audio API.
>
> SDL_mixer, on the other hand, has a higher level view of things that
> should be easier to map to the Web Audio API. If porting a big app from C
> to the Web I would personally build a more abstract version of the
> SDL_mixer API (e.g. by letting a Mix_Chunk represent an AudioBuffer rather
> than having an Uint8* buffer), and more or less forbid any use of the
> SDL_*Audio* methods and SDL_mixer callback based interfaces.
>
> ...just my 2 cents.
>
>
> /Marcus
>

It is easy to mentally follow that route and say "API X is too low-level"
or "API X is not a good match for the target platform", ergo best practice
is to ban API X when porting. Or that "hundreds of clips consume 200MB+ of
data" -> "why do you keep so much data in memory simultaneously? You
should be more conservative". But in these cases we are surrendering the
concept of _porting_ and instead talking about rewriting - or, even worse,
having to redesign the application.

The reason I am talking about porting is that the need for feature parity
arises most painfully when porting existing projects: if the native
platform can hold X audio clips in memory, and the web build of the same
app can only take X/2 before needing to add smartness (streaming, dropping
an API that's a bad fit, caching, compression, or whatever more
intelligent solution one can imagine), then no matter how you look at it,
the porting story is impacted. Even after you factor in the more
intelligent logic, the native platform is still twice as capable: it can
cache twice the clips, stream twice as much, or compress twice the
content. And if you happen to be porting a project that was already
caching, streaming and compressing, then you're really in trouble, since
there is nothing smarter left to do: you'll have no choice but to halve
the content size, or ask the user to buy double the RAM.
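
To put a number on that factor of two (with illustrative figures of my
own choosing, not measurements from any particular title): a decoded
clip's in-memory size scales linearly with the bytes per sample, so
Float32 storage costs exactly twice what Int16 storage would.

```javascript
// Bytes needed to hold one decoded clip in memory.
function clipBytes(seconds, sampleRate, channels, bytesPerSample) {
  return seconds * sampleRate * channels * bytesPerSample;
}

// Hypothetical content set: 100 stereo clips of 10 s at 44.1 kHz.
const clips = 100, secs = 10, rate = 44100, chans = 2;
const asInt16   = clips * clipBytes(secs, rate, chans, 2); // ~176 MB
const asFloat32 = clips * clipBytes(secs, rate, chans, 4); // ~353 MB
```

Whatever the exact figures for a given game, the Float32 total is always
exactly double the Int16 total, which is the parity gap being argued here.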

Emscripten aims to provide a serious porting path all the way up to the
most demanding triple-A games out there. The majority of games these days
are developed on some kind of game engine middleware, where the deployment
story amounts to choosing the target platform from a dropdown list and
clicking Build. In these cases, porting is not something temporary,
transitional, or a migration path between old and new, but a permanent,
standard day-to-day operation. With such ease of simultaneously targeting
multiple platforms, if one platform is not up to par, it stands out very
quickly, and in a very bad way.

<hype-rant>The point I want to make is that C/C++ porting is not just a
fringe tech that only a few weird exotic game devs will be using. Instead,
cross-compiling C/C++ games will become the major tech that both indie and
professional game devs use for building any kind of games business on the
web. For games, audio is just as important as 3D rendering, and I
(somewhat grandiosely, I know) claim that Emscripten is the single most
important use case that Web Audio has, and its priority no. 1 should be to
ensure that native-vs-web feature parity exists. As long as that parity is
not there and the web does not scale as well as native, HTML5 will lag
behind and the NPAPI/Flash/Java approaches we all so hate will stay
afloat.</hype-rant>

I really, really think that if native supports it, the web should support
it as well,
   Jukka

2013/12/2 Robert O'Callahan <robert@ocallahan.org>

> I think it would make sense to add an API to construct an AudioBuffer from
> signed 16-bit PCM data. Then, as long as you don't call getChannelData() on
> the buffer, the Web Audio implementation can probably optimize its way to
> victory at least for the simple cases.
>
> Rob
> --
> Jtehsauts  tshaei dS,o n" Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
> le atrhtohu gthot sf oirng iyvoeu rs ihnesa.r"t sS?o  Whhei csha iids  teoa
> stiheer :p atroa lsyazye,d  'mYaonu,r  "sGients  uapr,e  tfaokreg iyvoeunr,
> 'm aotr  atnod  sgaoy ,h o'mGee.t"  uTph eann dt hwea lmka'n?  gBoutt  uIp
> waanndt  wyeonut  thoo mken.o w
>

Received on Tuesday, 7 January 2014 21:26:56 UTC