
Re: Sites using webkitAudioContext only

From: Robert O'Callahan <robert@ocallahan.org>
Date: Thu, 4 Jul 2013 22:17:32 +1200
Message-ID: <CAOp6jLYAzuLNn-mog-SWHADs9Xi9NPHbZbpR9rZmKcd7J4xUxQ@mail.gmail.com>
To: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Cc: "public-audio@w3.org" <public-audio@w3.org>
On Thu, Jul 4, 2013 at 9:32 PM, Jussi Kalliokoski <
jussi.kalliokoski@gmail.com> wrote:

> AudioContext#createBuffer():
>   * new AudioBuffer(sequence<Float32Array> data, sampleRate). This will
> avoid the synchronous memory allocation so authors can even offload the
> creation of the actual buffers to Web Workers. It also helps avoid an extra
> copy if you already have the data when you create the buffer.

Would this neuter the 'data' arrays?
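
One way to read the "neuter" question: if the proposed constructor took ownership of the arrays the way a transfer does, the caller's views would be detached afterwards. A minimal sketch of that detaching behavior, using structuredClone's transfer list (plain JavaScript, nothing here is Web Audio API):

```javascript
// Sketch of what "neutering" the 'data' arrays would mean, assuming the
// constructor took ownership the way a transfer does. structuredClone's
// transfer list shows the same effect on a Float32Array.
const samples = new Float32Array([0.1, -0.1, 0.25]);

// Moving the underlying buffer detaches every view over it.
const moved = structuredClone(samples, { transfer: [samples.buffer] });

console.log(moved.length);   // 3: the data survived the move
console.log(samples.length); // 0: the caller's array is now detached
```

If the constructor copied instead, the caller's arrays would stay usable, but the extra copy the proposal wants to avoid comes back.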

> AudioContext#decodeAudioData():
>  * At the very least this should return a promise instead of the callbacks
> as arguments.

They're called Futures now. I think we could add the Future version later
--- a little duplicative, but no big deal.
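
In the meantime, the callback form can be wrapped in user code. A sketch (the name decodeAudioDataAsync is mine; `ctx` stands for any object exposing the callback-taking decodeAudioData(encoded, onSuccess, onError) signature):

```javascript
// User-land shim of the sort discussed: wrap the callback-taking
// decodeAudioData in a promise-returning helper.
function decodeAudioDataAsync(ctx, encoded) {
  return new Promise((resolve, reject) => {
    ctx.decodeAudioData(encoded, resolve, reject);
  });
}

// In a page this would read:
//   decodeAudioDataAsync(audioCtx, arrayBuffer).then(audioBuffer => { ... });
```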

>  * Is there some way we could integrate this into AudioElement? e.g.
> Promise AudioElement#decode(). This would make the normal pipeline of
> loading the assets simpler as well.

This would be slightly painful. Media elements have a lot of state that
would just be baggage in this scenario. They might be playing at one offset
while this method tries to decode at another offset. Let's not do this.

>  * Use constructors.

This was raised. It might be worth adding constructors as well as the
existing factory methods, but it's probably not worth removing the existing
factory methods at this point. (I do think it would be nice to have
constructors that take the essential attributes as parameters, so one could
write (new AudioBufferSourceNode(audioContext, audioBuffer)).start().)
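
To make the comparison concrete, a toy stand-in (this AudioBufferSourceNode class is illustrative only, not the real interface):

```javascript
// Toy stand-in for the proposed pattern; not the real AudioBufferSourceNode.
class AudioBufferSourceNode {
  constructor(context, buffer) {
    // The constructor takes the essential attributes up front...
    this.context = context;
    this.buffer = buffer;
    this.started = false;
  }
  start() {
    this.started = true;
    return this;
  }
}

const ctx = { sampleRate: 44100 }; // placeholder for an AudioContext
const buf = { duration: 1.5 };     // placeholder for an AudioBuffer

// ...so creation and start collapse into one expression, with no mutable
// configuration phase between the factory call and start().
const source = new AudioBufferSourceNode(ctx, buf).start();
```

The factory style spreads the same setup over three statements (createBufferSource(), assign .buffer, then start()), which is what the constructor form tightens up.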

>  * Use AudioElement here as well? Either assign a MediaStream as the `src`
> or even better, make AudioElement and MediaStream connectable to, e.g.
> myGainNode.connect(myAudioElement) and there we have all that's required to
> make an audio stream audible. The AudioElement would work as the sink here
> so if for example pause() is fired, it would stop pulling in content. (That
> would fix the much wanted pause requirement as well)

I think these are doable as extensions.

I don't think there's anything here we need to do for the first release,
other than possibly add that AudioBuffer constructor (or a factory method)
if needed to avoid data races and sustain optimal performance.

Received on Thursday, 4 July 2013 10:17:59 UTC
