W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2013

Re: Sites using webkitAudioContext only

From: Robert O'Callahan <robert@ocallahan.org>
Date: Fri, 5 Jul 2013 00:37:34 +1200
Message-ID: <CAOp6jLbR=g4gY--DEngypLnS-iWySV=yfyL8shM2P4is0Xd3GQ@mail.gmail.com>
To: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Cc: "public-audio@w3.org" <public-audio@w3.org>
On Thu, Jul 4, 2013 at 10:46 PM, Jussi Kalliokoski <jussi.kalliokoski@gmail.com> wrote:

> On Thu, Jul 4, 2013 at 1:17 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>> On Thu, Jul 4, 2013 at 9:32 PM, Jussi Kalliokoski <jussi.kalliokoski@gmail.com> wrote:
>>> AudioContext#createBuffer():
>>>    * new AudioBuffer(sequence<Float32Array> data, sampleRate). This
>>> will avoid the synchronous memory allocation so authors can even offload
>>> the creation of the actual buffers to Web Workers. It also helps avoid an
>>> extra copy if you already have the data when you create the buffer.
>> Would this neuter the 'data' arrays?
> I don't see any other sane way to do it.

Sounds fine to me.
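
For illustration, neutering the 'data' arrays would behave much like transferring an ArrayBuffer: the proposed constructor would take ownership of the passed Float32Arrays, leaving the caller's views detached. A rough sketch of that detachment behavior using structuredClone's transfer list (the AudioBuffer constructor signature above is the proposal under discussion, not a shipped API):

```javascript
// Sketch: transferring a Float32Array's buffer detaches ("neuters") it,
// which is the behavior proposed for the Float32Arrays handed to
// new AudioBuffer(data, sampleRate).
const samples = new Float32Array([0.1, 0.2, 0.3, 0.4]);

// structuredClone with a transfer list moves the underlying memory.
const moved = structuredClone(samples, { transfer: [samples.buffer] });

console.log(moved.length);              // 4: the data lives on in the clone
console.log(samples.length);            // 0: the original view is neutered
console.log(samples.buffer.byteLength); // 0: its buffer is detached
```

The caller keeps no usable copy, so no extra allocation or memcpy is needed on the audio thread's behalf.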

>>> AudioContext#decodeAudioData():
>>>    * At the very least this should return a promise instead of the
>>> callbacks as arguments.
>> They're called Futures now. I think we could add the Future version later
>> --- a little duplicative, but no big deal.
> I thought they were called Futures first and now Promises:
> http://infrequently.org/2013/06/sfuturepromiseg/
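
For concreteness, the Future/Promise version could be layered over the existing callback form. A minimal sketch, using a stub in place of a real AudioContext since none exists outside the browser (decodeAudioDataAsync and stubContext are hypothetical names, not part of any spec):

```javascript
// Hypothetical promise-returning wrapper over the callback-style
// AudioContext#decodeAudioData discussed above.
function decodeAudioDataAsync(ctx, encodedData) {
  return new Promise((resolve, reject) => {
    ctx.decodeAudioData(encodedData, resolve, reject);
  });
}

// Stub standing in for a real AudioContext: assumes decodeAudioData
// invokes its success callback asynchronously with a decoded buffer.
const stubContext = {
  decodeAudioData(encodedData, onSuccess, onError) {
    queueMicrotask(() => onSuccess({ length: encodedData.byteLength / 4 }));
  },
};

decodeAudioDataAsync(stubContext, new ArrayBuffer(16))
  .then((buffer) => console.log(buffer.length)); // 4
```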


>>>    * Is there some way we could integrate this into AudioElement? e.g.
>>> Promise AudioElement#decode(). This would make the normal pipeline of
>>> loading the assets simpler as well.
>> This would be slightly painful. Media elements have a lot of state that
>> would just be baggage in this scenario. They might be playing at one offset
>> while this method tries to decode at another offset. Let's not do this.
> You're probably right. Well what about Promise
> AudioElement.decode(DOMString url) and Promise
> AudioElement.decode(ByteArray data) then, as in, static methods?

Sure, but that's not very different to having them on the AudioContext.
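
Sketching what the static form might look like (decodeFromUrl and its decoder argument are hypothetical stand-ins for the proposed Promise-returning static; no such API exists in any shipped implementation):

```javascript
// Hypothetical sketch of a static Promise AudioElement.decode(url):
// fetch the resource, then hand the bytes to a decoder. decodeBytes
// stands in for the platform's actual audio decoding step.
async function decodeFromUrl(url, decodeBytes) {
  const response = await fetch(url);
  const bytes = await response.arrayBuffer();
  return decodeBytes(bytes);
}

// Usage with a data: URL (4 zero bytes) and a trivial decoder that
// just reports the payload size.
decodeFromUrl('data:application/octet-stream;base64,AAAAAA==', (b) => b.byteLength)
  .then((n) => console.log(n)); // 4
```

Whether it hangs off AudioElement or AudioContext, the usage is identical, which is the point: the context adds nothing to a pure decode.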

Jesus said, "Why do you entertain evil thoughts in your hearts? Which is
easier: to say, 'Your sins are forgiven,' or to say, 'Get up and walk'? But I
want you to know that the Son of Man has authority on earth to forgive sins."
So he said to the paralyzed man, "Get up, take your mat and go home." Then the
man got up and went home.
Received on Thursday, 4 July 2013 12:38:01 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:03:22 UTC