Re: Sites using webkitAudioContext only

On Thu, Jul 4, 2013 at 1:17 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Thu, Jul 4, 2013 at 9:32 PM, Jussi Kalliokoski <
> jussi.kalliokoski@gmail.com> wrote:
>
>> AudioContext#createBuffer():
>>   * new AudioBuffer(sequence<Float32Array> data, sampleRate). This will
>> avoid the synchronous memory allocation so authors can even offload the
>> creation of the actual buffers to Web Workers. It also helps avoid an extra
>> copy if you already have the data when you create the buffer.
>>
>
> Would this neuter the 'data' arrays?
>

I don't see any other sane way to do it.
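
To make the idea concrete, here's roughly how I'd expect it to be used
(the constructor itself is the proposal here, so everything below is
hypothetical):

    var left = new Float32Array(44100);
    var right = new Float32Array(44100);
    // ... fill the channel data, possibly in a Web Worker ...
    var buffer = new AudioBuffer([left, right], 44100);
    // The arrays would be transferred (neutered) into the buffer, so
    // left.length and right.length are now 0 and no copy was made.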


>> AudioContext#decodeAudioData():
>>  * At the very least this should return a promise instead of the callbacks
>> as arguments.
>>
>
> They're called Futures now. I think we could add the Future version later
> --- a little duplicative, but no big deal.
>

I thought they were called Futures first and now Promises:
http://infrequently.org/2013/06/sfuturepromiseg/
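
Whatever the name, the point is to go from the current callback pair to
something chainable, along these lines (a sketch; assumes decodeAudioData
returns a Promise, and play is just a stand-in success callback):

    // Current form:
    context.decodeAudioData(arrayBuffer, function (audioBuffer) {
      play(audioBuffer);
    }, function (error) {
      console.error("decoding failed:", error);
    });

    // Proposed form:
    context.decodeAudioData(arrayBuffer).then(play, function (error) {
      console.error("decoding failed:", error);
    });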


>>  * Is there some way we could integrate this into AudioElement? e.g.
>> Promise AudioElement#decode(). This would make the normal pipeline of
>> loading the assets simpler as well.
>>
>
> This would be slightly painful. Media elements have a lot of state that
> would just be baggage in this scenario. They might be playing at one offset
> while this method tries to decode at another offset. Let's not do this.
>

You're probably right. What about static methods then, i.e. Promise
AudioElement.decode(DOMString url) and Promise
AudioElement.decode(ByteArray data)?
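
That is, something like this (entirely hypothetical: the static method,
its overloads, the example URL, and the assumption that it resolves to an
AudioBuffer):

    // Hypothetical static method on the HTMLAudioElement interface object.
    AudioElement.decode("/sounds/hit.wav").then(function (audioBuffer) {
      var source = context.createBufferSource();
      source.buffer = audioBuffer;
      source.connect(context.destination);
      source.start(0);
    });

Since no element instance is involved, none of the playback state you
mentioned would come into play.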


>> AudioNodes:
>>  * Use constructors.
>>
>
> This was raised. It might be worth adding constructors as well as the
> existing factory methods, but it's probably not worth removing the existing
> factory methods at this point. (I do think it would be nice to have
> constructors that take the essential attributes as parameters, so one could
> write (new AudioBufferSourceNode(audioContext, audioBuffer)).start().)
>
>> AudioContext#destination
>>  * Use AudioElement here as well? Either assign a MediaStream as the
>> `src`, or even better, make AudioElement and MediaStream valid connect()
>> targets, e.g. myGainNode.connect(myAudioElement), and there we have all
>> that's required to make an audio stream audible. The AudioElement would
>> work as the sink here, so if for example pause() is called, it would stop
>> pulling in content. (That would address the much-wanted pause requirement
>> as well.)
>>
>
> I think these are doable as extensions.
>
> I don't think there's anything here we need to do for the first release,
> other than possibly add that AudioBuffer constructor (or a factory method)
> if needed to avoid data races and sustain optimal performance.
>
> Rob
>
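
For what it's worth, here's the node-constructor idea from above side by
side with what the spec has today (the constructor form is hypothetical):

    // Today: factory method, then set the attributes.
    var source = context.createBufferSource();
    source.buffer = audioBuffer;
    source.start(0);

    // With a constructor taking the essential attributes:
    (new AudioBufferSourceNode(context, audioBuffer)).start(0);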
