Re: Sites using webkitAudioContext only

AudioElement#decode is a very interesting proposal. As much as I like it in
theory, I suspect it would end up worse than the current state of things.
In particular, it would eliminate the ability to load audio directly from a
byte array: you'd have to turn the array into an Object URL so that the
audio element can use it as a src, then call decode on the AudioElement. On
the other hand, it does fit nicely into a model where you can gracefully
degrade to a raw HTML5 <audio> element in browsers where Web Audio is not
available (JSIL does this at present).
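
To make that concrete, the byte-array path would look roughly like this
(the promise-returning decode() on the element is hypothetical, part of the
proposal; the Blob/Object URL dance is the part that makes it awkward):

  // bytes is a Uint8Array we already have in memory
  var blob = new Blob([bytes], { type: "audio/ogg" });
  var audio = new Audio();
  audio.src = URL.createObjectURL(blob);
  audio.decode().then(function (decodedBuffer) {
    // use the decoded buffer with the Web Audio graph
  }, function (error) {
    // fall back to plain <audio> playback where Web Audio is missing
    audio.play();
  });

With decodeAudioData you just hand over the ArrayBuffer directly.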

I agree with the points about AudioContext#destination and
AudioContext#listener, but I'm not sure those suggested changes provide
enough improvement to justify the work involved (or the churn for existing
applications). The listener model in particular, while limiting and
arbitrary, is probably not causing most users any problems, so changing it
is less urgent.

-kg


On Thu, Jul 4, 2013 at 2:32 AM, Jussi Kalliokoski <
jussi.kalliokoski@gmail.com> wrote:

> Oops, we accidentally went offlist with Roc:
>
> ---------- Forwarded message ----------
> From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
> Date: Wed, Jul 3, 2013 at 7:44 PM
> Subject: Re: Sites using webkitAudioContext only
> To: Robert O'Callahan <robert@ocallahan.org>
>
>
> I liked your Media Streams Processing API proposal, but obviously we
> won't be going back to that, so the corrections I think we need to make
> are at least these:
>
> AudioContext#createBuffer():
>  * new AudioBuffer(sequence<Float32Array> data, sampleRate). This avoids
> the synchronous memory allocation, so authors can even offload creation of
> the actual buffers to Web Workers. It also avoids an extra copy if you
> already have the data when you create the buffer.
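>
> A rough sketch of how that would read (the constructor here is the
> proposal, nothing specced yet):
>
>   // channel data can be prepared elsewhere, e.g. in a Web Worker
>   var left = new Float32Array(44100);
>   var right = new Float32Array(44100);
>   var buffer = new AudioBuffer([left, right], 44100);
>
> versus today's context.createBuffer(2, 44100, 44100) followed by copying
> into each channel.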
>
> AudioContext#decodeAudioData():
>  * At the very least this should return a promise instead of taking the
> callbacks as arguments.
>  * Is there some way we could integrate this into AudioElement? e.g.
> Promise AudioElement#decode(). This would also simplify the normal
> asset-loading pipeline.
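>
> For illustration (the promise-returning forms are the proposal, not
> current behavior):
>
>   // today: success/error callbacks as arguments
>   context.decodeAudioData(arrayBuffer, onSuccess, onError);
>
>   // proposed: the element already knows how to load, so let it decode
>   var audio = new Audio("bgm.ogg");
>   audio.decode().then(function (decodedBuffer) {
>     // play it back via an AudioBufferSourceNode as usual
>   });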
>
> AudioNodes:
>  * Use constructors.
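>
> i.e. hypothetically something like:
>
>   var gain = new GainNode(context);   // today: context.createGain()
>   var delay = new DelayNode(context); // today: context.createDelay()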
>
> AudioContext#destination
>  * Use AudioElement here as well? Either assign a MediaStream as the `src`
> or, even better, make AudioElement and MediaStream valid connect() targets,
> e.g. myGainNode.connect(myAudioElement), and there we have everything
> required to make an audio stream audible. The AudioElement would act as
> the sink here, so if pause() is called, for example, it would stop pulling
> in content. (That would satisfy the much-requested pause requirement as
> well.)
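>
> In code the idea would be roughly this (connecting a node to a media
> element is the proposal, not something that works today):
>
>   var audio = new Audio();
>   myGainNode.connect(audio); // the element acts as the sink
>   audio.play();              // starts pulling audio from the graph
>   audio.pause();             // stops pulling, i.e. an actual pause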
>
> AudioContext#listener
>  * First of all, I don't think spatialization should be part of the first
> release, but that aside, either:
>  * Each spatialization node has its own listener, or
>  * AudioElement and MediaStream have a listener associated with them.
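>
> With per-node listeners that might look like this (hypothetical; today
> there is only the single shared AudioContext#listener):
>
>   var panner = context.createPanner();
>   panner.listener.setPosition(0, 0, 0); // listener owned by this node
>   panner.setPosition(3, 0, -2);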
>
> AudioContext#createPeriodicWave()
>  * Use a constructor.
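>
> i.e. hypothetically:
>
>   var real = new Float32Array([0, 0, 1]);
>   var imag = new Float32Array([0, 0, 0]);
>   var wave = new PeriodicWave(real, imag);
>   // today: context.createPeriodicWave(real, imag)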
>
> Cheers,
> Jussi
>
>
> On Wed, Jul 3, 2013 at 11:04 AM, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> On Wed, Jul 3, 2013 at 7:39 PM, Jussi Kalliokoski <
>> jussi.kalliokoski@gmail.com> wrote:
>>
>>> 1. Web Audio API is not webby: I agree, and have been ranting about this
>>> enough already. Let's fix this.
>>>
>>
>> Do you have any particular proposals in mind?
>>
>> Rob
>> --
>> Jesus said, "Why do you entertain evil thoughts in your hearts? Which
>> is easier: to say, 'Your sins are forgiven,' or to say, 'Get up and
>> walk'? But I want you to know that the Son of Man has authority on
>> earth to forgive sins." So he said to the paralyzed man, "Get up, take
>> your mat and go home." Then the man got up and went home.
>>

Received on Thursday, 4 July 2013 09:46:55 UTC