- From: lonce <lonce.aggregate@zwhome.org>
- Date: Tue, 28 Mar 2017 16:17:28 -0700
- To: Hongchan Choi <hongchan@google.com>, Raymond Toy <rtoy@google.com>
- Cc: "public-audio-dev@w3.org" <public-audio-dev@w3.org>, André Michelle <andre.michelle@audiotool.com>
- Message-ID: <da494f12-aec6-3183-5e61-2a48cc8b02b8@zwhome.org>
Hi Hongchan and All -
I just want to be able to write my own processor nodes and for them
to play nicely with the predefined webaudio nodes (not run on the UI
thread).
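To make the goal concrete: the kind of custom node described here would supply a per-block render callback that runs on the audio thread rather than the UI thread. The sketch below is a hedged illustration, not code from the spec or this thread: the DSP kernel is kept as a plain function (the name renderGainBlock is invented for illustration), and the AudioWorklet wiring is shown only in comments since it requires a browser.

```javascript
// Per-block DSP kernel a custom processor node would run off the main
// thread. Kept as a plain function so the kernel itself can be tested
// anywhere; the name and signature are illustrative.
function renderGainBlock(input, output, gain) {
  // input and output are Float32Array channels of equal length
  // (the Web Audio spec renders in 128-frame quanta).
  for (let i = 0; i < input.length; i++) {
    output[i] = input[i] * gain;
  }
  return output;
}

// Inside an AudioWorklet module (browser-only), the wiring would look
// roughly like this:
//
// class GainProcessor extends AudioWorkletProcessor {
//   process(inputs, outputs, parameters) {
//     renderGainBlock(inputs[0][0], outputs[0][0], 0.5);
//     return true; // keep the processor alive
//   }
// }
// registerProcessor('gain-processor', GainProcessor);
```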
Best,
- lonce
On 28/3/17 1:55 pm, Hongchan Choi wrote:
> Hello Andre and Lonce,
>
> I believe we have completed the spec work. If you find any issues
> that should be dealt with before the spec freeze, now is the time:
> https://webaudio.github.io/web-audio-api/#AudioWorklet
>
> Also you can check out the progress in Chrome here:
> https://bugs.chromium.org/p/chromium/issues/detail?id=469639
>
> The progress is certainly not fast, because it requires a fundamental
> change in our WebAudio rendering engine. I don't want to break
> anything while bringing in this new feature, so I am trying to be as
> careful as I can.
>
> As a contributor to the spec and implementation of AudioWorklet, I
> would like to ask you a question - what kind of technical difficulty
> do you run into due to the lack of AudioWorklet?
>
> Regards,
> Hongchan
>
>
>
> On Tue, Mar 28, 2017 at 8:40 AM Raymond Toy <rtoy@google.com
> <mailto:rtoy@google.com>> wrote:
>
> AudioWorklets are in the spec now. There might be some tweaks as
> implementors start implementing it.
>
> I can't speak for any other browser, but Chrome is actively
> working on this. We don't normally give out timelines for these
> things, but everything is done in the open, of course, so you can
> follow along by watching crbug.com <http://crbug.com> or crrev.com
> <http://crrev.com>.
>
> On Mon, Mar 27, 2017 at 2:06 PM, lonce <lonce.aggregate@zwhome.org
> <mailto:lonce.aggregate@zwhome.org>> wrote:
>
>
> Hi All,
>
> First, a hearty 'thank you' to all the architects and
> engineers making audio happen in the browsers. You may not be
> getting paid for it, but it is deeply appreciated and impactful.
>
> I have been biting my tongue on this issue too, but this
> will be such a life-changing event both for us developers
> and for the general public's audio experience of the web.
> I know this is a distributed collaborative effort, but some
> kind of overall timeline prediction would be really helpful
> for planning, and would satisfy curiosity!
>
> Many thanks,
> - lonce
> --
> Lonce Wyse, Associate Professor
> Communications and New Media
> Director, IDMI Arts and Creativity Lab
> National University of Singapore
> lonce.org/home <http://lonce.org/home>
>
>
>
> On 27/3/17 12:57 pm, André Michelle wrote:
>> Hey!
>>
>>
>> What is the current state of the AudioWorklet?
>>
>> It is very important for us to know when we can use it. The
>> AudioWorklet would improve the experience of our application
>> in many ways. The most obvious is that we expect far fewer
>> glitches. We know that buffer underruns can still happen, but
>> in most cases the main thread was using just a little(!) more
>> time than we can allow if we are to maintain seamless playback
>> at a reasonable latency.
>>
>> We already improved things a lot by not(!) using a
>> ScriptProcessor. BufferSourceNodes can be scheduled
>> seamlessly and offer more control and glitch detection. The
>> audio rendering is already done inside a worker and sent to
>> the main thread, but this is where the bad glitches happen:
>> small things like layout changes can have an immediate
>> impact on playback. We also have a version that uses another
>> browser tab to render and play back the audio. That solution
>> actually works best (reducing glitches by up to 75% while
>> allowing 60fps animations), but it is hardly a long-term
>> solution.
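The worker-plus-BufferSourceNode workaround described above boils down to some scheduling arithmetic: each rendered block is started exactly where the previous one ends, and an underrun shows up when a block's scheduled start has already passed the context's current time. A minimal sketch of that arithmetic, with illustrative names (not Audiotool's actual code):

```javascript
// Next AudioBufferSourceNode.start() time for seamless playback:
// the previous block's start plus its duration in seconds.
function nextStartTime(previousStart, blockFrames, sampleRate) {
  return previousStart + blockFrames / sampleRate;
}

// Glitch detection: if the scheduled start is already in the past
// relative to the context's currentTime, the block arrived too late.
function isUnderrun(scheduledStart, currentTime) {
  return scheduledStart < currentTime;
}
```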
>>
>> Any pointers are welcome.
>>
>> ~
>> André Michelle
>> http://www.audiotool.com
>
>
>
Received on Tuesday, 28 March 2017 23:18:03 UTC