- From: Hongchan Choi <hongchan@google.com>
- Date: Thu, 30 Mar 2017 16:53:28 +0000
- To: Sebastian Zimmer <sebzim2@web.de>, public-audio-dev@w3.org
- Message-ID: <CAGJqXNtr3Cpjf-vSf=+2V_ZCRhEL_AVQ=7sZFp_ifZFz6HaB0g@mail.gmail.com>
Hello Sebastian,

Yes, I will try to reach out to developers when the Chromium prototype is
ready, through this mailing list, the web-audio Slack channel and Twitter
(@hochsays).

Best,
Hongchan

On Wed, Mar 29, 2017 at 11:33 AM Sebastian Zimmer <sebzim2@web.de> wrote:

> Hello Hongchan and all,
>
> I'd also be willing to experiment with the prototype. Maybe you could make
> an announcement on this list when that happens. As a developer, I have
> subscribed to this public-audio-dev list, but look only occasionally into
> the public-audio list.
>
> Thank you for your efforts on the API.
>
> Best,
> Sebastian
>
> On 29.03.2017 at 18:05, Hongchan Choi wrote:
>
> Hello Andre and Lonce,
>
> Thanks for your opinion. The reason I asked the obvious question is that
> oftentimes people think AudioWorklet will solve all the issues of the Web
> Audio API. If you have been following the progress of Chrome's
> implementation, we had to cut some corners to bring the V8 JS engine and a
> dedicated WorkerGlobalScope into the audio rendering thread. It's too
> early to tell, but I will keep monitoring for any performance regression.
>
> Once I have the working prototype available behind a flag, I will reach
> out to you (and others who are willing to experiment) to collect more
> feedback.
>
> Regards,
> Hongchan
>
> On Tue, Mar 28, 2017 at 4:17 PM lonce <lonce.aggregate@zwhome.org> wrote:
>
> Hi Hongchan and All -
>
> I just want to be able to write my own processor nodes and have them play
> nicely with the predefined Web Audio nodes (and not run on the UI thread).
>
> Best,
> - lonce
>
> On 28/3/17 1:55 pm, Hongchan Choi wrote:
>
> Hello Andre and Lonce,
>
> I believe we completed the spec work. If you find any issue to be dealt
> with before the spec freeze, now is the time:
> https://webaudio.github.io/web-audio-api/#AudioWorklet
>
> Also, you can check out the progress in Chrome here:
> https://bugs.chromium.org/p/chromium/issues/detail?id=469639
>
> The progress is certainly not fast because it requires a fundamental
> change in our WebAudio rendering engine. I don't want to break anything in
> bringing this new feature, so I am trying to be as careful as I can be.
>
> As a contributor to the spec/implementation of AudioWorklet, I would like
> to ask you a question - what kind of technical difficulty do you have due
> to the lack of AudioWorklet?
>
> Regards,
> Hongchan
>
> On Tue, Mar 28, 2017 at 8:40 AM Raymond Toy <rtoy@google.com> wrote:
>
> AudioWorklets are in the spec now. There might be some tweaks as
> implementors start implementing it.
>
> I can't speak for any other browser, but Chrome is actively working on
> this. We don't normally give out timelines for these things, but
> everything is done in the open, of course, so you can follow along by
> watching crbug.com or crrev.com.
>
> On Mon, Mar 27, 2017 at 2:06 PM, lonce <lonce.aggregate@zwhome.org> wrote:
>
> Hi All,
>
> First, a hearty 'thank you' to all the architects and engineers making
> audio happen in the browsers. You may not be getting paid for it, but it
> is deeply appreciated and impactful.
>
> I have been biting my tongue on this issue too, but this will be such a
> life-changing event both for us developers and for the general public's
> audio experience of the web. I know this is a distributed collaborative
> effort, but some kind of overall timeline prediction would be really
> helpful for planning, and just to satisfy curiosity!
>
> Many thanks,
> - lonce
>
> --
> Lonce Wyse, Associate Professor
> Communications and New Media
> Director, IDMI Arts and Creativity Lab
> National University of Singapore
> lonce.org/home
>
> On 27/3/17 12:57 pm, André Michelle wrote:
>
> Hey!
>
> What is the current state of the AudioWorklet?
>
> It is very important for us to know when we can use it. The AudioWorklet
> would improve the experience of our application in many ways. The most
> obvious is that we expect far fewer glitches. We know that buffer
> underruns can still happen, but in most cases the main thread was using a
> little(!) bit more time than we can allow if we are to maintain seamless
> playback with a reasonable latency.
>
> We already improved it a lot by not(!) using a ScriptProcessor.
> BufferSourceNodes can be scheduled seamlessly and offer more control and
> glitch detection. The audio rendering is already done inside a worker and
> sent to the main thread, but this is where the bad glitches are happening:
> small things like layout changes can have an immediate impact on the
> playback. We also have a version where we use another browser tab to
> render and play back the audio. This solution actually works best (it
> reduces glitches by up to 75% while allowing 60 fps animations), but it is
> hardly a long-term solution.
>
> Any pointers are welcome.
>
> ~
> André Michelle
> http://www.audiotool.com
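
The workaround André describes above - rendering audio off the main thread
and feeding it into the graph through scheduled BufferSourceNodes - boils
down to a scheduling loop on the main thread. The sketch below is a minimal
illustration of that pattern only; the chunk size, the 50 ms scheduling lead
and the worker message shape are assumptions for illustration, not
Audiotool's actual code.

    // Main-thread side of the "schedule BufferSourceNodes back to back"
    // pattern. Chunks are assumed to arrive from a Web Worker as one
    // Float32Array per channel; all names below are illustrative.
    const context = new AudioContext();
    const CHUNK_FRAMES = 8192;    // assumed size of each rendered chunk
    const SCHEDULE_LEAD = 0.05;   // keep 50 ms of headroom against jank
    let nextStartTime = 0;

    function scheduleChunk(channels) {
      const buffer = context.createBuffer(
          channels.length, CHUNK_FRAMES, context.sampleRate);
      channels.forEach((data, ch) => buffer.copyToChannel(data, ch));

      const source = context.createBufferSource();
      source.buffer = buffer;
      source.connect(context.destination);

      // If the main thread was blocked for too long, the previous chunk has
      // already ended and the clock must be restarted - the audible glitch.
      const startAt = Math.max(nextStartTime,
                               context.currentTime + SCHEDULE_LEAD);
      if (nextStartTime !== 0 && startAt > nextStartTime) {
        console.warn('buffer underrun detected');
      }
      source.start(startAt);
      nextStartTime = startAt + buffer.duration;
    }

    // renderWorker is assumed to post { channels: [Float32Array, ...] }:
    // renderWorker.onmessage = (event) => scheduleChunk(event.data.channels);

Scheduling with a lead time reduces glitches but adds latency, which is the
trade-off the thread is discussing.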
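
For comparison, the AudioWorklet interface linked earlier in the thread
(https://webaudio.github.io/web-audio-api/#AudioWorklet) moves the custom
processing onto the audio rendering thread itself, which is what lonce asks
for. The following is a minimal sketch of that programming model - a
processor class registered in a worklet script and instantiated from the
main thread as an AudioWorkletNode. The method and property names follow
the draft and later revisions, and may differ in detail from what
implementations eventually ship.

    // gain-processor.js - runs in the AudioWorkletGlobalScope on the
    // audio rendering thread.
    class GainProcessor extends AudioWorkletProcessor {
      static get parameterDescriptors() {
        return [{ name: 'gain', defaultValue: 0.5 }];
      }

      process(inputs, outputs, parameters) {
        const input = inputs[0];
        const output = outputs[0];
        const gain = parameters.gain;
        for (let channel = 0; channel < input.length; ++channel) {
          for (let i = 0; i < input[channel].length; ++i) {
            // gain holds one value (k-rate) or one value per frame (a-rate).
            output[channel][i] =
                input[channel][i] * (gain.length > 1 ? gain[i] : gain[0]);
          }
        }
        return true;  // keep the processor alive
      }
    }
    registerProcessor('gain-processor', GainProcessor);

    // main.js - on the main thread.
    async function setup() {
      const context = new AudioContext();
      await context.audioWorklet.addModule('gain-processor.js');
      const gainNode = new AudioWorkletNode(context, 'gain-processor');
      const oscillator = context.createOscillator();
      oscillator.connect(gainNode);
      gainNode.connect(context.destination);
      oscillator.start();
    }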
Received on Thursday, 30 March 2017 16:54:13 UTC