- From: Hongchan Choi <hongchan@google.com>
- Date: Wed, 10 Aug 2016 17:12:03 +0000
- To: Joe Berkovitz <joe@noteflight.com>
- Cc: Audio Working Group <public-audio@w3.org>
- Message-ID: <CAGJqXNtFS9ioEP8rVGTdWvgc6XLe8vUpTFy+PVnYmOUeF+Lx1Q@mail.gmail.com>
Hello Joe,

I haven't gotten a response from Alex yet and am working on the following points:

- Rename registerAudioWorkletProcessor() to registerProcessor().
- Define where 'AudioWorkletDefinition' gets stored.
- Specify how the message passing system works for AudioWorklet: I think we can point to the processing model.
- Specify an algorithm for how the key-value pairs get parsed into the corresponding AudioParams in the AudioWorkletProcessor.
- AudioWorkletProcessor::process() cannot be specified in the IDL since it is not a part of the vendor implementation.
- Write the introduction section for AudioWorklet, preferably with an example.

The current PR is already bloated with too many things (the loading time is huge), so I would say we land what we have first and refine it over the next few weeks. Once we land the current PR, we can break down the to-do items and collaborate more easily.

Best,
Hongchan

On Wed, Aug 10, 2016 at 8:43 AM Joe Berkovitz <joe@noteflight.com> wrote:

> Hi Hongchan,
>
> It seems as though we are very close to absorbing this feedback from
> Domenic into the spec. The above two issues still need resolution, but
> the path seems clear.
>
> Time is starting to run out prior to TPAC. How close are we to having
> the TAG feedback that Paul and Ray wanted? I recall that you wanted Alex
> Russell to give some feedback as well.
>
> It would be great if we could have all AudioWorklet feedback in hand,
> and a plan for responding to it, for tomorrow's call.
>
> Best,
>
> . . . . . ...Joe
>
> Joe Berkovitz
> President
> Noteflight LLC
>
> +1 978 314 6271
>
> 49R Day Street
> Somerville MA 02144
> USA
>
> "Bring music to life"
> www.noteflight.com
>
> ---------- Forwarded message ----------
> From: Domenic Denicola <notifications@github.com>
> Date: Mon, Aug 8, 2016 at 11:31 PM
> Subject: Re: [WebAudio/web-audio-api] Add AudioWorklet section (#869)
> To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
> Cc: Joe Berkovitz <joe@noteflight.com>, Comment <comment@noreply.github.com>
>
> It's a subclass derived from AudioWorkletProcessor. This concept of
> 'two-objects-in-different-threads' is quite new to the world, even for
> the Worklet. However, we need two objects that represent the 'main
> thread interface' and the 'audio thread interface' respectively. It
> might be nice to have your perspective on this because the discussion
> has been going on only inside of the WG.
>
> Two objects in different threads seem fine. The problem is that the spec
> as written creates the objects and then does nothing with them. This
> means that they can be garbage collected immediately. You need to
> specify where to store them, and what holds references to them and uses
> them later.
>
> Hmm. This is surprising because this process method is the core of
> AudioWorkletProcessor. As you pointed out, this must be implemented by
> the subclass. If you don't, you get silence out of this node. (Or should
> we throw? This needs to be defined too.) How can we capture the
> interface of this method if we cannot put it in the IDL?
>
> The IDL is just instructions for how bindings generators in
> implementations should perform conversions and create APIs. It doesn't
> have anything to do with the web developer-facing interface and
> requirements for using the API. That is better documented in, well,
> documentation, not in a normative interface definition language.
>
> Since the implementation should not have a process method, it must not
> appear in the IDL.
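For illustration, here is a rough sketch of the kind of introduction example discussed in the to-do list above, written against the API shape under discussion in this thread: registerProcessor() as the renamed registration entry point, a user-defined subclass of AudioWorkletProcessor supplying process() (which is why process() does not appear in the IDL), and AudioParam values delivered to process() as key-value pairs. The parameterDescriptors, addModule(), and AudioWorkletNode names are taken from the draft material and may differ from what ultimately lands in the spec.

```js
// gain-processor.js — evaluated in the AudioWorkletGlobalScope (audio thread).
class GainProcessor extends AudioWorkletProcessor {
  // Declares the AudioParams whose values are delivered to process() as
  // key-value pairs (the exact parsing algorithm is one of the open to-do items).
  static get parameterDescriptors() {
    return [{ name: 'gain', defaultValue: 1.0 }];
  }

  // process() is defined by the author's subclass, not by the vendor
  // implementation — hence it cannot be captured in the IDL. If a subclass
  // does not define it, the node produces silence (or throws; still undecided).
  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    const gain = parameters.gain;
    for (let channel = 0; channel < output.length; ++channel) {
      const inputChannel = input[channel];
      const outputChannel = output[channel];
      for (let i = 0; i < outputChannel.length; ++i) {
        // gain may hold one value per sample or a single value for the block.
        outputChannel[i] = inputChannel[i] * (gain.length > 1 ? gain[i] : gain[0]);
      }
    }
    return true; // Keep the processor alive.
  }
}

// The renamed registration call (formerly registerAudioWorkletProcessor()).
registerProcessor('gain-processor', GainProcessor);
```

On the main thread, the counterpart object would be created roughly like this:

```js
// main.js — main-thread side: load the worklet module, then create the node.
const context = new AudioContext();
context.audioWorklet.addModule('gain-processor.js').then(() => {
  const gainNode = new AudioWorkletNode(context, 'gain-processor');
  gainNode.connect(context.destination);
});
```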
Received on Wednesday, 10 August 2016 17:12:52 UTC