Minutes for WebAudio F2F 2020 virtual meeting

Sorry for the delay.

The minutes for the F2F 2020 meeting can be found here
<https://docs.google.com/document/d/1a77fjPixfzzbMdiCF6IfbBNmSQdzAMxmvb3FhZmZnzE/edit?usp=sharing>,
or below.

Apparently the recordings are still being processed.  I'll send out links
when they're ready.

Thanks to everyone who attended.  I think we made pretty good progress
during the meeting.

Minutes:

WebAudio F2F

Jun 8-11, 15-18

F2F Agenda
<https://docs.google.com/document/d/1q-Vf4fYkzynzARL_IX3k8Z2RXK93oZnRuTU1DC2L00o/edit>
Minutes

Jun 8 Attendees

Matt, Paul, Raymond, Michel Buffa, Chris Wilson
Minutes:

   -

   FrozenArrays for AudioWorklets
   <https://github.com/WebAudio/web-audio-api/issues/1933>
   -

      What about inputs when connecting/disconnecting?
      -

      Firefox and Chrome are beginning implementation.  Karl is doing some
      benchmarks.
      -

      Paul suggests doing nothing for now and leave for v2.
      -

      Hongchan agrees
      -

      Paul explains some things about ECMAScript FrozenArrays.
      -

      Some difficulties when the shape of the array needs to change.  Not
      clear how to do this right now.
      -

      Paul asked if it’s possible to unfreeze.  No.
      -

      Hongchan has some promising benchmark results, especially on low core
      count devices.
      -

      [Missed some stuff]
      -

      Possibly implement process() using WebIDL callback.
      -

      Paul says if we don’t, then (according to Boris) we can manually spec
      the WebIDL steps as part of the spec.
      -

   Privacy
   -


      https://github.com/WebAudio/web-audio-api/issues/1457#issuecomment-637010281
      -

      https://github.com/WebAudio/web-audio-api/issues/2191
      -

         https://github.com/w3cping/tracking-issues/issues/89
         -

      Paul says Firefox anti-fingerprinting is working well.
      -

      Paul has been discussing this with privacy experts for many months
      and they are happy with that.
      -

      Raymond agrees with Chris’s idea of allowing either 44.1 or 48 kHz at
      the browser’s discretion.
      -

      Paul:  What do we do if some fancy PC defaults to 192 kHz?  Do they
      always have to pay the resampling cost?
      -

      Chris Wilson joins for privacy issues.
      -

      Group is still opposed to dithering
      -

      Group generally agrees to do 44.1 or 48 kHz for fingerprinting.
      Resampling for other rates to one of these; up to UA to decide.
      -

      Ultrasonics are already covered by the privacy issues in the spec, so
      we propose that nothing additional is needed.
      -

   V1
   -

      https://github.com/WebAudio/web-audio-api/issues/2176
      -

      Paul suggests just returning a rejected promise, as Domenic says.
      Paul to follow up on the issue.
      -

      https://github.com/WebAudio/web-audio-api/issues/2177
      -

      Nothing to do here; closing. (Comment in issue)
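A note on the FrozenArray discussion above (issue 1933): the arrays in
question are the ones handed to AudioWorkletProcessor.process(). A minimal
sketch of that callback, written here as a plain function over nested arrays
so the data flow is visible (a real processor is a class registered with
registerProcessor(); the `gain` parameter name is illustrative):

```javascript
// Sketch of the AudioWorkletProcessor.process(inputs, outputs, parameters)
// callback. inputs[i][c] and outputs[o][c] are Float32Array render quanta
// (128 frames in v1); issue 1933 proposes freezing the containing arrays.
function process(inputs, outputs, parameters) {
  const input = inputs[0];
  const output = outputs[0];
  const gain = parameters.gain; // length 1 (k-rate) or one value per frame
  for (let channel = 0; channel < output.length; ++channel) {
    const inCh = input[channel];
    const outCh = output[channel];
    for (let i = 0; i < outCh.length; ++i) {
      const g = gain.length === 1 ? gain[0] : gain[i];
      outCh[i] = (inCh ? inCh[i] : 0) * g;
    }
  }
  return true; // keep the processor alive
}
```

The difficulty noted above (the shape of the array changing when inputs are
connected or disconnected) is exactly about the outer `inputs`/`outputs`
containers.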


Jun 9 Attendees

Attila Haraszti, AnthumChris, Christoph Guttandin, Hugh Rawlinson, Jack
Schaedler, Matthew Paradis, Michel Buffa, Paul Adenot, Philippe Milot, Ruth
John, Raymond Toy, Chris Lilley

Minutes

   -

   Introductions
   -

   No objections to meeting recording
   -

   V2 Goals
   -

      Matthew: 3 areas:  fixing V1 issues, new V2 items, new requirements.
      -

      Paul: AudioWorklets can do many things, but adding new nodes requires
      a higher bar.
      -

      Hugh:  What is the bar for adding new nodes?
      -

      Chris L: Yeah,
      -

      Paul: Multi-dimensional problem with ease of use and others.
      Community involvement is important.  Noise gen is a good example where
      native node is appropriate.
      -

      Michel: What about Juce? How does that fit in?
      -

      Paul: No building blocks, but could compile Juce into a worklet.
      -

      Michel: What about Audio Device Client replacement?  How to select
      outputs.
      -

      Paul: That’s something to be discussed this week.
      -

   How to write the V2 version
   -

      Paul: Web platform has several approaches.  WebGL has two specs.
      -

      Chris L: Probably have just new stuff. Linking to old stuff
      -

      Paul: So new separate doc?
      -

      Chris L: Yes.  Extending existing v1 is harder.
      -

      Raymond: For example, Oscillator with phase.  How to do that?
      -

      Paul: Yes, and like oscillator that is not band-limited.  Build on
      old one or new node?
      -

      Chris L: Same issue with pulse width osc.  New node.
      -

      Paul to write an explainer
      -

      Paul is concerned about backward-incompatible changes like render
      size. How to deal with that in the new spec?
      -

      Generally agree to a separate doc for V2.  Should be easier to see.
      -

      Paul: Any W3C things to do?
      -

      Chris L: Nothing needed until a first publication is done.
      -

   Prioritize features, part 1
   -

      https://github.com/WebAudio/web-audio-api-v2/issues
      -

      https://github.com/WebAudio/web-audio-api-v2/projects/1
      -

      Paul: The “Ready for editing” column clearly contains issues which
      relate to existing areas of V1, which is a possible issue.
      -

      Matt: We need to capture links between issues where common text might
      be required.
      -

      Priorities: 1: something we want to do sooner rather than later. 2:
      something we want but not right away
      -

      https://github.com/WebAudio/web-audio-api-v2/issues/1
      -

         Priority 1
         -

      https://github.com/WebAudio/web-audio-api-v2/issues/3
      -

         Paul: Only Chrome has shipped cancelAndHoldAtTime()
         -

         Paul: We can spec what Chrome has shipped and then choose to redesign
         -

         Paul: There have been comments that the AudioParam interface is
         possibly too complicated and too different from other APIs.
         -

         Paul: Suggests option 2, which is what Chrome does today. The second
         part of the work is to understand what additional
         functionality/behaviour people want and spec it separately.
         -

      https://github.com/WebAudio/web-audio-api-v2/issues/4
      -

         Priority 1
         -

      https://github.com/WebAudio/web-audio-api-v2/issues/5
      -

         Paul: very closely related to #4.
         -

         Paul: Pure WASM in AudioWorklet without any JS.
         -

         Philippe: Need to do both #4 and #5 at the same time.
         -

         Priority 1
         -

      https://github.com/WebAudio/web-audio-api-v2/issues/6
      -

         Chris G: Another use case is offline context that doesn’t produce
         an audio buffer.
         -

         Paul: Maybe that’s a different feature request.
         -

         Priority 2
         -

      https://github.com/WebAudio/web-audio-api-v2/issues/7
      -

         Priority 1
         -

      https://github.com/WebAudio/web-audio-api-v2/issues/8
      -

         Priority 1




Jun 10 Attendees

AnthumChris, Christoph Guttandin, Jack Schaedler, Marcin Wolniewicz,
Philippe Milot, Ruth John, Paul Adenot, Hugh Rawlinson, Raymond Toy, Matt
Paradis, Chris Lilley, Attila Haraszti, Michel Buffa
Minutes

   -

   https://github.com/WebAudio/web-audio-api-v2/issues/9
   -

      Chris Lilley: Is the phase set up at creation?  Or is it an
      AudioParam?
      -

      Paul: Yeah, it’s unclear.
      -

      Raymond: Yeah, it’s something we should do.
      -

      Priority 1
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/10
   -

      Paul: We were close to having a design (in a different spec)
      -

      Paul: On construction, pass in an id to specify the output device.
      Need a method to allow changing the output after construction. (But
      sample rate is fixed.)
      -

      Paul: Media capture can provide the necessary info (sample rate,
      bits, number of channels)
      -

      Priority 1
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/11
   -

      Paul: Related use-case is Facebook games on mobile which have large
      audio assets.  AudioBuffer uses too much memory, and many games just
      compile up ffmpeg and do their own decoding as needed.
      -

      Philippe: Would it complicate the API?
      -

      Paul: Just add a new element to the property bag for AudioBuffer.
      -

      Raymond: Yeah, and some details on how the AudioBuffer methods would
      now work.  Probably not too bad.
      -

      Raymond: Wonders how much use this would get with WebCodecs soon
      available.  AudioBuffer won’t save a lot of memory compared to compressed
      audio assets.
      -

      Paul: But having 16 bits (as already done in Firefox, behind the
      scenes with decodeAudioData) was quite beneficial.
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/12
   -

      Paul: suggested adding an envelope follower with the rest using
      regular stuff
      -

      Chris L: Need to have side-chain.
      -

      Michel: Are we constrained to just one kind of compressor?
      -

      Attila: Would it actually be enough with an envelope follower
      combined with other nodes?
      -

      Michel: References for limiters and such:
      http://iem.at/~zmoelnig/publications/limiter/
      -

      Jack: Should they be prototyped in a worklet first?
      -

      Hugh: Also important for developer experience.
      -

      Raymond: Certainly prototype and see if we can make a native node.
      But thinks the working group probably doesn’t want to maintain an
      “official” library of worklets.
      -

      Michel: Faust has many different envelope followers.
      -

      Paul: Yeah, this is a nice starting point for investigation.
      -

      Michel (via chat):
      -

          https://faustide.grame.fr/
          check the example menu + look at the faust standard lib
          (https://github.com/grame-cncm/faustlibraries)
         -

      Attila (via chat):
      -

          expanding on my comment for #12 sidechain, it'd be worth looking
          at what's going on in REAPER (DAW): the automation via audio signal
          is essentially the envelope follower approach. I remember it didn't
          yield the same results as using the built-in compressor (might be a
          phase/delay issue as mentioned, or some kind of upsampling inside
          the compressor plugin?)
         -

      Priority 1
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/20
   -

      Raymond:  Not terrible to have n-channel splitter/merger and n mono
      convolvers.
      -

      Jack: Probably not more important than other priority items
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/22
   -

      Paul: it’s common in music software
      -

      Priority 1
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/23
   -

      Raymond: Use for introspection, but that’s about it.
      -

      No priority
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/25
   -

      Jack + Paul: Unclear if this is just for the context or for ABSNs too.
      -

      Raymond: Chrome uses a pretty complicated sinc filter with fairly
      large delay and CPU usage.
      -

      Jack: Some kind of quality switch would be nice to have.
      -

      Paul: Currently limited demand, so maybe not priority 1
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/26
   -

      Raymond: No objections to adding this. Lots of design work needed.
      -

      Paul: Need to be careful with exposing a high res clock from this.
      -

      Priority 1.
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/27
   -

      Editorial
      -

      Paul: Need to fix this; we’ve made mistakes before because the text
      was not clear.
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/29
   -

      Philippe: Pretty important for apps to handle these errors, including
      network issues (addModule).
      -

      Paul followed up in the issue comments.
      -

      Editorial issue
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/30
   -

      Paul: Nice to have.
      -

      Raymond: What’s the API?
      -

      Priority 2
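On the issue #12 discussion above: the envelope-follower building block could
be prototyped in an AudioWorklet first, as Jack suggested. A minimal one-pole
attack/release follower over a block of samples (the coefficient scheme is
one common choice among the many Michel mentions; names and structure are
illustrative, not a proposed API):

```javascript
// One-pole envelope follower: tracks the magnitude of a signal, rising with
// the `attack` coefficient and falling with the `release` coefficient.
// Coefficients are smoothing factors in [0, 1); smaller means faster.
function envelopeFollower(samples, attack, release) {
  const env = new Float32Array(samples.length);
  let e = 0;
  for (let i = 0; i < samples.length; ++i) {
    const x = Math.abs(samples[i]);
    const coeff = x > e ? attack : release; // rising vs falling
    e = coeff * e + (1 - coeff) * x;
    env[i] = e;
  }
  return env;
}
```

A compressor prototype would feed this envelope into a gain computation; the
side-chain point above means the envelope input may differ from the audio
being gained.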


Jun 11 Attendees

Jack Schaedler, Matthew Paradis, Philippe Milot, Raymond Toy, Christoph
Guttandin, Ruth John
Minutes

   -

   https://github.com/WebAudio/web-audio-api-v2/issues/31
   -

      Paul:  This is basically handled in issue 1933 in v1 with
      FrozenArrays for the parameters.  Probably nothing to do here.
      -

      Closing.  See comment in issue
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/38
   -

      Raymond: describe the issue briefly.
      -

      Chris G: What does chrome do?
      -

      Raymond: Can’t remember
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/40
   -

      Paul: Maybe can be done with an AudioWorklet with performance.now() by
      measuring the time between renders.
      -

      Philippe: Seems like a compromise.
      -

      Paul: Allowing performance.now() is generally useful for devs to
      measure perf.
      -

      Philippe: Would be useful for us (Wwise)
      -

      Jack: Agrees
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/41
   -

      Philippe: Is it possible to have a disconnected output with
      user-intervention?
      -

      Raymond: I don’t think so.
      -

      Philippe: Should it be transitive? (Are all downstream nodes
      connected to the destination?)  Perhaps not that useful?
      -

      Raymond: I think it should be just a direct connection for simplicity.
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/55
   -

      Paul: Why not?  Seems useful
      -

      Priority: none
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/28
   -

      Paul: Explains the issue that setBuffer delays the main thread for a
      long time computing the FFTs
      -

      Paul: Use a promise for setBuffer which resolves when the FFTs are
      done.
      -

      Raymond: What about HRTF?
      -

      Philippe: HRTF problem is issue 21
      -

      Raymond: PeriodicWave has the same problem.
      -

      Paul: Could also have an event when things are done.
      -

      Priority 2

That finishes off the “In discussion” column. Moving on to “Under
consideration”.

   -

   https://github.com/WebAudio/web-audio-api-v2/issues/53
   -

      Editorial issue. Ready for editing.
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/2
   -

      Philippe: Could this be a library?
      -

      Christoph: Doesn’t quite work when connecting to a native node.
      -

      Paul: Hongchan’s comment about experimenting with extending GainNode
      is a good experiment to do.
      -

      Christoph: Firefox seems to allow GainNodes to be extended.
      -

      Christoph: Should I extend GainNode as an experiment?
      -

      Paul: Yes, that’s helpful.
      -

      Priority 1 for now.  Reduce if experiments work.
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/13
   -

      Raymond: Suggests it is a hint
      -

      Paul: Issues with multiple audio contexts to different devices with
      different sizes. Also, an issue with how inputs would work.
      -

      Jack: Proposes priority 0
      -

      Raymond and Paul: It’s a lot of work, especially in the ConvolverNode.
      -

      Raymond: Plus you can’t do a partial implementation.  It has to be
      all done before shipping.
      -

      Paul: Summarizes in issue comment.
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/14
   -

      Paul: Every browser has this in the MediaElement, but not exposed in
      WebAudio
      -

      Paul: What algorithms to use?  Is it a node that modifies its input?
      -

      Jack: Difficult choices to make
      -

      Philippe: Should be able to prototype in an AudioWorklet
      -

      Jack: Proposes specifying more requirements on what we want.
      -

      Paul: Updating issue with comments
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/15
   -

      Raymond: Describes the issue
      -

      Paul: Makes sense
      -

      Paul: Ready for editing.
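Relevant to issue #40 above: if performance.now() (or an equivalent clock)
were exposed in the AudioWorkletGlobalScope, render load could be estimated
by timing successive process() calls against the render-quantum budget. A
sketch over a list of callback timestamps (`renderLoad` is a hypothetical
helper, not an existing API):

```javascript
// Estimate worst-case render load from timestamps (in ms) taken at the
// start of successive process() calls. A render quantum of `quantumSize`
// frames at `sampleRate` Hz must be produced every quantumSize/sampleRate
// seconds; a ratio above 1 suggests the callback is missing its budget.
function renderLoad(timestampsMs, sampleRate, quantumSize = 128) {
  const budgetMs = (quantumSize / sampleRate) * 1000;
  let worst = 0;
  for (let i = 1; i < timestampsMs.length; ++i) {
    const elapsed = timestampsMs[i] - timestampsMs[i - 1];
    worst = Math.max(worst, elapsed / budgetMs);
  }
  return worst;
}
```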

Jun 15 Attendees

Jack Schaedler, Matthew Paradis, Hugh Rawlinson, Chris Lilley, Christoph
Guttandin, Paul Adenot, Raymond Toy
Minutes

   -

   V1 status
   -

      Chris L: Updated CR (Jun 11) published.  Still have 3 privacy issues.
      -

      Raymond: Chris W made some good comments, but no follow ups
      -

      Matt: Is there anything we need to do to make progress?
      -

      Matt: What’s the next step?
      -

      Chris L: Do our best to satisfy the issues
      -

      https://github.com/WebAudio/web-audio-api/issues/2203
      -

         Paul: Firefox has tests for this already.  Need to add some slots
         to hold destination to make it all work correctly.  Need to be
         consistent, but probably less important for us.
         -

   https://github.com/WebAudio/web-audio-api-v2/issues/16
   -

      Paul: Something we could do (along with lots of others).
      Particularly tricky to do but useful for large apps like video games and
      tools.
      -

      Christoph: Useful with MIDI controllers where worker can process
      messages instead of main thread.
      -

      Jack: Not as important as many other issues, so it seems like
      priority 2.
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/66
   -

      Basically the same issue as #6.
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/18
   -

      Raymond: Seems like WebCodecs solves the streaming problem, and #14
      will give the pitch adjustment.
      -

      Paul: Closing issue (with comments).
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/19
   -

      Raymond: Just add rectangular window at least
      -

      Chris L: Would the choice of window be affected if we returned real
      and imaginary parts of the FFT?
      -

      Raymond: Those people probably want rectangular
      -

      Jack: Interesting to know if WASM would give similar performance
      -

      Paul: Currently native FFT is probably much faster
      -

      Paul: Could add compatible method to return imaginary part. Window
      would be simple addition.
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/21
   -

      Paul: Are there standards for this?
      -

      Matt: Did some investigation on this and will give additional info
      -

      Chris L: Points to
      https://www.tvtechnology.com/opinions/deriving-hrtfs-and-the-aes692015-file-format

      Matt: Points to SOFA:
      https://www.sofaconventions.org/mediawiki/index.php/SOFA_(Spatially_Oriented_Format_for_Acoustics)
      -

      Matt: 1-2 MB to get the responses
      -

      Paul: Seems reasonable.
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/24
   -

      Raymond: Doesn’t expose any info that can’t already be determined in
      a more awkward way.
      -

      Paul: Yes, we should do this.
      -

      Priority None
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/34
   -

      Raymond: Explains the editorial issue
      -

      Priority None
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/37
   -

      Paul: Would prefer to talk about this when Hongchan is available.
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/39
   -

      Paul: How does this work with existing algorithms?  What happens when
      values are changing from two threads?
      -

      Raymond: Also related to Christoph’s idea on OfflineAudioContext not
      returning the rendered buffer.
      -

      Paul: Writing up notes on the issue
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/42
   -

      Raymond: I think this is being handled in #1933
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/49
   -

      Nothing new to add; state is the same from TPAC 2019
      -

      Priority 2

Untriaged issues

   -

   https://github.com/WebAudio/web-audio-api-v2/issues/84
   -

      Christoph: summarizes issue
      -

      Paul: Returning a promise is pretty heavy-weight and makes ABSNs more
      complicated.
      -

      Christoph: Can it be detected if promise is being used?
      -

      Paul: Don’t know if that’s possible, but this is a valid issue
      -

      Paul: Should schedule slightly ahead of time
      -

      Christoph: But how much is enough?
      -

      Moved to Under Consideration
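On issue #19 above (selectable analysis windows): the v1 AnalyserNode applies
a Blackman window before its FFT, and a rectangular window would simply skip
that step. For reference, the Blackman window as defined in the v1 spec:

```javascript
// Blackman window of length N, per the Web Audio API v1 AnalyserNode
// smoothing step: w[n] = a0 - a1*cos(2*pi*n/N) + a2*cos(4*pi*n/N),
// with a = 0.16, a0 = (1 - a)/2, a1 = 1/2, a2 = a/2.
function blackmanWindow(N) {
  const w = new Float32Array(N);
  const a0 = 0.42, a1 = 0.5, a2 = 0.08;
  for (let n = 0; n < N; ++n) {
    w[n] = a0 - a1 * Math.cos((2 * Math.PI * n) / N)
              + a2 * Math.cos((4 * Math.PI * n) / N);
  }
  return w;
}
```

A rectangular window is just all ones, which is what callers wanting the raw
real/imaginary FFT output would likely choose.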

Jun 16 Attendees

Matthew Paradis, Christoph Guttandin, Attila Haraszti, Paul Adenot, Raymond
Toy, Hugh Rawlinson, Jack Schaedler
Minutes

   -

   https://github.com/WebAudio/web-audio-api-v2/issues/83
   -

      Christoph: Summarizes issue
      -

      Attila: Is there a privacy concern?
      -

      Paul: Can save lots of battery.   Perhaps use some special sink id?
      -

      Hugh: Agrees this is useful for Meyda too.
      -

      Paul: Updates issue
      -

      Priority 1
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/85
   -

      Paul: On macOS, with 2 native apps, if you block one, all audio stops.
      There are other ways to do that. In browsers, there’s a latency hit due
      to sandboxing.
      -

      Raymond: Is this really multiple contexts?
      -

      Paul: Yes
      -

      Paul: Updates issue
      -

      Priority
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/82
   -

      Paul: The proper way for an offline AudioContext is to use streams for
      the output.  Then don’t use the output.  Probably committed to
      designing the streams version of offline context.  This solves many
      issues, including this one.
      -

      Paul: Updates issue
      -

      Closing
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/67
   -

      Raymond: Describes issue. Relatively easy, and already supported by
      Audio EQ Cookbook formulas.
      -

      Paul: Looked at different DAWs and they all had this ability.
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/75
   -

      Paul: Just need to do it; we’ve already agreed.
      -

      Priority 1
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/78
   -

      Paul: Not a WebAudio issue
      -

      Closed
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/79
   -

      Paul: Methods take a response which isn’t available in the global
      scope.  It’s possible to set a flag in WASM so these methods aren’t
      available. (https://webassembly.org/docs/web/)
      -

      Priority 2
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/58
   -

      Paul: Close and reference WebCodecs. We’re not adding an encode
      method.
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/54
   -

      Ready for editing.  Just do it for v2.
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/51
   -

      Christoph: It’s about getting the value of an AudioParam that has an
      input
      -

      Paul: The value without the input is already covered in another issue.
      -

      Paul: Jack has an example of this using an AudioWorklet.  Works very
      well.
      -

      Attila: Is it useful without the time?
      -

      Paul: Yeah, there’s always a time value from the context.
      -

      Paul: Updates issue
      -

      Closing
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/80
   -

      Christoph: Question: Is the output preserved between renders?
      -

      Paul: Does that actually work?
      -

      Raymond: Before frozen arrays, yes, you have to fill it up every
      time.  Not sure what happens with frozen arrays.  Would prefer you
      fill it up each time.  Something we need to add to the spec.
      -

      Paul: k-rate output breaks the model.  And performance difference
      seems very small.
      -

      Paul: Closing with comment
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/60
   -

      Chris L: created a PR just now
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/61
   -

      Paul: Streams is how WebCodecs will work with WebAudio
      -

      Paul: Updates issue and closes it.
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/62
   -

      Raymond: Describes issue
      -

      Paul: Would be nice to have WASM FFT benchmarks
      -

      Raymond: Browsers have everything built-in to support this.
      -

      Jack: Interested in doing FFT benchmarks
      -

      Raymond: Chrome uses PFFFT which is in C with intrinsics. Should work
      with WASM?
      -

      Raymond: Hold off until we get benchmarks?
      -

      Paul: Updates issue
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/77
   -

      Raymond: I think we agreed to do this already
      -

      Paul: Yes.  Could be extremely simple to do, maybe one line of IDL?
      -

      Priority 1
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/65
   -

      Raymond: Summarizes issue
      -

      Paul: Just do it then?
      -

      Raymond: Yes, but only for the new stuff, not old stuff.
      -

      Paul: Just close?
      -

      Raymond: Ok if we’re not doing old stuff.
      -

      Paul: Closing
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/64
   -

      Chris L: It’s a dup.
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/81
   -

      Paul: Is a huge release really needed?
      -

      Chris: Not for the existing node. It should be done with the new node.
      -

      Paul: Ableton is 3 sec.
      -

      Attila: Reaper is 5 sec
      -

      Paul: That’s an eternity!
      -

   Upcoming schedule
   -

      Raymond: We’re behind by about a day
      -

      Paul: What’s on the agenda
      -

      Raymond: Designing some new nodes.
      -

      Raymond: Just 10 min left, so we’ll start tomorrow with the
      investigation.
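The “Audio EQ Cookbook” formulas mentioned for issue #67 above are the RBJ
biquad recipes. As one representative example (lowpass, normalized by a0;
the exact filter type relevant to #67 is not recorded in these minutes):

```javascript
// RBJ Audio EQ Cookbook lowpass biquad coefficients, normalized by a0.
// The filter is y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2].
function lowpassCoefficients(sampleRate, frequency, Q) {
  const w0 = 2 * Math.PI * frequency / sampleRate;
  const alpha = Math.sin(w0) / (2 * Q);
  const cosw0 = Math.cos(w0);
  const a0 = 1 + alpha;
  return {
    b0: ((1 - cosw0) / 2) / a0,
    b1: (1 - cosw0) / a0,
    b2: ((1 - cosw0) / 2) / a0,
    a1: (-2 * cosw0) / a0,
    a2: (1 - alpha) / a0,
  };
}
```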



Jun 17 Attendees

Matthew Paradis, Ruth John, Philippe Milot, Raymond Toy, Christoph
Guttandin, Hugh Rawlinson, Attila Haraszti
Minutes

   -

   https://github.com/WebAudio/web-audio-api-v2/issues/8
   -

      Paul: Two unsigned longs for the seed is a bit weird
      -

      Raymond: Yeah, unsigned long long doesn’t work right for what we
      want. A 2-element array is fine.
      -

      Paul: What’s the state of TC39 proposal to set the seed?
      -

      Ruth: Haven’t looked lately
      -

      Paul: Will ask a colleague about this generator
      -

      Paul: Maybe want bits (0 or 1)?  Just needs a uniform generator and
      devs can make their own.
      -

      Paul: Course of action: check the license; verify that this would
      work for audio; figure out the WebIDL (use typed arrays for the seed).
      -

      Paul: Updates issue
      -

   https://github.com/WebAudio/web-audio-api-v2/issues/7
   -

      Raymond: Output range [0, 1] or [-1, 1]?
      -

      Paul: Should be [-1, 1] to match how everything else works.
      -

      Raymond: What’s the name? VariPulseOscillator?
      -

      Paul: Yeah, that will be bikeshedded to death.  Looks to see what
      DAWs use: BandlimitedPulseWave.  There doesn’t appear to be one name
      that appears everywhere.
      -

      Paul: Updates issue
      -

   Low-level access
   -

      Paul: Charter:
      https://www.w3.org/2011/audio/charter/audio-2019.html
      -

      Paul: We have people outside the implementors so we need their input
      -

      Philippe: What does multi-channel I/O mean?
      -

      Paul: It means whatever we want.
      -

      Paul: Channels aren’t labeled, so you don’t know what each channel
      means.  What is channel 0?  Enumeration doesn’t enumerate latency, but
      does expose the preferred sample rate.  Same situation on output.  No
      guarantee that outputs aren’t reordered.  There’s a notion of group id
      that indicates if the inputs and outputs are the same physical device
      (no clock drift).  Device change is already present (ondevicechange).
      setSinkId already considered and much needed.
      -

      Philippe: Can we do just low-level access without WebAudio?
      -

      Paul: Update worklet to do what’s needed.
      -

      Philippe: Do we need to use worklet?  Changeable quantum size, sample
      rate, output device.
      -

      Raymond: Still prefers a different low-level access feature to layer
      the web features nicely.
      -

      Philippe: Porting existing code that has its own graph would not want
      to use WebAudio graphs
      -

      Philippe: Channel ordering/mapping is important.
      -

      Philippe: Would we consider output format? Int16 vs float32?
      -

      Paul: I think that’s possible.  Should be fine.
      -

      Paul: audiophiles complained that memcpy doesn’t sound as good as a
      loop. :-)
      -

      Philippe: Spatial audio at OS level. Would be nice to have that
      available.
      -

      Paul: There is precedent in WebGL extensions where you can ask if a
      feature is available. We could do something similar perhaps.
      -

      Philippe: Where do we file low-level access issues?
      -

      Paul: Use v2 for now. We move it later if needed.
      -

      Philippe:  Push vs pull models?  Is there something useful there?
      -

      Paul: On most (probably all) platforms, it’s callback-based.
      -

      Philippe: Pulseaudio or ALSA?
      -

      Paul: Firefox can choose PulseAudio, ALSA, or JACK.  Order is JACK,
      PulseAudio, ALSA.  Mozilla doesn’t ship JACK built in.
      -

      Raymond: Chrome uses PulseAudio and falls back to ALSA if not
      available.
      -

      Paul: Firefox is working on AAudio support
      -

      Philippe: Chinese OEMs lie a lot more about capabilities with AAudio
      than OpenSLES.
      -

      Jack: Like the holistic view of low-level audio.  Concerned about
      worklet render size not working quite right with the desired size.
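On issue #8 above: a seedable uniform noise generator of the kind discussed,
with the seed passed as a typed array rather than an unsigned long long.
The generator below is an illustrative 32-bit xorshift128, not the algorithm
under consideration; the license and audio-suitability checks in the course
of action above would apply to whatever algorithm is actually chosen:

```javascript
// Seedable uniform noise source in [-1, 1) using Marsaglia's xorshift128.
// The seed is a Uint32Array of length 4 (must not be all zeros), matching
// the suggestion above to use typed arrays for the seed in WebIDL.
function makeNoise(seed) {
  let [x, y, z, w] = seed;
  return function next() {
    const t = x ^ (x << 11);
    x = y; y = z; z = w;
    w = (w ^ (w >>> 19) ^ (t ^ (t >>> 8))) >>> 0;
    return w / 0x80000000 - 1; // map [0, 2^32) onto [-1, 1)
  };
}
```

The same seed always reproduces the same sample sequence, which is the
property that makes the generator testable and the seed worth exposing.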

Jun 18 Attendees

Paul Adenot, Jack Schaedler, Raymond Toy, Philippe Milot, Attila Haraszti,
Christoph Guttandin, Hugh Rawlinson
Minutes

   -

   Low-level access
   -

      Philippe: Filed a couple of issues on channel layouts and 3D audio
      -

      Matt: AC4 and MPH would be interesting.
      -

      Philippe: Is that HW accelerated codecs?  Is that WebCodecs?
      -

      Matt: Yeah, that would probably be in WebCodecs.
      -

      https://github.com/WebAudio/web-audio-api-v2/issues/88
      -

         Paul: Item 1 is already supported in PannerNode.
         -

         Paul: Item 2 …[I missed this]
         -

         Paul: Item 3: what does that mean?
         -

         Philippe: Some systems just allow bypassing any 3D effects.
         -

          Philippe: Microsoft Spatial Sound is very opinionated.  Should be
          available everywhere.  Same API for Windows and Xbox, so it gets
          used a lot.  Would be nice if the web platform could do this.
          Low-level access would help with this.
         -

          Philippe: Access to these features, and being able to query what’s
          available and exposed to the web, will help the web platform work
          well.
         -

      https://github.com/WebAudio/web-audio-api-v2/issues/87
      -

         Paul: Philippe has done a good job with filling in the details.
         -

         Paul: Need to worry about fingerprinting.
         -

         Paul: Useful in browsers. Just a matter of figuring out how to
         expose it.
         -

          Philippe: Was using channelInterpretation to decide what to do in
          Wwise.
         -

         Paul: Plot twist: It’s always “speakers”.
         -

         Philippe: Works with current upmixing/downmixing rules that only
         go up to 5.1
         -

         Paul: Adding labels to channels and including in enumeration api
         helps
         -

          Philippe: In addition, some OSes can differentiate headphones and
          speakers. Wwise can then do something appropriate.
         -

         Paul: Hasn’t worked out too well in Firefox.  Uses type of
         transport but can’t always tell what’s attached.
         -

         Philippe: Assume speakers by default, but only switch if you’re
         sure.
         -

          Attila: Pasted some links in chat about the Oculus Browser:
         -


             https://developer.oculus.com/documentation/native/audio-intro/
            -


             https://developer.oculus.com/reference/audio/v16/o_v_r_audio_8h
            -

         Paul: We need to do this
         -

   Future work
   -

      Paul: will handle V1 issues for now
      -

      Matt: We have about 5 weeks for v1 CR.  Just need to continue and fix
      up the issues, especially privacy.
      -

      Raymond: Should we make the meetings more focussed on v2?
      -

      Paul: yeah, most of the v1 issues have been discussed.
      -

      Matt: Start again with meetings next week?
      -

      Paul: Sure
      -

      Raymond: Sure, but the week after is a holiday at Google.
      -

   F2F meeting adjourned.  Thanks to everyone who participated.
   -

      Please send an email to request the video waiver form if you haven’t
      already.

Received on Wednesday, 15 July 2020 23:46:56 UTC