F2F 2021 Minutes for May 18

May 18

Attendees

Jack Schaedler, Matthew Paradis, Philippe Milot, Michel Buffa, Raymond Toy,
Hongchan Choi, Jeff Switzer, Christoph Guttandin, Paul Adenot

Minutes

   -

   16:00-16:10 UTC (9:00-9:10 am PDT): Set up calls
   -

   16:10-16:30 UTC (9:10-9:30 am PDT): Headphone detection
   <https://github.com/WebAudio/web-audio-api-v2/issues/89>
   -

      Hongchan: I built a prototype for Mac OS.  Seems to be working, but I
      still need to test with Windows and Chrome OS.  Seems useful.  The
      platform API seems quite usable (at least on Mac OS).  Philippe, how do
      you do this?  Via a platform API?
      -

      Philippe: Yes
      -

      Hongchan: Can you share statistics?
      -

      Philippe: Not really, but things have changed.  We want to do
      spatialization appropriately.  For Windows, IMM isn’t always reliable for
      things like people plugging devices into their monitors.  Plugging in
      headphones doesn’t imply they want binauralization.  Microsoft suggests
      detecting whether spatial sound is enabled, then doing binauralization.
      -

      Hongchan: That’s detectable now?
      -

      Philippe: Yes, and it’s reliable, because the user has to enable it.  It
      now functions more closely to what the PS5 does, because the user has to
      opt in to spatial audio.  There may be other use cases, but for us, we
      decided to do it via the spatial API.
      -

      Hongchan: Not very general.
      -

      Philippe: It works for our specific use case.
      -

      Paul: Echo cancellation is an important use case.
      -

      Hongchan: [team] doesn’t think echo cancellation is a valid use case
      for headphone detection.
      -

      [Philippe to update issue]
      -

      Hongchan: Spec changes will probably be in the Media Capture spec.  I’ll
      file an issue there.
      -

      Hongchan: Will you be doing headphone detection at all anymore?
      -

      Philippe: For now, nothing on the web; we haven’t decided what we
      want.  PannerNode seems to work.  (Is it hardware accelerated?) We
      recommend Resonance.
      -

      Paul: No, it’s just some convolutions.
      -

      Hongchan: We have one for WebAudio using ConvolverNodes.
      -

      Paul: With outputLatency, you can kind of tell if it’s a Bluetooth
      device.  [See the sketch at the end of this section.]
      -

      Philippe: I can show a Windows demo if there’s time.
      -

      Raymond: It would be really cool; we’ll make some time for it this
      week.
      -

      Raymond: So what’s the status here? Seems like there’s nothing for us
      to do and it’s up to Media Capture.  Maybe reduce priority?
      -

      Hongchan: Just want to leave it here for now until an issue is
      filed.  I’ll update the issue then.
      -

      Raymond: Works for me.
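
      [A minimal sketch of the outputLatency heuristic Paul mentions above.
      The 0.15 s cutoff is an arbitrary illustrative value, not anything
      specified or implemented.]

         const ctx = new AudioContext();

         // outputLatency is the context's estimate (in seconds) of the delay
         // between handing a buffer to the audio subsystem and the sound
         // reaching the output device.  Bluetooth stacks usually add on the
         // order of 100-300 ms, so a large value hints at a Bluetooth sink.
         // (The value is only meaningful once the context is running.)
         function probablyBluetooth(context) {
           return (context.outputLatency ?? 0) > 0.15; // illustrative cutoff
         }

         console.log('Likely Bluetooth output:', probablyBluetooth(ctx));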


   -

   16:30-16:45 UTC (9:30-9:45 am PDT): Access to different output devices
   <https://github.com/WebAudio/web-audio-api-v2/issues/10>
   -

      Paul: We have this mostly implemented, but not shipped yet.
      (Supporting sinkId). No additional latency added.  Will have a demo
      available sometime soon.
      -

      Hongchan: Look at the comment on Nov 19 for a proposal.
      https://github.com/WebAudio/web-audio-api-v2/issues/10#issuecomment-730535278
      -

      Paul: That’s what we’ve been working with for now.
      -

      Paul: Needs some care if you have many contexts going to different
      devices.
      -

      Paul: Do people agree with the proposal?  If you’re not careful,
      you’ll get clicks.  Clocks drift.  Less than 1% drift from what I’ve
      seen.
      -

      Raymond: Why does it click?
      -

      Paul:  Consider two contexts to very different devices.  Connect them
      via MediaStream.  Clocks are different, so need to adjust samples between
      them.
      -

      Hongchan: What do you mean about UI picker?
      -

      Paul: Vendors have questions about exposing a list of all devices.  The
      Media group is thinking about displaying a browser-controlled selector.
      The user picks one, and that’s the one you get, and its properties are
      then exposed.
      -

      Hongchan: Do we want this here?
      -

      Paul: Work there isn’t finished; security reviews are on-going.  Will
      need to see if sinkId will work for setSinkId.
      -

      Hongchan: In the constructor, we can throw an error or use the default
      ID.  setSinkId can reject?  [See the sketch at the end of this section.]
      -

      Paul: Yes, that should work.
      -

      Hongchan: So we need more spec work.
      -

      Paul: Need to add more details.  DJ people are interested in this for
      monitoring.  Beatport is interested.
      -

      Hongchan: Beatport is doing [something; I failed to capture it.]
      -

      Paul: I can summarize in the issue.
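
      [A minimal sketch of the API shape in the Nov 19 proposal comment linked
      above, as I read it: a sinkId option on the AudioContext constructor plus
      a promise-based setSinkId().  Enumeration/picker details were still being
      worked out in the Media Capture group, so treat this as illustrative
      only.]

         // Assumes this runs inside an async function and that output-device
         // ids are already exposed (e.g. after a permission grant or a
         // browser-controlled picker, as discussed above).
         const devices = await navigator.mediaDevices.enumerateDevices();
         const output = devices.find((d) => d.kind === 'audiooutput');

         // Proposed: select the output device at construction time...
         const ctx = new AudioContext({ sinkId: output.deviceId });

         // ...or switch later; the returned promise can reject (e.g. for an
         // unknown or disallowed device id).
         await ctx.setSinkId(output.deviceId);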
      -

   Demo from Philippe on spatialization
   -

      Raymond: Finished these a little early, so can you give us a demo
      Philippe?
      -

      Philippe: I can do that.
      -

      [Philippe shows a demo of a native app using spatialization in Wwise]
      -

      Philippe: Works on all MS platforms and PS4.  Will work with Apple
      since they have a spatial audio API too.  Would be nice if WebAudio had
      this too.  We switch back and forth between 3D mode or not based on user
      action (selecting spatial audio).  For PS4, putting on a 3D headset
      enables this.  PS5 has a dashboard option for this (can only be done if
      the headset is attached).
      -

      Hongchan: Cool idea.  Maybe we can move headphone detection to 3D
      audio API instead.
      -

      Paul: Seems like there will be differences across OSes.
      -

      Hongchan: I think it’s more about exposing this instead of detecting.
      -

      Philippe: Main problem is if the app can detect whether spatial audio
      is wanted.
      -

      Hongchan: Seems hard to abstract all the different APIs in a way that
      can be exposed to the web.
      -

      Philippe: Android is the big problem; it’s not in the OS.  Samsung
      has an SDK and Dolby has one.
      -

      Hongchan: Something to think about.
      -

   17:00-17:30 UTC (10:00-10:30 am PDT): Bandlimited pulse oscillator,
   <https://github.com/WebAudio/web-audio-api-v2/issues/7> hard-sync
   <https://github.com/WebAudio/web-audio-api-v2/issues/1>, and phase-offset
   <https://github.com/WebAudio/web-audio-api-v2/issues/9>.
   -

      Raymond: 3 related issues.  Pick one.
      -

      Raymond: Let’s go with issue #1, hard sync
      -

      Raymond: Paul’s last comment still holds.
      -

      Paul: It’s based on my experience of how other apps do this.  Maybe
      there are different styles.
      -

      Raymond: There’s a big circle of dependencies.
      -

      Paul: An osc could output the phase and that can be used as the input
      to another osc to do the hard-sync.  Everything should follow from that.
      [See the sketch at the end of this section.]
      -

      Raymond: A prototype would be really helpful.
      -

      Paul: I’ve used them, but never looked into implementing.
      -

      Paul: A phase AudioParam unlocks everything.
      -

      Paul: We should propose and agree on API shape.
      -

      Raymond: Yes
      -

      Paul: I’m typing in a proposed plan for this.
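
      [A rough, non-bandlimited AudioWorklet sketch of the phase idea above: a
      master phase accumulator resets the slave oscillator’s phase whenever it
      wraps.  It aliases badly and is only meant to show the shape of a
      prototype, not a proposed API.]

         // hard-sync-processor.js (runs in the AudioWorkletGlobalScope)
         class HardSyncProcessor extends AudioWorkletProcessor {
           static get parameterDescriptors() {
             return [
               { name: 'masterFrequency', defaultValue: 110 },
               { name: 'slaveFrequency', defaultValue: 277 },
             ];
           }

           constructor() {
             super();
             this.masterPhase = 0;
             this.slavePhase = 0;
           }

           process(inputs, outputs, parameters) {
             const out = outputs[0][0];
             const fMaster = parameters.masterFrequency;
             const fSlave = parameters.slaveFrequency;
             for (let i = 0; i < out.length; i++) {
               const fM = fMaster.length > 1 ? fMaster[i] : fMaster[0];
               const fS = fSlave.length > 1 ? fSlave[i] : fSlave[0];
               this.masterPhase += fM / sampleRate;
               if (this.masterPhase >= 1) {
                 this.masterPhase -= 1;
                 this.slavePhase = 0; // hard sync: reset slave on master wrap
               }
               this.slavePhase += fS / sampleRate;
               if (this.slavePhase >= 1) this.slavePhase -= 1;
               out[i] = 2 * this.slavePhase - 1; // naive (aliasing) sawtooth
             }
             return true;
           }
         }
         registerProcessor('hard-sync', HardSyncProcessor);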
      -

   17:30-18:00 UTC (10:30-11:00 am PDT): NoiseGate/Expander
   <https://github.com/WebAudio/web-audio-api-v2/issues/12>
   -

      Michel: Do you remember the noise gate I sent you?
      -

      Paul: Was very perceptual(?)
      -

      Michel: https://mainline.i3s.unice.fr/michel-wams/michel/utility/
      -

      Paul: Worked well for guitar, but we need something more generic to
      work with more types of input.
      -

      Raymond: Do you have papers that implement these?
      -

      Michel: Yes, these come from FAUST.  This is an Ichiban (?) noise
      gate.
      -

      Raymond: If you have references, it would be helpful for people to
      get started with a prototype.
      -

      Michel: I can provide links.
      -

      Raymond: Great!  Please add them to the issue.
      -

      Paul: I’m looking at commercial implementations.  I have two guitar
      noise gates, and people have strong opinions on which to use.
      -

      Raymond: Then how can we make a generic one that people will like?
      -

      Jack: This kind of gets to the heart of my apprehensions around lots
      of the native nodes.
      -

      Michel: A noise gate depends on the guitar style being played.  A
      limiter could be simpler.
      -

      Hongchan: Even limiters depend very much on personal taste.
      -

      Raymond: Everything that people say here makes me think it will be
      very hard to make something useful.
      -

      Hongchan: That’s my feeling on lots of new v2 nodes.  Have we done
      polls to see what people want?
      -

      Paul: Many of these have been requested by various people over time
      in WAC and other places, so people do want them.
      -

      Hongchan: Podcasts and such would find these nodes useful.
      -

      Jack:  I think if someone could prototype this in AudioWorklet, there
      are enough folks in this group from different fields (music, games,
      broadcasting) that we could likely say whether it’s good enough in a
      generic/cookbook sense.  [See the sketch at the end of this section.]
      -

      Paul: Maybe do a gap analysis to see what’s missing, looking at
      use cases.
      -

      Michel: Big reference for compressor:
      https://www.eecs.qmul.ac.uk/~josh/documents/2012/GiannoulisMassbergReiss-dynamicrangecompression-JAES2012.pdf
      -

      Paul: We don’t want to go into the weeds, but I think there are still
      gaps here.
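
      [Along the lines of Jack’s suggestion above, a bare-bones
      noise-gate/downward-expander AudioWorklet sketch: an envelope follower
      with separate attack/release smoothing and a hard threshold.  The
      parameter values are illustrative only and would need tuning per use
      case, as discussed.]

         // noise-gate-processor.js (runs in the AudioWorkletGlobalScope)
         class NoiseGateProcessor extends AudioWorkletProcessor {
           static get parameterDescriptors() {
             return [
               { name: 'threshold', defaultValue: -50,
                 minValue: -100, maxValue: 0 },        // dBFS
               { name: 'attack', defaultValue: 0.005 }, // seconds
               { name: 'release', defaultValue: 0.1 },  // seconds
             ];
           }

           constructor() {
             super();
             this.envelope = 0; // smoothed absolute level (linear)
           }

           process(inputs, outputs, parameters) {
             const input = inputs[0];
             const output = outputs[0];
             if (input.length === 0) return true;

             const threshold = Math.pow(10, parameters.threshold[0] / 20);
             const attackCoef =
               Math.exp(-1 / (sampleRate * parameters.attack[0]));
             const releaseCoef =
               Math.exp(-1 / (sampleRate * parameters.release[0]));

             for (let i = 0; i < input[0].length; i++) {
               // Channel-linked detection: follow the loudest channel.
               let level = 0;
               for (let ch = 0; ch < input.length; ch++) {
                 level = Math.max(level, Math.abs(input[ch][i]));
               }
               // One-pole envelope follower with separate attack/release.
               const coef = level > this.envelope ? attackCoef : releaseCoef;
               this.envelope = coef * this.envelope + (1 - coef) * level;
               // Hard gate (no hysteresis or hold): pass audio only while
               // the envelope exceeds the threshold.
               const gain = this.envelope >= threshold ? 1 : 0;
               for (let ch = 0; ch < input.length; ch++) {
                 output[ch][i] = input[ch][i] * gain;
               }
             }
             return true;
           }
         }
         registerProcessor('noise-gate', NoiseGateProcessor);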

Received on Tuesday, 18 May 2021 18:07:12 UTC