F2F 2021 Meeting Minutes for Fri May 21

May 21 Attendees

Philippe Milot, Paul Adenot, Raymond Toy, Christoph Guttandin

Minutes

   -

   16:00-16:10 UTC (9:00-9:10 am PDT): Set up
   -

   16:10-17:45 UTC (9:10-10:45 am PDT): Other issues
   <https://github.com/WebAudio/web-audio-api-v2/issues?q=is%3Aissue+is%3Aopen+-label%3Apriority-1+-label%3Apriority-2>
   -

      Issue 121: channelCount set at MediaStreamAudioDestinationNode MUST
      set MediaStreamTrack channelCount
      <https://github.com/WebAudio/web-audio-api-v2/issues/121>
      -

         Paul: I agree generally
         -

         [Agreed to do this]
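
          [For illustration: a minimal sketch of the agreed behavior, assuming
          an existing AudioContext ctx. The expectation is that the
          channelCount given to a MediaStreamAudioDestinationNode is reflected
          in the resulting MediaStreamTrack's settings once this lands.]

             const dest = new MediaStreamAudioDestinationNode(ctx, {
               channelCount: 2, // request a stereo track
             });
             const [track] = dest.stream.getAudioTracks();
             // Expected to report 2 once this issue is implemented.
             console.log(track.getSettings().channelCount);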
         -

       Issue 118: Raw audio recording not supported
      <https://github.com/WebAudio/web-audio-api-v2/issues/118>
      -

         Raymond: Agrees with Paul’s last comment.
         -

         [Paul updates issue]
         -

      Issue 111: Request to Expose AudioBuffer to DedicatedWorker
      <https://github.com/WebAudio/web-audio-api-v2/issues/111>
      -

          [Paul updates issue; very similar to the AudioContext-in-a-worker
          issue]
         -

      Issue 110: Should 'Atomics.wait' be available in AudioWorklets
      associated with an OfflineAudioContext?
      <https://github.com/WebAudio/web-audio-api-v2/issues/110>
      -

         Paul: I say no.
         -

         Raymond: I agree.
         -

         Paul: Close this?
         -

         Raymond: I’m fine with that.
         -

      Issue 107: AnalyserNode: provide access to complex FFT result
      <https://github.com/WebAudio/web-audio-api-v2/issues/107>
      -

          Raymond: We could do this, but it doesn’t solve the actual
          use case.
         -

         Philippe: He has a workaround for the issue
         -

         Raymond: Yes, using WASM FFT in a worklet
         -

         Paul: Shall we close?
         -

         Raymond: Yes
         -

         Paul: I’ll add some comments about WASM being optimized
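
          [For illustration: a rough sketch of the workaround mentioned above,
          not a spec'd API. It assumes an AudioWorkletProcessor that hands each
          128-frame block to an FFT routine you supply yourself (e.g. compiled
          WASM); complexFFT is a hypothetical placeholder, and real code would
          typically accumulate blocks into a larger analysis window first.]

             // fft-processor.js, loaded via ctx.audioWorklet.addModule()
             class FFTProcessor extends AudioWorkletProcessor {
               process(inputs) {
                 const channel = inputs[0][0]; // first channel of first input, 128 frames
                 if (channel) {
                   // Hypothetical FFT routine supplied by the app (JS or WASM).
                   const { real, imag } = complexFFT(channel);
                   this.port.postMessage({ real, imag });
                 }
                 return true; // keep the processor alive
               }
             }
             registerProcessor('fft-processor', FFTProcessor);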
         -

      Issue 106: AudioOutputContext
      <https://github.com/WebAudio/web-audio-api-v2/issues/106>
      -

         Raymond: What does he want to capture?
         -

         Paul: System audio for processing
         -

         Raymond: Seems like not a good idea.
         -

         Paul: I’ll update the issue.
         -

      Issue 112: Units & examples used in DynamicsCompressorNode are
      ambiguous <https://github.com/WebAudio/web-audio-api-v2/issues/112>
      -

         Raymond: Looks like editorial changes to clarify text
         -

         Paul: I’ll mark it as such.
         -

      Issue 105: Add ability to pause/resume AudioBufferSourceNode
      <https://github.com/WebAudio/web-audio-api-v2/issues/105>
      -

         Raymond: Last comment from the telecon got a thumbs up from the
         poster.  Does that mean he’s ok with the solutions?
         -

          Paul: Setting playbackRate to zero works with no downsides.
         -

         Raymond: That works too.  Shall we close this, saying we won’t do
         this?
         -

         Paul: Yeah.
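
          [For illustration: a minimal sketch of the playbackRate workaround
          Paul refers to, assuming an existing AudioContext ctx and a decoded
          AudioBuffer buffer.]

             const source = new AudioBufferSourceNode(ctx, { buffer });
             source.connect(ctx.destination);
             source.start();

             // "Pause" by freezing playback, "resume" by restoring the rate.
             const pause = () =>
               source.playbackRate.setValueAtTime(0, ctx.currentTime);
             const resume = () =>
               source.playbackRate.setValueAtTime(1, ctx.currentTime);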
         -

      Issue 101: V2 documentation logistics
      <https://github.com/WebAudio/web-audio-api-v2/issues/101>
      -

         Raymond: Can close this
         -

      Issue 92: Add detune AudioParam for ConstantSourceNode
      <https://github.com/WebAudio/web-audio-api-v2/issues/92>
      -

         Paul: Looks good for consistency.
         -

      Issue 90: No way to set loop to false at a future time for an
      AudioBufferSourceNode
      <https://github.com/WebAudio/web-audio-api-v2/issues/90>
      -

         Paul: Doesn’t seem currently possible to do what he wants to do.
         Since we’re improving ABSN anyway, might as well.
         -

         Raymond: Not exactly sure how to do it, but we can think of
         something.
         -

      Issue 88: Expose platform-level 3D audio APIs to WebAudio
      <https://github.com/WebAudio/web-audio-api-v2/issues/88>
      -

          Philippe: It’s about exposing platform-level 3D audio APIs and
          mapping them to the web.
         -

         Paul: What you want is much more advanced than the PannerNode?
         -

         Philippe: I don’t know.
         -

         Paul: PannerNode could have user HRTFs so 3D is possible.
         -

         Paul: Includes distance attenuation
         -

         Philippe: Can we make that optional?
         -

         Raymond: That’s easy. :-)
         -

         Paul: We can do that easily: add if statement in the
         implementation to skip attenuation.
         -

          Philippe: Might be OK to use PannerNode, with no distance
          attenuation. Is ambisonics good enough?
         -

         Raymond: Too bad Hongchan isn’t here to discuss Omnitone.
         -

          Philippe: Could this be added to WebAudio natively?
         -

          Philippe: Perhaps it would be better to use native nodes rather
          than OS-level APIs?
         -

          [Philippe describes how Wwise works]
         -

         Philippe: I’ll create 2 new issues on the thing we discussed
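
          [For illustration: a sketch of what was discussed, assuming an
          existing AudioContext ctx and a connected sourceNode. HRTF panning is
          already available on PannerNode, and distance attenuation can
          effectively be neutralized today with the 'linear' distance model and
          rolloffFactor 0; custom/user HRTFs are not covered by this.]

             const panner = new PannerNode(ctx, {
               panningModel: 'HRTF',
               distanceModel: 'linear',
               rolloffFactor: 0, // linear model with zero rolloff => gain stays at 1
               positionX: 1,
               positionY: 0,
               positionZ: -1,
             });
             sourceNode.connect(panner).connect(ctx.destination);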
         -

      Issue 87: Channel layout detection for the destination output device
      <https://github.com/WebAudio/web-audio-api-v2/issues/87>
      -

         Philippe: Would be great to extend the supported layouts.
         -

         [discussions about channel layout that I didn’t capture]
         -

          Philippe: Would like at least 7.1 and 7.1.4 (Atmos) since these
          exist.
         -

         Philippe: Could we increase priority?
         -

         Paul: We can do that.
         -

      Issue 84: Report actual startTime of AudioScheduledSourceNodes
      <https://github.com/WebAudio/web-audio-api-v2/issues/84>
      -

         Raymond: Can you describe the use case Christoph?
         -

          Christoph: I asked for it to precisely schedule consecutive nodes.
          If I schedule the first, I need to know when it was actually
          started to precisely schedule the next.
         -

         Raymond: Can’t schedule it ahead of time?
         -

          Christoph: From my experience scheduling ahead of time doesn't
          always work. There are always some cases when the look ahead
          wasn't big enough.
         -

         Christoph: It could maybe be a started event with the value.
         -

         Raymond: Seems like a reasonable request.  Don’t know what the API
         should be.
         -

         Christoph: But isn't that the concept of currentTime? Shouldn't it
         be the value that can be used to schedule something "now"?
         -

          Paul: It’s the time of the current render quantum and is
          incremented at the end of it.
         -

         Christoph: Ah okay, I thought it's conceptually the time of the
         next frame.
         -

         Christoph: It would be nice to have a way to schedule something as
         fast as possible and then be able to know when it happened.
         -

         Paul: start(0) will do that, but you don’t know when that happens.
         -

         Christoph: Yes, exactly.
         -

         Paul: I’ll update the issue
         -

         Raymond: Are your buffers not super short?
         -

          Christoph: Maybe a second each. I ran into that problem, for
          example, when implementing a streaming player with the Web Audio
          API.
         -

         Raymond: I was asking because if the buffers are really short, you
         won’t get the promises or events fast enough to start the next.
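
          [For illustration: a sketch of the scheduling pattern under
          discussion, assuming an existing AudioContext ctx. The gap Christoph
          describes is that if the look-ahead is missed, there is currently no
          way to learn when start() actually took effect.]

             const LOOKAHEAD = 0.1; // seconds of safety margin (app-specific)
             let nextStartTime = ctx.currentTime + LOOKAHEAD;

             function enqueue(buffer) {
               const node = new AudioBufferSourceNode(ctx, { buffer });
               node.connect(ctx.destination);
               node.start(nextStartTime);        // may start late if this time has passed
               nextStartTime += buffer.duration; // butt the next buffer up against this one
             }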
         -

      Issue 83: Input only / muted AudioContext
      <https://github.com/WebAudio/web-audio-api-v2/issues/83>
      -

          Paul: sinkId null should work, right?
         -

         Raymond: I think so.
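
          [Sketch only; the exact value for a "no output" sink (null vs. a
          {type: 'none'} option) was still being settled in the sinkId
          proposal at the time of these minutes.]

             // Proposed shape: a context that never produces audible output.
             const ctx = new AudioContext({ sinkId: { type: 'none' } });
             // (inside an async function or a module)
             const micStream =
               await navigator.mediaDevices.getUserMedia({ audio: true });
             const micSource = ctx.createMediaStreamSource(micStream);
             // ...analyse/process micSource without any audible playback...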
         -

      Issue 81: DynamicsCompressorNode release parameter range
      <https://github.com/WebAudio/web-audio-api-v2/issues/81>
      -

         Paul: Let’s just increase it
         -

         Raymond: Don’t know how the existing code will handle this, but
         seems ok to do this, spec-wise
         -

      Issue 77: Please make high resolution time available within
      AudioWorkletGlobalScope.
      <https://github.com/WebAudio/web-audio-api-v2/issues/77>
      -

          Paul: Didn’t we already say ok?
         -

      Issue 62: Extend the `createPeriodicWave` API to accept time-domain
      arguments <https://github.com/WebAudio/web-audio-api-v2/issues/62>
      -

         Paul: Nothing more to add
         -

          Raymond: Easy enough to add, but I agree.
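
          [For reference: createPeriodicWave currently takes frequency-domain
          (Fourier coefficient) arrays; the request is to also accept a
          time-domain waveform. Current usage, assuming an AudioContext ctx:]

             const real = new Float32Array([0, 0, 1]); // cosine terms (index 0 ignored)
             const imag = new Float32Array([0, 1, 0]); // sine terms (index 0 ignored)
             const wave = ctx.createPeriodicWave(real, imag);
             const osc = new OscillatorNode(ctx);
             osc.setPeriodicWave(wave); // type becomes 'custom'
             osc.connect(ctx.destination);
             osc.start();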
         -

      Issue 55: More options when constructing an OfflineAudioContext
      <https://github.com/WebAudio/web-audio-api-v2/issues/55>
      -

         Paul: We already agreed to do this.
         -
      -

   17:45-18:00 UTC (10:45-11:00 am PDT): Closing remarks, future plans
   -

      V1!!!
      -

      We’ll continue with our regular meetings next week, at the usual time.
      -

      Thanks to everyone for helping to make this a productive meeting.

Received on Friday, 21 May 2021 18:38:06 UTC