- From: Raymond Toy <rtoy@google.com>
- Date: Mon, 17 May 2021 11:02:11 -0700
- To: "public-audio@w3.org Group" <public-audio@w3.org>, public-audio-comgp@w3.org
- Message-ID: <CAE3TgXEYnbJne_0jGfQoUHY3r1pZv9sFkpSF3QoeMTU5tCM5Lg@mail.gmail.com>
May 17
Attendees
Jeff Switzer, Jack Schaedler, Matthew Paradis, Raymond Toy, Philippe Milot,
Christoph Guttandin, Paul Adenot
Minutes
Postponing headphone detection and output selection to tomorrow.
-
16:00-16:10 UTC (9:00-9:10 am PDT): Set up calls
-
16:10-16:30 UTC (9:10-9:30 am PDT): Make AudioBuffer Transferable
<https://github.com/WebAudio/web-audio-api-v2/issues/119>
-
[Paul presents updates]
-
Paul: AudioBuffer is a very rigid object that was planned to be used with WebCodecs
-
Paul: WebCodecs decided not to do this and exposes its own buffer
supporting uint8, int16, int32 (really 24-bit), and float32, with both
interleaved and planar support. It is not exposed directly, but can be copied
out while being decoded, and it will do conversions to the requested output
type (planar/interleaved, bit depth).
-
Paul: Can provide API to create an AudioBuffer from this. Includes a
polyfill for decodeAudioData.
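-

[Scribe's note: a rough sketch of such a helper, assuming the copy-out and
conversion behavior described above via the WebCodecs decoded-audio object
(AudioData) and its copyTo call with an "f32-planar" format; not the actual
proposal:]

// Copy a decoded WebCodecs AudioData into a freshly constructed AudioBuffer.
function audioDataToAudioBuffer(audioData) {
  const buffer = new AudioBuffer({
    numberOfChannels: audioData.numberOfChannels,
    length: audioData.numberOfFrames,
    sampleRate: audioData.sampleRate,
  });
  const channel = new Float32Array(audioData.numberOfFrames);
  for (let c = 0; c < audioData.numberOfChannels; c++) {
    // Ask WebCodecs to convert this plane to planar float32 while copying out.
    audioData.copyTo(channel, { planeIndex: c, format: "f32-planar" });
    buffer.copyToChannel(channel, c);
  }
  return buffer;
}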
-
Paul: Conclusion: WebCodecs doesn't need transferable buffers, but we
can do this when WebAudio supports workers.
-
Raymond: Is uint8 common?
-
Paul: Useful for games with low-quality audio. Looked at what ffmpeg
supports too. But not a lot of uint8.
-
Paul: With AudioContext in a worker, MediaStreams are not available. But
WebRTC is working on supporting them in a worker, so it will be possible to
stream media completely off the main thread. Convenient for us because we get
MediaStreams mostly for free. But there is no MediaElement in a worker; it
can be faked.
-
Raymond: Is it a problem because worker is lower priority?
-
Paul: Users have to take care of this and buffer more.
-
Paul: That's why the Worklet model is preferred, to allow high
priority if needed.
-
Paul: Another related topic: update the constructor to allow AudioBuffer
to take a regular buffer with some kind of format descriptor and create an
appropriate internal buffer.
-
Raymond: So pass in a buffer that is copied to internal space in
AudioBuffer?
-
Paul: Yes, basically detaching the original buffer.
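-

[Scribe's sketch of the shape being discussed; the `data` and `format`
descriptor fields below are hypothetical, nothing is specified yet:]

// Hypothetical only: construct an AudioBuffer from an existing ArrayBuffer
// plus a format descriptor; the incoming buffer would be detached, not copied.
const buffer = new AudioBuffer({
  numberOfChannels: 2,
  length: 48000,
  sampleRate: 48000,
  data: wireBuffer,           // ArrayBuffer holding the samples; gets detached
  format: "s16-interleaved"   // tells the engine how to interpret `data`
});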
-
Paul: More useful to decouple AudioBuffer from WebCodecs, especially
for native developers. “Acquire the content” is pretty confusing to devs.
Detaching is relatively straightforward.
-
Paul: Not sure what Chrome supports, probably still AudioBuffer, but it
will move to the new scheme.
-
Paul: Any concerns about memory and threads, I’m happy to talk about
it. Create a bug somewhere and CC me and I’ll do my best to represent
WebAudio needs.
-
Philippe: Audio is starting to touch all kinds of stuff now.
-
Paul: A complicated area.
-
16:30-17:30 UTC (9:30-10:30 am PDT): Bring your own buffer and
AudioWorklets integration with WASM
<https://github.com/WebAudio/web-audio-api-v2/issues/4>
-
Paul: Useful for non-WASM use-cases too, but a lot more useful for
WASM.
-
Paul: Ties into read-only memory too. For example, an ABSN feeding two
different worklets: then the memory could be read-only.
-
[Paul shows a diagram of this. Read-only can get rid of the copy.]
-
Philippe: If we bring our own memory for output, is there a copy?
-
Paul: If you’re not touching the input/output, you can maybe not copy.
-
Jack:
https://github.com/jackschaedler/karplus-stress-tester/blob/4f25faef931e309125813de7991652f677b88d58/string-processor-wasm.js#L75
-
Paul: Very hard for WASM since code can only access the WASM heap.
It's difficult to do this and keep everything safe.
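-

[Scribe's note: for context, the copy-in/copy-out pattern under discussion
looks roughly like this today; the `alloc` and `process_block` exports are
hypothetical:]

// A WASM-backed processor currently copies audio across the heap boundary
// every render quantum -- these are the copies the issue wants to eliminate.
class CopyingWasmProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super();
    // Assume a compiled WebAssembly.Module with no imports was passed in.
    const instance = new WebAssembly.Instance(options.processorOptions.module);
    this.wasm = instance.exports;
    this.inPtr = this.wasm.alloc(128 * 4);    // one mono block of float32
    this.outPtr = this.wasm.alloc(128 * 4);
  }
  process(inputs, outputs) {
    const input = inputs[0][0];
    const output = outputs[0][0];
    if (!input || !output) return true;
    // Re-create the view in case the WASM memory has grown.
    const heap = new Float32Array(this.wasm.memory.buffer);
    heap.set(input, this.inPtr / 4);                        // copy in
    this.wasm.process_block(this.inPtr, this.outPtr, 128);  // run the DSP
    output.set(heap.subarray(this.outPtr / 4, this.outPtr / 4 + 128)); // copy out
    return true;
  }
}
registerProcessor("copying-wasm-processor", CopyingWasmProcessor);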
-
Paul: Notes that Hongchan thinks it's useful, but it's not clear how to
do it or which use cases need to be handled.
-
Philippe: I thought this issue was the opposite, where the WASM heap
is copied out to WebAudio?
-
[Raymond missed Paul’s comment]
-
Paul: IIRC, karplus can support multiple strings per worklet or
multiple worklets with a few strings.
-
Jack: Not sure what impact copying has on this. Hard to measure.
-
Paul: Firefox dev tools can give some info on this.
-
Paul: The API can take a reference to the heap, a starting offset, and
a length to define a region of memory that the worklet could use. We need to
handle multiple channels, possibly by preallocating all the memory needed
for all the channels.
-
Raymond: Chrome can't do this in the same render quantum. You have to wait
at least until the next one before the count changes.
-
Paul: That should be ok, we don’t require instant changes now, so
doing it next time is ok.
-
Raymond: So, we all want to do this, but it's not yet clear how we
can do it. Need help from WASM folks.
-
Raymond: Is dynamic channel count change really needed?
-
Jack: That’s a good question. Might be ok to preallocate all the
memory.
-
Paul: Let's see: 128 frames * 6 channels (5.1) * 4 bytes * 2 for
input/output = 6144 bytes, about 6K. Not a huge amount of memory.
-
Raymond: Where do we go from here?
-
Paul: An IDL proposal, like what Hongchan proposed. But WASM people
need to look at it too. SharedArrayBuffers can also be used from a worker
today.
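-

[Scribe's note: the "SharedArrayBuffer today" route looks roughly like this;
"my-processor" and the message shape are made up:]

// Share memory between a worker (or the main thread) and an AudioWorklet by
// posting a SharedArrayBuffer over the node's port. Requires cross-origin
// isolation for SharedArrayBuffer to be available.
const sab = new SharedArrayBuffer(128 * 2 * 4);   // 128 frames, stereo, float32
const node = new AudioWorkletNode(context, "my-processor");
node.port.postMessage({ sharedBuffer: sab });

// In the processor:
//   this.port.onmessage = (e) => {
//     this.shared = new Float32Array(e.data.sharedBuffer);
//   };
// Both sides now see the same memory, with no per-quantum postMessage copies.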
-
[Paul writes some pseudo code.]
-
Jack: Is the idea to allocate all the memory upfront?
-
Paul: Yes, but they can also change the layout when needed.
-
Paul: See https://paste.mozilla.org/NDnNO7EM
-
Raymond: Is that permanent?
-
Paul:
partial interface AudioWorkletProcessor {
  // Indicates that this memory will be used by the AudioWorklet -- don't
  // touch it yourself.
  registerBuffers(ArrayBuffer memory, uint64_t offset, uint64_t maxLength);

  // Called when the input or output channel count changes for whatever
  // reason -- you can accept or deny the request. In case of error,
  // `process` isn't called anymore.
  channelTopologyChange = function(ArrayBuffer memory, uint64_t offset,
                                   uint64_t newMaxLength) {
    // return false; // reject the change request: audio will be down/up
    //               // mixed to the previous
    // find some memory in already allocated space / allocate new memory
    return {
      memory: memory,
      offset: new_offset,
      maxLength: new_max_length
    };
  }
}
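-

[Scribe's note: a hypothetical usage of the proposal above from a WASM-backed
processor; nothing here is specified, and the `alloc` and `process_block`
exports are made up:]

// Point the engine at a region of the WASM heap so process() can run without
// per-quantum copies, preallocating for the worst-case channel count.
class ZeroCopyWasmProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super();
    // Assume a compiled WebAssembly.Module with no imports was passed in.
    const instance = new WebAssembly.Instance(options.processorOptions.module);
    this.wasm = instance.exports;
    // 128 frames * 6 channels (5.1) * 4 bytes * 2 for input/output = 6144 bytes.
    const offset = this.wasm.alloc(6144);
    this.registerBuffers(this.wasm.memory.buffer, offset, 6144);
  }
  process() {
    // The engine would place input samples in the registered region and read
    // the output samples back from it after this call.
    this.wasm.process_block();
    return true;
  }
}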
-
Philippe: How does this work? I was expecting to do this in the process
method.
-
Paul: process will get the memory and can operate on it the way native
devs would.
-
Philippe: I’ll have to think about this to see if this would work for us.
-
Philippe: Output buffer areas would need to be dynamic.
-
Paul: So a ring buffer?
-
Philippe: Yes
-
Paul: So registerBuffers can set up the buffers for input or output. But
process may want to set the output buffer.
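-

[Scribe's note: a minimal sketch of the ring-buffer idea for output, built on
a SharedArrayBuffer and Atomics; the layout and names are made up. The backing
SharedArrayBuffer must be at least 8 + capacity * 4 bytes:]

// Single-producer/single-consumer float ring buffer: a worker pushes output
// samples, and process() pops them each render quantum.
class RingBuffer {
  constructor(sab, capacity) {
    this.indices = new Uint32Array(sab, 0, 2);   // [readIndex, writeIndex]
    this.data = new Float32Array(sab, 8, capacity);
    this.capacity = capacity;
  }
  push(samples) {                                // producer (worker) side
    const read = Atomics.load(this.indices, 0);
    let write = Atomics.load(this.indices, 1);
    for (let i = 0; i < samples.length; i++) {
      const next = (write + 1) % this.capacity;
      if (next === read) break;                  // buffer full: drop the rest
      this.data[write] = samples[i];
      write = next;
    }
    Atomics.store(this.indices, 1, write);
  }
  pop(dest) {                                    // consumer (process()) side
    let read = Atomics.load(this.indices, 0);
    const write = Atomics.load(this.indices, 1);
    let i = 0;
    while (i < dest.length && read !== write) {
      dest[i++] = this.data[read];
      read = (read + 1) % this.capacity;
    }
    Atomics.store(this.indices, 0, read);
    return i;                                    // number of samples read
  }
}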
-
Raymond: Please update the issue; I didn’t capture everything that was
discussed, and having it in the issue is easier to find.
Meeting adjourned; we’ll start tomorrow with headphone detection and output
selection.
Received on Monday, 17 May 2021 18:05:37 UTC