- From: Tom Greenaway <tomgreenaway@google.com>
- Date: Sun, 9 Aug 2020 08:54:58 +0200
- To: Simon Robertson <retromodular@gmail.com>
- Cc: Francois Daoust <fd@w3.org>, Matthew Atkinson <matkinson@paciellogroup.com>, public-games@w3.org, Noel Meudec <noelm@fb.com>
- Message-ID: <CAMgVZKA-W7WC1ZNmbKU+BBcat9qH=i0rBG6qM04PYH4DqmmL_Q@mail.gmail.com>
Hi Simon,

If you can put together some API examples as you suggested, I can share them with some of the engineers who focus on Web Audio in Chrome and get their thoughts.

Cheers,
Tom

On Fri, 31 Jul 2020, 6:49 pm Simon Robertson, <retromodular@gmail.com> wrote:

> Hi Francois
>
> Audio decoding aside (you made some good points regarding user control), I believe I can sum everything up by saying the playback of audio data needs to be simplified. The initial connection of audio nodes is fine, because that's typically static, but having to recreate and reconnect nodes every time a sound needs to be played should be unnecessary.
>
> Imagine how many times, and how rapidly, sounds need to be played in something like a bullet-shooter game. The numbers stack up very quickly.
>
> One way to handle this at the moment is to create an audio worklet and handle the playback of sounds (and their pitch/volume) that way, which isn't a bad way to do it really, but it shouldn't be something that's needed. Plus, Safari is still lacking support for audio worklets, unsurprisingly.
>
> I could put together some pseudo APIs and examples if you think it would be useful.
>
> On Fri, 31 Jul 2020, 15:54 Francois Daoust, <fd@w3.org> wrote:
>
>> Hi Simon,
>>
>> ------ Original message ------
>> From: "Simon Robertson" <retromodular@gmail.com>
>> To: "Matthew Atkinson" <matkinson@paciellogroup.com>
>> Cc: "Francois Daoust" <fd@w3.org>; public-games@w3.org; "Noel Meudec" <noelm@fb.com>; "Tom Greenaway" <tomgreenaway@google.com>
>> Date: 31/07/2020 00:25:54
>>
>> >Hello
>> >
>> >I don't know if this is the correct place to drop this comment, but I have been following this thread for a while and it is good to see it becoming active again.
>> >
>> >So, my thoughts on "web gaming" from my experiences thus far:
>> >
>> >One of the biggest issues that I have encountered is audio. The Web Audio API is really nice but it could be improved. For example, we currently have to recreate and reconnect nodes every time we need to play a sound (BufferSource), and that's after we have jumped through the hoops of loading and decoding the audio data via fetch or XMLHttpRequest.
>> >
>> >If a new audio node was available that could (a) handle the loading, and (b) allow that audio data to be replayed in a monophonic or polyphonic way, it would ease the pain immensely.
>> >
>> >Rough example ...
>> >
>> >const ping = context.createSamplerNode({ polyphonic: true });
>> >ping.load("snd/ping.ogg", "audio/ogg");
>> >ping.connect(destination);
>> >ping.play(); // does nothing, not loaded
>> >...
>> >if (ping.loaded) ping.play(); // ok
>> >ping.time = 0.5; // half original speed
>> >ping.play();
>> >
>> >Essentially, the loading and playback of audio in games needs to be a lot simpler and less GC heavy. One node should be able to play one audio file multiple times if needed.
>>
>> Thanks for the feedback! That seems a very good start, and a good example of needs/gaps the CG could help document. I am no expert in Web Audio and may be missing some points here.
>>
>> If I understand things correctly, there are two requests here:
>>
>> 1. The ability for the Web Audio API to load an audio file directly. It may be useful to add a node that can directly operate on files, but would that allow more than replacing a few lines of code to fetch and decode (through a call to decodeAudioData) audio data?
>>
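For reference, the "few lines" being discussed typically look something like the following today. This is only a rough sketch of the fetch-decode-play pattern Simon describes; the helper names (loadSound, playSound) are illustrative, not part of any API.

const context = new AudioContext();

// Load and decode an audio file into an AudioBuffer (done once per sound).
async function loadSound(url) {
  const response = await fetch(url);
  const data = await response.arrayBuffer();
  return context.decodeAudioData(data);
}

// Play a decoded AudioBuffer. A fresh AudioBufferSourceNode has to be
// created and connected for every playback, because source nodes are
// single-use in the current API.
function playSound(buffer, playbackRate = 1) {
  const source = context.createBufferSource();
  source.buffer = buffer;
  source.playbackRate.value = playbackRate;
  source.connect(context.destination);
  source.start();
  return source;
}

// Usage (hypothetical):
// const ping = await loadSound("snd/ping.ogg");
// playSound(ping);        // normal speed
// playSound(ping, 0.5);   // half speed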
>> For instance, the Web Audio API example on MDN shows a canonical way to play an audio file with a few lines of code:
>>
>> https://developer.mozilla.org/en-US/docs/Web/Guide/Audio_and_video_delivery
>>
>> Are you trying to simplify that? More involved scenarios? Would your proposal also allow scenarios that are currently impossible or hard to achieve today?
>>
>> The ability to add decoded audio data seems useful as it gives applications more control over data loading (using fetch, WebSockets, RTCDataChannel, WebTransport when it's available, or local storage) as well as over decoding (through "decodeAudioData" today, but the WebCodecs proposal could perhaps give more knobs in the future to decode audio data: https://github.com/wicg/web-codecs ).
>>
>> 2. The ability to replay audio data without having to re-create AudioBufferSourceNode, and with the possibility to change parameters (monophonic/polyphonic, playback rate).
>>
>> Experts in Web Audio may be able to chime in to explain the current design. It could be useful to have a clearer picture of how the inability to do that impacts performance. For instance, you mentioned GC. I suppose you do not need to re-create the audio data buffer each time, only the AudioBufferSourceNode. Is it possible to describe typical scenarios more thoroughly (e.g. number of times this needs to be done per second, or performance measures)?
>>
>> Thanks,
>> Francois.
>>
>> >
>> >Most of the other APIs such as WebGL2, gamepads, and VR are pretty solid already in my opinion. It's just the audio letting things down at the moment.
>> >
>> >Simon ++
>> >
>> >On Thu, 30 Jul 2020, 22:07 Matthew Atkinson, <matkinson@paciellogroup.com> wrote:
>> >>Hi all,
>> >>
>> >>Thanks Francois for making the accessibility-related charter update; it looks great to me. I have now been officially accepted into the group from a W3C perspective, so I can act as a liaison with the Accessible Platform Architectures (APA) group if needed.
>> >>
>> >>It was good to hear about your backgrounds, Noël, Tom and Vincent, and from Francois with the latest update on progress since the workshop. Here's a bit more about me: I am an accessibility consultant with The Paciello Group (primarily helping clients make their web and mobile apps accessible). Before that, I was a researcher in academia and worked on digital accessibility projects [1]. I have worked on some game accessibility projects in the ancient past [2] and have recently found a bit of time to start playing games again [3]. Whilst I don't have a games industry background, I would be happy to help in any way I can :-).
>> >>
>> >>In order to keep the momentum going, I have two questions...
>> >>
>> >>1. Is there anything small we could work on right away? I know that discoverability is a major concern, and am catching up with the schema proposal Noël made [4]. Is there anything else that we could get started on? One thing I was wondering: do we have a recorded list of all of the things that other W3C groups (and external organisations) are working on that are relevant to this group? Here are a couple that spring to mind from work going on in APA:
>> >>
>> >> * XR Accessibility User Requirements: https://www.w3.org/TR/xaur/ - just published and a really clear and helpful overview.
>> >>
>> >> * Framework for Accessible Specification of Technologies (FAST): https://w3c.github.io/apa/fast/
>> >>
>> >>There is a page on this group's wiki about features we are tracking [5] but it was last edited in 2012; would it be helpful to go through that list and update it (or put the list somewhere else)?
>> >>
>> >>2. Francois mentioned the forum Noël set up [6] - is that the place where we should be having all discussions (i.e. not this list)? If so I'll move my question above to that forum.
>> >>
>> >>I just started re-playing Duet... I would love a slo-mo mode as my coordination isn't the best, but the bits I can play are a lot of fun, and the idea is really compelling. Some of my faves are Descent (6DoF robo-shooter from the '90s), Half-Life, Deus Ex, Braid, Beneath a Steel Sky, A Dark Room and many Amiga ones :-).
>> >>
>> >>best regards,
>> >>
>> >>Matthew
>> >>
>> >>[1] http://matatk.agrip.org.uk/research/
>> >>[2] I'm working on getting this working again on modern platforms, so there's nothing to show at the moment, but years ago a friend and I made a version of Quake with enhanced audio cues, and I've also worked on a proof-of-concept "Level Description Language" that allows people to describe maps in text rather than design them visually. I am hoping to have a release for Windows 10 and the latest macOS soon: https://github.com/matatk/agrip
>> >>[3] Here's a talk about my experiences of gaming with a vision impairment: http://matatk.agrip.org.uk/talks/2019/game-accessibility-low-vision/ (I recommend 'story mode' so you get the details).
>> >>[4] https://github.com/schemaorg/schemaorg/issues/2565
>> >>[5] https://www.w3.org/community/games/wiki/Features
>> >>[6] https://www.html5gamedevs.com/forum/40-web-gaming-platform/
>> >>--
>> >>Matthew Tylee Atkinson
>> >>--
>> >>Senior Accessibility Engineer
>> >>The Paciello Group
>> >>https://www.paciellogroup.com
>> >>A Vispero Company
>> >>https://vispero.com
>> >>--
>> >>This message is intended to be confidential and may be legally privileged. It is intended solely for the addressee. If you are not the intended recipient, please delete this message from your system and notify us immediately. Any disclosure, copying, distribution or action taken or omitted to be taken by an unintended recipient in reliance on this message is prohibited and may be unlawful.
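To make Francois's second point above concrete: only the AudioBufferSourceNode has to be recreated per playback, not the decoded AudioBuffer. A small wrapper along these lines is roughly the kind of object a sampler node as Simon proposes would replace. This is a hedged sketch under that assumption, not a real or proposed API; the Sampler class and its play() method are invented for illustration.

class Sampler {
  constructor(context, buffer) {
    this.context = context;
    this.buffer = buffer; // decoded once, reused for every playback
  }

  // Each call still allocates a new AudioBufferSourceNode and a GainNode
  // and connects them, which is the per-shot churn being discussed; the
  // nodes are discarded (and garbage collected) after playback finishes.
  play({ rate = 1, gain = 1 } = {}) {
    const source = this.context.createBufferSource();
    source.buffer = this.buffer;
    source.playbackRate.value = rate;

    const amp = this.context.createGain();
    amp.gain.value = gain;

    source.connect(amp).connect(this.context.destination);
    source.start();
    return source;
  }
}

// Usage (hypothetical): in a bullet-heavy scene this might run dozens of
// times per second, creating one new source node and gain node per shot.
// const ping = new Sampler(context, await loadSound("snd/ping.ogg"));
// ping.play();
// ping.play({ rate: 0.5 }); // half speed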
Received on Sunday, 9 August 2020 06:55:26 UTC