- From: Chris Rogers <crogers@google.com>
- Date: Thu, 16 Feb 2012 11:07:28 -0800
- To: Olivier Thereaux <olivier.thereaux@bbc.co.uk>
- Cc: "public-audio@w3.org" <public-audio@w3.org>
- Message-ID: <CA+EzO0=ob62oar7CcErBLHKnYJnfxDgDfutVn9BOGmCG+ACTEA@mail.gmail.com>
On Thu, Feb 16, 2012 at 6:41 AM, Olivier Thereaux <olivier.thereaux@bbc.co.uk> wrote:

> Hi all,
>
> At our meeting this week, we discussed which use cases were deemed to be
> "highest priority", and I took the action to look at whether the "high
> priority" UCs were covering a large number of our requirements.
>
> TL;DR: 23 of 28 requirements are covered by the "high priority" use cases.
> And there are other ways to split them which may help us with setting the
> "level 1" goalposts.
>
> Now for the detail:
>
> 1) I updated the wiki page for use cases and requirements with a mention
> of priority. At the moment, use cases 1 (Video Chat), 2 (HTML5 game with
> audio, effects, music) and 7 (Audio/Music visualization) are HIGH priority,
> and UC6 (Wavetable synthesis of a virtual music instrument) is MEDIUM
> priority. All the others (including the newly added use case 13 for audio
> recording+saving) are LOW priority.
>
> These priorities have gone unchallenged since the meeting on Monday, and
> since my pointer to the minutes two days ago, so I assume we have consensus?
>
> 2) I made a detailed mapping of the 13 use cases and 28 requirements. See
> e.g.:
> http://www.w3.org/2011/audio/wiki/Use_Cases_and_Requirements#UC1_.E2.80.94_Related_Requirements
>
> This is the part which I have done in good faith, but I am sure there are
> mistakes. There were many use cases where it wasn't entirely clear whether
> "Modularity of Transformation" was required, or "Dynamic Compression" (the
> latter because I am not enough of an expert to be certain). Likewise, some
> of our requirements (such as "Audio Quality") are very vague and hardly
> usable, and others ("Support for basic polyphony" and "Mixing Sources") are
> near-equivalent.

"Dynamic Range Compression" is a subclass of a type of processing you could
call "Dynamics Processing", which includes:

* compression
* AGC (Automatic Gain Control: a form of slowly moving compression)
* expansion
* noise-gating
* ducking

I notice that you included noise-gating and ducking as required. In my view,
"compression" is critical because of its importance in several areas:

* music production: it has been considered one of the primary tools of music
  production for decades - at the very top of the list
* games: high-end native game engines such as FMOD incorporate a high-quality
  compressor to even out the many overlapping sounds, so that the overall
  volume can be set properly and the overall mix is sweetened
* broadcast: compression is a key tool in both radio and television broadcast
* WebRTC (related to broadcast, I guess): I just met with Google's WebRTC
  team, and they're *very* interested in compression/AGC for conference-call
  situations, to even out the sound level. Imagine several people on the
  call, some sitting farther from the microphone, some closer, transient
  noises such as papers rustling near the microphones, etc.

I'd also be interested in what Chris Lowis has to say, both in regard to
music and to broadcast.

Chris

> That said, I think the map is a flawed but reasonable approximation of the
> territory. I have re-mapped it to the table I had sent a couple of weeks
> ago. I am attaching the latest version to this mail. The visual
> representation makes it easy to see which requirements are associated with
> more or fewer use cases.
> Interestingly, if I count only whether a requirement is associated with a
> high priority use case, I find that 23 out of the 28 we have are.
>
> The exceptions:
> * Playback rate adjustment
> * Dynamic range compression (possibly my mistake)
> * Generation of common signals for synthesis and parameter modulation purposes
> * The ability to read in standard definitions of wavetable instruments
> * Acceptable performance of synthesis
>
> Alternatively, we could split the requirements thus:
>
> * 9 requirements are shared by more than half of the UCs:
>   Support for primary audio file formats
>   Playing / Looping sources of audio
>   Support for basic polyphony
>   Audio quality
>   Modularity of transformations
>   Transformation parameter automation
>   Gain adjustment
>   Filtering
>   Mixing Sources
>
> * 14 requirements are shared by less than half of the UCs, but required by
>   HIGH priority UCs:
>   One source, many sounds
>   Capture of audio from microphone, line in, other inputs
>   Sample-accurate scheduling of playback
>   Buffering
>   Rapid scheduling of many independent sources
>   Triggering of audio sources
>   Spatialization
>   Noise gating
>   The simulation of acoustic spaces
>   The simulation of occlusions and obstructions
>   Ducking
>   Echo cancellation
>   Level detection
>   Frequency domain analysis
>
> * 5 requirements are shared by less than half of the UCs and not required by
>   HIGH priority UCs:
>   Dynamic range compression
>   Playback rate adjustment
>   Generation of common signals for synthesis and parameter modulation purposes
>   The ability to read in standard definitions of wavetable instruments
>   Acceptable performance of synthesis
>
> Thoughts? Opinions on whether this is helpful? Glaring mistakes in the
> process? Other ways you'd go at it?
>
> Thanks,
> --
> Olivier
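Editor's note: the "mix bus" compression Chris describes for games maps directly onto the DynamicsCompressorNode that the Web Audio API went on to standardize. The sketch below is not from the original thread; the parameter values are illustrative (they happen to be the spec defaults), and `playSound` is a hypothetical helper.

```ts
// Minimal sketch: route every source through one shared compressor
// before the destination, so overlapping sounds are evened out and
// the overall volume stays predictable - the game "mix bus" pattern.
const ctx = new AudioContext();
const compressor = ctx.createDynamicsCompressor();

// Illustrative settings (the Web Audio API defaults):
compressor.threshold.value = -24; // dB: level above which compression kicks in
compressor.knee.value = 30;       // dB: width of the soft-knee transition
compressor.ratio.value = 12;      // dB of input change per dB of output change
compressor.attack.value = 0.003;  // seconds to reduce gain by 10 dB
compressor.release.value = 0.25;  // seconds to restore gain by 10 dB

compressor.connect(ctx.destination);

// Hypothetical helper: each sound connects to the shared compressor
// rather than straight to ctx.destination.
function playSound(buffer: AudioBuffer): void {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(compressor);
  src.start();
}
```

The same node, with a gentler ratio and slower attack/release, is one plausible building block for the AGC-style conference-call levelling Chris mentions in the WebRTC bullet.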
Received on Thursday, 16 February 2012 19:08:00 UTC