
Re: Use Cases and Requirements priorities

From: Chris Lowis <chris.lowis@bbc.co.uk>
Date: Fri, 17 Feb 2012 10:16:11 +0000
Message-ID: <4F3E28EB.5090802@bbc.co.uk>
To: public-audio@w3.org
On 16/02/2012 19:07, Chris Rogers wrote:
> I'd be interested also in what Chris Lowis has to say, both in regards
> to music, and broadcast.

With regards to broadcast there's considerable discussion on the whole 
issue of "loudness" when you switch between different radio stations or 
between commercials and programmes, for example. So much so that the EBU 
(European Broadcasting Union) has a whole task force dedicated to it![1]

Although most broadcasters already compress their broadcasts heavily at 
the point of transmission, I can well imagine a scenario where a 
listener is switching between IP-broadcast radio and other sources of 
audio (local music collections, or other stations) in an automatic 
fashion. We'd certainly want the ability to apply compression to "even 
out" the listening experience.

Perhaps we can capture this requirement in the video chat use case by 
adding something about compressing non-voice sounds ("on-hold music", 
perhaps) to match the (perceived) level of the voices?
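Something along these lines is what I have in mind — a sketch only, using the draft API's DynamicsCompressorNode, with placeholder parameter values rather than recommendations:

```javascript
// Sketch: evening out loudness between a voice stream and
// "on-hold" music by running both through a shared compressor.
// (webkitAudioContext reflects the prefixed 2012-era implementation.)
var context = new webkitAudioContext();

// Two hypothetical sources for illustration.
var voice = context.createBufferSource();
var music = context.createBufferSource();

// One compressor pulls louder material down towards the perceived
// level of the quieter material.
var compressor = context.createDynamicsCompressor();
compressor.threshold.value = -24; // dB: compress above this level (placeholder)
compressor.ratio.value = 12;      // heavy ratio to "even out" levels (placeholder)

voice.connect(compressor);
music.connect(compressor);
compressor.connect(context.destination);
```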

Cheers,

Chris


[1] http://tech.ebu.ch/groups/ploud



> Chris
>
>
>     That said, I think the map is a flawed but reasonable approximation
>     of the territory. I have re-mapped it to the table I had sent a
>     couple of weeks ago. I am attaching the latest version to this mail.
>     The visual representation makes it easy to know which requirements
>     are associated with more or fewer use cases.
>
>
>     Interestingly, if I count only whether a requirement is associated
>     with a high-priority use case, I find that 23 of our 28 requirements are.
>
>     The exceptions:
>     * Playback rate adjustment
>     * Dynamic range compression (possibly my mistake)
>     * Generation of common signals for synthesis and parameter
>     modulation purposes
>     * The ability to read in standard definitions of wavetable instruments
>     * Acceptable performance of synthesis
>
>     Alternatively, we could split requirements thus:
>
>     * 9 Requirements are shared by more than half of the UCs
>         Support for primary audio file formats
>         Playing / Looping sources of audio
>         Support for basic polyphony
>         Audio quality
>         Modularity of transformations
>         Transformation parameter automation
>         Gain adjustment
>         Filtering
>         Mixing Sources
>
>     * 14 Requirements shared by fewer than half of the Use Cases, but
>     required by HIGH priority UCs
>         One source, many sounds
>         Capture of audio from microphone, line in, other inputs
>         Sample-accurate scheduling of playback
>         Buffering
>         Rapid scheduling of many independent sources
>         Triggering of audio sources
>         Spatialization
>         Noise gating
>         The simulation of acoustic spaces
>         The simulation of occlusions and obstructions
>         Ducking
>         Echo cancellation
>         Level detection
>         Frequency domain analysis
>
>     * 5 Requirements shared by fewer than half of the UCs and not
>     required by HIGH priority UCs
>         Dynamic range compression
>         Playback rate adjustment
>         Generation of common signals for synthesis and parameter
>     modulation purposes
>         The ability to read in standard definitions of wavetable instruments
>         Acceptable performance of synthesis
>
>
>
>     Thoughts? Opinions on whether this is helpful? Glaring mistakes in
>     the process? Other ways you'd go at it?
>
>
>     Thanks,
>     --
>     Olivier
>
>
Received on Friday, 17 February 2012 10:16:36 GMT
