- From: fantasai <fantasai.lists@inkedblade.net>
- Date: Thu, 07 Jul 2011 10:33:25 -0700
- To: www-style@w3.org
On 07/07/2011 02:31 AM, Daniel Weck wrote:
> Well, just to put things into perspective, let's say you have 2 pre-recorded audio clips, one for cue-before, one for
> cue-after. The first one was recorded "normally" (whatever the convention is), whereas the second one is really loud on
> average (for example, compressed waveform, narrow dynamic range). Unless the audio implementation is "clever" (e.g. automatic
> normalization/equalization/filtering ... note that I am not an audio engineer), the user can't reduce the large variations of
> perceived volume level. So authors obviously have a responsibility to prevent ear drum damage and to limit listening
> inconvenience.

Right, my point is: how can they do that if they don't know how loud 'medium' is? How can the author balance the loudness of the audio cue against the loudness of the voice, when it's unknown how loud the voice is?

~fantasai
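For concreteness, a minimal sketch of the kind of rule under discussion (assuming the optional <decibel> adjustment on 'cue-before' from the current Speech draft; the file name and the -6dB figure are placeholders, not a recommendation):

    blockquote {
      /* 'medium' has no defined absolute loudness, so the author
         doesn't know what baseline the voice will be rendered at. */
      voice-volume: medium;

      /* The dB offset (as I read the draft) is relative to the
         voice-volume level, so the cue's perceived loudness still
         depends on that unknown baseline plus however loud the
         clip itself was recorded. */
      cue-before: url(ding.wav) -6dB;
    }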
Received on Thursday, 7 July 2011 17:33:54 UTC