Re: Review of Web Audio Processing: Use Cases and Requirements

A couple of replies inline:

Olivier Thereaux wrote:
> Hello Michael,
>
> General comments for now - I will go into details when entering issues into our tracker.
>
> On 27 Jun 2012, at 20:04, Michael Cooper wrote:
>   
>> I haven't proposed use cases but do think
>> we need to develop a use case that explains a user with a screen reader
>> and who depends on audio cues from the operating system, who is also
>> interacting with Web application audio as proposed in the other use cases.
>>     
>
> This sounds like a good opportunity to truly enrich our use case #8:
> https://dvcs.w3.org/hg/audio/raw-file/tip/reqs/Overview.html#uc-8--ui-dom-sounds
>
> Making the UC about a strong need to use and control UI audio cues will make it more interesting.
>   
Perhaps that use case could be enriched. I do think we need one use case
about a screen reader user interacting with a page that plays audio
automatically, whether to enhance controls as in a game, as part of a
music education process, or something similar. I think PFWG members can
write it up; I just wanted to flag the need for it.
>   
>> Are there issues with needing to provide a way for limits e.g., on total
>> volume when multiple tracks layered, or is this handled by audio
>> equipment? 
>>     
> …
>   
>> Need a requirement to provide ways to avoid triggering audio-sensitive
>> epileptic seizures.
>>     
>
> These are interesting needs, and we need to have a good think about safeguards and whether/how to implement them. I like to think in terms of "enabling developers to do the right thing" rather than "adding limitations against such mistakes" so the answer to your concerns might reside in developer guidelines? Or are we seeing a need for user preference at the (browser) implementation level?
>   
For the most part I agree with enabling developers to do the right
thing rather than introducing limitations against mistakes. However, if
there were some limitation that clearly prevented a class of problems,
and didn't introduce other problems of its own, I'd want to explore it.
I don't have a high expectation that we'll find such a limitation, but I
wouldn't want to rule it out at this stage.
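
As a concrete example of "enabling developers to do the right thing" on
the total-volume question above: here is a rough sketch of how an
application could bound its summed output today, assuming the
DynamicsCompressorNode in the current draft. The node and function names
are mine, purely for illustration, not anything proposed in the spec.

  // Sketch only: a compressor used as a crude master limiter, so the
  // summed level stays bounded no matter how many tracks are layered.
  const ctx = new AudioContext();

  const masterLimiter = ctx.createDynamicsCompressor();
  masterLimiter.threshold.value = -6;  // dB; start limiting above this
  masterLimiter.ratio.value = 20;      // heavy compression ~= limiting
  masterLimiter.connect(ctx.destination);

  // Every track connects to the limiter instead of ctx.destination.
  function addTrack(el: HTMLMediaElement): MediaElementAudioSourceNode {
    const source = ctx.createMediaElementSource(el);
    source.connect(masterLimiter);
    return source;
  }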

I think it's more likely that we'll find some feature that enables
applications or user agents (as well as developers, via automated
checkers) to detect that a risky situation might be emerging. For
instance, if certain beat frequencies are known to be problematic,
perhaps there are cases, with some types of audio, where the risk of
creating such beat frequencies is automatically detectable. I don't know
whether that will even turn out to be doable, but I want a stake in the
ground so we explore it. Most user agents might do nothing with that
information, but a user agent used by somebody prone to audio-sensitive
seizures could offer a preference to take the conservative path and mute
all audio under certain conditions. Whether audio APIs need to do
anything to support this, or whether it's purely up to the UA, I don't
know yet.
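
For what it's worth, the kind of check I have in mind might look
something like the sketch below, assuming the AnalyserNode and GainNode
from the current draft. The "risky" modulation band is a placeholder I
made up for illustration (not a medically validated range), the
userPrefersConservativeAudio flag stands in for a hypothetical browser
preference, and the envelope-rate estimate is deliberately crude.

  // Sketch only: monitor the output's amplitude-modulation rate and,
  // if the user has opted in, fade everything out when it falls in a
  // (placeholder) risky band.
  const RISKY_MODULATION_HZ = { low: 5, high: 30 }; // illustrative only
  const userPrefersConservativeAudio = true; // hypothetical UA setting

  const audioCtx = new AudioContext();
  const masterGain = audioCtx.createGain();
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 32768; // long window so slow modulation is visible

  masterGain.connect(analyser);
  analyser.connect(audioCtx.destination);
  // The application's audio graph would connect into masterGain.

  const samples = new Float32Array(analyser.fftSize);

  function checkModulation(): void {
    analyser.getFloatTimeDomainData(samples);

    // Crude envelope-rate estimate: count how often the rectified
    // signal crosses its mean within the analysis window.
    const envelope = samples.map(Math.abs);
    const mean = envelope.reduce((a, b) => a + b, 0) / envelope.length;
    let crossings = 0;
    for (let i = 1; i < envelope.length; i++) {
      if ((envelope[i - 1] < mean) !== (envelope[i] < mean)) crossings++;
    }
    const windowSeconds = analyser.fftSize / audioCtx.sampleRate;
    const modulationHz = crossings / 2 / windowSeconds;

    if (userPrefersConservativeAudio &&
        modulationHz >= RISKY_MODULATION_HZ.low &&
        modulationHz <= RISKY_MODULATION_HZ.high) {
      // Conservative path: fade all audio out quickly.
      masterGain.gain.linearRampToValueAtTime(0, audioCtx.currentTime + 0.1);
    }
  }

  setInterval(checkModulation, 250);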

Michael

-- 

Michael Cooper
Web Accessibility Specialist
World Wide Web Consortium, Web Accessibility Initiative
E-mail cooper@w3.org <mailto:cooper@w3.org>
Information Page <http://www.w3.org/People/cooper/>

Received on Friday, 29 June 2012 14:27:39 UTC