Re: Review of Web Audio Processing: Use Cases and Requirements

Hi, Michael-

Thanks for sending this review.

We will discuss this at the next telcon (and on this list).  In the 
meantime, I have a few personal comments inline...

On 6/27/12 3:04 PM, Michael Cooper wrote:
> Doug sent a request to Protocols and Formats Working Group to review Web
> Audio Processing: Use Cases and Requirements
> https://dvcs.w3.org/hg/audio/raw-file/tip/reqs/Overview.html. The PFWG
> is not able to assemble a consensus group review in a quickish
> turnaround right now, so I am sending my comments individually. I expect
> other members of PFWG to submit additions later. The version of the
> document I reviewed was accessed 25 June 2012.

I think it would be good to have a joint telcon, just to get on the same 
page.

My general comment, as in the IndieUI WG, is that instead of putting 
forth requirements, you should suggest Use Cases, from which we derive 
requirements.  I can help with that.


> My review focuses primarily on suggesting requirements for features that
> would improve accessibility. I haven't proposed use cases but do think
> we need to develop a use case that describes a user with a screen reader
> who depends on audio cues from the operating system and is also
> interacting with Web application audio as proposed in the other use cases.
>
> Need a requirement to allow both per-track and global (with the Web
> application) control of each audio channel / track / whatever for
> volume, mute, pause / restart, and autoplay. I think this is implied by
> the use cases but not spelled out in the requirements.

Seems reasonable, though I think more detail would be helpful.
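For what it's worth, a webapp can already wire up both levels of control with the current API by routing per-track GainNodes through a single application-wide "master" GainNode. A rough sketch (helper names like createMixer are mine, not from the spec):

```javascript
// Sketch: per-track GainNodes feeding one application-wide master
// GainNode, giving both per-track and global volume/mute points.
// createMixer/addTrack are illustrative names only.
function createMixer(ctx) {
  const master = ctx.createGain();       // global volume/mute point
  master.connect(ctx.destination);
  const tracks = new Map();

  return {
    addTrack(name, sourceNode) {
      const gain = ctx.createGain();     // per-track volume/mute point
      sourceNode.connect(gain);
      gain.connect(master);
      tracks.set(name, gain);
    },
    setTrackVolume(name, v) { tracks.get(name).gain.value = v; },
    muteTrack(name)         { tracks.get(name).gain.value = 0; },
    setGlobalVolume(v)      { master.gain.value = v; },
    muteAll()               { master.gain.value = 0; },
  };
}
```

Pause/restart and autoplay are a different matter, since they live on the source nodes and media elements rather than on the gain graph, so those probably do need to be spelled out separately in the requirements.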


> Suggest a requirement to allow audio channels to be designated as
> related to each other (e.g., a voice and instrumental overlay) so a
> volume change or pause of one of them affects all of the related ones,
> to allow them to stay in sync, yet allowing "unrelated" tracks (e.g.,
> sound effects) to be treated independently.

This is interesting; I don't know if this should be addressed at the 
webapp level, or the API level.
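At the webapp level, at least, the grouping part can already be approximated by routing all "related" sources through one shared GainNode, so a volume change or mute hits them together. A hypothetical sketch (helper names are mine):

```javascript
// Sketch: "related" channels (e.g. voice + instrumental) share one
// group GainNode, so volume changes or a mute apply to all of them
// at once, while unrelated tracks keep their own gain paths.
// createTrackGroup is an illustrative name, not part of any spec.
function createTrackGroup(ctx) {
  const groupGain = ctx.createGain();
  groupGain.connect(ctx.destination);
  return {
    add(sourceNode) { sourceNode.connect(groupGain); },
    setVolume(v)    { groupGain.gain.value = v; },
    mute()          { groupGain.gain.value = 0; },
  };
}
```

Keeping related tracks in sync across a pause and restart is the harder part, and that's where API-level support (e.g. scheduling the sources against a shared clock) would matter.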


> Need a requirement that audio controls not affect system audio settings
> i.e., should have impact just for audio under control of the Web
> application. For instance, it would be highly problematic if a "mute" in
> the Web application muted all system audio. If other requirements mean
> system audio will be controllable from Web applications (this is not
> clear to me one way or the other right now), then instead the
> requirement is that users have an easy way to control whether to allow
> system audio settings to be impacted by in-application changes (e.g.,
> via a user preference option in user agent).

This one is more challenging.  In a traditional browser, there is no way 
for the API to do that anyway, so there's no need to spell out the 
requirement. In a "system OS" browser (something like Boot2Gecko), there 
may not be any distinction between the browser and the OS, so this 
requirement proposal doesn't make sense there either.


> The description of audio sprites brings up an issue, but I'm not sure of
> the exact requirement. Sprites that play in response to certain actions
> could prove extremely distracting to some users or could be more likely
> to cause momentary problems with comprehension of screen reader output
> etc. Users should therefore have the ability to prevent audio from
> playing, not just stop it after it's begun playing. This may be a
> requirement on Web applications, not on the audio APIs, but it might be
> that the APIs need to make this possible in some way. It would also be
> helpful to come up with a small ontology of audio roles (for instance,
> music, speech, sound effects, etc.) so users could easily prevent audio
> of one type (e.g., sprites) from playing without preventing other types
> (e.g., music) from playing. Perhaps also needed is a requirement on user
> agents to offer a preference to users to allow audio to play
> automatically or only on specific request, recognizing that setting this
> preference could interfere greatly with the smooth function of some
> types of applications.

We've discussed the idea of a "global mute" of all Web Audio API sounds, 
and I agree this would be useful.

To enable the scenario you lay out, though, the UA might provide a 
preference to do so, or even have its own volume control in the UI; but 
that is not a requirement on the API... it's a requirement on the UA itself.


> Are there issues with needing to provide a way to set limits, e.g., on
> total volume when multiple tracks are layered, or is this handled by
> audio equipment? We wouldn't want combinatorial effects to create
> excessively loud spots. Consider users who have audio volume higher than usual
> because of hearing impairment, but still we can't allow eardrum-damaging
> levels to come out.
>
> Need a requirement to provide ways to avoid triggering audio-sensitive
> epileptic seizures. The fact that sounds from a variety of sources might
> be combined, including script-generated sounds and transformations that
> could have unplanned artifacts, means the final sound output may be less
> under the author's control than studio-edited sound. It is important to
> find ways to reduce unexpected effects triggering audio-sensitive
> epileptic seizures. To some extent this means warning authors to be
> careful, but any features we can build into the technology, we should.
> Unfortunately this is a new field to me and I don't know all the
> specifics, so it will take research (which of course I volunteer to be
> involved in, just looking for a placeholder for the issue now). A quick
> scan online suggests that certain beat frequencies and reverberance
> effects are known sources of problems. A set of user preferences
> allowing users to disable or control certain Web application-generated
> audio transformations might help with the latter issue.

This one seems, on the surface, to be really challenging.

I certainly acknowledge that this is an important issue, and that the 
implications are severe.

I'm not sure that we can actually address this, though, or how we would 
do so.  I agree with you that if we can find someone knowledgeable about 
this, we should solicit their feedback on if there are ways to prevent 
it.  But just as JavaScript could be used to change background colors at 
a rate and combination that could cause seizures, I'm not certain we 
could control how the output of the Web Audio API might do something 
similar... it's a general-purpose piece of functionality.
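On the separate question above about limiting total volume: a webapp could already insert a DynamicsCompressorNode just before the destination as a rough safety limiter, so that stacking many tracks can't produce arbitrarily loud peaks. A sketch (the helper name and the parameter values are illustrative, not a recommendation):

```javascript
// Sketch: a DynamicsCompressorNode placed just before the destination
// acts as a rough limiter on the combined output of all tracks.
// attachLimiter is my name; threshold/ratio values are illustrative.
function attachLimiter(ctx) {
  const limiter = ctx.createDynamicsCompressor();
  limiter.threshold.value = -10; // dB level where gain reduction starts
  limiter.ratio.value = 20;      // high ratio => limiter-like behavior
  limiter.connect(ctx.destination);
  return limiter;                // route all track output into this node
}
```

That only constrains the signal inside the API's graph, of course; the actual loudness at the ear still depends on system volume and the user's equipment.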


> Need a requirement that audio from the Web application not interfere
> with system sounds (e.g., alerts), which may contain vital information
> for the user. While it's probably not desirable to pause Web application
> audio for system sound events, it's also not desirable to have system
> sounds drowned out by Web application audio. User preferences may be
> needed to indicate specifically how to handle the situation, but a way
> for Web application audio to be aware of system sounds will be needed.

Again, this would be a requirement on the UA, not on the Web Audio API. 
A loud song or video in HTML could just as easily drown out system 
sounds, so this is a requirement at a different level of implementation 
than this API.


> Operating systems have a feature called "ShowSounds", which triggers a
> visual indication that an important sound like an alert has occurred.
> Enabling certain types of sounds, like audio sprites, to take advantage of
> this feature may be important. I expect someone else to provide more
> details on this requirement but wanted to put a placeholder in this message.

On first reading, this seems like something the Web Notifications WG 
should be addressing. Or if you are suggesting a browser-based analog of 
this functionality, that should be a requirement at the content level, 
not the Web Audio API level.

Regards-
-Doug

Received on Thursday, 28 June 2012 07:37:33 UTC