RE: Fwd: Screen Reader Audio on Mobile

Hi Patrick,

I don't think there can be a bright line between mobile and non-mobile, as we tried to explain in the note:
http://www.w3.org/TR/mobile-accessibility-mapping/#wcag-2.0-and-mobile-content-applications 

Do you think there is anything more that needs to be added to that section?

Cheers,
Jan

(MR) JAN RICHARDS
PROJECT MANAGER
INCLUSIVE DESIGN RESEARCH CENTRE (IDRC)
OCAD UNIVERSITY

T 416 977 6000 x3957
F 416 977 9844
E jrichards@ocadu.ca

________________________________________
From: Patrick H. Lauke [redux@splintered.co.uk]
Sent: October-06-15 1:10 PM
To: public-mobile-a11y-tf@w3.org
Subject: Re: Fwd: Screen Reader Audio on Mobile

Is this not already covered by UAAG 2.0
http://www.w3.org/TR/UAAG20/#gl-volume-config ? I don't personally feel
this needs to be singled out specifically for mobile/tablet devices.

Incidentally, on a more general note (and I'm sure this was touched on
before): do we have a definition of what constitutes "mobile"? Are
laptops with a touchscreen "mobile"? Devices like the Surface which have
a detachable/optional keyboard/trackpad, and blur the line between
tablet and laptop?

P

On 06/10/2015 17:27, David MacDonald wrote:
> Hi Team
>
> Please consider this request by Janina, the chair of the Protocols &
> Formats Working Group, regarding her experience as a blind person on an
> iPhone. It is really a user agent issue, but we may want to take up this
> cause on the user agent side of things.
>
>
> ---------- Forwarded message ----------
> From: Janina Sajka <janina@rednote.net>
> Date: Mon, Oct 5, 2015 at 8:10 PM
> Subject: Screen Reader Audio on Mobile
> To: David MacDonald <david100@sympatico.ca>
>
>
> Hi, David:
>
> In a telephone conversation some weeks ago, you asked that I provide
> you my architectural concerns about why TTS audio on mobile should be
> managed as a separate audio channel. My apologies for taking so long to
> provide this to you!
>
> BACKGROUND:     There are multiple sources of audio on mobile devices.
> These sources include incoming audio in a telephone call, music that a
> user might have stored in nonvolatile memory on the device, audio for
> movies (and other video) stored on, or streamed to, a mobile device,
> sounds for use as alarms (or other audible markers of system activity),
> and synthetic Text to Speech (TTS) engines commonly used with screen
> readers by users who are blind. Combining these various sources into a
> single audio presentation that can be heard when a phone is held to one
> ear, or that can play in stereo through a pair of speakers on a tablet
> device, is called "mixing" by audio processing professionals. When and
> how this mixing occurs, however, impacts the level of reliability and
> performance the user will experience. It is my contention here that the
> screen reader user is poorly served on today's devices as a result of
> inadequate audio design considerations.
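>
> To make "mixing" concrete: at the sample level, a mixer simply scales
> each source's PCM samples by a per-source gain, sums them, and clamps
> the result. That per-source gain stage is precisely the control that is
> lost when TTS is pre-mixed into the same channel as other audio. A
> minimal sketch in Java (an illustration only, not any platform's actual
> mixer code):
>
>     // Mix two 16-bit PCM sources, each with its own gain, clamping
>     // the sum to the 16-bit range to avoid wrap-around distortion.
>     static short[] mix(short[] a, double gainA, short[] b, double gainB) {
>         short[] out = new short[Math.min(a.length, b.length)];
>         for (int i = 0; i < out.length; i++) {
>             int sum = (int) (a[i] * gainA + b[i] * gainB);
>             out[i] = (short) Math.max(Short.MIN_VALUE,
>                      Math.min(Short.MAX_VALUE, sum));
>         }
>         return out;
>     }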
>
> THE PROBLEM:    Both iOS and Android lump TTS audio together with other
> audio into what's called the same "audio channel." This unfortunate
> architectural design decision creates functional problems for users who
> rely on TTS, including:
>
> *       It's harder to independently manage TTS volume vis-à-vis the
>         volume of other audio events (see the sketch after this list).
>         The same is true of other audio characteristics such as EQ,
>         panning, etc.
>
> *       It's impossible to independently direct audio output. If the
>         user wants movie audio to go to an external Bluetooth soundbar,
>         she must accept that the TTS will also now be heard via those
>         same Bluetooth speakers. This makes no sense from a
>         functionality perspective, inasmuch as the TTS is ostensibly
>         part of a highly interactive user interface paradigm, whereas
>         the movie audio is simply presentational. Lag times for TTS
>         matter a lot, but for movie audio only synchronization with the
>         video matters.
>
> *       It's impossible for TTS events to be properly prioritized when
>         they're lumped together with other audio events this way.
>         Because TTS is part of a highly interactive user interface, its
>         system priority should always remain quite high, and should
>         remain closely correlated to on-screen touch events. This
>         breaks down when prioritization is driven by playback of
>         presentational audio such as music or movie/video soundtracks.
>         One result of such inappropriate prioritization is the poor
>         performance of DTMF after the completion of a telephone call.
>         Both iOS and Android are very poor at this for reasons of
>         inappropriate system event prioritization.
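>
> To illustrate the first point concretely: on Android, a TTS engine's
> output is assigned to one of a handful of system-wide audio streams,
> and the documented default (TextToSpeech.Engine.DEFAULT_STREAM) is the
> same STREAM_MUSIC stream used for media playback, so the media volume
> control governs both. A minimal sketch using the older speak()
> signature, where "tts" is assumed to be an already-initialized
> android.speech.tts.TextToSpeech instance:
>
>     import android.media.AudioManager;
>     import android.speech.tts.TextToSpeech;
>     import java.util.HashMap;
>
>     // Choose the stream TTS output plays on; STREAM_MUSIC is the
>     // default, which is why speech volume cannot be managed
>     // separately from music and movie audio.
>     HashMap<String, String> params = new HashMap<String, String>();
>     params.put(TextToSpeech.Engine.KEY_PARAM_STREAM,
>                String.valueOf(AudioManager.STREAM_MUSIC));
>     tts.speak("Reading the screen", TextToSpeech.QUEUE_FLUSH, params);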
>
> THE SOLUTION:   With today's more powerful, multi-core CPUs, and with
> today's independent audio management subsystems, it's perfectly
> reasonable to request a better architecture for TTS-dependent user
> interfaces. The TTS used by screen readers should be managed in its own
> audio channel until the final mixdown for speaker/headset presentation.
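>
> Android's AudioAttributes API (API level 21) already gestures in this
> direction: a TTS engine can tag its output with the
> USAGE_ASSISTANCE_ACCESSIBILITY usage, giving the platform's audio
> policy the information it would need to route and prioritize screen
> reader speech separately from presentational media. A minimal sketch,
> again assuming an already-initialized TextToSpeech instance "tts":
>
>     import android.media.AudioAttributes;
>     import android.speech.tts.TextToSpeech;
>
>     // Tag TTS output as accessibility speech so the audio policy
>     // can, in principle, treat it as its own channel until the
>     // final mixdown.
>     AudioAttributes ttsAttributes = new AudioAttributes.Builder()
>             .setUsage(AudioAttributes.USAGE_ASSISTANCE_ACCESSIBILITY)
>             .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
>             .build();
>     tts.setAudioAttributes(ttsAttributes);
>
> Whether the platform then honors that tag with independent volume,
> routing, and prioritization is exactly the architectural change being
> requested here.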
>
> Thank you, David, for conveying this concern to the Mobile A11y TF on my
> behalf.
>
> Janina
>
>
>
> --
>
> Janina Sajka,   Phone: +1.443.300.2200
> sip:janina@asterisk.rednote.net
>                 Email: janina@rednote.net
>
> Linux Foundation Fellow
> Executive Chair, Accessibility Workgroup: http://a11y.org
>
> The World Wide Web Consortium (W3C), Web Accessibility Initiative (WAI)
> Chair,  Protocols & Formats http://www.w3.org/wai/pf
>
>
>


--
Patrick H. Lauke

www.splintered.co.uk | https://github.com/patrickhlauke
http://flickr.com/photos/redux/ | http://redux.deviantart.com
twitter: @patrick_h_lauke | skype: patrick_h_lauke

Received on Tuesday, 6 October 2015 17:18:15 UTC