
Re: Changing voices in screen readers

From: Rodrigo Prestes Machado <rodrigo.prestes@gmail.com>
Date: Sat, 16 May 2015 21:09:20 -0300
Message-Id: <395D4713-B79E-4924-8AFA-EAD83025FD68@gmail.com>
To: w3c-wai-ig@w3.org

Thank you, Chaals and Phill!

I understood that JAWS can use different voices, for example to differentiate an operating system menu from HTML content. However, I was thinking that changing the voice of live regions (a notification system) could be a useful feature for some users. For example, in a text editor such as Google Docs there is a constant need for the user to perceive changes; if JAWS enabled different voices for live regions, it could create a different usability approach. Is it a bad idea?
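For context, a live region is just an element marked with `aria-live`; a minimal sketch (element ids and the helper function are illustrative, not from any particular product):

```html
<!-- Status area announced by screen readers without moving focus.
     aria-live="polite" waits for a pause in speech;
     aria-live="assertive" interrupts the current announcement. -->
<div id="status" role="status" aria-live="polite"></div>

<script>
  // Illustrative helper: writing text into the live region is what
  // triggers the screen reader announcement. Note there is no markup
  // attribute for requesting a *different voice* -- that choice stays
  // with the AT, which is the point under discussion.
  function notify(message) {
    document.getElementById('status').textContent = message;
  }
  notify('Another editor joined the document.');
</script>
```

As the markup shows, the author can only mark the region's politeness level; which voice reads it is left entirely to the screen reader's settings.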

> Em 13/05/2015, à(s) 10:56, Phill Jenkins <pjenkins@us.ibm.com> escreveu:
> yes, 
> you can try multiple voices with JAWS (screen reader) and MAGic (screen magnifier with speech), even in "40-minute demo mode". 
> see http://doccenter.freedomscientific.com/doccenter2/doccenter/rs25c51746a0cc/voiceprofiles/02_voiceprofiles.htm 
> For example, one can change the voice in context, such as for any of the following: 
> the Adjust combo box. It defaults to All Contexts, but here you can also choose any of the following voices to change:
> PC Cursor Voice
> JAWS Cursor Voice
> Keyboard Voice
> Tutor and Message Voice
> Menu and Dialog Voice
> So if one changed the settings for the voice for Menu and Dialogs, they would "sound different" from regular text on the page/app.  I've actually done this when training sighted users to use JAWS as a test tool, so they "hear" a different voice for different controls and labels vs. regular text; a kind of audio styling to match the visual styling.  As you can imagine, however, the JAWS voice still sounds odd to a first-time listener, so we emphasize the visible lists of links and controls that JAWS displays. 
> Having said all this, I still believe that 'voice settings' are generally the assistive technology's (AT's) and end user's responsibility, and NOT the web author's.  Remember, we're not creating a one-size-fits-all audio version of a web site; authors and developers are *enabling* non-visual access to a web app.  Enabling is both a design and coding practice.  Novice screen reader users will "turn on" all the verbosity so they can "learn", while power screen reader users will turn off most of the verbosity settings.  Quality AT like JAWS and MAGic allow the end user (and sighted testers) to create "profiles" that they can switch between in different contexts.  
> ____________________________________________
> Regards,
> Phill Jenkins, 
> IBM Accessibility, created 
> 'IBM Screen Reader for DOS' 
> 'IBM Screen Reader for OS/2' 
> 'IBM Home Page Reader'
Received on Sunday, 17 May 2015 00:09:54 UTC
