
RE: Changing voices in screen readers Re: Múltiplas vozes em leitores de telas [Multiple voices in screen readers]

From: Léonie Watson <lwatson@paciellogroup.com>
Date: Sun, 12 Jul 2015 19:32:42 +0100
To: <chaals@yandex-team.ru>, "'Rodrigo Prestes Machado'" <rodrigo.prestes@gmail.com>, <w3c-wai-ig@w3.org>
Message-ID: <006101d0bcd1$24d43c60$6e7cb520$@paciellogroup.com>
On 13 May 2015 14:01, Chaals wrote:

“There is aural CSS: styles specifically meant for speech. But they have not been properly implemented anywhere. A generic proof of concept could:


In a browser extension:

1. Look for relevant properties in stylesheets or style attributes

2. Add the rules as instructions in data-something attributes, as described above.


In a JAWS script:

Look for the relevant data-attributes, and change speech parameters accordingly. Including changing back as you leave elements…


To do this properly would mean calculating the cascade rules, which means you have to calculate the specificity of all the rules you find.

document.querySelectorAll() would at least help in translating the text from stylesheets into attributes, since you don't have to do the selector matching yourself.


So doing this moderately effectively would be a mess, and perform very badly.”
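The browser-extension half of the idea can be sketched roughly like this. It is a hedged illustration, not a working extension: the property names come from the CSS Speech module drafts, but the parsing, the data-* naming scheme, and the restriction to inline style attributes (ignoring the cascade entirely, as Chaals notes) are assumptions made for brevity.

```javascript
// Illustrative sketch only: find aural declarations in style attributes
// and mirror them into data-* attributes that a JAWS script could read.
// This deliberately skips stylesheets and the cascade/specificity work
// the quoted message says a real implementation would need.

// Property names from the CSS Speech module drafts.
const AURAL_PROPS = ['voice-family', 'voice-pitch', 'voice-rate', 'voice-volume'];

// Pull aural declarations out of a raw style string, e.g.
// "color: red; voice-pitch: high" -> { 'voice-pitch': 'high' }
function extractAuralDeclarations(styleText) {
  const found = {};
  for (const decl of styleText.split(';')) {
    const [prop, ...rest] = decl.split(':');
    if (!prop || rest.length === 0) continue;
    const name = prop.trim().toLowerCase();
    if (AURAL_PROPS.includes(name)) {
      found[name] = rest.join(':').trim();
    }
  }
  return found;
}

// Browser-only step: copy each element's aural declarations into
// data-* attributes (data-voice-pitch, data-voice-rate, ...).
if (typeof document !== 'undefined') {
  for (const el of document.querySelectorAll('[style]')) {
    const aural = extractAuralDeclarations(el.getAttribute('style'));
    for (const [prop, value] of Object.entries(aural)) {
      // 'voice-pitch' -> dataset key 'voicePitch' -> attribute data-voice-pitch
      el.dataset[prop.replace(/-([a-z])/g, (_, c) => c.toUpperCase())] = value;
    }
  }
}
```

A JAWS script would then watch for those data- attributes as the virtual cursor moves, adjusting and restoring speech parameters on entry and exit.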


It would. In the meantime, here is a thing that uses the Web Speech API to simulate CSS Speech support. It isn’t pretty, but hopefully it helps demonstrate how a screen reader might respond to author-defined aural styles:
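The core of such a simulation is a mapping from CSS Speech keywords onto the numeric parameters of a SpeechSynthesisUtterance. A minimal sketch follows; the keywords are from the CSS Speech drafts, but the specific numbers chosen for each keyword, and the `speakElement` helper, are assumptions for illustration.

```javascript
// Assumed keyword-to-number mappings: CSS Speech defines the keywords,
// not these exact values. Web Speech rate/pitch default to 1.
const RATE_MAP  = { 'x-slow': 0.5, 'slow': 0.75, 'medium': 1, 'fast': 1.5, 'x-fast': 2 };
const PITCH_MAP = { 'x-low': 0.5, 'low': 0.75, 'medium': 1, 'high': 1.5, 'x-high': 2 };

// Translate aural style values into Web Speech utterance parameters,
// falling back to the defaults when no value is given.
function utteranceParams(auralStyle) {
  return {
    rate: RATE_MAP[auralStyle['voice-rate']] ?? 1,
    pitch: PITCH_MAP[auralStyle['voice-pitch']] ?? 1,
  };
}

// Browser-only: speak an element's text using the data-* aural hints
// (e.g. data-voice-rate="fast") produced by the extension step.
if (typeof window !== 'undefined' && 'speechSynthesis' in window) {
  const speakElement = (el) => {
    const utterance = new SpeechSynthesisUtterance(el.textContent);
    const params = utteranceParams({
      'voice-rate': el.dataset.voiceRate,
      'voice-pitch': el.dataset.voicePitch,
    });
    utterance.rate = params.rate;
    utterance.pitch = params.pitch;
    window.speechSynthesis.speak(utterance);
  };
}
```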



“Hacking the relevant voice properties into the style processors of a browser, so you could use getComputedStyle directly, would reduce the cost somewhat.”


It would help enormously. The demos mentioned in the above article resort to putting the style definitions inline and pulling out the values as strings. That’s a horrible way to do things for lots of reasons.
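The contrast is easy to show. Without engine support the demo has to string-scrape the inline style attribute; with the voice properties hacked into the style processor, one getComputedStyle call would replace all of it. The helper below is a hypothetical sketch of the string-scraping approach, not code from the demos mentioned above.

```javascript
// The horrible way: pull an aural value out of a raw inline style
// string by hand. No cascade, no inheritance, no shorthand handling.
function voiceValueFromStyleString(styleText, prop) {
  const match = styleText.match(new RegExp(prop + '\\s*:\\s*([^;]+)'));
  return match ? match[1].trim() : null;
}

// Browser-only comparison. Today, getComputedStyle() returns an empty
// string for unrecognised properties, which is exactly the problem:
if (typeof document !== 'undefined') {
  const el = document.querySelector('p');
  const scraped  = voiceValueFromStyleString(el.getAttribute('style') || '', 'voice-pitch');
  // If engines parsed the voice properties, the cascade would already
  // be resolved and this one call would do everything above:
  const computed = getComputedStyle(el).getPropertyValue('voice-pitch');
}
```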





Received on Sunday, 12 July 2015 18:33:06 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 20:36:53 UTC