
Re: Using ARIA to control screen readers

From: Jamal Mazrui <empower@smart.net>
Date: Thu, 11 Nov 2010 06:25:59 -0500
Message-ID: <4CDBD2C7.9020609@smart.net>
To: Ian Sharpe <iansharpe@manx.net>
CC: w3c-wai-ig@w3.org
I agree that would be useful, Ian, but don't know how that could be 
done.  If you find a way, please let us know.


On 11/11/2010 4:06 AM, Ian Sharpe wrote:
> Hi
> I want to provide additional contextual information about the HTML
> element currently being voiced by a screen reader if the user presses a
> key as it reads through the content on a page. For example, the page
> might contain a list of used cars for sale and as this list is being
> read by a screen reader, I want the user to be able to press a key, say
> 'd', and have the screen reader read a description of that particular
> car before continuing reading through the list.
> I know I could simply include the description in the visible content and
> the screen reader would read this out, but there may be many cars in the
> list and I do not want the user to have to keep skipping the descriptions
> of cars they may not be interested in.
> I do not know of any way of finding out which element the screen reader
> is currently reading at the time the user hits a key. I would be very
> interested to hear if anyone thinks this may be possible and how to
> achieve it.
> I have been looking at ARIA and thought it may be possible to loop
> through the elements on the page and update the content of an aria-live
> region as it progresses. But this wouldn't wait until the screen reader
> had finished reading the content of the live region before updating it
> with the new content and you would probably only hear the first and last
> elements read aloud.
> I believe that some screen readers may focus the element being spoken in
> certain modes which could then be used to determine the element being
> spoken but suspect this will not work for all screen readers and may
> require the user to switch to a particular reading mode that moves focus
> with speech.
> It would be straightforward to simply require the user to press a key
> to move to the next element, update a live region at the same time,
> and leave the control to the user, but this would require the user to
> manually press a key to move through the list rather than simply sit
> back, listen, and only interact when they want to know more. It's a
> minor inconvenience and I suspect wouldn't be a big issue for most
> screen reader users, but thought I'd ask anyway. More generally, being
> able to do something based on what is being read, as it is being read,
> would seem like a useful thing to do. Maybe this is something more
> for AT though.
> I'm currently using NVDA to test this concept if it makes any difference.
> Thanks in advance.
> Cheers
> Ian
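A minimal sketch of the 'd'-key idea Ian describes, assuming each list item is focusable (tabindex="0") and carries a data-description attribute, with a polite aria-live region on the page; the element ids and attribute names here are illustrative assumptions, not part of any standard. As Ian notes, this only helps when the screen reader's reading position coincides with DOM focus, which not all reading modes guarantee.

```javascript
// Pure helper: build the announcement text for one car record.
// Falls back to a stock message when no description is available.
function announcementFor(car) {
  return car.description
    ? car.make + " " + car.model + ": " + car.description
    : car.make + " " + car.model + ": no description available";
}

// DOM wiring (runs only in a browser).
if (typeof document !== "undefined") {
  // Assumed markup: <div id="car-live-region" aria-live="polite"></div>
  var liveRegion = document.getElementById("car-live-region");
  document.addEventListener("keydown", function (e) {
    if (e.key !== "d") return;
    // Assumes the item being read has DOM focus, so
    // document.activeElement points at it, and that it carries
    // a data-description attribute.
    var item = document.activeElement;
    if (item && item.dataset && item.dataset.description) {
      // Writing into the polite live region queues the description;
      // the screen reader speaks it when it next pauses.
      liveRegion.textContent = item.dataset.description;
    }
  });
}
```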
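The manual-advance fallback Ian mentions could be sketched as below, assuming a list with id "car-list" and a polite live region (both names are assumptions for illustration): the user presses a key to step to the next car, and the live region is updated at the same time. With aria-live="polite", each announcement waits until the screen reader finishes what it is currently speaking, avoiding the overwrite problem Ian describes with a timed loop.

```javascript
// Pure helper: advance an index through a list without wrapping.
function nextIndex(current, length) {
  return current + 1 < length ? current + 1 : current;
}

if (typeof document !== "undefined") {
  // Assumed markup: <ul id="car-list">…</ul> and
  // <div id="car-live-region" aria-live="polite"></div>
  var items = document.querySelectorAll("#car-list li");
  var live = document.getElementById("car-live-region");
  var pos = -1; // nothing announced yet

  document.addEventListener("keydown", function (e) {
    if (e.key !== "j") return; // 'j' = next item; an arbitrary choice
    pos = nextIndex(pos, items.length);
    if (pos >= 0 && pos < items.length) {
      // The polite live region queues this text behind any speech
      // already in progress rather than interrupting it.
      live.textContent = items[pos].textContent;
    }
  });
}
```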
Received on Thursday, 11 November 2010 11:26:59 UTC
