- From: Joanmarie Diggs <jdiggs@igalia.com>
- Date: Wed, 11 Nov 2015 09:29:34 -0500
- To: Michiel Bijl <michiel@agosto.nl>
- Cc: www-style@w3.org, W3C WAI Protocols & Formats <public-pfwg@w3.org>
Hi Michiel.

On 11/11/2015 04:57 AM, Michiel Bijl wrote:
> I don’t understand the question regarding VoiceOver in the mail you link
> to. Could you explain? Maybe I can test it.

[...]

>> [1] https://lists.w3.org/Archives/Public/public-pfwg/2014Nov/0094.html

Does VoiceOver have commands to read rendered text by unit (character, word, line)? If so, what happens when you use those commands to read text with generated content? In the case of line, is the line spoken the same as the line rendered? Or does VoiceOver, like at least some Windows screen readers, have its own definition of "line" (e.g. 125-character slices of the text within the element)?

Some background: From what I have seen, generated content IS exposed to ATs when you ask for the entire text of the element. BUT it fails to be exposed on some platforms if you ask for a specific unit (line, word, character) at a given offset. As a result of that failure, ATs that navigate based on the layout of the rendered text (e.g. so that the screen reader's next/previous-line commands read the same, full line displayed to sighted users) might not be able to retrieve that text by asking their platform's accessibility API for text by a specified unit. Instead, they would have to suspect that the API did not give them all the text and then implement workarounds to figure out what the actual line is.

HTH.
--joanie
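For what it's worth, a toy sketch of the mismatch I mean between the two notions of "line" (this is illustrative only, not any screen reader's actual code; the 30-character width is just to keep the example short, standing in for the 125-character slicing):

```python
def fixed_width_lines(text, width=125):
    """Split flattened text into fixed-width slices, ignoring layout.

    This mimics an AT that defines "line" as N-character slices of the
    element's text rather than the lines the renderer actually laid out.
    """
    flat = " ".join(text.split())  # collapse whitespace, discard layout
    return [flat[i:i + width] for i in range(0, len(flat), width)]

# Hypothetical rendered lines; the first could come from CSS generated
# content (::before), the second from the element's own text.
rendered_lines = [
    "Some generated content: ",
    "the element's own text, wrapped by the renderer.",
]

# The AT's fixed-width "lines" need not match the rendered lines at all:
print(rendered_lines)
print(fixed_width_lines("".join(rendered_lines), width=30))
```

If the platform API additionally drops the generated content when asked for a single line at an offset, the AT's slices are computed over incomplete text, which is the failure described above.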
Received on Wednesday, 11 November 2015 14:31:42 UTC