- From: Joshue O'Connor <joconnor@w3.org>
- Date: Tue, 2 Feb 2021 22:01:52 +0000
- To: Katie Haritos-Shea <ryladog@gmail.com>
- Cc: Andrew Arch <andrew@intopia.digital>, "kim@redstartsystems.com" <kim@redstartsystems.com>, "Bakken, Brent" <Brent.Bakken@pearson.com>, Shawn Henry <shawn@w3.org>, "EOWG (E-mail)" <w3c-wai-eo@w3.org>, WAI Coordination Call <public-wai-cc@w3.org>
Katie Haritos-Shea wrote on 02/02/2021 21:50:
> Yeah, but what about non-accessibility related speech recognition
> software, such as what is part of SIRI, Cortana, etc. and any voice
> enabled UI - who use both of the systems that were originally AT -
> voice recognition and screen reading speech software? How do we
> differentiate there?

That's a good question, Katie. I also think there are, and will be, grey areas where the distinction may not be so clear. The definition of what AT is will also change as these technologies become more ubiquitous, so discussions like this are good for teasing these things out.

I think one distinction is clear, back to Shawn's original point about identifying the speaker. That, to me, is a discrete thing. Then there may be differences between speech-to-text for a11y and non-(explicitly) a11y related applications that may also have a11y uses (such as SIRI and Cortana).

HTH

Josh

--
Emerging Web Technology Specialist/Accessibility (WAI/W3C)
Received on Tuesday, 2 February 2021 22:01:58 UTC