Speech can be input or output. Talking to a computer is speech input;
screen readers are speech output. The computer is doing the recognizing,
so speech recognition is the technology that enables speech input.
I don't think there's a reason to call speech input used for a11y
anything different from speech input used as a preference. It's all best
described as speech input.
Intelligent agents can use speech or text input and speech or text
output, or a mix.
Cheers,
Kim
On 2/2/2021 5:01 PM, Joshue O'Connor wrote:
> Katie Haritos-Shea wrote on 02/02/2021 21:50:
>> Yeah, but what about non-accessibility related speech recognition
>> software, such as what is part of SIRI, Cortana, etc. and any voice
>> enabled UI - who use both of the systems that were originally AT -
>> voice recognition and screen reading speech software? How do we
>> differentiate there?
> That's a good question Katie. I also think there are/will be grey
> areas where the distinction may not be so clear. The definition of
> what AT is will also change as they become more ubiquitous.
>
> So discussions like this are good to tease these things out. I think
> one distinction is clear, back to Shawn's original point about
> identifying the speaker. That to me is a discrete thing.
>
> Then there may be differences between speech to text for a11y and
> non-(explicitly) a11y-related applications that may also have a11y
> uses (such as SIRI and Cortana).
>
> HTH
>
> Josh
>
--
___________________________________________________
Kimberly Patch
(617) 325-3966
kim@scriven.com
www.redstartsystems.com
- making speech fly
PatchonTech.com
@PatchonTech
www.linkedin.com/in/kimpatch
___________________________________________________