Re: [EXTERNAL] Natural language interfaces and conversational agents

I've been sitting back and reading this conversation, and I appreciate
the good progress in scoping potential work.

I want to play the devil's advocate a bit.

I'm unconvinced it's helpful to say the universe of natural language
interfaces needs to be accessible to all people with disabilities,
because some of the edge cases are, imo, too cumbersome to pursue when
alternative technologies exist that those users might find far more
attractive. This is the concept known in U.S. disability regulation as
"equivalent facilitation."

If one can achieve the same functionality in a more usable interface,
why would we insist they should work harder to do so in natural
language?

Seems to me the key function of natural language interfaces is not the
language, but the hands-off nature of the interaction. One is able to
work with the technology without recourse to keyboard or mouse, and able
to do so while one's hands may be otherwise occupied. Thus, one is able
to work even from some physical distance. To my mind this explains
the recent success of these agents, and it explains why the same general
idea went nowhere in earlier days--think of the old MIT natural language
phenomenon ELIZA.

I suggest we need to find a way to build suitability for purpose into
our scoping somehow.



White, Jason J writes:
> Thank you, Josh, for your thoughtful commentary. I think everyone agrees there are challenging scope boundaries in this area that we haven’t yet resolved. A good example of the problem that I’ve read in the research literature is as follows.
> Consider a navigation application with a natural language interface and a graphically displayed map. The user points to a place on the map and says “send the ambulance here” to the voice agent. It’s the combination of the utterance and the pointing that determines what the user is referring to, but the pointing gesture and the map aren’t strictly speaking part of the natural language aspect of the design.
> From: Joshue O'Connor <>
> Sent: Friday, 5 March 2021 9:30
> To: White, Jason J <>
> Cc: John Paton <>; Scott Hollier <>;
> Subject: Re: [EXTERNAL] Natural language interfaces and conversational agents
> Hi Jason and all,
> White, Jason J wrote on 05/03/2021 13:45:
> •  Focus our work on the accessibility of the natural language interaction itself. As far as I know, no one has documented the accessibility requirements for it elsewhere.
> •  Refer to other guidance (WCAG, XAUR, RAUR, etc.) for the accessibility of other aspects of the user interface.
> •  Note that natural language interaction can occur as part of a larger interface and that the whole interface needs to be accessible.
> +1 from me, with qualifying comments to signal to you all my (ever) shifting perspective on this. As I commented in a private mail to Jason, the situation we are in regarding scope and its various challenges can be broadly broken into:
> 1) The I/O aspect
> 2) The service (or agent) behind it
> There are also options on these approaches/perspectives, on these aspects, which are 'narrow' - focusing initially on Speech/Voice User Interfaces only - or much broader. My two cents are that starting from the narrow perspective would give us a basis to add other modalities later on, but there is push back on that, which I also appreciate and understand. If we were to take the broader approach and try to widen the scope, we can get into very muddy and indistinct water super quickly. For the broader scope approach, my current thinking is that we may avoid confusion, mixing streams, etc. if we took up the idea of 'Natural Language Interface Accessibility User Requirements'. Thinking of Michael's sensible suggestion to have clearly defined terms, this one is my fave, as it is already well defined and isn't just a marketing term. I prefer this to Smart Agents, which potentially pushes us into a sea of IoT and related services. On one level this may not be a bad thing, but we don't have infinite time either.
> To me, if we want to realise a user requirements document with a broader scope - this really nicely covers the need for a multi-modal, device-independent descriptor for the I/O side, and we can add a strapline or <h2> etc. saying 'Accessibility infrastructure and supporting services' or similar. I'm thinking this would allow us to cover VUIs and other I/O modalities for other groups - the kind of things that Jason refers to as 'Conversational' etc. - as well as look at the services behind them.
> This is really helpful Jason, and please let's continue to discuss these options, and indeed any more we may be missing. If we were to go down the broader road, then I find this terminology the most suitable nomenclature that I've seen yet.
> Josh
> --
> Emerging Web Technology Specialist/Accessibility (WAI/W3C)


Janina Sajka

Linux Foundation Fellow
Executive Chair, Accessibility Workgroup:

The World Wide Web Consortium (W3C), Web Accessibility Initiative (WAI)
Co-Chair, Accessible Platform Architectures

Received on Friday, 5 March 2021 15:15:08 UTC