W3C

– DRAFT –
ARIA and Assistive Technologies Community Group (AT Automation)

11 April 2022

Attendees

Present
-
Regrets
-
Chair
-
Scribe
s3ththompson

Meeting minutes

<zcorpan> https://github.com/w3c/aria-at-automation/issues/18

Summary from previous working session

zcorpan: we talked about a "headless" mode for screen readers, which could in theory allow running a second instance of a screen reader for testing while keeping one installed to vocalize outputs. We also discussed the idea of "sessions". Someone pointed out the need to automatically download a specific version of a screen reader from a script.

<zcorpan> https://github.com/w3c/aria-at-automation/issues/15

s3ththompson: as for meeting structure, we will have a monthly "Group Meeting" where we try to get all vendors to attend and discuss topics requiring broad consensus, followed by a "Working Session" two weeks later. The working session is open to all, but some vendors expressed that they only have capacity for a monthly meeting.

s3ththompson: Bocoup will set up a calendar event and document the meetings on the wiki page

Matt_King: I'd like to send explicit invites to certain people as well

s3ththompson: great, let's talk about that

API Standard Roadmap

zcorpan: Milestone 0 would be architecture, API shape, protocol

zcorpan: Milestone 1 would be settings

James Scholes: how do we handle abstraction, or cases where settings differ between screen readers?

Matt_King: I assume we'd have the ability to change arbitrary settings for each screen reader, along with additional mechanisms for changing shared settings

s3ththompson: we can also decide whether an abstraction should live at the level of the Standard API or in a higher-level library, like Playwright or something

James Scholes: so there would be an API surface for shared settings, an API surface for changing arbitrary settings, and an API for enumerating arbitrary settings
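The three API surfaces James Scholes describes could be sketched roughly as follows. This is a hypothetical illustration only, not the actual AT Automation API; every class, method, and setting name here is invented for the sketch.

```python
# Hypothetical sketch of the three settings surfaces discussed:
# shared settings, arbitrary per-screen-reader settings, and
# enumeration. None of these names come from the actual standard.

class ScreenReaderSession:
    # A small set of settings assumed to be common to all screen
    # readers (illustrative only).
    SHARED_SETTINGS = {"speech_rate", "punctuation_level"}

    def __init__(self, vendor_settings):
        # vendor_settings: the screen reader's full, vendor-specific
        # settings dictionary (e.g. loaded from the AT itself).
        self._settings = dict(vendor_settings)

    def set_shared_setting(self, name, value):
        # Surface 1: change a setting via the shared abstraction.
        if name not in self.SHARED_SETTINGS:
            raise KeyError(f"{name!r} is not a shared setting")
        self._settings[name] = value

    def set_setting(self, name, value):
        # Surface 2: change an arbitrary vendor-specific setting.
        self._settings[name] = value

    def list_settings(self):
        # Surface 3: enumerate every available setting.
        return sorted(self._settings)
```

A test harness could enumerate with `list_settings()` first, then prefer the shared surface and fall back to the arbitrary one for anything vendor-specific.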

Michael Fairchild: would vendors be implementing at the same time as we are developing the API?

Matt_King: we will be working on an implementation in NVDA alongside developing the API, and hoping other vendors work on implementations in parallel too

James Scholes: our implementation roadmap isn't exactly captured here, but it will be parallel with this as much as possible

zcorpan: Milestone 2 would be an API to capture spoken output without changing the TTS voice

Michael Fairchild: so would you be simulating key presses or sending commands via the API? how would that differ based on whether you were in headless mode or not?

James Scholes: we haven't completely defined headless mode yet, but I think the idea would be that you could isolate the input and output of the AT from other instances running

James Scholes: in general, we might optimize for running in the cloud anyways, where there aren't necessarily multiple instances running

jugglinmike: perhaps the word "headless" isn't the best word here, since there's no UI involved... perhaps "process-bound" helps us understand our goals here better

Matt_King: I like "process-bound". it communicates very clearly what we want

James Scholes: it's possible that screen reader vendors say: this is a great idea, but tell Microsoft to go implement it...

mzgoddard: perhaps we should just focus on articulating these use cases with the vendors as early as possible. they might have just not had pressing use cases to motivate this kind of thing before

zcorpan: feel free to read and comment on the longer list of Milestones on the issue

zcorpan: in the remaining time, let's jump to Milestone 4

zcorpan: Milestone 4 is a list of commands for specific behaviors that we might want to implement

James Scholes: this was a confusing list to read because some commands are missing their directional counterpart... and some we didn't recognize. maybe we should base this list on the published docs from each screen reader rather than parsing output from other tools

Matt_King: how much would ARIA-AT use the API commands vs. the keyboard presses?

James Scholes: we could deduplicate a fair amount of AT-specific test writing

<zcorpan> https://a11ysupport.io/learn/commands

zcorpan: in general, Milestone 4 is more in the realm of the web developer audience than ARIA-AT... we care more about the actual user inputs matching keypresses, and we don't necessarily trust abstract APIs to match reality...

James Scholes: or someone could write a utility library that maps "next heading" to the right keypress
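Such a utility library might look like the sketch below. The key bindings shown are illustrative placeholders chosen for this example, not an authoritative mapping for any screen reader, and the function names are invented.

```python
# Hypothetical mapping from an abstract command to per-screen-reader
# key presses. The bindings are placeholders for illustration, not
# verified against any vendor's published documentation.

KEYMAP = {
    "nvda": {
        "next heading": "h",
        "previous heading": "shift+h",
    },
    "voiceover": {
        "next heading": "VO+command+h",
    },
}

def keys_for(screen_reader, command):
    # Resolve an abstract command like "next heading" to the key
    # press a given screen reader expects, or raise if no binding
    # is known.
    try:
        return KEYMAP[screen_reader][command]
    except KeyError:
        raise LookupError(
            f"no binding for {command!r} on {screen_reader!r}"
        )
```

A test runner could then author commands once in the abstract vocabulary and let the library translate them per AT, which is the deduplication of AT-specific test writing mentioned above.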

Minutes manually created (not a transcript), formatted by scribe.perl version 185 (Thu Dec 2 18:51:55 2021 UTC).

Diagnostics

Maybe present: jugglinmike, Matt_King, mzgoddard, s3ththompson, zcorpan