MINUTES: ARIA-AT call, Thursday 2020-12-03

Link to minutes:

https://www.w3.org/2020/12/03-aria-at-minutes.html





[W3C]<http://www.w3.org/>
- DRAFT -
ARIA and Assistive Technology Community Group
03 Dec 2020
Attendees
Present
Jemma, Juliette_McShane, Matt_King_, jesdaigle, jongund, michael_fairchild, rob_fentress, s3ththompson, westont, zcorpan_, boazsender
Regrets
Chair
Matt King
Scribe
jongund
Contents

  *   Topics<https://www.w3.org/2020/12/03-aria-at-minutes.html#agenda>

     *   App workstream update<https://www.w3.org/2020/12/03-aria-at-minutes.html#item01>
     *   Automation workstream<https://www.w3.org/2020/12/03-aria-at-minutes.html#item02>
     *   Test writing<https://www.w3.org/2020/12/03-aria-at-minutes.html#item03>
     *   Issue 337<https://www.w3.org/2020/12/03-aria-at-minutes.html#item04>

  *   Summary of Action Items<https://www.w3.org/2020/12/03-aria-at-minutes.html#ActionSummary>
  *   Summary of Resolutions<https://www.w3.org/2020/12/03-aria-at-minutes.html#ResolutionSummary>

________________________________
<scribe> scribe: jongund
MK: Reviewing open pull requests
... Community group project board, I have not done anything yet
... Some issues from last week
... Standard work stream updates
... Is Seth on?
ST: Yeah folks
MK: What do we need to cover today?
... Updates on each of the work streams, what are the priorities
ST: That may need some discussion; I added the deep dive call discussion and two action items to discuss
... Title versus the task
... We should give a summary to the group
MK: A summary of where we stand, and test writing updates from Sina
... App work stream update
... Update on the automation
... Then test writing
ST: Issue 337
SP: 349 is about automation and test format
App workstream update
MK: Several things going on in the app space
ST: Quick update
... Ezak is doing some usability testing on the interface
... The engineering team is getting the design right for the report page
... We need to spend more time on the structure of the table, working with Sina et al.
... Looking for a nice simple implementation soon
MK: For this year, the reports page is where we have outstanding work?
ST: That's correct, and we will spend some time fixing highest priority usability issues
MK: We will not have any time as a group to talk about the issues
... Usability testing results next week?
ST: yes
MK: We can talk about that on the 17th, but the issues may be resolved by then
... What is the schedule for work as we approach the holidays? We will meet on the 17th; how much work is there after the 17th?
ST: 6 days of work
MK: There will be a little bit of time for anything discussed on the 17th
ST: Some time before the end of the year other things may need to wait until next year
MK: Any other questions?
Automation workstream
MK: Simon, you have a lot of progress
SP: Proposed a test format for instructions and assertions, implemented that in NVDA
... Working in NVDA, here is a link
<zcorpan_> https://docs.google.com/document/d/1YxHAf6r3E2RG4REaYcmM14DuxfBgP4XoWRdfSk5aAQM/edit

SP: Issue 349 describes this test format; some questions
...
https://github.com/w3c/aria-at/issues/349

<zcorpan_> open questions: https://github.com/w3c/aria-at/issues/349#issuecomment-737902356

MK: Should we talk about this now or wait until 337?
SP: People should look at the format and give feedback
MK: What does it feel like running... how do you feed this test file to NVDA?
ST: It essentially opens Chrome and then a driver intercepts NVDA to get the text strings
... You run it from the command line; you see things happening in Chrome, but the speech is being intercepted
... A server could be used to collect results and generate reports
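For illustration, a minimal sketch of what a single test in this kind of format might look like, written as a Python structure (all field names here are hypothetical; the actual format is the one described in issue 349 and the linked document):

    # Hypothetical sketch of one test; field names are illustrative only,
    # not the format defined in issue 349.
    test = {
        "task": "Navigate to a checkbox in reading mode",
        "setup_page": "tests/checkbox/reading-mode.html",
        "commands": ["X", "Shift+X"],  # key presses sent to the screen reader
        "assertions": [
            "Role 'checkbox' is conveyed",
            "Name 'Lettuce' is conveyed",
            "State 'not checked' is conveyed",
        ],
    }

The harness described above would load the setup page in Chrome, send each command, capture the intercepted speech, and check it against the assertions.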
MK: I also wonder what you think about whether we should try to do this with another screen reader, to see what the differences and constraints may be
SP: Yes maybe
SB: Is this dependent on NVDA, or is it just a speech driver?
SP: Not sure
SB: We could write a SAPI driver, then it would work for NVDA and JAWS
... It could be screen reader agnostic
... You could even capture the audio
JS: If you want to tie in some annotations in the result, something string based would be agnostic
<Jemma> SAPI: Speech Application Programming Interface
MK: The reason I asked that question is that it may affect at what level we want to abstract
... whether this driver can drive any screen reader, or there is an implementation for each screen reader
SB: It will not work on macOS
MK: The test format should be independent of the screen reader, but what does a driver look like?
SP: The test format can be the same across all screen readers...
JS: We should do an abstraction: if it's JAWS do this, if it is VoiceOver do this
... If we want "next item", we then need a central command reference that is screen reader specific
MK: That's a different type of abstraction than what I was thinking about
... Back to this basic question about what we do right now: before we nail down too many things, should we repeat with another screen reader?
SP: We can make decisions on the information we have now
... We may need to revisit with other screen readers
MF: I have done some command abstraction, and I can share
SP: I am interested
<michael_fairchild> https://github.com/accessibilitysupported/a11ysupport.io/blob/master/data/ATBrowsers.json rendered here https://a11ysupport.io/learn/commands
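As a rough illustration of this kind of command abstraction, a small Python mapping from abstract commands to per-screen-reader key sequences (the command names and key values below are made up, loosely modeled on the a11ysupport.io data linked above):

    # Hypothetical mapping from abstract commands to AT-specific key presses.
    COMMANDS = {
        "readNextItem": {
            "nvda": ["DownArrow"],
            "jaws": ["DownArrow"],
            "voiceover": ["Ctrl+Option+RightArrow"],
        },
        "nextCheckbox": {
            "nvda": ["X"],
            "jaws": ["X"],
        },
    }

    def keys_for(command, screen_reader):
        """Resolve an abstract command to concrete key presses for one AT."""
        return COMMANDS[command][screen_reader]

With this shape a test can say "readNextItem" once and stay screen reader independent, which is the abstraction being discussed above.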

SP: Any other thoughts on this topic?
SB: We also want to monitor braille output in addition to speech
MF: This made me think
... Is it important that with SAPI we get the text, but the speech sometimes speaks things differently?
SB: For example 123 can be 1 2 3 or one hundred twenty three
JS: There are usability concerns
SB: We are asking if it is recorded, not is it recorded the way we like
<westont> For anyone curious, here's how NVDA "system" tests are emulating key presses https://github.com/nvaccess/nvda/blob/master/tests/system/libraries/SystemTestSpy/speechSpyGlobalPlugin.py#L218

MK: Our job is definitely not to get all synthesizers to do the same thing
... braille displays are separate products
SB: I was talking about the speech
<westont> A practicality consideration is that this NVDA plugin has a way of knowing when NVDA is "finished" processing the keyboard input. Without that internal information, it becomes quite hard to know when the SAPI plugin should stop listening, and assume nothing was announced
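A sketch of the fallback described above, assuming no internal "finished" signal is available: collect intercepted speech after a key press and, once a silence timeout elapses, conclude that nothing more will be announced (the timeout value and names below are hypothetical):

    import queue

    # Hypothetical capture loop: intercepted speech strings arrive on a queue
    # from the driver; SILENCE_TIMEOUT seconds without speech ends collection.
    SILENCE_TIMEOUT = 2.0

    def collect_speech(speech_queue):
        chunks = []
        while True:
            try:
                chunks.append(speech_queue.get(timeout=SILENCE_TIMEOUT))
            except queue.Empty:
                # Silence: assume the screen reader finished (or said nothing).
                return chunks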
MF: I am talking about human testing, if the speech is different from the text, e.g. something the synthesizer doesn't recognize
SB: There is some string replacement
JS: There are synthesizer dependent behaviors
<westont> Here's NVDA's symbol substitution dictionary https://github.com/nvaccess/nvda/blob/master/source/locale/en/symbols.dic
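To illustrate why this matters for comparisons, a toy Python normalization step that applies a substitution table to captured speech before matching (the table below is a made-up fragment, not NVDA's actual symbols.dic):

    # Made-up fragment of a symbol substitution table, applied before comparing
    # captured speech against expected text.
    SUBSTITUTIONS = {"&": "and", "%": "percent"}

    def normalize(speech):
        for symbol, word in SUBSTITUTIONS.items():
            speech = speech.replace(symbol, " " + word + " ")
        return " ".join(speech.split()).lower()

    assert normalize("Sales & Marketing") == normalize("sales and marketing")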

MK: Add the screen reader and it gets pretty complicated
SB: The answer is no
MK: I agree
Test writing
SB: We have been writing some tests
... Feedback is welcome
JS: We have 6 ready, there are 3 pull requests waiting
... 7 or 8 plans being written
MK: I was spending time on the select-only combobox today; it is easy to look at in the preview
... It is on github.io; open it in either form you like
... The hard part is making line by line comments
... I was going to write comments in the comment page
SB: Yes please, easier for screen reader users
MK: All the assertions are in the same row, so it is messy
SB: Another point: if you have a META issue, we can look at repeated issues
https://github.com/w3c/aria-at/pull/338

https://raw.githack.com/w3c/aria-at/tests/combobox-select-only/index.html

MK: This is some great progress
Issue 337
<zcorpan_> https://github.com/w3c/aria-at/issues/337

ST: A few overlapping issues in the test format
... Simon brought up the first issue
... We need to be more precise in our terminology
... Assertions can have more than one part
... Doing a string comparison and breaking things down into parts has issues with testing duplication and order, but it is better than full string matching
... Continue designing the automated test format, defining instructions...
... See the issue for more details
MK: The action there was specifically on the instruction piece
... It is about how we write the instructions in the test
... There is still some complication when the ...
JS: If we are telling someone to navigate to a grid and they press T, they only hear the size of the table; they need to press down arrow
... When we need sequential commands how do we represent this
SB: This can be abstracted for specific screen readers
JS: We don't want to miss a case, for example navigating into a table
MK: It is similar to combobox issues
... This is an example of the kind of thing we need to capture
JS: Even with the same screen reader there are inconsistencies
MK: We will see this in examples like the carousel
JS: It highlights when there are roles inside of roles
MK: There is a part of me that thinks sequential navigation should be multiple tests
SB: You will have a jagged matrix, that will be sparsely populated
MK: One of the things, Seth: did we come to an agreement that the whole string is what we are testing, that it satisfies all the assertions, and that there are no additional unexpected behaviors?
ST: If we don't look at substrings now, we might not be able to do it in the future
SP: We probably want specific assertions for role and accName, and also compare the entire output
MK: So we want to do both, we will always do the equality check, ...
... If the equality check passes, then everything should pass
MK: If it doesn't pass, the sub tests give an indication of why it didn't pass
SB: I got my role, I got my state; if an exact string doesn't match but it is still OK, we want to capture that
SP: The sub test could pass, but the whole string may not match
discussion of breaking examples...
SB: When doing this automatically we do not have to be human readable; the labels could be tokens rather than something humans understand
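A minimal sketch of the two-level check being discussed, with hypothetical helper names: compare the whole captured string first, then fall back to per-assertion checks to explain any mismatch:

    # Hypothetical two-level evaluation: exact match plus per-assertion results.
    def evaluate(captured, expected, assertion_tokens):
        return {
            "exact_match": captured == expected,
            # Each assertion passes if its token appears in the captured output.
            "assertions": {t: (t in captured) for t in assertion_tokens},
        }

    result = evaluate(
        captured="State combo box Apple collapsed",
        expected="State combo box Apple collapsed",
        assertion_tokens=["combo box", "Apple", "collapsed"],
    )
    # If exact_match is True, every assertion should also pass. If it is False,
    # the per-assertion results indicate which part of the output was missing,
    # and the tokens need not be human readable, per the point above.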
MK: We still have some difficult things for sequential testing, multiple tests
... Most tests will have one press, and you get the output for the test
... Then other tests have a sequence and then there might be speech along the way
SB: More information the second time; you need a way for people to skip over redundant content
MK: Something like start here and then end here to do the assertions
SP: You can use the press commands, the output is what you have after the last press
MK: Let's say you are doing sequential navigation, as soon as you navigate past that, then you start collection
... The other thing you should be able to do: press down arrow when on the combobox
... It is expected to take 2 down arrows, but maybe it took 3 presses
JS: It may also address changes to screen reader releases
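A sketch of how a sequential-navigation test might be represented, following Simon's point below that the asserted output is what you have after the last press (field names are hypothetical):

    # Hypothetical sequential test: several presses, with assertions applied
    # only to the speech captured after the final press.
    test = {
        "commands": ["T", "DownArrow"],  # jump to the table, then into it
        "assert_output_of": "last",      # ignore interim speech
        "assertions": ["row 1", "column 1", "Monday"],
        # A variant could allow extra presses, e.g. repeat DownArrow until the
        # expected role is announced, to tolerate screen reader differences.
    }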
SB: I was also thinking about the menubar example, with those three pieces of information for the assertions
MK: These are the outputs of the test, the things I want to test
SB: You want to do grouping
JS: The table is completely different
MK: Press down arrow until they get to the widget
JS: We have been doing that
SB: A boolean can be used for automated testing
MK: As long as the screen reader says the role
SP: I am struggling to understand
JS: For example you navigate to the table, so JAWS will read info about the table, then the down arrow to go to the first cell
Summary of Action Items
Summary of Resolutions
[End of minutes]
________________________________
Minutes manually created (not a transcript), formatted by David Booth's scribe.perl<http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm> version (CVS log<http://dev.w3.org/cvsweb/2002/scribe/>)
$Date: 2020/12/03 21:09:32 $
________________________________

Received on Thursday, 3 December 2020 21:15:05 UTC