[Conformance] Minutes from 28 January

Minutes from the Silver Conformance Options subgroup teleconference of
Thursday 28 January are provided here.

===========================================================
SUMMARY:
*  Continued discussion of use cases, auto-testing, and especially defect
   tolerance
===========================================================

Hypertext minutes available at:
https://www.w3.org/2021/01/28-silver-conf-minutes.html

===========================================================
   W3C

                                - DRAFT -
                   Silver Conformance Options Subgroup

28 January 2021

   IRC log.

Attendees

   Present
          bruce_bailey, Jemma, PeterKorn, sajkaj, sarahhorton, Wilco

   Regrets
          John_Northup

   Chair
          sajkaj

   Scribe
          sarahhorton

Contents

    1. Agenda Review & Administrative Items
    2. Use Cases Review & Continued Discussion

Meeting minutes

  Agenda Review & Administrative Items

  Use Cases Review & Continued Discussion

   <jeanne> https://docs.google.com/document/d/1GyUYTnZp0HIMdsKqCiISCSCvL0su692dnW34P81kbbw/

   Peter: Reads through use case

   Peter: Want to engage Detlev on the case study

   Janina: Started email with Detlev, he'll try to expand on it
   ... started draft
   ... can't you tell if there's a programmatically associated label, isn't that testable
   ... can automatically test that a label exists but don't know if labels are correct (see sketch below)
   ... provide guidance about what's in process or what's out of scope
   ... ratings are important to buying decisions, sharing information about products
   ... going to have to provide guidance on how to reach that decision
   ... obvious ones that are out, ones that are in, and gray area
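
   A minimal sketch of the kind of automated check being described here,
   assuming the DOM and an illustrative helper name (not from any cited
   tool): the presence of a programmatically associated label is
   machine-testable; whether the label text is correct is not.

      // Sketch: detect whether an input has a programmatically
      // associated label. Presence is machine-testable; whether the
      // label text is correct still needs human judgment.
      function hasProgrammaticLabel(input: HTMLInputElement): boolean {
        // aria-label / aria-labelledby
        if (input.hasAttribute("aria-label") ||
            input.hasAttribute("aria-labelledby")) {
          return true;
        }
        // <label for="..."> association
        if (input.id &&
            document.querySelector(`label[for="${input.id}"]`)) {
          return true;
        }
        // wrapping <label> element
        return input.closest("label") !== null;
      }

      // An automated audit can flag unlabeled inputs, but a human still
      // has to judge whether each label is the right one.
      const unlabeled = Array.from(document.querySelectorAll("input"))
        .filter((el) => !hasProgrammaticLabel(el));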

   Peter: Make comments to Detlev in doc to capture thinking, can't resolve without him here
   ... adds questions to Google doc

   <Wilco> +10

   Jeanne: What can be detected by test is a moving target, haven't thought of a way to codify that which is flexible

   Wilco: ACT Task Force decided to move away from automated vs. manual; it depends on context and how many assumptions you make, anything can be automated if you know the environment really well

   Peter: Wondering about after time has passed with introduction of code, testing for things, changes what's required to hit levels, don't know how to work that into the structure
   ... grace period, a reasonable amount of time from when techniques are adopted to when we expect they will be used

   Jeanne: Should only be thinking bronze level, bronze is regulatory
   ... new category of critical error, e.g., error identified by automated test
   ... error discovered by automated test, but then issue of false positives, complex sites where portions are not accessed by user, but trigger in automated audit, should add to use case list
   ... automated test, user accessible area, impact of accessibility, could be easily fixed

   Peter: Vendors developing tests, vendor A finds it, vendor B doesn't, need some notion of managing new technique, allowing some period of time

   Jeanne: Could include here with caveats, make note in document to add use case

   Janina: Working on response to issues in B, needs to think through different types of interactions

   Peter: Reads through #5
   ... break down use case into issues
   ... Reads through #6

   Bruce: Wants to keep conformance compatible with gov space, okay with use case

   Wilco: Use case is not telling, what is it trying to get at?

   Peter: Separate from critical error, must be fixed, separate from primary and secondary areas, will always encounter bugs that impact users, sometimes severe
   ... should accessibility be treated differently than issues that impact users generally
   ... e.g., train station, 2 entries, ticketing machine out affects everyone, elevator affects wheelchair users
   ... nothing of sufficient size/complexity will be free of bugs, if the standard says sites need to be free of severe bugs, will never have a site that passes bronze

   Jeanne: Giving range of what's acceptable, alt text, full rating if you meet X%

   Peter: One of two biggest concerns, need to figure it out

   Jeanne: Alt text, in critical flow and block user from accomplishing task
   ... if 95% passes and there are no critical errors it will pass
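
   A minimal sketch of the pass rule as described, assuming
   per-guideline results expressed as a pass percentage plus a count of
   critical errors; the type and function names are illustrative.

      // Sketch: a guideline passes if at least 95% of tested instances
      // pass AND no critical errors (errors that block a task in a
      // critical flow) were found. Names and shapes are assumptions.
      interface GuidelineResult {
        passed: number;         // instances that passed, e.g. images with alt text
        total: number;          // instances tested
        criticalErrors: number; // errors blocking a task in a critical flow
      }

      function guidelinePasses(r: GuidelineResult,
                               threshold = 0.95): boolean {
        if (r.criticalErrors > 0) return false; // any critical error fails
        if (r.total === 0) return true;         // nothing to test
        return r.passed / r.total >= threshold; // e.g. 95% of alt text present
      }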

   Peter: Temporal concern, any moment in time may be bug, hours or days, will find error, fail site
   ... large sites some page will have critical error, especially if discovering error needs manual testing, no different from errors that affect everyone

   Janina: Luck of the draw because of sampling expectations

   Peter: Accurate sampling might find it - most concerning when a bug can't be found with automated tests but is still a critical error

   Wilco: QualityMark, did audits, even ones that had mark year before, always found something new
   ... agree that it needs to be taken into account

   Peter: No idea how to take it into account

   Wilco: Critical errors work well page by page; very large - even moderate - sites will have errors, process question

   Jeanne: QA has normal testing procedures for everything, does the button work, does it work on mobile, 1000s of tests that QA runs
   ... small projects don't go out door until no critical bugs
   ... when testing on way out door, will do what org does for testing, accessibility bugs can't be worse than other bugs of other types
   ... lots of testing in QA, automated, manual - accessibility bugs are proportional
   ... maybe belongs in maturity model, nothing going out with critical bugs

   Peter: In agreement, not coming to solution about codifying in Silver
   ... blocker bugs discovered late, go to publish, fix quickly
   ... blocker bugs launch, add fix to next launch
   ... massive site/software, going to have some number of critical bugs always
   ... if not treating accessibility differently, critical things are critical, not treating disproportionately, should that be okay? How do we capture that?
   ... going to be period of time between when flashing is noticed and when it's fixed, could be 2 hours, will be flashing
   ... if test finds flash, is it noncompliant, penalties?

   Janina: User agent could check for flashing, doesn't work in real time, would catch this critical error
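
   A rough sketch of the check a user agent could run, assuming flash
   events have already been detected from rendered frames (the hard,
   non-real-time part mentioned above) and testing only the WCAG 2.x
   "three flashes in any one-second period" threshold; the function
   name is illustrative.

      // Sketch: given timestamps (ms) of detected flashes, report
      // whether any one-second window contains more than three, the
      // WCAG 2.x general flash threshold. Detecting the flashes from
      // rendered frames is assumed done elsewhere.
      function exceedsFlashThreshold(flashTimesMs: number[],
                                     maxPerSecond = 3): boolean {
        const times = [...flashTimesMs].sort((a, b) => a - b);
        let windowStart = 0;
        for (let i = 0; i < times.length; i++) {
          // shrink window until it spans less than one second
          while (times[i] - times[windowStart] >= 1000) windowStart++;
          if (i - windowStart + 1 > maxPerSecond) return true;
        }
        return false;
      }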

   Jeanne: Don't want to solve individual examples, we have categories, could add category for automated testing

   <Jemma> I agree with Jeanne.

   Jeanne: even with temporal problem, critical error that needs to be fixed, site overall has good accessibility, still only need 3.5 out of 4 to get bronze level, even with critical errors

   Peter: A few critical errors could bring the score below 4 and it could still get bronze

   Jeanne: Guideline, write in different breakpoints of accessible and acceptable, overall score could address the temporal concern, averaged and normalized over all guidelines
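
   A minimal sketch of the scoring shape described: rate each guideline
   0-4 at defined breakpoints, average (normalize) over all guidelines,
   and require at least 3.5 overall for bronze. The breakpoint
   percentages are invented for illustration; only the 3.5-of-4 figure
   comes from the discussion.

      // Sketch: per-guideline 0-4 ratings at breakpoints, averaged into
      // an overall score; bronze requires >= 3.5. Breakpoint values are
      // invented; only the 3.5/4 threshold comes from the minutes.
      function rateGuideline(passRate: number): number {
        if (passRate >= 0.98) return 4;
        if (passRate >= 0.95) return 3;
        if (passRate >= 0.85) return 2;
        if (passRate >= 0.70) return 1;
        return 0;
      }

      function overallScore(passRates: number[]): number {
        const sum = passRates.reduce(
          (acc, r) => acc + rateGuideline(r), 0);
        return sum / passRates.length; // averaged over all guidelines
      }

      const meetsBronze = (rates: number[]): boolean =>
        overallScore(rates) >= 3.5;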

   Peter: That works
   ... will be lots of arguing about numbers, but it works
   ... will clean up example
   ... will report on extent to which use case is already addressed

   Jeanne: Write up article explaining it to people, good to have something that explains how things are working

   Peter: Yes but not until March when use cases are addressed

   Jemma: Is the information about scoring somewhere?

   Jeanne: It's in the FPWD, been looking at comments, questions, clearly didn't explain this well
   ... figured out last minute
   ... trying to address these important and valuable questions
   ... need to make it more understandable


    Minutes manually created (not a transcript), formatted by scribe.perl version 127 (Wed Dec 30 17:39:58 2020 UTC).



----------------------------------

Janina Sajka
Accessibility Standards Consultant
sajkaj@amazon.com

Received on Friday, 29 January 2021 17:55:15 UTC