- From: Claire Grupe <cgrupe@epic.com>
- Date: Fri, 26 Feb 2021 21:19:49 +0000
- To: "public-agwg-comments@w3.org" <public-agwg-comments@w3.org>
- CC: Aris Blevins <Aris@epic.com>
- Message-ID: <DM5PR17MB14359E8E42BBE58F591D81E3BC9D9@DM5PR17MB1435.namprd17.prod.outlook.com>
Good afternoon,

A group of Epic developers, quality assurance managers, and user experience designers with expressed interest in and knowledge of accessibility development and testing reviewed the First Public Working Draft of WCAG 3.0. Our feedback and follow-up questions are below. We also have two big-picture questions that we are framing for the GitHub repository and will post those shortly. However, the majority of our questions were more specific and nuanced and felt better suited to a direct email (rather than publishing publicly). Please feel free to follow up with me directly if you have immediate questions, would like to meet to discuss any of this, or if we can clarify where any of our questions or feedback are coming from.

On behalf of the group at Epic who reviewed this working draft, we want to thank the W3C for publishing this draft, asking for feedback, and involving its end users in the guidelines.

Feedback

1. Verbiage: The word "Atomic" may not be clear or intuitive, making the concept and deliverables difficult to define. We suggest a more universal term such as object-based, element-based, or objective as an alternative.

2. A scoring system that reaches Bronze using only Atomic/element-based objective tests may inadvertently encourage organizations to conduct Atomic tests alone. If Bronze is meant to be encouraged as a sort of new AA, this may be satisfactory. If, however, Bronze is a minimum requirement but not the encouraged level, it may be more accurate to rename the levels Best/Good/Mediocre (or Fair).
   a. How does the proposed scoring system avoid the pitfall of tests being designed to meet the criteria or reach a certain percentage, rather than seeking out ways to find problems?

3. Overall, the scoring system explanation is detailed and complex.
While this is a more accurate way to capture and describe the nuance of needs for accessible development, a more succinct description and set of steps will be helpful where those testing for conformance are not accessibility specialists. In its current state, the categories of scoring do not seem scalable for training a large number of testers for whom accessibility testing is one part of a complex set of responsibilities. Note: at our organization, we do not have dedicated accessibility specialists for testing software.

4. 7.1 Text Alternatives: More definition and guidance is needed on the different types of images for which text alternatives are required.
   a. The types are listed, but not independently defined.
   b. We have specific concerns about medical images and interactive elements that clinicians may use to document problems for a patient.

5. The guidelines specifically note that WCAG 3.0 is designed to improve the usability of the web for those with certain disabilities and limitations. We recommend emphasizing in the literature that good overall usability guidelines are not covered in WCAG; the inherent usability or conceptual model for a process or workflow is covered by usability for all users, and WCAG's guidance ensures that this overall usability extends to all audiences. As currently written, it is unclear whether overall usability is considered a minimum baseline requirement, which may be confusing.

6. 7.5 Visual Contrast of Text:
   a. There is a significant legibility and use-case difference between reading a badge of text (blank space and a short read) and reading a paragraph. Should additional factors such as leading and font weight be included in the criteria?
   b. We have found that too much contrast can also be detrimental to usability. Are there guardrails or planned tests for when contrast is too high?
   c. We have struggled to address and provide clear definitions for non-text elements, and request clearer guidance.
      For example, lines on line graphs.

7. Verbiage: "Functional Needs," as used to specifically define the unique functional needs of users with a wide range of disabilities, may be confusing for some, because all users have functional needs on a site in order to perform tasks and complete processes. However, it appears the distinction exists in WCAG 3.0 language to differentiate the functional needs of completing a task from the functional needs of a user with a disability to accomplish that task. We recommend more definition and disambiguation around this concept.

8. Non-speech audio: There seems to be a lack of guidance around this. If something makes a non-verbal sound, captions may not fit the need, so scoring with regard to captions would be difficult. (Example: a text message notification making use of the flashlight to inform a patient of an alert.)

Questions & Clarification

1. Scoring: How should the numerator be defined when calculating a percentage?

2. 7.2 Clear Words: Does this requirement include user-generated content, or only UI elements? Example: medication and procedure names are part of our system and may be shared without much editing as patient-facing notes. This is done to ensure transparency and compliance with other government regulations.

3. 7.2 Clear Words: Is there some tolerance level granted for words and phrases that are clear to certain users but not the general public? Example: medication, scientific language, and procedure names would be clear to a physician specialist but may not be understood by the general public. These precise words are needed for accuracy, efficiency, and industry sensitivity. In our technical communications, we assume a level of complexity tolerance for specific users within their field (which we would not expect for patient-facing UI, for example).

4. 7.2 Clear Words: How is this category scored?

5. 7.3 Captions:
Our system might offer future integration with vendors that handle natural language processing for medical visits, turning them into notes. This might only involve them providing us an augmented intelligence output of the note, which might mean that the user is not provided a transcript of the interaction from within our system. When referring to captions and media files, is the boundary of this definition just video media?

6. 7.3 Captions: Should the placement of captions factor into their score, particularly if including a caption will cover up other content?

7. 7.3 Captions: Should the size and contrast of captions factor into scoring?

8. 7.4 Structured Content: It was unclear to our team whether addresses, phone numbers, or other structured content will require distinct headings.

Thank you for your continued work in this area,

Claire (Lempke) Grupe
Epic | UX Designer - Billing Apps | she, her
cgrupe@epic.com | (608) 771-4182
Received on Friday, 26 February 2021 21:21:33 UTC