
Feedback - WCAG 3.0 FPWD

From: Chris Pycroft <chris@intopia.digital>
Date: Fri, 26 Feb 2021 22:06:21 +0000
To: "public-agwg-comments@w3.org" <public-agwg-comments@w3.org>
Message-ID: <2DA0FD13-1636-4810-BB4C-FA4579A59476@intopia.digital>

Accessibility Guidelines Working Group,

On behalf of Intopia, we congratulate you on the release of the first working draft of WCAG 3.0. We wanted to acknowledge this significant milestone and commend you on your work to date. These guidelines will ultimately be instrumental in increasing accessibility and inclusion.

We have reviewed the working draft and are providing feedback by e-mail to best address the questions raised in the FPWD blog post<https://www.w3.org/blog/2021/01/wcag-3-fpwd/>.

Are there additional Design Principles and Requirements that should be included in the WCAG 3 project?

The Design Principles seem comprehensive and useful. However, in comparison to similar work by other organisations, they are verbose and require considerable concentration to understand. Design principles are usually intended to be concise phrases that help people choose between options; complex paragraphs that could be interpreted in different ways are less helpful in those situations. This is also reflected in the first two items under the heading "Accessibility guidelines should", which seem very similar to each other. We suggest:

  *   Editing the Design Principles to be concise. This might mean a pithy phrase followed by a complete sentence or two to expand or explain the reasoning, similar to the building design principles by Design Council<https://principles.design/examples/the-principles-of-inclusive-design>, Australian Government Digital Service Standards<https://www.dta.gov.au/help-and-advice/digital-service-standard/digital-service-standard-criteria> or Jakob Nielsen's Usability Heuristics<https://principles.design/examples/10-usability-heuristics-for-user-interface-design>
  *   Clarifying the distinction between the first two Accessibility principles or merging them into one

The Requirements are also comprehensive and useful. Motivation is particularly good as a requirement – but is there a write-up of how a scoring system became the preferred way to achieve that goal? If not, it might be useful to have one to refer to as the standard moves through the draft stages. Or if the scoring system isn't seen as the only way to motivate people, maybe the reference to it needs to be removed from this Requirement.

There is considerable introductory information, including information about the guidelines structure, testing, and scoring. Are there usability improvements that would make it easier to use and find information?

We would suggest considering one of the following options to improve usability:

  *   Move the Guidelines from section 7 to section 2, immediately after the Introduction
  *   Keep the Guidelines where they are, and create additional support material that helps users find what they’re looking for (such as a wizard or question tree that links people directly to the relevant part of the specification)
  *   Keep the Guidelines where they are, and provide a supporting ‘Guide to WCAG’ for the general public

A plain language version of these sections could also be considered. We are conscious that previous iterations of WCAG have been developed to articulate technical information, but a plain language version in addition to the main standard will allow more people to understand the guidelines.

Outcomes are normative. Should guidelines, methods, critical errors, and outcome ratings be normative or informative, and why?

Guidelines are normative; they address functional needs on specific topics and can be directly tested against.

Critical errors can be both normative and informative. When a critical error specifies an issue that will stop a user from participating in or completing a process, it is normative in that it is absolute. However, when critical errors form part of a rating system that describes the degree to which content has passed or failed and how to reduce an error's criticality, they may be considered informative.

Methods are informative; they include detailed information and guidance on achieving a particular outcome. Outcome ratings can be both normative and informative. Like critical errors, outcome ratings can guide how to improve towards an outcome (informative) while still stating the intended outcome that must be met (normative).

We would like constructive feedback on the testing approach, and examples of why you would or would not implement it in your organization.

Significant work would be required to adopt the testing approach within our organisation, but we will consider adoption as the guidelines continue to develop.

Initial areas that would impact on adopting the testing approach include:

  *   Clarity on recording test results – if results need to be recorded per atomic test result for every element, this will increase the amount of work needed to prepare documentation. The greatest impact will be for clients that require retesting, as multiple sets of documentation may be needed depending on the testing methodology
  *   The suitability of atomic tests will need to be reviewed given that no assistive technology is required as a part of the tests
  *   For any tests that are technology specific, clarity on how often these will be reviewed (and superseded) is needed
  *   List(s) of tests should be collated as the guidelines are finalised – the current content structure makes it difficult for testers to work through a seamless flow of tests

Is this approach of using complete processes as the smallest unit of conformance workable for different types of organizations? These include organizations with very large, dynamic, or complex content, medium-sized organizations relying on external accessibility resources, and very small organizations with limited resources?

We believe that the approach can be workable across different types of organisations, as it allows them to focus on making core tasks accessible rather than individual pages, something that should be achievable regardless of organisation size.

Some additional clarity on the definition of a process is required. Processes can be complex (including across multiple pages of a website or views of an application).

Example: For a consumer merchant website that has an 'Add to Cart' step and a 'Checkout' step, would these be considered a single process or separate processes? Depending on the interpretation, only some of these steps might be accessible, yet the organisation may still be eligible for Bronze status.

Does the model for scoring and aggregated ratings work? Why or why not? If not, please propose an alternative solution.

We believe that the model proposed can work, as percentage ratings can be more informative than a single pass or fail indicator across all pages assessed for conformance. Our feedback on the model:

  *   The use of percentages to calculate the ratings may result in notably different ratings depending on what is being assessed for conformance (for example, a small website with a few images versus a large website with a significant number of images could see a difference in ratings)
  *   Public messaging around the model needs to articulate the importance of reaching the Bronze, Silver and Gold levels, particularly Bronze. Full conformance with the accessibility guidelines should always be the aim, but businesses or organisations setting their own targets may choose to adopt a minimum percentage rating target rather than Bronze, Silver or Gold
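The concern in the first point above can be shown with a small arithmetic sketch. This is our own hypothetical illustration of a pass-rate percentage (the function and numbers are not taken from the draft): a single failure on a small site moves the percentage far more than several failures on a large one.

```python
def pass_rate(passed: int, total: int) -> float:
    """Percentage of assessed items (e.g. images with text alternatives) that pass."""
    return 100.0 * passed / total

# Small site: 4 of 5 images have text alternatives; one failure costs 20 points.
small_site = pass_rate(4, 5)

# Large site: 96 of 100 images pass; four failures cost only 4 points.
large_site = pass_rate(96, 100)

print(small_site, large_site)
```

Under a percentage-based model, the large site scores higher despite having more failures in absolute terms, which is why context around what was assessed matters when comparing ratings.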

We congratulate you again on the work done to date, and we look forward to providing feedback on future versions as the standard progresses towards completion.


Received on Friday, 26 February 2021 22:06:46 UTC
