Feedback on the Accessibility Conformance Testing (ACT) Rules Format 1.0

Hi



Thank you for the great work on the ACT Rules Format 1.0.



We wish to provide some feedback based on our experiences with test rules and test data analysis.





1 Introduction and 3.5 Limitations, Assumptions or Exceptions

·         It is crucial to document the interpretation of the accessibility requirement/WCAG. Limitations, assumptions or exceptions at the accessibility requirement/WCAG level form the basis for any limitations, assumptions or exceptions at the test rule level.

·         Test rules may cover only part of the requirement interpretation. Will this be documented in the Assumptions section of each test rule? There should be an overview of which test rules cover which requirements, in order to be able to fully test compliance with each requirement.



3.4 Accessibility Requirements

·         It must be clear which accessibility requirement/WCAG success criterion a rule passes or fails. This also applies to rule groups. We need to ensure that a test rule/rule group produces test results that identify the exact success criterion where non-compliance occurs.



6.1. Applicability

·         We support the requirement that applicability must be described objectively, unambiguously and in plain language.

·         Definitions of terms that are used in more than one test rule must be described consistently and have the same meaning wherever they are used.

·         Applicability must take into consideration all kinds of content on a web page and not be limited to only one specific technology, such as HTML.



6.2 Expectations

·         Expectations need to be distinct. When all expectations are true for a test target, it passes the rule. If one or more expectations are false, the test target fails the rule. It must therefore be possible to identify the result for each expectation in a rule, in order to see what produced the passed/failed result for the rule (see the sketch below). In audits (with possible sanctions) we must be able to document the grounds for any reactions.
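
To illustrate, below is a minimal sketch in TypeScript of what such a per-target record could look like, with one entry per expectation. The type and property names (ExpectationResult, expectationResults, and so on) are our own illustrations, not terms from the draft.

    // Hypothetical structure: one result per expectation, plus the combined outcome.
    type Outcome = "passed" | "failed" | "inapplicable";

    interface ExpectationResult {
      expectationId: string;   // e.g. "expectation-1", as numbered in the rule
      passed: boolean;
      pointer?: string;        // optional pointer to the offending part of the target
    }

    interface TestTargetResult {
      ruleId: string;
      testSubject: string;     // e.g. the URL of the tested page
      testTarget: string;      // e.g. a CSS selector or XPath
      expectationResults: ExpectationResult[];
      outcome: Outcome;
    }

    // The outcome for a test target follows from its expectation results
    // (applicability is decided before expectations are evaluated).
    function combineExpectations(expectations: ExpectationResult[]): Outcome {
      return expectations.every(e => e.passed) ? "passed" : "failed";
    }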



7. Rule Grouping

·         In order to ensure consistency across all test rules, all rules should be grouped in rule groups, even if this means that some rule groups will only contain a single rule. The use of rule groups enriches the data material from testing and enables us to identify and select test rules for different purposes.

·         We suggest establishing criteria for rule grouping that ensure consistent use of rule groups. As a minimum, success criterion should be one way to group the test rules. This allows us to identify all test rules that apply to, for instance, success criterion 1.3.1. In addition, we will be able to aggregate all test results concerning SC 1.3.1, regardless of the content (tables, headings, lists) that has been tested (see the sketch after this list).

·         Similarly, the test rules could be tagged with other types of information, such as content type, technology, which users the requirement is meant to benefit, etc. Difi has experience with tagging test rules in this manner.

·         It is important to have a standardised way to name the rule groups.

·         For each rule group, we need information about which rules are contained in the group. Similarly, at the test rule level, we need information about which rule group a test rule is part of.
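
As an illustration of the grouping and tagging we have in mind, below is a minimal TypeScript sketch. The property names (ruleGroups, successCriteria, tags) are our assumptions, not terms defined in the draft.

    // Illustrative rule metadata; all property names are assumptions.
    interface RuleMetadata {
      ruleId: string;
      ruleGroups: string[];        // e.g. ["SC 1.3.1 group"], even if the group has one rule
      successCriteria: string[];   // e.g. ["1.3.1"]
      tags: {
        contentType?: string[];    // e.g. ["table", "heading", "list"]
        technology?: string[];     // e.g. ["HTML", "ARIA"]
        userGroups?: string[];     // e.g. ["screen reader users"]
      };
    }

    // Select every test rule that applies to a given success criterion, e.g. "1.3.1".
    function rulesForCriterion(rules: RuleMetadata[], sc: string): RuleMetadata[] {
      return rules.filter(r => r.successCriteria.includes(sc));
    }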



8. ACT Rule Data Format

·         The outcome must be specified for each test target. Output data must include not only the test subject and rule identifier, but also the other information with which the test rule has been tagged (see the sketch below). This provides far richer data material.
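
A minimal sketch, assuming the tags are simply copied from the rule's metadata into every output record; the field names below are illustrative only.

    // Illustrative per-target output record carrying the rule's tags.
    interface TaggedOutcome {
      ruleId: string;
      testSubject: string;              // e.g. the URL of the tested page
      testTarget: string;               // e.g. a CSS selector
      outcome: "passed" | "failed" | "inapplicable";
      tags: Record<string, string[]>;   // copied from the rule's metadata,
                                        // e.g. { successCriteria: ["1.3.1"] }
    }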



8.3. Outcome

·         The ACT rule outcomes do not map fully to EARL outcomes. EARL also has the outcomes Cannot tell (it is unclear whether the subject passed or failed the test) and Untested (the test has not been carried out). In what way should the ACT Rules Format take these outcomes into account? (See the sketch after this list.)

·         For the outcome Failed, it must be possible to identify the result for each expectation in a rule, not only for all expectations combined.
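
For reference, the EARL 1.0 Schema defines the outcome values earl:passed, earl:failed, earl:cantTell, earl:inapplicable and earl:untested. The sketch below shows one possible mapping from ACT outcomes; how cantTell and untested should arise from ACT rules is exactly the open question above, so the default branch is an assumption on our part.

    // EARL 1.0 Schema outcome values.
    type EarlOutcome = "earl:passed" | "earl:failed" | "earl:cantTell"
                     | "earl:inapplicable" | "earl:untested";

    // ACT outcomes as currently defined in the draft.
    type ActOutcome = "passed" | "failed" | "inapplicable";

    // Assumed mapping; a rule that has not (yet) been run maps to earl:untested.
    function toEarl(outcome: ActOutcome | undefined): EarlOutcome {
      switch (outcome) {
        case "passed":       return "earl:passed";
        case "failed":       return "earl:failed";
        case "inapplicable": return "earl:inapplicable";
        default:             return "earl:untested";
      }
    }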



8.4 Ensure Comparable Results

·         Test data must be quantifiable (convertible to numbers) in a standardised way, in order to aggregate the data and perform statistical analysis on it.

·         It is not sufficient only to be able to compare results. We must ensure that results are presented in a way that makes it possible to conduct statistical analysis and benchmarking on them, combined with other data such as information about website owners, content type, technology, which users the requirement is meant to benefit, etc. (see the sketch after this list). Difi has experience with this approach.

·         The paragraph heading should reflect that this applies to more than just comparable results.
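
A minimal sketch of the kind of quantification we have in mind: outcomes reduced to counts so that pass rates per success criterion can be computed and then combined with other data, such as website owner. The coding (inapplicable results excluded, pass rate = passed / (passed + failed)) and the field names are our assumptions.

    // Illustrative quantification of test results.
    interface ResultRow {
      websiteOwner: string;
      successCriterion: string;   // e.g. "1.3.1"
      outcome: "passed" | "failed" | "inapplicable";
    }

    // Pass rate per success criterion: passed / (passed + failed).
    function passRates(rows: ResultRow[]): Map<string, number> {
      const counts = new Map<string, { passed: number; failed: number }>();
      for (const row of rows) {
        if (row.outcome === "inapplicable") continue;   // not counted
        const c = counts.get(row.successCriterion) ?? { passed: 0, failed: 0 };
        if (row.outcome === "passed") c.passed++; else c.failed++;
        counts.set(row.successCriterion, c);
      }
      const rates = new Map<string, number>();
      counts.forEach((c, sc) => rates.set(sc, c.passed / (c.passed + c.failed)));
      return rates;
    }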



9.3 Rule Aggregation

·         The paragraph heading should be changed to “Result Aggregation” or “Outcome Aggregation”.

·         As described in section 8 ACT Data Format (Output Data), a rule will return a list of results, each of which contains 1) the Rule ID, 2) the test subject, 3) the test target, and 4) an outcome (see the sketch after this list). In addition, for the outcome Failed, it must be possible to identify the result for each expectation in a rule. A consistent approach to rule grouping (see our feedback on sections 7 and 8.4) will enrich the data material considerably. Difi has experience with this approach.
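
As an illustration, the sketch below aggregates target-level outcomes to a rule-level outcome. The "failed if any target failed" logic reflects our reading of the draft's aggregation section, but the structure and names are our own.

    // Illustrative aggregation of target-level outcomes to a rule-level outcome.
    type Outcome = "passed" | "failed" | "inapplicable";

    interface TargetResult {
      ruleId: string;
      testSubject: string;
      testTarget: string;
      outcome: Outcome;
    }

    function aggregateRule(results: TargetResult[]): Outcome {
      if (results.length === 0) return "inapplicable";        // no applicable targets
      if (results.some(r => r.outcome === "failed")) return "failed";
      if (results.some(r => r.outcome === "passed")) return "passed";
      return "inapplicable";
    }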



Kind regards

Dagfinn Rømen and Brynhild Runa Sterri
A: Skrivarvegen 2, NO-6863 Leikanger
[Difi - Norwegian Authority for Universal Design of ICT (logo)]
Postbox 8115 Dep., N-0032 Oslo

uu.difi.no<https://uu.difi.no/om-oss/english>
