Evaluation metrics feedback

Hi,

Here's some very belated feedback on the Research Report on Web Accessibility Metrics. My RDWG symposium co-author Brian Kelly expressed his thoughts via his blog:
http://ukwebfocus.wordpress.com/2012/09/25/what-can-web-accessibility-metrics-learn-from-alt-metrics/

This feedback from me builds on Brian's views. Apologies in advance that it is more of a general reflection than a list of suggested changes.

Firstly, great effort in producing this as output from the online symposium! 

One of the key challenges of accessibility evaluation is adjusting the priority of a given WCAG SC so that it becomes a more accurate predictor of the effect of the related barrier on a user, given the context of use of the site in question. So it's encouraging to see evidence throughout the report of recognition of the need for a more granular approach to identifying conformance, and an appreciation of the importance of "accessibility-in-use". Detlev Fischer shows us one possibility by proposing a Likert-type grading scale, which would address the issue of a single SC failure having different levels of impact.
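
To make that idea concrete, here is a rough sketch of how Likert-graded failures might feed into a single page score. Everything in it - the 1-to-5 grade range, the quadratic penalty, the function name - is illustrative only, not something proposed in the report or by Detlev:

    # Hypothetical sketch: aggregate Likert-graded WCAG SC failures into
    # a single page score. Grades run 1 (minor nuisance) to 5 (complete
    # blocker); the weighting formula is illustrative only.

    def page_score(failures):
        """failures: list of (sc_id, likert_grade) tuples, grade in 1..5."""
        if not failures:
            return 100.0  # no observed failures
        # One blocker (grade 5) should cost far more than five minor
        # issues, so penalise non-linearly rather than summing grades.
        penalty = sum(grade ** 2 for _, grade in failures)
        max_penalty = 25 * len(failures)  # every failure at grade 5
        return 100.0 * (1 - penalty / max_penalty)

    # e.g. missing alt text judged grade 2, a keyboard trap judged grade 5:
    print(page_score([("1.1.1", 2), ("2.1.2", 5)]))  # -> 42.0

The point of the sketch is simply that two pages with the same count of SC failures can end up with very different scores once impact grading is taken into account.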

In our paper, we introduced the idea that for metrics to have some meaning in the world outside academic research labs - i.e. to motivate organisations providing the sites being evaluated to take appropriate action in response to the metrics - we need to look firstly at user experience and secondly at efforts already made to address the effect of the problem.

Our approach is driven partly by our interpretation of the UK legislative framework - which at the time of writing doesn't include any level of technical conformance in the terms of the legislation. But it's also driven by pragmatics, including the complexity of an organisation and the resources - and therefore ability - it has to deal with accessibility issues. Metrics need to be sensitive to organisations' abilities to respond by improving performance, and to recognise efforts already made, without being defined in a way that creates complacency or removes the motivation to improve sub-standard content.

Regarding user experience, it's good to see some of this acknowledged in Section 5.2, where user circumstance is considered as input to metrics. Another approach we've considered [1] is the potential of analytics already being gathered by organisations to track user behaviour, which could be mined to understand more about the location and presence of specific accessibility barriers. In educational organisations, learner analytics allow data to be gathered on learner progress through an online learning environment. If the data indicates blocking points in journeys through that environment, this could be used to prioritise investigation of barriers that appear to have real-world impact, which may or may not have been predicted by the corresponding WCAG SC priority rating.
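
As an illustration of the kind of mining we have in mind - the data shape and thresholds here are hypothetical, not any real analytics API - one could flag pages where journeys by assistive-technology users stall disproportionately often:

    # Hypothetical sketch: flag pages where sessions from assistive-
    # technology (AT) users end disproportionately often, as candidates
    # for barrier investigation.
    from collections import Counter

    def suspect_pages(sessions, min_ratio=2.0):
        """sessions: list of dicts with 'last_page' and 'uses_at' (bool).
        Returns pages where the AT drop-off rate is at least min_ratio
        times the overall drop-off rate, worst first."""
        at = [s for s in sessions if s["uses_at"]]
        if not at:
            return []
        drop_all = Counter(s["last_page"] for s in sessions)
        drop_at = Counter(s["last_page"] for s in at)
        flagged = []
        for page, n_at in drop_at.items():
            rate_at = n_at / len(at)
            rate_all = drop_all[page] / len(sessions)
            if rate_at >= min_ratio * rate_all:
                flagged.append((page, rate_at, rate_all))
        return sorted(flagged, key=lambda f: -f[1])

A page surfacing at the top of such a list wouldn't prove a barrier exists, but it would tell an evaluator where to look first - which is exactly the prioritisation role we'd want metrics to play.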

Regarding effort already made to reduce the impact of a barrier, we appreciate that this brings another level of subjectivity to the accessibility assessment process - how do we judge the efforts of an organisation to provide an inclusive experience? Can we incorporate information of this nature into metrics in a way that encourages organisations to do even better, or is this a naïve hope? We understand and appreciate that WAI's remit focuses on optimising web accessibility, but the impact of some web accessibility barriers may be mitigated by the existence of alternative (in some cases non-web) channels to an equivalent experience. How would we use knowledge of these alternatives to adjust metrics?

In the UK, conformance with BS 8878 could be used as evidence of the efforts an organisation has made to address the accessibility of its online presence more generally. However, we need a formal - but workable - way of describing efforts to meet BS 8878. At the moment, finding, reviewing and assessing this evidence would be a complex and probably impractical task. As it stands, it wouldn't lend itself well to the repository approach of testing alternative metrics. But even so, it's a line of investigation with potential.

In summary, the report is a very useful collection of existing work in the area of metrics, and presents some interesting directions for development. I think it would benefit from more emphasis on the work needed to connect 'measurement-for-conformance' with 'measurement-for-encouraging-positive-action'.

[1] Cooper, M., Sloan, D., Kelly, B. and Lewthwaite, S. (2012). A challenge to web accessibility metrics and guidelines: putting people and processes first. In Proceedings of the International Cross-Disciplinary Conference on Web Accessibility (W4A '12). ACM, New York, NY, USA.

Happy to discuss any of the above points further.
Dave




