- From: Michael Pluke <Mike.Pluke@castle-consult.com>
- Date: Mon, 30 Jan 2017 13:20:56 +0000
- To: Wilco Fiers <wilco.fiers@deque.com>, shilpi <shilpi@barrierbreak.com>
- CC: WCAG <w3c-wai-gl@w3.org>
- Message-ID: <9b761c98b8a84ba78217ac64b89abffe@E15MADAG-D05N03.sh11.lan>
Hi everyone,

To date it has been relatively easy to draw a clearly defined line between testing accessibility (i.e. meets the WCAG SCs) and usability (i.e. people find something easy and pleasant to use). One reason it has been relatively easy to draw such a hard line is that WCAG does not require things to clearly convey meaning, be understandable, or be logical (to users). Usability will suffer badly if these issues are not properly addressed, but users with many disabilities (e.g. visual, hearing, dexterity) will not be adversely affected, so it has been convenient to believe that there is no accessibility issue here.

But when one starts to consider cognitive and learning disabilities, this hard boundary cannot be justified. Whereas many users can compensate for poor clarity, understandability and logical design (i.e. they will just experience sub-optimal usability), many users with cognitive and learning disabilities cannot work around these issues and will face a disproportionate, potentially insurmountable accessibility barrier that they may not be able to overcome.

Wherever possible the COGA Task Force has tried to propose SCs that do not rely on subjective testing. But automatically assessing whether, for example, a label accurately and clearly describes the thing it labels, in a way that users with learning disabilities can understand, is not something that is currently easy to automate. For such cases, subjective testing will be the only practical way to assess whether a significant accessibility barrier exists.

If we exclude new SCs related to how well people understand content just because understandability is difficult to test automatically, then cognitive accessibility will continue to be poorly represented in WCAG. Whereas today we may have to rely on subjective testing to assess these softer concepts, advances in machine learning make it probable that more ways of automatically assessing them will emerge. It would be good to avoid the situation where we have efficient ways of testing these concepts but nothing in WCAG that relates to them.

Best regards

Mike

From: Wilco Fiers [mailto:wilco.fiers@deque.com]
Sent: 30 January 2017 12:09
To: shilpi <shilpi@barrierbreak.com>
Cc: WCAG <w3c-wai-gl@w3.org>
Subject: Re: Automated and manual testing process

Hi everyone,

I don't particularly like the use of the phrase "manual testing". I much prefer "expert testing", as it gets rid of this confusion, as well as of the question "if I use an accessibility tool, is it still manual testing?". I look at it similarly to how Alistair Garrison grouped it, although I would label it slightly differently:

1) Conformance testing: The goal here is to see if minimal requirements are met. This involves expert testing (or manual testing if you prefer), and if that expert is in any way concerned about meeting deadlines, she will be using accessibility test tools for this.

2) Usability testing: The goal here is to see where the best opportunities are for improving the user experience. Usability testing won't tell you if something meets WCAG, or at least, I've never known any usability tests that could do that. It's a very different kind of animal, in my opinion.

So I definitely have concerns about some of the new SCs that are based on user testing.

Wilco
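(A minimal sketch, not part of the original thread, of the kind of automated conformance check Wilco mentions. It assumes axe-core, Deque's open-source rule engine, has been loaded on the page under test; the function name is illustrative only. The tool can report definite failures and flag checks it cannot decide, but it cannot positively "pass" a page on its own.)

  declare const axe: any; // axe-core global, injected by the tool or a <script> tag

  // Run the axe-core rules against the current document and report the results.
  async function reportResults(): Promise<void> {
    const results = await axe.run(document);

    // Definite, machine-checkable failures (e.g. missing alt attributes,
    // insufficient colour contrast).
    for (const violation of results.violations) {
      console.log(`${violation.id} (${violation.impact}): ${violation.description}`);
    }

    // Checks the tool could not decide on its own - these still need an
    // expert's judgement (e.g. whether an alt text is actually appropriate).
    console.log(`${results.incomplete.length} checks need expert review`);
  }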
On Mon, Jan 30, 2017 at 1:25 AM, shilpi <shilpi@barrierbreak.com> wrote:

We should specify the criteria to be met but avoid being prescriptive about which testing approach is to be adopted, with how many users, etc. As one can see, numerous organizations take different approaches and yet achieve compliance. Often this is based on the scale of testing required, time, budgets, etc. The aim is to get more organizations to adopt accessibility. We should look at how to simplify the approaches.

Regards
Shilpi

Sent from my Samsung Galaxy smartphone.

-------- Original message --------
From: Alastair Campbell <acampbell@nomensa.com>
Date: 1/30/17 02:29 (GMT+05:30)
To: Andrew Kirkpatrick <akirkpat@adobe.com>, WCAG <w3c-wai-gl@w3.org>
Subject: Re: Automated and manual testing process

Andrew wrote:
> What if testing cannot be done by a single person and requires user testing – does that count as manual testing, or is that something different?

We use, and I've come across, quite a few variations, so to focus on the general ones, the main methods I tend to see are:

- Automated testing: good coverage across pages, or integrated with your development, but it can't positively pass a page.

- Manual review/audit: an expert goes through a sample of pages using the guidelines. This can assess the 'appropriateness' of things like alt text, headings, markup and interactions (e.g. scripted events).

- Panel review: a group of people with disabilities assess pages from their point of view, with the guidelines as reference. (A couple of charity-based organisations offer that in the UK, but it is not my favoured methodology [1].)

- Usability testing with people with disabilities: run as a standard usability test but with allowances for different technologies etc. Tends to find the whole range of usability & accessibility issues, but coverage across a whole website/app is difficult.

- Usability testing with the general public: although not accessibility oriented, it will often find an overlap in the issues.

I would stress that 'manual testing' must be done by experts who have a wide understanding of accessibility and can balance different concerns. Whereas 'usability testing' must not be with people who test for a living: if they are expert in the domain, the technology or accessibility, then they are not typical users.

If something 'requires' multiple testers then we need to (try to) write the guideline or guidance better. (Is that the question?)

Usability is about the optimisation of an interface or experience, rather than barriers in the interface. I came from a Psychology & HCI background and started work as a Usability Consultant; I've done thousands of test sessions, but it is quite a different thing from testing accessibility...

I hope that helps, but I have a feeling there is a question behind the question!

-Alastair

[1] https://alastairc.ac/2006/07/expert-usability-participants/

--
Wilco Fiers
Senior Accessibility Engineer - Co-facilitator WCAG-ACT - Chair Auto-WCAG
Attachments
- image/jpeg attachment: image002.jpg
Received on Monday, 30 January 2017 13:23:34 UTC