Is it worth drafting a task hierarchy for finding the right silver audit methodology?

Following on from Jake's discussion today of usability testing based on 
benchmarks across (most) common tasks, I wonder whether it would make 
sense to compile a draft task hierarchy, with top-level tasks like 
login, navigate to a leaf / navigate back to root, search for an item 
and go to it, fill out & submit a form, etc.

Below that top level, the task hierarchy could then have children for 
different sub-tasks that the user may or may not enter (like 
login > request password reset).

The aim of this exercise would be to see what such a hierarchy of tasks 
would mean for a reporting scheme, i.e. the methodology / workflow of 
a WCAG 3.0 audit and its documentation. We now have 50 SCs - would we 
also have a finite number of possibly applicable tasks, so that 
evaluators select and rate completion on those that apply to the site 
under test? If this is not a pre-set, pick-what-you-need type of thing 
(one that could also be implemented in tools), each evaluation would 
end up with a standard technical part and a non-standard usability, 
task-related part that depends entirely on whatever 'relevant' tasks 
are derived from the site under test. That usability part could not be 
templated in the same way as the technical part, and it is harder to 
see how it could enter a common score.
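To make the pick-what-you-need idea concrete, here is a minimal sketch of how a tool might model such a pre-set task catalogue: a small tree of tasks, a selection step for the site under test, and an aggregation of completion ratings into one score. All task names, the 0-1 rating scale, and the simple averaging are my own illustrative assumptions, not anything from a WCAG 3.0 draft.

```python
# Hypothetical sketch of a pick-what-you-need task hierarchy for an
# audit tool. Task names, the 0.0-1.0 rating scale, and the plain
# average used for the common score are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Task:
    name: str
    subtasks: list["Task"] = field(default_factory=list)
    # Completion rating (0.0-1.0), set only for tasks that apply.
    rating: Optional[float] = None

# A pre-set catalogue of top-level tasks with optional sub-tasks,
# mirroring the examples above.
CATALOGUE = [
    Task("login", subtasks=[Task("login > request password reset")]),
    Task("navigate to leaf / back to root"),
    Task("search for an item and go to it"),
    Task("fill out & submit form"),
]

def applicable(catalogue: list[Task], selected: set[str]) -> list[Task]:
    """Pick the catalogue tasks (and sub-tasks) that apply to the site."""
    picked = []
    for task in catalogue:
        if task.name in selected:
            subs = [Task(s.name) for s in task.subtasks if s.name in selected]
            picked.append(Task(task.name, subtasks=subs))
    return picked

def flatten(tasks: list[Task]):
    """Yield every task and sub-task in the tree."""
    for t in tasks:
        yield t
        yield from flatten(t.subtasks)

def score(tasks: list[Task]) -> Optional[float]:
    """Average the ratings of all rated tasks into one common score."""
    rated = [t.rating for t in flatten(tasks) if t.rating is not None]
    return sum(rated) / len(rated) if rated else None
```

Because the catalogue is finite and shared, the selection step (not the task definitions) is the only site-specific part, which is what would make this templatable in tools.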

I am really not sure whether some form of task templating is possible, 
or whether it is the best approach - I just believe it will be easier 
to make a judgement on the evaluation methodology under WCAG 3.0 and 
its link to scoring if we have a first draft of a task hierarchy, so 
we can assess whether this could work, whether it is manageable, too 
messy, too restrictive, etc.

Detlev

-- 
Detlev Fischer
DIAS GmbH
(Testkreis is now part of DIAS GmbH)

Mobile +49 (0)157 57 57 57 45

http://www.dias.de
Consulting, testing and training for accessible websites

Received on Tuesday, 12 May 2020 14:48:10 UTC