Re: Is it worth drafting a task hierarchy for finding the right silver audit methodology?

Detlev,

I think that would help. I gave it a shot a couple of weeks ago, using the
simplest task that came to mind, to sketch out a possible task testing
structure
<https://lists.w3.org/Archives/Public/public-silver/2020Apr/0035.html>. It
may help to do the same for a more common task than looking up a pizza
place's hours of operation.
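
To make that a bit more concrete, here is a very rough sketch of how a
pre-set task hierarchy with optional sub-tasks might look as data. The
shape, the names, and the sample entries are placeholders I made up for
illustration (drawn from your examples below), not proposed WCAG 3.0
terminology:

// Sketch only: one possible shape for a pre-set task catalogue.
interface Task {
  id: string;
  label: string;
  subtasks?: Task[]; // optional children the user may or may not enter
}

const taskCatalogue: Task[] = [
  {
    id: "login",
    label: "Log in",
    subtasks: [{ id: "login.reset", label: "Request password reset" }],
  },
  { id: "navigate", label: "Navigate to a leaf page and back to the root" },
  { id: "search", label: "Search for an item and go to it" },
  { id: "form", label: "Fill out and submit a form" },
];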

> I am really not sure whether some form of task templating is possible or
> the best approach - I just believe it will be easier to make a judgement
> on the approach to the evaluation methodology under WCAG 3.0 and its
> link to scoring if we have a first draft of a task hierarchy to assess
> whether this could work, whether it is manageable, too messy, too
> restrictive, etc.


Very much agreed, well put.
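
One more throwaway sketch, since the link to scoring is the part I find
hardest to picture: if the task part really were a pick-what-you-need
selection from a pre-set catalogue, a tool could record which tasks apply
and a completion rating for each, and fold that into a combined score.
The 0-1 rating scale and the 50/50 weighting below are arbitrary
assumptions purely for illustration, not anything the group has discussed:

// Sketch only: applicable tasks with completion ratings feeding a score.
interface TaskResult {
  taskId: string;      // refers to an entry in the pre-set catalogue
  applicable: boolean; // selected for the site under test?
  completion?: number; // 0 (blocked) .. 1 (completed without barriers)
}

function combinedScore(technicalScore: number, results: TaskResult[]): number {
  const rated = results.filter(r => r.applicable && r.completion !== undefined);
  if (rated.length === 0) return technicalScore; // no task part to fold in
  const taskScore =
    rated.reduce((sum, r) => sum + (r.completion ?? 0), 0) / rated.length;
  return 0.5 * technicalScore + 0.5 * taskScore; // arbitrary equal weighting
}

// Example: technical part scored 0.8; two applicable tasks rated 1 and 0.5.
console.log(combinedScore(0.8, [
  { taskId: "login", applicable: true, completion: 1 },
  { taskId: "search", applicable: true, completion: 0.5 },
  { taskId: "form", applicable: false },
])); // 0.775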

Thanks,

Shawn

On Tue, May 12, 2020 at 10:48 AM Detlev Fischer <detlev.fischer@testkreis.de>
wrote:

> Following on from Jake's discussion today of usability testing based on
> benchmarks across (most) common tasks, I wonder whether it would make
> sense to compile a draft of a task hierarchy, with top-level tasks like
> login, navigate to a leaf page and back to the root, search for an item
> and go there, fill out & submit a form, etc.
>
> Below that top level, the task hierarchy could then have children for
> different sub-tasks that may or may not be entered by the user (like
> login > request password reset).
>
> The aim of this exercise would be to see what such a hierarchy of tasks
> would mean for a reporting scheme, i.e. the methodology / workflow of
> a WCAG 3.0 audit and its documentation. We now have 50 SCs - would we
> also have a finite number of possibly applicable tasks and then select /
> rate completion on those that apply to the site under test? If this is
> not a pre-set, pick-what-you-need type of thing (that can also be
> implemented in tools), we would end up with a situation where each
> evaluation would have a standard technical part and a non-standard
> usability task-related part that depends entirely on whatever 'relevant'
> tasks are derived from the site under test. That part could not be
> templated in the same way as the technical part, and it is harder to see
> how it could enter a common score.
>
> I am really not sure whether some form of task templating is possible or
> the best approach - I just believe it will be easier to make a judgement
> on the approach to the evaluation methodology under WCAG 3.0 and its
> link to scoring if we have a first draft of a task hierarchy to assess
> whether this could work, whether it is manageable, too messy, too
> restrictive, etc.
>
> Detlev
>
> --
> Detlev Fischer
> DIAS GmbH
> (Testkreis is now part of DIAS GmbH)
>
> Mobile +49 (0)157 57 57 57 45
>
> http://www.dias.de
> Consulting, testing, and training for accessible websites

Received on Tuesday, 12 May 2020 16:13:10 UTC