- From: Detlev Fischer <detlev.fischer@testkreis.de>
- Date: Thu, 25 Apr 2019 12:07:52 +0200
- To: public-silver@w3.org
- Message-ID: <b314983f-118e-f59e-a3d7-67e412e4008e@testkreis.de>
Hi Alastair, Charles, list,
I still feel uneasy about including methods used by an organisation at
the design (or re-design) stage in a conformance evaluation. There are
several problems with that:
1. Evaluators will often not have the domain knowledge to assess, say,
whether a grouping of navigation items (e.g. products) works well
for the target audience (think of a chemicals supplier)
2. An expert might arrive at as good a navigation structure as a
group that went through a card-sorting exercise (if one were to
carry out user testing to assess the quality of the result) - why
should the fact that the structure was arrived at via card sorting
lead to a higher score if what counts is the accessibility/usability
of the site for the end user?
3. The fact that changes were made as a result of testing is the back
story of the site being conformance-evaluated - for the user it has
no impact on the actual experience of the content. So why should it
appear in a conformance result? (I do not mind - in fact I welcome -
if those measures appear in another kind of rubric that may be
labeled "accessible organisational processes", "proactive
organisation", "company digs accessibility" or whatever.)
I think there must be a clear separation between a conformance score -
one that can ideally be arrived at by any external evaluator, based on
published techniques and common tools, without the need for domain
knowledge and without access to company internals - and something that,
for want of a better word, I will call a 'proactivity score', which is
derived from insight into the organisation's internal processes. These
may be shown as "stacking up" - conformance leading to "bronze" and the
'proactivity score' adding points for "silver" and finally "gold" - but
I would personally prefer a side-by-side presentation to make it clear
that these address different aspects: site properties on the one hand,
organisational properties on the other.
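To illustrate the difference between the two presentations, here is a
rough sketch of both - a purely hypothetical model, where all thresholds,
names and scoring rules are my own assumptions, not a Silver proposal:

```python
def award_level(conformance_score, proactivity_score):
    """Hypothetical 'stacking' model: conformance alone can earn
    bronze; proactivity points are needed for silver and gold.
    All thresholds are illustrative assumptions."""
    if conformance_score < 70:   # assumed minimum for any award
        return None
    if proactivity_score >= 60:
        return "gold"
    if proactivity_score >= 30:
        return "silver"
    return "bronze"


def side_by_side(conformance_score, proactivity_score):
    """Alternative presentation: report site properties and
    organisational properties separately instead of folding
    internal process into the site's conformance result."""
    return {"site": conformance_score, "organisation": proactivity_score}


print(award_level(85, 40))    # silver
print(side_by_side(85, 40))
```

The stacking variant bakes organisational process into the award level;
the side-by-side variant keeps the two concerns visibly distinct.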
Just to be clear: I welcome extending the scope of conformance testing
to aspects that are 'hard(er) to measure' but still do not rely on
knowledge of company internals. These might include "proximity of
related information" or "concise navigation structure" (e.g., not more
than x elements per level, consistent display of hierarchy in nested
structures and of process steps in processes, etc.) - aspects that may
often already enter assessments under criteria like 3.2.3 Consistent
Navigation, but may not be explicitly measured. For these, the
challenge will be to find a way to integrate measurement scales with
the current PASS/FAIL approach.
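One way such a graded measurement could coexist with the PASS/FAIL model
might be to compute a score on a continuous scale and then collapse it
through a threshold. A rough sketch, where the "x elements per level"
limit and both thresholds are purely illustrative assumptions:

```python
def nav_conciseness(items_per_level, max_items=10):
    """Map a measurable property (navigation items per menu level)
    onto a graded 0.0-1.0 scale. max_items is an illustrative
    assumption for the 'not more than x elements per level' idea."""
    worst = max(items_per_level)
    if worst <= max_items:
        return 1.0
    # Degrade linearly beyond the limit, with a floor at 0.0.
    return max(0.0, 1.0 - (worst - max_items) / max_items)


def to_pass_fail(score, threshold=0.5):
    """Collapse the graded scale back to the current PASS/FAIL model."""
    return "PASS" if score >= threshold else "FAIL"


score = nav_conciseness([6, 9, 14])  # deepest level has 14 items
print(to_pass_fail(score))           # PASS
```

The graded score could feed into a points-based total while the
thresholded value keeps compatibility with existing PASS/FAIL reporting.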
Detlev
On 25.04.2019 at 11:30, Alastair Campbell wrote:
>
> Hi Charles,
>
> Thank you for the extra context, and I hope it didn’t come across as
> negative. I do appreciate the thought-process, I was just worried
> about how it might be taken.
>
> Taking a little step back to consider the various UX/User Centered
> Design methods, I’ve long been of the opinion that:
>
> * UCD is good for optimising for the majority of people within a
> particular context / domain.
> * Accessibility guidelines (so far) have been good for ensuring that
> the interface works for the most people possible.
>
> In our work UX tends to lead accessibility, so you define a good
> solution for the task, then make sure it is as robust & accessible as
> possible. (They aren’t separate, but iterative. Oh, and obviously
> people with disabilities are part of the user-research, but we work
> out the task first, then the interface.)
>
> Where the UCD methods shine is dealing with the context of the
> problem, and getting out of your own mindset & assumptions.
>
> That means they are a method for getting to a more optimal solution,
> but not a way to /compare/ solutions. That’s a really tough problem,
> as the context matters hugely, which is something that world-wide
> guidelines cannot take account of.
>
> As a quick example of ‘context’ differences, the main UX problems you
> work on in e-commerce are Information Architecture based, such as how
> to display 10,000 products in a way that people can navigate to what
> they want. Whereas something like web-based email is much more of an
> interface problem.
>
> > The general idea would simply be to encourage practices that go
> beyond the minimum, but not require them.
>
> In that context I can see at least one way forward, where there is a
> set of guidelines oriented around usability/IA that are process-based.
>
> For example, the guideline could be (quick hypothetical example):
>
> * Users can understand and use navigation elements which have more
> than 10 options.
>
> The method(s) would be /process/ based, like ISO 27001 where you
> essentially self-mark but have to show improvement each year.
>
> For example:
>
> * Conduct a card-sorting exercise to establish the best groupings
> and terms for the navigation.
> * Conduct a menu test to optimise the terms used in the navigation.
> * Conduct a heuristic evaluation of the navigation’s placement and
> design.
>
> The ‘conformance’ for each of these is that you record that this
> method has been used, and perhaps what changes you made as a result,
> or even /that/ you made changes as a result.
>
> Then Silver is not trying to define a ‘good’ or replicable result
> across the multitude of different websites, but to provide a way of
> scoring higher for organisations following best-practice UCD. In the
> context of ‘going above the baseline’, that makes sense to me.
>
> I think it also helps to have these tasks as methods under particular
> guidelines, rather than as an overall methodology for testing all the
> guidelines. Then they could mix with some baseline methods from WCAG
> 2.x as well, with these methods there for higher scoring.
>
> Cheers,
>
> Alastair
>
--
Detlev Fischer
Testkreis
Werderstr. 34, 20144 Hamburg
Mobil +49 (0)157 57 57 57 45
http://www.testkreis.de
Consulting, testing and training for accessible websites
Received on Thursday, 25 April 2019 10:08:32 UTC