- From: John Foliot <john@foliot.ca>
- Date: Thu, 6 Oct 2022 14:41:25 -0400
- To: Alastair Campbell <acampbell@nomensa.com>
- Cc: "w3c-waI-gl@w3. org" <w3c-wai-gl@w3.org>
- Message-ID: <CAFmg2sVfdbQsCWJUFe7K=vr22hv76vi5C7XN4K=3=gAK9NgePA@mail.gmail.com>
> the importance of various features is evaluated by stakeholders all the time, it is part of the business and (if they do it) UX work.

The problem here is identifying stakeholders. On balance, it seems that *any* issue could be viewed as severe (to some degree) for *some* users, and there are times when a user need for one functional requirements group may directly conflict with the needs of another functional requirements group. I'm not saying this is easy (it isn't, it's really, really hard), but it is also a reality.

Shadi presented the use-case/analogy of a mirror situated too high on a wall... inconvenient for most users in wheelchairs (and far less severe than, for example, the door to the room with the too-high mirror being too narrow for a wheelchair user to pass through), but that same 'inconvenience' may potentially be life-threatening for some (Gregg: "*For example someone who has to attach a medical device to themselves and, if it becomes detached, has to rely on the mirror in the public place to replace it.*") Is that an edge case? Perhaps (OK, yes), but it also underscores that the contextual needs are, as I stated previously, ultimately linked to the individual.

> You evaluate each issue in context, noting how severe an issue it would be for each task.

Severe an issue to whom? You noted multimedia content in your scoping sample, which presents an interesting and obvious reflection... missing captions are *severe* for users who are deaf or hard-of-hearing, but have minimal impact on blind and visually impaired users (who are instead likely most concerned with audio descriptions). So in WCAG 3, will missing captions be a low-severity issue for blind users, and missing audio descriptions a low-severity issue for deaf and HoH users... OR are missing captions and/or missing audio descriptions equal in severity regardless of user group?
And if those captions contain a significant quantity of numbers (yes, a strawman argument, but not unrealistic), what is the impact on users who rely on captions but also have to deal with issues related to dyscalculia? (Or is that type of compound barrier out of scope?) I'd suggest that captions with a lot of numbers will be more inaccessible to those users than captions that have no numbers. *Everything* is contextually relative to the individual user.

> So the tricky aspect of applying severity at the test level is whether that would need to be normative, and how you score (or otherwise use the severity) for passing guidelines.

Here we are mostly in agreement, although if it is not normative then how does it factor into conformance and scoring? Industry feedback to date has been fairly consistent in noting that they wish to avoid subjective evaluations (which are not consistently repeatable, and potentially suffer from the same "lack of imagination" Gregg noted earlier).

JF

On Thu, Oct 6, 2022 at 1:21 PM Alastair Campbell <acampbell@nomensa.com> wrote:

> > Which (I will suggest) is also why traditionally issues related to "screen readers" are addressed before issues related to cognitive disorders. That too needs to be acknowledged and so far I have not read any proposal to address that imbalance.
>
> The sub-group proposal was to map the critical issues against each functional need group. Whether we try to balance or score them is another issue, but at least it would be transparent.
>
> What we can’t address on a per-test basis is things which build up with multiple instances (the “spoons” issue).
>
> The rest is on the other approach to issue severity, around context.
>
> > Many 'shopping sites' are more than just pure-play product listings and shopping cart functions…
>
> Sure, but the importance of various features is evaluated by stakeholders all the time, it is part of the business and (if they do it) UX work. It can be done by value (to the business), overall usage, etc.
>
> I’m not suggesting it is the context of the user, but the context of the issue within the features of the website.
>
> I’m obviously not explaining this well, but a useful parallel is the sampling methodology in WCAG-EM:
>
> https://www.w3.org/TR/WCAG-EM/#step2
>
> So imagine:
>
> - You are doing an additional post-test process to assess severity / priority.
> - You are the site owner, or you are doing this in collaboration with them.
> - You select the key tasks / processes that are the most used, or most important for the business.
> - You also are required to select features or tasks directly related to disability, e.g. finding audio-described videos.
> - You evaluate each issue in context, noting how severe an issue it would be for each task.
> - You categorise each issue and use that to inform your backlog priority.
>
> We go through this sort of process for a lot of our testing work, as the first question when receiving audit results is usually “what are the most important ones to fix”.
>
> As an external provider it isn’t always obvious what the key features are, but if you’re building the product you are generally very aware of that! So, it is best done as a collaborative process (or in-house if you have the expertise).
>
> That is not a user-focused process, it isn’t a substitute for usability testing or other user research. However, as a process it is feasible to do, and something that I think **could** be added at a silver/gold level.
>
> It is something that will be further explored by the Issue Severity sub-group.
>
> Kind regards,
>
> -Alastair
>
> --
>
> @alastc / www.nomensa.com

--

*John Foliot* | Senior Industry Specialist, Digital Accessibility | W3C Accessibility Standards Contributor | "I made this so long because I did not have time to make it shorter." - Pascal

"links go places, buttons do things"
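[Editor's note: the severity-in-context process discussed in this thread (rate each issue per key task and per functional-need group, then use that to order the remediation backlog) can be sketched in a few lines of code. This is purely an illustrative sketch; the issue names, task names, group labels, and the 1-3 severity scale are assumptions, not anything defined by WCAG 3 or the Issue Severity sub-group.]

```python
# Illustrative sketch only: rate each issue per (task, functional-need group)
# pair, then sort the backlog by worst-case severity. The scale and all
# example data are assumptions, not WCAG 3 definitions.
from dataclasses import dataclass, field

SEVERITY = {"low": 1, "medium": 2, "critical": 3}  # assumed scale

@dataclass
class Issue:
    description: str
    # Severity rated per (task, functional-need group) pair, e.g.
    # {("watch product video", "deaf/hard-of-hearing"): "critical"}
    ratings: dict = field(default_factory=dict)

    def worst(self) -> int:
        """Worst-case severity across every task and group rated."""
        return max((SEVERITY[s] for s in self.ratings.values()), default=0)

def prioritise(issues):
    """Order the remediation backlog by worst-case severity, highest first."""
    return sorted(issues, key=Issue.worst, reverse=True)

# Example reflecting the captions discussion above: the same test failure
# carries a different severity for different functional-need groups.
issues = [
    Issue("Missing captions", {
        ("watch product video", "deaf/hard-of-hearing"): "critical",
        ("watch product video", "blind/low-vision"): "low",
    }),
    Issue("Low-contrast footer links", {
        ("browse products", "low-vision"): "medium",
    }),
]

for issue in prioritise(issues):
    print(f"{issue.worst()}: {issue.description}")
```

Note that taking the worst case across groups is itself a policy choice; averaging instead would bury exactly the per-group severities John is asking about.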
Received on Thursday, 6 October 2022 18:41:56 UTC