
RE: Conformance and method 'levels'

From: Bruce Bailey <Bailey@Access-Board.gov>
Date: Mon, 24 Jun 2019 13:04:13 +0000
To: "Abma, J.D. (Jake)" <Jake.Abma@ing.com>, John Foliot <john.foliot@deque.com>, "Hall, Charles (DET-MRM)" <Charles.Hall@mrm-mccann.com>
CC: Alastair Campbell <acampbell@nomensa.com>, Silver Task Force <public-silver@w3.org>, Andrew Kirkpatrick <akirkpat@adobe.com>
Message-ID: <MWHPR22MB0046069454D234F8B0E066D3E3E00@MWHPR22MB0046.namprd22.prod.outlook.com>
I am just replying to a few bits, so not to the last message in the thread.

Jake, I like what you outline below.  The difficulty, I think, is ensuring that a baseline (close enough to WCAG 2.0 Level AA) is kept while all the other factors also score points.  I think a second currency (for achievements) greatly simplifies this difficulty.

With your strawman below, for example, suppose the “Original WCAG score” is 50/100 – so not really close enough to WCAG 2.0 Level AA – but four other factors score 100/100.  Your net score is then 90/100, which seems pretty good!  But is it?
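The averaging concern above can be sketched in a few lines of Python. This is a hypothetical equal-weight mean, not a formula defined by Silver; the function name and weighting are assumptions for illustration only:

```python
def net_score(scores):
    """Unweighted mean of per-category scores, each out of 100.
    Hypothetical illustration -- Silver defines no such formula."""
    return sum(scores) / len(scores)

# Bruce's example: a weak WCAG baseline of 50, with four other
# factors scoring a perfect 100 each.
scores = [50, 100, 100, 100, 100]
print(net_score(scores))  # 90.0 -- the weak baseline is masked
```

This is exactly the difficulty: with a flat average, four strong categories can hide a baseline score that falls well short of WCAG 2.0 Level AA.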

From: Abma, J.D. (Jake) <Jake.Abma@ing.com>
Sent: Saturday, June 22, 2019 10:33 AM
To: John Foliot <john.foliot@deque.com>; Hall, Charles (DET-MRM) <Charles.Hall@mrm-mccann.com>
Cc: Alastair Campbell <acampbell@nomensa.com>; Silver Task Force <public-silver@w3.org>; Andrew Kirkpatrick <akirkpat@adobe.com>
Subject: Re: Conformance and method 'levels'

Just some thoughts:

I do like all of the ideas from all of you, but are they really feasible?

By feasible I mean in terms of time to test, money spent, the difficulty of compiling a score, and the expertise needed to judge all of this.

I would love to see a simple framework with clear categories for valuing content, like:

  *   Original WCAG score => pass/fail                                          = 67/100
  *   How often do pass/fails occur => not often / often / very often           = 90/100
  *   What is the severity of the fails => not that bad / bad / blocking        = 70/100
  *   How easy it is to finish a task => easy / average / hard                  = 65/100
  *   What is the quality of the translations / alternative text, etc.          = 72/100
  *   How understandable is the content => easy / average / hard                = 55/100

Total = 69/100

And then also think about the feasibility of this kind of measuring.
Questions like: will it take six times as long to test as an audit does now? Will only a few people in the world be able to judge all categories sufficiently?

Received on Monday, 24 June 2019 13:04:40 UTC
