
Re: thoughts points system for silver

From: Léonie Watson <lw@tetralogical.com>
Date: Fri, 19 Jul 2019 12:29:36 +0100
To: John Foliot <john.foliot@deque.com>, "Hall, Charles (DET-MRM)" <Charles.Hall@mrm-mccann.com>
Cc: Chris Loiselle <loiselles@me.com>, Silver Task Force <public-silver@w3.org>
Message-ID: <781a360a-3afe-9be8-f202-96f44b3b6b7f@tetralogical.com>
On 18/07/2019 18:28, John Foliot wrote:
[...]


> Well, actually, in 3rd party development shops, level of effort is 
> measured in hours-to-perform any given task: that is usually how they 
> pay their staff, and bill their clients, and so it is both measurable 
> and important. (At Deque, we routinely bill our clients on a combination 
> of time and materials.)

This is true, but it isn't the same thing.

Trying to describe a metric for Silver based on level of effort would be 
analogous to asking an agency to provide a cost to audit a website, 
without telling them anything about the website in question.

We don't have the ability to judge level of effort on a case by case 
basis. We'd need to come up with some semblance of an average, and that 
would necessarily be inaccurate for almost every scenario.

> 
> When it comes to meeting specific requirements, some are trivially easy 
> to do (adding the language of page declaration to the top of the site's 
> template page(s)), whereas others are significantly harder (producing 
> captions and audio-description resources for multi-media content). 

Trivially easy for whom? This simplification is exactly why gauging 
level of effort won't work in this context.

Adding a lang attribute may be trivial for a small business with a one 
person web team, but it isn't trivial for a large organisation that has 
to renegotiate its contract with the third party supplier that created 
the site before accessibility was a requirement.

I've recently discussed a similar thing with a large organisation. It 
wants to fix a number of missing and/or incorrectly associated form 
labels on its flagship website.

It would take a single developer with complete autonomy over the website 
perhaps two days to fix, test, and deploy the fixes. Let's call that 
£800 all told.

It will take this organisation around 6 to 9 months at a cost of 
£1 million to do the same thing, because:

* The website was produced by a third party and accessibility wasn't in 
the requirements. This means this issue isn't considered to be a defect, 
and so the contract must be renegotiated with the third party supplier. 
That means that legal and procurement are now involved, as well as the 
relevant people from the project team.

* The websites belonging to this organisation use a design system. Like 
many organisations, they intended to produce a single component library 
for use across all websites, but what happened in reality is that each 
website layered additional behaviours on top of the original components. 
This means that before they can change the original components, they 
need to do a gap analysis to understand what, if any, impact that will 
have on all the different flavours of the original being used. This 
means the QA team are now also involved, plus the design team since 
styling is also likely to be impacted by some of the changes to the 
custom form components.

* When the fixes have been implemented in the original form components, 
the organisation needs to do a staged roll out, to make sure that the 
fixes in the original haven't broken any of the different 
implementations on any of their websites. By now the third party has 
been included in the time and effort calculations, as well as the time 
and effort of multiple web teams across the organisation.

This is not in the least bit uncommon in large organisations.

By comparison, the cost of providing a transcript, captions, and AD for 
a video is around $12 per minute (based on current prices from 3Play 
Media). According to YouTube, the average length of its most popular 
videos is about 4.5 minutes [1], so let's call it $54 on average to 
provide the three different formats for the same video.

Arguably, using a service to provide these formats is trivial in terms 
of effort. The burden of cost is subjective, depending on budget 
availability, but it isn't likely to be prohibitive for organisations 
of most sizes either.
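The $54 figure is simply the quoted per-minute rate multiplied by the average video length. As a quick sanity check of the arithmetic (a sketch only; the rate and length are the figures cited above, not independently verified):

```python
# Back-of-envelope check of the captioning cost estimate above.
RATE_PER_MINUTE = 12.00   # USD per minute for transcript + captions + AD (quoted vendor rate)
AVG_VIDEO_MINUTES = 4.5   # average length of popular YouTube videos [1]

average_cost = RATE_PER_MINUTE * AVG_VIDEO_MINUTES
print(f"${average_cost:.2f}")  # $54.00
```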

So how do we reconcile these vastly different realities into something 
we can reasonably apply to any website wanting to conform to Silver?

[...]

> 
> In my proposal, I am simply suggesting that 'effort' be used as a 
> multiplier in a base-score calculation: in my straw-man proposal I 
> suggested 3 levels of easy, harder, hardest. Easy has a multiplier of 1, 
> harder is 2X and hardest is 3X. Then, as we look at individual 
> requirements I am suggesting that impact on a user group or groups (or, 
> more accurately user-requirement(s)) would also be a scoring factor. I 
> had used a proposed level of 1 - 10, where lower benefit requirements 
> have a lower impact value, and requirements with a higher user-benefit 
> have a higher value. Thus the calculation for a base score per 
> requirement would be (benefit to user X effort multiplier = base score).
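For concreteness, the quoted straw-man calculation could be sketched like this (the effort levels and the 1-10 benefit scale are from JF's proposal; the example values are invented):

```python
# Sketch of the proposed base-score calculation:
# base score = benefit to user (1-10) x effort multiplier (easy=1, harder=2, hardest=3)
EFFORT_MULTIPLIER = {"easy": 1, "harder": 2, "hardest": 3}

def base_score(user_benefit: int, effort: str) -> int:
    """Base score for a single requirement under the straw-man proposal."""
    if not 1 <= user_benefit <= 10:
        raise ValueError("user benefit is scored 1-10 in the proposal")
    return user_benefit * EFFORT_MULTIPLIER[effort]

# Hypothetical example: a high-benefit requirement that is "hardest" to meet.
print(base_score(9, "hardest"))  # 27
```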

Setting aside user impact, which I think is a different metric, how 
would you define "easy", "harder", and "hardest"?

> 
> I'll note in closing that this was also why some existing SC in WCAG 2.x 
> are AA versus A (even if the requirements are both important to the end 
> user) - that the impact on the creator was also a consideration in the 
> A/AA/AAA calculation back during the WCAG 2.0 development days.

It's also arguable that it didn't work there either. The WebAIM Million 
accessibility analysis found that 33% of websites are missing document 
lang attributes, and that 68% have missing alt text (another frequently 
cited example of a trivial SC to meet).

Léonie
[1] https://www.minimatters.com/youtube-best-video-length/
> 
> JF
> 
> On Thu, Jul 18, 2019 at 10:34 AM Hall, Charles (DET-MRM) 
> <Charles.Hall@mrm-mccann.com <mailto:Charles.Hall@mrm-mccann.com>> wrote:
> 
>     My understanding is that there is interest (but possibly not
>     consensus) that the practice of usability testing – especially when
>     it includes participation of people with a wide range of functional
>     needs – is a behavior the guideline intends to encourage.
> 
>     What is undecided / not agreed upon is how. If attached to
>     conformance, then it must consider the level of effort and cost
>     associated with that practice, because now there is a specific
>     action dependency on ability to conform (more on effort below). If
>     attached to a second currency, then that currency should have
>     significant value, or there is little to no encouragement.
> 
>     My opinion (and I say this as a UX person) is that testing itself is
>     the wrong emphasis. What the guideline should encourage is outcomes.
>     This point has been made in a few email threads: the act of testing
>     is not an indicator that the results of testing and insights gained
>     were applied or that those changes had any measurable human impact.
>     I also have a pretty strong opinion that the level of effort of the
>     author / creator is both immeasurable and moot. It is possible to
>     create a conforming site {x} ways with {n} effort. It is equally
>     possible to create a non-conforming site with clear barriers {x}
>     ways with {n x n} effort. There is rarely causation or even
>     correlation between effort and outcome, and when there is, it is
>     fairly difficult to measure. It also scales down with maturity – in
>     this case, accessibility maturity. So I could spend months and
>     millions on usability testing and building or modifying a thing
>     based on insights. The next thing I build or modify is going to take
>     less effort to get the same outcome from both reusable patterns and
>     institutional knowledge.
> 
>     *Charles Hall* // Senior UX Architect
> 
>     (he//him)
> 
>     charles.hall@mrm-mccann.com
> 
>     w 248.203.8723
> 
>     m 248.225.8179
> 
>     360 W Maple Ave, Birmingham MI 48009
> 
>     mrm-mccann.com <https://www.mrm-mccann.com/>
> 
>     MRM//McCann
> 
>     *From: *Chris Loiselle <loiselles@me.com <mailto:loiselles@me.com>>
>     *Date: *Tuesday, July 16, 2019 at 10:05 AM
>     *To: *Silver Task Force <public-silver@w3.org
>     <mailto:public-silver@w3.org>>
>     *Subject: *[EXTERNAL] thoughts points system for silver
>     *Resent-From: *Silver Task Force <public-silver@w3.org
>     <mailto:public-silver@w3.org>>
>     *Resent-Date: *Tuesday, July 16, 2019 at 10:04 AM
> 
>     Hi Silver,
> 
>     Just a thought off of today's call:
> 
>     In regard to the point system, would the fact that user testing was
>     completed at a given organization during the development of a
>     product give them extra points vs. not completing user testing at
>     all?
> 
>     For each demographic of user testing, grading all user tests
>     equally, would someone who tests with a user who has limited sight
>     and a user who is hard of hearing not receive as many points as
>     someone who tests with someone who is Blind, someone who has low
>     vision, someone who is Deaf, someone who is hard of hearing,
>     someone with a cognitive disability (etc.)?
> 
>     What if the organization went deep on depth of testing with the user
>     who is Blind and the user who has limited sight, but only went
>     surface level (breadth) with multiple users, each with a different
>     disability, vs. diving deep with two users? Would those be
>     weighted differently? The same? I know there was discussion of
>     ribbons, points, and badges; where would that come into play?
> 
>     Thank you,
>     Chris Loiselle
> 
> -- 
> *​John Foliot* | Principal Accessibility Strategist | W3C AC Representative
> Deque Systems - Accessibility for Good
> deque.com <http://deque.com/>
> 

-- 
@TetraLogical TetraLogical.com
Received on Friday, 19 July 2019 11:30:05 UTC

This archive was generated by hypermail 2.4.0 : Thursday, 24 March 2022 20:31:46 UTC