Contrast 695 (Was Re: A color tutorial from Tom Jewett)

Hello Gregg, I am a new member of the Low Vision Task Force, but I am also the one who brought this issue forward when I opened issue #695 on GitHub about two months ago: https://github.com/w3c/wcag/issues/695

As you can see in that thread, I have been doing substantial research on the subject, and I’ve posted a great deal of commentary and experimental examples there. I’ve also commented at length on the related issue #665.

Additionally, I have a page with links to some of the experiments I am conducting: https://www.myndex.com/WEB/Perception and an account on ResearchGate.

I am taking this seriously, researching thoroughly and with due diligence, with the goal of creating a more useful metric that more designers will accept. The GitHub thread contains experiments and demonstrations of how the current methods produce flawed results, and there is further commentary on my pages and on my ResearchGate account, but the GitHub thread is the “main compendium” summarizing much of the work thus far.


> On Sun, May 26, 2019 at 11:19 PM Gregg Vanderheiden RTF <gregg@raisingthefloor.org> wrote:
> Hi Wayne    (sorry tired) 
> Here is some information that might be helpful. 
> This topic seems to come up again every few years. 
I hope you don’t mind me mentioning this, but if this issue comes up on a regular basis, it is probably because there are unresolved problems such as the ones I discuss on GitHub. Perhaps in the past the objections came from people who were unable to examine and research the issue in depth, but I can, and I am actively doing so.

> Before diving into it again — it might be helpful to know all the work and research that went into developing the measure in the first place. 
This is good; I’ve been asking for this on GitHub, and it would be very useful to know how this was developed, along with the supporting data. It would be *ideal* if I could see the empirical evidence that is referred to (but not cited) in the WCAG.


> It takes into account much more than most measures of contrast do - - including both low vision and the different types of color blindness. The current contrast measure was developed based on both international standards and research on low-vision and color blindness - and was done in collaboration with research scientist at the Lighthouse for the blind.  Over a year was spent on researching and developing it.   It was based on international standards and then adjusted to control for legibility and contrast when the different types of color blindness and low vision were applied.     We did this work because we were unable to find any other researchers who had done any work to account for these when coming up with their contrast measures. 

I am assuming you mean Dr. Arditi. I have read many of his research papers (among others), and I know he is well respected. However, I have never found anything in the papers I have read that indicates the current math or approach.

It would be extremely useful for me to review the research/studies/trials to better understand the basis for what is presented. The current “Understanding” document has some odd statements in it that are not supported by published research.
>  The current measure takes into account the following things  
> Reseach on standard contrasts levels
Yet standard contrast levels are not being used?

> Research quantifying the need for increased contrast with reduced visual acuity 
I would love to see this research. Reduced visual acuity is best helped by increased stimulus size, adequate surround (padding; see Bartleson and Breneman surround effects), and total luminance relative to ambient.


> The quantification of the differences in contrast perceived with different color combinations for people with different types of color vision differences (including Protan, Deutan, Tritan, and Mono or Achro (no color) vision differences.

Hmmm. Okay, but the contrast equation is a simple ratio of luminances with a tiny offset. There is no color involved.

  C = (Lhi + 0.05) / (Llo + 0.05)   
  
No color. And the sRGB equation to get Y (luminance) is nothing more than the standard sRGB coefficients and math (the incorrect threshold that the WCAG lists notwithstanding).

In short, there is nothing “special” about the WCAG equations. They do nothing special with regard to color: they work only with luminance and luminance contrast, and all color information is discarded when converting to Y. That is actually fine, because luminance contrast (or a perceptually uniform model), not color contrast, is the appropriate measure. But don’t imagine this math is doing anything “special” for CVD (color vision deficiency) with this math.
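For reference, here is a minimal sketch (in Python, with my own function names) of the two calculations described above: the standard sRGB-to-luminance conversion and the WCAG simple ratio. The linearization threshold is a parameter because the WCAG text prints 0.03928 (from the obsolete draft) while IEC 61966-2-1 specifies 0.04045; for 8-bit input the two give identical results.

  # Minimal sketch of the WCAG 2.x math: sRGB channel -> linear,
  # luminance Y from the standard coefficients, then the simple ratio.

  def srgb_channel_to_linear(c8: int, threshold: float = 0.04045) -> float:
      """Linearize one 8-bit sRGB channel. WCAG prints 0.03928 (obsolete
      draft); IEC 61966-2-1 specifies 0.04045. Identical for 8-bit input."""
      c = c8 / 255.0
      return c / 12.92 if c <= threshold else ((c + 0.055) / 1.055) ** 2.4

  def relative_luminance(r: int, g: int, b: int) -> float:
      """Y from the standard sRGB coefficients; all color information is
      discarded at this point."""
      return (0.2126 * srgb_channel_to_linear(r)
              + 0.7152 * srgb_channel_to_linear(g)
              + 0.0722 * srgb_channel_to_linear(b))

  def wcag_contrast_ratio(y1: float, y2: float) -> float:
      """(Lhi + 0.05) / (Llo + 0.05); capped by construction at 1.05/0.05 = 21."""
      hi, lo = (y1, y2) if y1 >= y2 else (y2, y1)
      return (hi + 0.05) / (lo + 0.05)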

And that’s okay; in reality, you don’t have to. There is plenty of research on these deficiencies indicating that adequate luminance contrast meets the needs of at least the dichromatic types (protan, deutan, tritan), though protans see pure red as black, so avoiding pure red/black combinations helps them. Cone monochromacy is rare; there too, red appears as black, and greens are very dark.

Green makes up about 71% of luminance for normal vision, so cone monochromats with only blue cones and rods are at a disadvantage, and rod monochromats have even further problems due to the resulting photophobia. But those are impairments outside of anything a designer can help with (other than avoiding pure red and black).

Back to the Math

Also, putting +0.05 on both sides of the slash is a little odd; it is unsupported by any research or purpose I have seen elsewhere, and it does little except limit the maximum reported contrast to 21:1.

Why not apply the offset to the divisor only? This is a little better, but still not the best answer:

  C = (Lhi) / (Llo + 0.05)    

Then there is the Modified Weber:

  C = (Lhi - Llo) / (Lhi + 0.1)   

This is better, and it is supported by research. But since I am now involved in this research, I have created further advancements and enhancements targeted specifically at graphically rich web pages, which have more complex contrast requirements. CE14, CE15, and onward use various perceptually uniform models; at the moment I am finalizing experiments on the math.
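For comparison, here is a minimal sketch (same Python conventions as the earlier sketch; the CE-series perceptually uniform experiments are not reproduced here) of the two alternatives just mentioned: the divisor-only offset and the modified Weber.

  def contrast_offset_divisor_only(y_hi: float, y_lo: float) -> float:
      """Offset on the divisor only: C = Lhi / (Llo + 0.05)."""
      return y_hi / (y_lo + 0.05)

  def contrast_modified_weber(y_hi: float, y_lo: float) -> float:
      """Modified Weber: C = (Lhi - Llo) / (Lhi + 0.1)."""
      return (y_hi - y_lo) / (y_hi + 0.1)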

I will present the equation as soon as I complete a few more tests, then you guys can bash on it.

Also in the works is a web-based contrast “test”, eventually intended for public use, to collect a LARGE sample size of relevant perception data.

> The range of contrast that would allow three items to maintain color contrast with each other.  (That is -   A contrasts sufficiently with B which contrasts sufficiently with C  without A and C having to be pure black and white. 
Not with the WCAG AA math, you can’t. A is going to be white and C black, with the middle value B near #767676, if using the AA ratio; and the AAA ratio cannot do three colors as described at all.

Using grey: A: #FFFFFF, B: #767676, C: #040404. (#040404 is pure black for all practical purposes; it is about 0.12% luminance, which is well below ambient.) This is the 4.5:1 WCAG math, and it is limited to this narrow range. Designers are pretty vocal about how much they dislike this math.
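Assuming the helper functions sketched earlier in this email, the chained example can be checked directly:

  # Checking the three-color chain with the WCAG AA math
  # (uses the relative_luminance / wcag_contrast_ratio sketches from above).
  a = relative_luminance(0xFF, 0xFF, 0xFF)   # #FFFFFF
  b = relative_luminance(0x76, 0x76, 0x76)   # #767676
  c = relative_luminance(0x04, 0x04, 0x04)   # #040404

  print(wcag_contrast_ratio(a, b))   # ~4.54:1
  print(wcag_contrast_ratio(b, c))   # ~4.51:1
  print(wcag_contrast_ratio(a, c))   # ~20.5:1, near the 21:1 maximum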

> And the full range of colors that would be possible and still meet any color contrast requirements.   (In WCAG’s case   4.5:1 and 7:1) 
And here’s a funny thing about the WCAG math: it rejects a lot of bright color pairs that should pass, and it passes a lot of dark color pairs that should fail. 

Some things to consider for related future standards:
The WCAG seems to have nothing to say about “padding” (see Bartleson and Breneman surround effects, aka local adaptation). The amount of space around text is critical to perceived contrast if the text’s container DIV is itself against a background of substantial contrast.
The weight of a font versus its size is also critical: below a certain weight, the rasterizer’s anti-aliasing blurs the font into the background, and it loses the contrast of the color it was assigned.

> Any new efforts to revisit should be at least as thorough and take all of these into account quantitatively.   

I have been, and I have left much of it public on GitHub in the spirit of open discussion. I am pursuing this with all due diligence, and I will provide a working, robust solution that is easy to use and implement.


> As to the age of the tool — we are using tools that are hundreds of years old in science all the time. 
>  The age is not really relevant.

I mostly agree - CIEXYZ was developed in 1931, and it is still the standard reference device-independent color space. 

HOWEVER, for standards that rely on obsolete technology, age is VERY relevant.

We no longer use CRT displays. And we especially no longer use CRT displays that only do green and black! Yet that was the common technology in 1988, when one of the cited standards was created. Monitor technology standards from before circa 2005 are obsolete and no longer authoritative.

Also, the WCAG cites the IEC standard for sRGB, yet it evidently was never consulted: the WCAG spec uses the wrong math from the obsolete working draft rather than the correct embodiment from the IEC standard. As it happens, the incorrect sRGB math has no effect on 8-bit data, so by itself it is not a huge deal. The problem is that people look to the W3C/WCAG as an authoritative document, and they frequently cite and reuse this wrong math; these kinds of errors affect credibility.
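For the curious, here is a quick sketch (using the srgb_channel_to_linear helper from earlier in this email) verifying the claim that the two thresholds give identical results for 8-bit data: no 8-bit code value falls between 0.03928 × 255 ≈ 10.02 and 0.04045 × 255 ≈ 10.31, so every value takes the same branch either way.

  # Verify that the draft threshold (0.03928) and the IEC 61966-2-1
  # threshold (0.04045) linearize every 8-bit sRGB code value identically.
  for v in range(256):
      draft = srgb_channel_to_linear(v, threshold=0.03928)
      iec = srgb_channel_to_linear(v, threshold=0.04045)
      assert draft == iec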
> Is there something else that makes you think the old tool is no longer valid?
In addition to my comments in this email, I discuss the problems at length in issue 695, along with examples, references, and experimental results.

> If so — that is where we should start.  With what the perceived problem is with the old tool. 
> What has changed that made it no longer work? 

It never worked. A broken clock is right twice a day, and in this case it is easy to find some colors that “work” despite the inaccuracies. As I have demonstrated in experiments, it is just as easy to break as it is to “use.” Consider also the effect of confirmation bias, and the flexible nature of perceptual contrast: once a stimulus is above a person’s contrast sensitivity threshold by at least 10%, further contrast improvements have a minor, less noticeable effect.

This is actually a fuzzier grey area than may be apparent. Contrast threshold is the only “easy” metric to measure. In normal vision it is about 1%; in moderately weak vision it might be 3%; a more profound impairment could push contrast sensitivity to 10%. But this is just the threshold of barely being able to see the stimulus. Above that is the point of “useful” contrast, where we can read, and above that is an ideal contrast that we can perceive clearly enough to read at maximum speed.

For NORMALLY sighted individuals, maximum reading speed can require a contrast as high as 10:1 (using Weber). See this excerpt from NIH:

(abridged) https://www.ncbi.nlm.nih.gov/books/NBK207559/#ddd00103

Reading is remarkably robust to contrast variations in normally sighted readers (Legge, Rubin, & Luebker, 1987; Legge, Rubin, Pelli, & Schleske, 1985). 

Rubin and Legge suggest that there is a subset of individuals with low vision (with cataract and cloudy media) who are essentially normal readers, except for an early stage of reduction in retinal image contrast. Based on this and other evidence, Leat et al. (1999) suggest that a Pelli-Robson contrast sensitivity score of less than 1.5 would result in visual impairment and a score of less than 1.05 would result in disability.

The Pelli-Robson score represents the logarithm of the subject's contrast sensitivity. Thus a score of 2, indicating a contrast sensitivity of 100, means that the lowest contrast letters the observer can read correctly have a contrast of 1 percent (i.e., 1/100).

Whittaker and Lovie-Kitchin (1993) surveyed the literature on the effects of contrast on reading speed. They defined the “contrast reserve” as the ratio of print contrast to threshold contrast. From their survey of the published data on low and normal reading rates versus text contrast, they concluded that the contrast reserve had to be at least 10:1 for reading at a low-normal speed of 174 wpm, 4:1 to read at 88 wpm, and 3:1 for “spot reading,” i.e., 44 wpm.

For newsprint with a contrast of 70 percent, the reader's contrast threshold would have to be lower than 7 percent to achieve the desired 10:1 reserve. A contrast threshold of 10 percent corresponds to a Pelli-Robson score of 1.0.
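To make the arithmetic in that excerpt concrete, here is a small sketch (my own framing of the relationships described above): the Pelli-Robson score is log10 of contrast sensitivity, threshold contrast is the reciprocal of sensitivity, and contrast reserve is print contrast divided by threshold contrast.

  def threshold_contrast_from_pelli_robson(score: float) -> float:
      """Score is log10(contrast sensitivity); threshold contrast is the
      reciprocal of sensitivity. A score of 2.0 -> a 1% threshold."""
      return 1.0 / (10.0 ** score)

  def contrast_reserve(print_contrast: float, threshold: float) -> float:
      """Whittaker & Lovie-Kitchin: reserve = print contrast / threshold."""
      return print_contrast / threshold

  # Newsprint at 70% contrast read by someone with a Pelli-Robson score of
  # 1.0 (a 10% threshold) yields a reserve of about 7:1, short of the 10:1
  # needed for low-normal reading speed (~174 wpm).
  t = threshold_contrast_from_pelli_robson(1.0)   # 0.10
  print(contrast_reserve(0.70, t))                # ~7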


I have loads more to discuss, but I will continue later. FWIW, I became involved with the W3C/WCAG specifically because of this issue, and as I researched it I saw even more opportunities to advance the cause of visual accessibility.

Thank you!

Andy



Andrew Somers
Senior Systems Engineer
Myndex Technologies <http://www.myndex.com/>
Box 1867, Hollywood, Ca. 90078
213.448.4746
andy@myndex.com





>  
> All the best. 
>  
> Gregg
>  
>  
>  
> On May 23, 2019, at 2:53 PM, Wayne Dick <wayneedick@gmail.com> wrote:
>  
> I think it is time to look at contrast and color.
> Our formula may be the one, but it may not. This would really be a research effort. 
> As mentioned before, we can calibrate any new test on the same scale we use now so that the user interface of tests won't need to change much. 
> What we need muster is our talent in the mathematics, physics, electrical engineering, vision science, photography and art. 
>  
> There has been enough concern expressed about the current formula that it seems reasonable to review our research and improve it if needed. 
>  
> Maybe we need a different formula. Maybe we need to do more with accessibility testing to ensure standardized evaluation. I just don't know, but I am concerned with the distrust of our numbers. 
>  
> I could use some suggestions about how to proceed organizationally. This is not controversial. We are using a 10 year old tool in rapidly evolving technology. A calm scientific review is in order. Tom Jewett and I are happy to contribute.
>  
> Best to All, Wayne
>  
> Best, Wayne

Received on Friday, 7 June 2019 23:36:44 UTC