Comments on your contrast ratio article

Hi Dr Waller,

I was reading your interesting article

Does the contrast ratio actually predict the legibility of website text?

I didn't see any obvious place to provide general feedback, such as a 
GitHub repo, so am sending some comments that I had via email. Comments 
on the two exercises I posted to your LinkedIn.

As background, I am a Technical Director at W3C, CSS Working Group staff 
contact, and co-editor of CSS Color levels 3, 4, and 5. I'm also the W3C 
liaison to the International Color Consortium (ICC).

I was pleased to see your clear and succinct summary:

> For dark background colours (RGB<118), both algorithms predict that 
> white text is more legible. For bright background colours (RGB>163), 
> both algorithms predict that black text is more legible. However, for 
> backgrounds with RGB values between 118 and 168, the two algorithms 
> contradict each other. For any background within this range, ‘WCAG 
> Contrast Ratio’ predicts that black text is more legible, whereas 
> ‘APCA Lightness Contrast’ predicts that white text is more legible. 

Given the voluminous discussion around the use of APCA in WCAG Silver, 
terse summaries like this are very helpful.
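The WCAG side of that boundary (white text below grey level 118, black 
text at and above it) can be reproduced with a short sketch of the WCAG 
2.x contrast ratio; the relative luminance weights sum to 1, so for a 
grey background the luminance is just the linearized channel value 
(illustrative only):

```python
def srgb_to_linear(c8):
    """Piecewise sRGB EOTF: 8-bit channel value to linear light."""
    c = c8 / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def wcag_contrast(l1, l2):
    """WCAG 2.x contrast ratio between two relative luminances."""
    hi, lo = max(l1, l2), min(l1, l2)
    return (hi + 0.05) / (lo + 0.05)

# For a grey background (R=G=B=g) the luminance weights sum to 1,
# so relative luminance is just the linearized channel value.
for g in range(256):
    y = srgb_to_linear(g)
    white = wcag_contrast(1.0, y)   # white text on this grey
    black = wcag_contrast(y, 0.0)   # black text on this grey
    if black > white:
        print(f"black text wins from grey level {g}")  # g == 118
        break
```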

I was also pleased to see the in-browser legibility experiments. These 
both help validate the model for those with typical color vision, and 
help with testing for those with atypical color vision.

Your comments on the need to measure reading speed, as well as 
legibility, were well made and I would also like to see more testing in 
this area.

On to my specific comments, which I hope you find constructive and helpful:

1) sRGB rather than just "RGB".

From CSS1 until CSS Color 3, all colors in Web content were specified 
in sRGB, the exception being colors in raster images with embedded ICC 
profiles. Earlier on, the accuracy in representation of those colors was 
very variable (websites "designed for Mac" with a different gamma, and 
so on) but nowadays modern browsers represent such colors consistently, 
with the notable exception of Firefox in its default configuration.

Since 2016 there has been increasing use of the display-p3 colorspace, 
both in native content and in Web content in HTML, and now in Canvas as 
well. This corresponds to widely-available wide-gamut screens on 
laptops, TVs and phones. TV and streaming services are also using the 
even wider-gamut Rec. BT.2020 colorspace. CSS Color 4 adds a way to 
explicitly use these color spaces for specifying, modifying and mixing 
colors.

Thus, it would have been helpful if your article had referred early on 
to sRGB and noted that all examples are given in sRGB, rather than in 
some arbitrary or unspecified RGB space.

2) Color appearance models vs. colorimetric models

You wrote:

> Additionally, the examples in this article show that black text on 
> coloured backgrounds becomes considerably more legible when the page 
> background becomes black. At the time of writing in March 2022, both 
> ‘APCA Lightness Contrast’ and ‘WCAG Contrast Ratio’ were two-colour 
> models that do not account for the effect of the page background, 
> which limits the accuracy of the models. 

That is exactly the difference between a colorimetric model (foreground 
and background colors only) and a color appearance model (which also 
considers proximal and surround fields, and the overall room 
illuminance). Color appearance models give better predictions, but 
require more measurements to characterize the viewing environment and 
give worse results if those measurements are estimated or incorrect.

While it is certainly valuable to use such models to standardize user 
testing in a controlled environment, it remains unclear how to integrate 
a color appearance model into general Web development which would need 
to take into account the complete web page, plus other windows on the 
same screen and also the room illuminance and the current adapted white 
point. Such complexity is beyond the scope of current color appearance 
models.
3) "Color blindness" vs. atypical color vision.

You wrote:

> The eye perceives light using 3 different cones, one of which is most 
> sensitive to red light, another to green light, and another to blue 
> light. 

They really don't, and such simplifications are more harmful than 
helpful. In particular, on hearing that the eye has "RGB sensors" many 
people readily assign the percentages of R G and B in a color to the 
amount perceived by each cone, which is totally not the case.

The peak sensitivities of the S, M and L cones are at 445nm (violet), 
535nm (green) and 570nm (green-yellow). All three cone types are 
mutations of a single original cone, the S type being characterized by 
sensitivity dropping off after 450nm, in contrast to the 535nm and 
570nm peaks of the other two types.

The luminance, red vs. green and blue-violet vs. yellow signals in the 
retina are formed by addition and subtraction of these signals in the 
retinal ganglion cells.
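As a rough sketch of that opponent recombination (the function name and 
the unit-free cone responses are my own illustration, not a 
physiological model):

```python
def opponent_channels(L, M, S):
    """Crude opponent-channel sketch from cone responses (illustrative).

    Real retinal ganglion cells use weighted, nonlinear combinations;
    this only shows the sign structure of the three channels.
    """
    luminance = L + M            # achromatic channel; S contributes little
    red_green = L - M            # degraded in protan/deutan vision
    blue_yellow = S - (L + M)    # degraded in tritan vision
    return luminance, red_green, blue_yellow
```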

> Protanopia occurs when the red-cones are impaired, Deuteranopia occurs 
> when the green-cones are impaired, and Tritanopia occurs when the 
> blue-cones are impaired.

Yes (although L, M and S cones), but more commonly we see partial loss 
of discrimination in one color pair: protanomaly or deuteranomaly 
giving reduced red-green discrimination rather than a complete lack of 
red-green discrimination; and tritanomaly giving reduced 
yellow-blue/violet discrimination.

> Both APCA and WCAG predict the lightness contrast, and ignore the hue 
> contrast completely. 

Yes, they do. It isn't clear to what extent ignoring chromatic contrast 
is a problem though, because chromatic contrast is unlikely to ever 
fully compensate for inadequate lightness contrast. So the simplified 
lightness-only contrast model has a small, variable under-estimation of 
contrast compared to a full model.

> WCAG uses a Luminance formula that is approximately 0.2R^2.4 + 
> 0.7G^2.4 + 0.07B^2.4 .
> APCA uses a Luminance formula that is approximately 0.2R^2.2 + 
> 0.7G^2.2 + 0.07B^2.2 

I often see this quoted as "the WCAG formula" which is odd and 
incorrect. WCAG didn't invent it, although it is where many people first 
saw these particular coefficients.

The CIE defines luminance in terms of the CIE standard observer and the 
CIE XYZ linear-light space; luminance is (deliberately chosen to be) 
the Y component of XYZ.

The conversion from a given RGB space, such as sRGB, is defined firstly 
by the Electro-Optical Transfer Function ('undoing gamma encoding') to 
convert to linear light. For sRGB this is defined by the sRGB standard, 
which (like many other broadcast-derived color space standards) uses a 
linear portion at very dark levels, to limit noise amplification, 
followed by a power function from medium-dark to the lightest colors.
WCAG 2.0 quoted values from an obsolete version of the sRGB standard, 
with a known error, and then had to cite a specific working draft of 
sRGB rather than the final standard. However, in practice that error 
was small, and not observable as an error in 8-bit-per-component 
encodings; and it has been corrected in the latest WCAG 2.x draft 
(though they have yet to update the references).

Some people approximate this transfer function with a simple power law. 
The best fit (lowest error) is an overall gamma of 2.223. Other people 
(and this applies to the APCA algorithm) simply copy the 2.4 exponent 
value from the full sRGB equation, without considering how the piecewise 
formula affects it, producing much larger errors. Both approximations 
under-estimate the relative luminance for dark colors. The 2.4 
approximation greatly under-estimates it for all colors, being worst in 
the mid range. It is not clear whether this is a deliberate change or 
an inadvertent error.
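This can be shown numerically (a sketch; the piecewise function is from 
the sRGB standard, and the three sample points are my own choice):

```python
def srgb_eotf(c):
    """Piecewise sRGB EOTF: encoded [0,1] value to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Compare the standard piecewise curve against the two common
# power-law approximations at a dark, mid and light encoded value.
for c in (0.1, 0.5, 0.9):
    print(f"{c}: piecewise {srgb_eotf(c):.4f}  "
          f"gamma 2.2 {c ** 2.2:.4f}  gamma 2.4 {c ** 2.4:.4f}")
```

At the mid value the 2.4 power law is several times further from the 
piecewise curve than the 2.2 best fit; at the dark value both fall 
below it.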

Secondly, a conversion from linear-light RGB to CIE XYZ uses a 3x3 
matrix derived from the red, green, blue and white chromaticities. So 
for sRGB that is again defined by the chromaticities in the sRGB 
standard, which are in fact the same as those in the ITU Rec. BT.709 
standard for HDTV. If we only want Y, this reduces down to the three 
weights cited. Since the standard does not define the matrix but instead 
defines the chromaticities, there are small variations in practice due 
to round-off error or variations in the precision of the white chromaticity.
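For example, the three Y weights can be re-derived from the sRGB/BT.709 
primary chromaticities and the D65 white point (a sketch; the 4-digit 
D65 chromaticities are assumed, and using more or fewer digits is 
exactly the kind of small variation mentioned above):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def xy_to_xyz(x, y):
    """Chromaticity (x, y) to XYZ with Y = 1."""
    return (x / y, 1.0, (1 - x - y) / y)

# sRGB / BT.709 primaries and the D65 white point
R, G, B = xy_to_xyz(0.64, 0.33), xy_to_xyz(0.30, 0.60), xy_to_xyz(0.15, 0.06)
W = xy_to_xyz(0.3127, 0.3290)

# Solve [R G B] * s = W for the per-primary scale factors s
# (Cramer's rule). Because each primary column has Y = 1, the
# relative-luminance weights are exactly s.
M = [[R[i], G[i], B[i]] for i in range(3)]
d = det3(M)
weights = []
for col in range(3):
    Mc = [row[:] for row in M]
    for i in range(3):
        Mc[i][col] = W[i]
    weights.append(det3(Mc) / d)

print([round(w, 4) for w in weights])  # [0.2126, 0.7152, 0.0722]
```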

(Sorry for the big digression on inaccuracies in the APCA luminance 
calculation, but it is germane to my next point).

> If these models were to be adapted to specifically consider 
> red-impaired cones, the multiplier in front of the R term would likely 
> be much closer to 0. 

The spectral responses of human cones and of monitor LCD or OLED 
primaries are very different. Approximating the HVS by simply 
manipulating sRGB channel values will not produce an accurate model. 
Instead, rather than the CIE standard 2 degree observer, an alternate 
observer model should be used to convert spectral values to a modified 
XYZ space.

Chris Lilley
Technical Director @ W3C
W3C Strategy Team, Core Web Design
W3C Architecture & Technology Team, Core Web & Media

Received on Sunday, 5 June 2022 23:09:34 UTC