- From: Chris Lilley <chris@w3.org>
- Date: Thu, 16 Jun 2022 16:15:41 +0300
- To: Sam Waller <sdw32@cam.ac.uk>, Sam Waller <sam.waller@eng.cam.ac.uk>
- Cc: "www-archive@w3.org" <www-archive@w3.org>
- Message-ID: <49b23c49-25c4-8098-e5ff-6c659e8be75f@w3.org>
On 2022-06-09 13:54, Sam Waller wrote:

> Regarding the different cones, this is beautifully described by the
> image below (from
> http://hyperphysics.phy-astr.gsu.edu/hbase/vision/colcon.html)
>
> http://hyperphysics.phy-astr.gsu.edu/hbase/vision/imgvis/colcon.png
>
> I have asked the author of this image for permission to use this image
> in my article, as I think this would be an excellent way of addressing
> your feedback. If you have rights to any similar image that you can
> give me permission for, I'd be glad to incorporate it.

I do have a couple of images, one showing the un-normalized
sensitivities and the other showing them on a log scale, which better
represents how we see things. Not sure if those are of interest.

The one you show here is misleading in a couple of ways:

- The sensitivity of the S cones ("blue"), which is low, is here scaled
  to be the same height as the other two. This gives an inaccurate idea
  of the sensitivity at short wavelengths, which is actually only
  slightly greater than that of the other two cone types. The big
  difference in the S cones compared to M and L is the absence of the
  big green or yellow-green peaks.

- The curve with the 575nm peak is labelled "Red", while 575nm is
  greenish-yellow. For example, try entering 575 into this calculator:
  https://www.luxalight.eu/en/cie-convertor

> My article is aimed at a general audience, so I would prefer to avoid
> introducing L, M, S cones. I have however changed the text so that it
> talks about impaired red-sensitive cones, rather than impaired red
> cones.

That is an improvement, agreed.

> On the topic of the contrast formula, I have been trying to find where
> the formula that WCAG uses to calculate contrast as a ratio came from.
> I traced the reference from the WCAG guidelines, which led to an ISO
> standard that includes the formula, but this ISO standard didn't
> itself give any traceable reference for where it came from.

You did better than I was able to (I don't have a budget to buy ISO
standards).

> I have not thus far been able to find any traceable empirical or
> theoretical evidence to support this formula as a predictor of
> legibility/readability, so if you are aware of any such published
> evidence, then I would be very grateful if you could point me towards
> it, and then I can better update my article to reflect where this
> formula actually came from.

I am interested to know where it comes from as well. Clearly, basing it
on luminance rather than an estimate of perceptually uniform lightness
(such as CIE L* or OKLab L) is incorrect, for a start.

> Regarding the power exponents of the luminance formula, APCA appears
> to use an exponent of 2.2 in its luminance formula, so I'm not quite
> sure I would agree with your implication that APCA copied the 2.4
> exponent.

No, the APCA formula uses a simple power law with a 2.4 exponent:
https://github.com/Myndex/apca-w3#current-version-014-g-w3-beta

WCAG uses the correct sRGB-to-luminance transfer function from the sRGB
standard, which has a small linear portion followed by a scaled and
offset power-law segment. The exponent in that formula is indeed 2.4,
which is where I assume Myndex copied it from, but the best-fit simple
power law to that curve has a 2.223 exponent, which gives a worst-case
0.5% error in the calculated luminance. APCA, with a 2.4 exponent, has a
worst-case 2.5% error in calculated luminance, for mid-tones.
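To make those error figures concrete, here is a minimal sketch (my own,
in TypeScript, not the WCAG or APCA source) that compares the piecewise
sRGB transfer function with simple power laws of exponent 2.223 and 2.4,
reporting the worst-case difference over the 8-bit range as a percentage
of full-scale linear light; the exact figures depend on the error metric
chosen. The full error analysis is at the link below.

```ts
// Piecewise sRGB transfer function (IEC 61966-2-1): a linear segment
// near black, then a scaled and offset power segment with exponent 2.4.
function srgbToLinear(c: number): number {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Simple power-law approximation with a single exponent.
function simplePowerLaw(c: number, gamma: number): number {
  return Math.pow(c, gamma);
}

// Worst-case difference over all 8-bit channel values, as a percentage
// of full-scale linear light.
function worstCaseError(gamma: number): number {
  let worst = 0;
  for (let i = 0; i <= 255; i++) {
    const c = i / 255;
    worst = Math.max(worst, Math.abs(srgbToLinear(c) - simplePowerLaw(c, gamma)));
  }
  return 100 * worst;
}

console.log(worstCaseError(2.223).toFixed(2)); // best-fit exponent: ~0.5, largest for darker values
console.log(worstCaseError(2.4).toFixed(2));   // exponent copied from the piecewise segment: ~2.5, worst in the mid-tones
```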
https://svgees.us/Color/errors.html

> Please let me know if you have any further feedback on any of the
> above, and thank you for the detailed feedback sent so far.

You are most welcome.

By the way, I came across a third estimate of contrast. Google, in
their Material Design system, use a color model called HCT, which is a
hybrid: CIE L* for the lightness (tone) axis, but hue and chroma from
CIECAM16. Their contrast measure is simply the difference in Tone
(CIE L*), which is trivial to calculate without getting into the
complexities of CIECAM16. It does not appear to differentiate between
light text on dark, and dark text on light.

https://material.io/blog/science-of-color-design
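As an illustration of how simple that measure is, here is a rough
sketch (my own, not code from Google's HCT implementation), assuming
Tone is just CIE L* computed from sRGB relative luminance:

```ts
// Tone-difference contrast check, assuming HCT's Tone (T) is simply
// CIE L* computed from sRGB relative luminance. Illustrative sketch only.

function srgbToLinear(c8bit: number): number {
  const c = c8bit / 255;
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(r: number, g: number, b: number): number {
  return 0.2126 * srgbToLinear(r) + 0.7152 * srgbToLinear(g) + 0.0722 * srgbToLinear(b);
}

// CIE L* (lightness) from relative luminance Y, with white at Y = 1.
function cieLstar(y: number): number {
  return y > 216 / 24389 ? 116 * Math.cbrt(y) - 16 : (24389 / 27) * y;
}

// Tone difference is symmetric, so light-on-dark and dark-on-light
// score the same, matching the observation above.
function toneDifference(fg: [number, number, number], bg: [number, number, number]): number {
  return Math.abs(cieLstar(relativeLuminance(...fg)) - cieLstar(relativeLuminance(...bg)));
}

console.log(toneDifference([0, 0, 0], [118, 118, 118]));       // black on mid-grey
console.log(toneDifference([255, 255, 255], [118, 118, 118])); // white on the same grey
```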
> Best wishes
>
> Sam Waller (he, him)
>
> University of Cambridge, Engineering Design Centre
>
> 01223 332826
>
> -----Original Message-----
> From: Chris Lilley <chris@w3.org>
> Sent: 06 June 2022 00:10
> To: Sam Waller <sam.waller@eng.cam.ac.uk>
> Cc: www-archive@w3.org
> Subject: Comments on your contrast ratio article
>
> Hi Dr Waller,
>
> I was reading your interesting article
>
> Does the contrast ratio actually predict the legibility of website text?
> https://www.cedc.tools/article.html
>
> I didn't see any obvious place to provide general feedback, such as a
> GitHub repo, so am sending some comments that I had via email.
> Comments on the two exercises I posted to your LinkedIn.
>
> As background, I am a Technical Director at W3C, CSS Working Group
> staff contact, and co-editor of CSS Color levels 3, 4, and 5. I'm also
> the W3C liaison to the International Color Consortium (ICC).
>
> I was pleased to see your clear and succinct summary:
>
>> For dark background colours (RGB<118), both algorithms predict that
>> white text is more legible. For bright background colours (RGB>163),
>> both algorithms predict that black text is more legible. However, for
>> backgrounds with RGB values between 118 and 168, the two algorithms
>> contradict each other. For any background within this range, ‘WCAG
>> Contrast Ratio’ predicts that black text is more legible, whereas
>> ‘APCA Lightness Contrast’ predicts that white text is more legible.
>
> Given the voluminous discussion around the use of APCA in WCAG Silver,
> terse summaries like this are very helpful.
>
> I was also pleased to see the in-browser legibility experiments. These
> both help validate the model for those with typical color vision, and
> help with testing for those with atypical color vision.
>
> Your comments on the need to measure reading speed, as well as
> legibility, were well made and I would also like to see more testing
> in this area.
>
> On to my specific comments, which I hope you find constructive and
> helpful:
>
> 1) sRGB rather than just "RGB"
>
> From CSS1 until CSS Color 3, all colors in Web content were specified
> in sRGB, the exception being colors in raster images with embedded ICC
> profiles. Earlier on, the accuracy in representation of those colors
> was very variable (websites "designed for Mac" with a different gamma,
> and so on) but nowadays modern browsers represent such colors
> consistently, with the notable exception of Firefox in its default
> configuration.
>
> Since 2016 there has been increasing use of the display-p3 colorspace,
> both in native content and in Web content in HTML, and now in Canvas
> as well. This corresponds to widely available wide-gamut screens on
> laptops, TVs and phones. TV and streaming services are also using the
> even wider gamut Rec. BT.2020 colorspace. CSS Color 4 adds a way to
> explicitly use these color spaces for specifying, modifying and mixing
> color.
>
> Thus, it would be helpful if your article had referred early on to
> sRGB and noted that all examples were given in sRGB rather than in
> some random or unspecified RGB space.
>
> 2) Color appearance models vs. colorimetric models
>
> You wrote:
>
>> Additionally, the examples in this article show that black text on
>> coloured backgrounds becomes considerably more legible when the page
>> background becomes black. At the time of writing in March 2022, both
>> ‘APCA Lightness Contrast’ and ‘WCAG Contrast Ratio’ were two-colour
>> models that do not account for the effect of the page background,
>> which limits the accuracy of the models.
>
> That is exactly the difference between a colorimetric model
> (foreground and background colors only) and a color appearance model
> (which also considers the proximal and surround fields, and the
> overall room illuminance). Color appearance models give better
> predictions, but require more measurements to characterize the viewing
> environment, and give worse results if those measurements are
> estimated or incorrect.
>
> While it is certainly valuable to use such models to standardize user
> testing in a controlled environment, it remains unclear how to
> integrate a color appearance model into general Web development, which
> would need to take into account the complete web page, plus other
> windows on the same screen, and also the room illuminance and the
> current adapted white point. Such complexity is beyond the scope of
> current models such as CIECAM16.
>
> 3) "Color blindness" vs. atypical color vision
>
> You wrote:
>
>> The eye perceives light using 3 different cones, one of which is most
>> sensitive to red light, another to green light, and another to blue
>> light.
>
> They really don't, and such simplifications are more harmful than
> helpful. In particular, on hearing that the eye has "RGB sensors",
> many people readily assign the percentages of R, G and B in a color to
> the amount perceived by each cone, which is totally not the case.
>
> The peak sensitivities of the S, M and L cones are at 445nm (violet),
> 535nm (green) and 570nm (green-yellow). All three cone types are
> mutations of a single original cone, the S type being characterized by
> sensitivity dropping off after 450nm, in contrast to the 500-600nm
> peaks of the other two types.
>
> The luminance, red vs. green, and blue-violet vs. yellow signals in
> the retina are formed by addition and subtraction of these signals in
> the retinal ganglion cells.
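(As an aside: a toy sketch of that opponent-signal arithmetic might
look like the following; the inputs and unit weights are illustrative
placeholders, not physiological constants.)

```ts
// Toy illustration of the opponent-signal idea: the post-receptoral
// channels are sums and differences of cone responses. Placeholder
// values only, not a calibrated model.
interface ConeResponse {
  l: number; // long-wavelength cone (peak ~570nm, green-yellow)
  m: number; // medium-wavelength cone (peak ~535nm, green)
  s: number; // short-wavelength cone (peak ~445nm, violet)
}

function opponentChannels({ l, m, s }: ConeResponse) {
  return {
    luminance: l + m,        // achromatic channel; S contributes very little
    redGreen: l - m,         // red vs. green opponent channel
    blueYellow: s - (l + m), // blue-violet vs. yellow opponent channel
  };
}

// Example: a stimulus exciting L and M about equally produces a strong
// luminance signal but little red-green signal.
console.log(opponentChannels({ l: 0.8, m: 0.75, s: 0.1 }));
```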
>> Protanopia occurs when the red-cones are impaired, Deuteranopia
>> occurs when the green-cones are impaired, and Tritanopia occurs when
>> the blue-cones are impaired.
>
> Yes (although L, M and S cones), but more commonly we see partial loss
> of discrimination in one color pair: protanomaly or deuteranomaly
> giving reduced red-green discrimination rather than a complete lack of
> red-green discrimination; and tritanomaly giving reduced
> yellow-blue/violet discrimination.
>
>> Both APCA and WCAG predict the lightness contrast, and ignore the hue
>> contrast completely.
>
> Yes, they do. It isn't clear to what extent ignoring chromatic
> contrast is a problem though, because chromatic contrast is unlikely
> to ever fully compensate for inadequate lightness contrast. So the
> simplified lightness-only contrast model has a small, variable
> under-estimation of contrast compared to a full model.
>
>> WCAG uses a Luminance formula that is approximately
>> 0.2R^2.4 + 0.7G^2.4 + 0.07B^2.4.
>>
>> APCA uses a Luminance formula that is approximately
>> 0.2R^2.2 + 0.7G^2.2 + 0.07B^2.2.
>
> I often see this quoted as "the WCAG formula", which is odd and
> incorrect. WCAG didn't invent it, although it is where many people
> first saw these particular coefficients.
>
> The CIE defines luminance, from the CIE standard observer and the CIE
> XYZ linear-light space; luminance is (deliberately chosen to be) the Y
> component.
>
> The conversion from a given RGB space, such as sRGB, is defined
> firstly by the electro-optical transfer function ('undoing gamma
> encoding') to convert to linear light. For sRGB this is defined by the
> sRGB standard, which (like many other broadcast-derived color space
> standards) uses a linear portion at very dark levels, to limit noise
> amplification, followed by a power function for medium-dark to
> lightest levels.
>
> WCAG 2.0 quoted values from an obsolete version of the sRGB standard,
> with a known error, and then had to cite a specific working draft of
> sRGB rather than the final standard. However, in practice that error
> was small, not observable as an error in 8-bit-per-component
> encodings, and has been corrected in the latest WCAG 2.x draft (though
> they have yet to update the references).
>
> Some people approximate this transfer function with a simple power
> law. The best fit (lowest error) is an overall gamma of 2.223. Other
> people (and this applies to the APCA algorithm) simply copy the 2.4
> exponent value from the full sRGB equation, without considering how
> the piecewise formula affects it, producing much larger errors. Both
> approximations under-estimate the relative luminance for dark colors.
> The 2.4 approximation greatly under-estimates it for all colors, and
> is worst in the mid-range. It is not clear whether this is a
> deliberate change or an inadvertent error.
>
> Secondly, a conversion from linear-light RGB to CIE XYZ uses a 3x3
> matrix derived from the red, green, blue and white chromaticities. So
> for sRGB that is again defined by the chromaticities in the sRGB
> standard, which are in fact the same as those in the ITU Rec. BT.709
> standard for HDTV. If we only want Y, this reduces down to the three
> weights cited. Since the standard does not define the matrix but
> instead defines the chromaticities, there are small variations in
> practice due to round-off error or variations in the precision of the
> white chromaticity.
>
> (Sorry for the big digression on inaccuracies in the APCA luminance
> calculation, but it is germane to my next point.)
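(For reference, a sketch of that complete path in code, using the
numbers WCAG 2.x publishes: the piecewise sRGB transfer function with
the corrected 0.04045 threshold, the Y row of the chromaticity-derived
matrix reduced to three rounded weights, and the published
(lighter + 0.05)/(darker + 0.05) ratio. My own sketch, not the WCAG
reference code; implementations that derive the matrix directly from
the chromaticities will differ slightly, as noted above.)

```ts
// Linearize one 8-bit sRGB channel with the piecewise transfer function.
function channelToLinear(c8bit: number): number {
  const c = c8bit / 255;
  // WCAG 2.0 used 0.03928 here, carried over from an obsolete sRGB
  // draft; 0.04045 is the corrected value. The difference is below
  // 8-bit quantization.
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance: the Y row of the sRGB-to-XYZ matrix, reduced to
// three weights (rounded as published by WCAG).
function relativeLuminance(r: number, g: number, b: number): number {
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), from 1 to 21.
function wcagContrastRatio(l1: number, l2: number): number {
  const [lighter, darker] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Example: black text on the rgb(118, 118, 118) background mentioned
// in the article summary, giving roughly 4.6:1.
const bg = relativeLuminance(118, 118, 118);
console.log(wcagContrastRatio(bg, relativeLuminance(0, 0, 0)).toFixed(2));
```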
>> If these models were to be adapted to specifically consider
>> red-impaired cones, the multiplier in front of the R term would
>> likely be much closer to 0.
>
> The spectral responses of human cones and of monitor LCDs or OLEDs
> are very different. Approximating the HVS (human visual system) by
> simply manipulating sRGB channel values will not produce an accurate
> model. Instead, rather than the CIE standard 2-degree observer, an
> alternate observer model should be used to convert spectral values to
> a modified XYZ space.
>
> --
> Chris Lilley
> @svgeesus
> Technical Director @ W3C
> W3C Strategy Team, Core Web Design
> W3C Architecture & Technology Team, Core Web & Media

--
Chris Lilley
@svgeesus
Technical Director @ W3C
W3C Strategy Team, Core Web Design
W3C Architecture & Technology Team, Core Web & Media
Attachments
- image/svg+xml attachment: APCAw3_0.1.17_APCA0.0.98G.svg
Received on Thursday, 16 June 2022 13:15:50 UTC