[css-houdini-drafts] [css-paint-api] Ideas for increasing font contrast with Paint API? (#1042)

bkimmelSQSP has just created a new issue for https://github.com/w3c/css-houdini-drafts:

== [css-paint-api] Ideas for increasing font contrast with Paint API? ==
https://drafts.css-houdini.org/css-paint-api-1/

For a project I'm working on, I'm exploring how the CSS Paint API could be used to help solve contrast/readability accessibility problems by implementing a pixel-level "Dodge / Burn" effect ( https://en.wikipedia.org/wiki/Dodging_and_burning ), so that no matter what background a piece of text overlays, the background dodges/burns itself (nominally through the paint server provided by a PaintWorklet?) to provide sufficient contrast between the letterforms and the nearby pixels of the background image. I'm somewhat familiar with the concept of paint servers in SVG, and I've heard Houdini described as something like that (-ish), just addressable on a pixel/subpixel basis. In the end, I'd like to end up with some kind of rule like:

`background-image: paint(minimum-contrast, 4.5, 'mypicture.jpg')`

Some cursory research/tinkering has dashed my hopes that this can be done with the CSS Paint API alone (at least in its current state), because the `Pixel Manipulation` parts of the rendering context aren't exposed on the context passed to the registered _paint_ function, so the background pixels can't be read back (and, among other things, I would probably also need to edge-detect the letterforms). I haven't given up, though: I'm still imagining using the Paint API to go the "last mile". I was just wondering if anyone had any good ideas or advice to supplement my approach below:
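For context, here's roughly what I imagine the worklet side of such a rule looking like. This is only a sketch: the `minimum-contrast` name and `MinimumContrastPainter` class are mine, and I've declared just the numeric argument, since passing an image into `paint()` isn't something the current API supports. It also shows exactly where the plan breaks down today:

```javascript
// Hypothetical worklet module for the proposed rule (illustrative names).
class MinimumContrastPainter {
  // Declares that paint(minimum-contrast, 4.5) accepts one numeric argument.
  static get inputArguments() {
    return ['<number>'];
  }

  paint(ctx, size, props, args) {
    // ctx is a PaintRenderingContext2D: drawing calls such as fillRect()
    // work, but getImageData()/putImageData() are not exposed, which is
    // exactly why the background pixels can't be read back in here.
    ctx.fillRect(0, 0, size.width, size.height);
  }
}

// registerPaint() only exists inside a PaintWorklet global scope,
// so guard it when the module is loaded elsewhere.
if (typeof registerPaint === 'function') {
  registerPaint('minimum-contrast', MinimumContrastPainter);
}
```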

1. Clone the element and all its computed styles and stuff it in an SVG `<foreignObject>`
2. Read the SVG into a canvas (can that even be done?), turn off the alpha on the text, and store the result as `BackgroundPixelMap`
3. Turn off the background alpha and store the result as `TextPixelMap`
4. Turn off the background alpha, run the text through an edge detection filter and store the image as `TextEdgePixelMap`
5. Take the intersection of `Background` and `TextEdge` and, for each pixel in `Text`, either "Dodge" (turn up lightness) or "Burn" (turn it down) as necessary to meet the minimum contrast, storing the result as `AdjustedBackground`
6. Composite `AdjustedBackground` on top of `Background` and ... hopefully that would work?
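For the per-pixel adjustment in step 5, I imagine something like the sketch below driving the dodge/burn once the pixel maps are in hand. The function names are mine; the luminance and contrast formulas are the WCAG 2.x ones (which is where the 4.5 target comes from), and it just nudges all three channels uniformly rather than doing anything perceptually sophisticated:

```javascript
// WCAG 2.x relative luminance from 8-bit sRGB channels (0..255 -> 0..1).
function relativeLuminance(r, g, b) {
  const lin = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// WCAG contrast ratio between two relative luminances (ranges 1..21).
function contrastRatio(l1, l2) {
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// Nudge a background pixel lighter ("dodge") or darker ("burn") until it
// meets minRatio against the text's luminance. The guard caps iterations
// in case the pixel saturates before reaching the target.
function dodgeBurn(bg, textLuminance, minRatio = 4.5) {
  let [r, g, b] = bg;
  // Dodge if the text is dark, burn if it is light.
  const step = textLuminance < 0.5 ? 1 : -1;
  let guard = 0;
  while (
    contrastRatio(relativeLuminance(r, g, b), textLuminance) < minRatio &&
    guard++ < 255
  ) {
    r = Math.min(255, Math.max(0, r + step));
    g = Math.min(255, Math.max(0, g + step));
    b = Math.min(255, Math.max(0, b + step));
  }
  return [r, g, b];
}
```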

Is this viable at all? Or are there better ways to use Houdini/Paint for something like this? I feel like, if it worked, it could be a really neat little "one-shot" trick for solving a lot of accessibility problems, because it offers a "scalpel" instead of the "hammer" approaches (e.g. "put a big hideous background behind the text", "put an ugly text-shadow behind all the text, even the letters that don't have contrast problems") that are hard to sell to designers.

Thank you so much for all your work on this API: I've really enjoyed reading the specs that have been written and I'm excited for the future of this API on the Web.



Please view or discuss this issue at https://github.com/w3c/css-houdini-drafts/issues/1042 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Monday, 17 May 2021 20:41:06 UTC