RE: Gaussian blur as testing tool

Hi Andy,

Thanks for that. I might have been reading too much into it; I think Sam’s proposal is for testing graphics and icons, not text.

That is still useful and worth at least an initial discussion; we can carry on keeping text separate from other content.

Cheers,

-Alastair


From: Andrew Somers <andy@generaltitles.com>
Sent: 27 November 2020 12:23
To: Alastair Campbell <acampbell@nomensa.com>
Cc: public-low-vision-a11y-tf <public-low-vision-a11y-tf@w3.org>
Subject: Re: Gaussian blur as testing tool

Hi Alastair,

I’ve discussed this lightly once or twice… the idea (at least my iteration of it) originated with Dr. Arditi’s 2017 article:
Rethinking ADA signage standards for low-vision accessibility: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5433805/



This is one of the more eye-opening articles (pun intended) I’ve read. Dr Arditi points out the lack of real empirical support for 3:1, and promotes the idea of “degrading normal vision by a set amount to allow for useful judgement."

This concept is good for spot reading and certain kinds of object recognition (buttons, controls).

But it fails for “readability” of blocks of text, because the complex way whole words end up at the Visual Word Form Area of the brain for lexical processing is not something that can be accurately judged “by eye”, whether degraded or not.

We can easily degrade and then judge for “legibility”, which includes things like spot reading, involving lexical processing one letter at a time. But for readability, it’s all about supra-threshold acuity and contrast metrics, and that’s more complicated.

The thumbnail guideline for readability is (a rough sketch of these checks follows the list):


  *   Text with an x-height (the height of the lower-case x) that is at least twice the height of the capital E that defines an individual's acuity on the acuity test chart.
  *   Rendered contrast that is a minimum of 10:1 relative to the individual's contrast sensitivity, with 20:1 preferred. (NOTE these are clinical ratios, NOT WCAG ratios!!!)

     *   The contrast calculation must include the spatial frequency of the stimulus (i.e., the font size and weight).

  *   The lightest color is above the eye's light adaptation level plus a reserve for high spatial frequencies (small, thin fonts); for the sake of argument, use #a0a0a0.
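
To make the shape of those checks concrete, here is a very rough Python sketch. The function names and inputs are my own assumptions rather than anything from a spec, the colour comparison is a crude per-channel stand-in for a proper lightness check, and the clinical contrast rule is deliberately left as a stub because it depends on spatial frequency:

# Rough, illustrative sketch only; all names and inputs are assumptions.

def passes_size_rule(x_height_px, acuity_cap_e_height_px):
    """x-height must be at least twice the height of the capital E that
    defines the individual's acuity (both already converted to pixels)."""
    return x_height_px >= 2 * acuity_cap_e_height_px

def passes_lightest_colour_rule(lightest_rgb, floor_rgb=(0xA0, 0xA0, 0xA0)):
    """Lightest colour should sit above the adaptation level plus a reserve
    for high spatial frequencies; #a0a0a0 is the 'for the sake of argument'
    floor from the list above. Per-channel comparison is a crude stand-in
    for a proper lightness comparison."""
    return all(c >= f for c, f in zip(lightest_rgb, floor_rgb))

def passes_clinical_contrast_rule(stimulus, individual_contrast_sensitivity):
    """Placeholder for the 10:1 (20:1 preferred) clinical contrast check,
    which must factor in font size and weight; not sketched here."""
    raise NotImplementedError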

AND… all of this is interdependent, and acuity impairments are only one of MANY impairment types.

Gaussian degrading, even an optically simulated blur, is mostly going to model acuity issues, and that is only one small aspect of the vision and visual-impairment model.

SOLVING THE ACUITY NEED

The best solution for acuity is the size of the stimulus (assuming best refraction is available). That is “best” handled by a reasonable minimum size (body text > 16px, for instance) and the ability of the user to scale that UP by five times without breaking content or causing horizontal scrolling. Ideally, larger text like headlines scales up only as needed to remain a little larger than the smaller text, but this is more a browser/technology issue (and on my list).

Or, put another way, ideally the user can scale the smallest text to be read up to 80px to 96px.
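
The arithmetic behind that, as a tiny Python sanity check (16px and 80-96px are just the working figures above, not normative values):

# Sketch of the scaling arithmetic only; assumed working figures.
MIN_BODY_PX = 16
TARGET_RANGE_PX = (80, 96)

for target in TARGET_RANGE_PX:
    print(f"{MIN_BODY_PX}px -> {target}px requires {target / MIN_BODY_PX:.1f}x zoom")
# 16px -> 80px requires 5.0x zoom
# 16px -> 96px requires 6.0x zoom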


Thank you!

Andy


Andrew Somers
Senior Color Science Researcher
PerceptEx Perception Research Project<https://www.myndex.com/perceptex/>
Redacted for public list





On Nov 25, 2020, at 10:51 AM, Alastair Campbell <acampbell@nomensa.com> wrote:

Hi folks,

I don’t know if anyone else was attending Techshare this year, but I saw Dr Sam Waller’s presentation, which was about using Gaussian blur to test text & icon visibility. (He’s at the Cambridge (UK) University Engineering Design Centre: http://www.cedc.tools/)

I’m not going to do it justice here, but the thing that stood out to me was that:
If you know the tester's visual acuity (established by which line of text they could read on a chart), you can then apply a Gaussian blur to an interface to reduce that acuity by a standardised amount. Then review the text and other items that need differentiation for use with low vision.
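
For anyone who wants to experiment, here is a minimal sketch of that idea in Python with Pillow; the mapping from acuity to blur radius is a placeholder assumption of mine, not Dr Waller's calibrated method:

# Minimal sketch only, NOT the actual tool: blur a screenshot so that a
# tester with a known acuity sees it roughly as if their acuity were worse
# by a chosen amount. The acuity-to-blur mapping is a crude placeholder.
from PIL import Image, ImageFilter  # pip install Pillow

def simulate_reduced_acuity(screenshot_path, pixels_per_degree,
                            tester_logmar, degrade_by_logmar=0.3):
    # Extra minimum angle of resolution (arc-minutes) being simulated;
    # logMAR 0.0 corresponds to a 1 arc-minute MAR.
    extra_mar_arcmin = (10 ** (tester_logmar + degrade_by_logmar)
                        - 10 ** tester_logmar)
    # Size the Gaussian to that extra detail threshold, converted to pixels.
    radius_px = (extra_mar_arcmin / 60.0) * pixels_per_degree
    img = Image.open(screenshot_path)
    return img.filter(ImageFilter.GaussianBlur(radius=radius_px))

# Example (assumed figures): a 96 dpi screen at about 60 cm is roughly
# 40 px/degree; a tester on the 0.0 logMAR line, degraded by 0.3 logMAR.
# blurred = simulate_reduced_acuity("interface.png", 40, 0.0, 0.3)
# blurred.save("interface_blurred.png")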

I got in touch to see if there was interest in using that as a tool in Silver and we’re looking to set up a meeting soon.

Please let me know if you’d like to be involved, it will probably be as part of a Silver sub-group.

Kind regards,

-Alastair

--

www.nomensa.com
tel: +44 (0)117 929 7333 / 07970 879 653
follow us: @we_are_nomensa or me: @alastc
Nomensa Ltd. King William House, 13 Queen Square, Bristol BS1 4NT

Company number: 4214477 | UK VAT registration: GB 771727411

Received on Friday, 27 November 2020 12:33:01 UTC