Re: Research for Visual Indicators

In the Google Doc I pointed out the effectiveness of some of the different cues. Pop-out was the most effective. Circular cues are more effective when they sit close to the actionable target than when they have a wider diameter. Movement is also highly effective.

There’s always pushback against being too prescriptive, yet the objection that keeps coming up is a request to provide definitive styles that have some kind of exact, measurable effect.

We can simply provide a list, in addition to Rachael’s suggestion, which gives designers room for creativity. It feels like this proposed SC is being scrutinized to an unreasonable degree.

The fact that the research we’ve pulled wasn’t conducted specifically on individuals with X, Y, or Z disability does not mean that it has no accessibility applications. Nor does the age of the research make it irrelevant or obsolete. We know that cognitive disabilities can significantly impair visual processing, visual search, and attention. We know that these techniques increase saliency (the capturing of attention). Thus, they enhance the probability that a user with cognitive disabilities will engage with them. The other component is that these techniques make the actionable mechanism more comprehensible, or obvious, which is a user need we have identified in COGA.

I’ve added some failure and pass examples to the Google Doc.

Looking forward to a lively discussion about this on Tuesday.

From: Rachael Bradley Montgomery <rachael@accessiblecommunity.org>
Date: Friday, April 24, 2020 at 9:28 AM
To: "public-cognitive-a11y-tf@w3.org" <public-cognitive-a11y-tf@w3.org>
Cc: "WCAG list (w3c-wai-gl@w3.org)" <w3c-wai-gl@w3.org>, public-cognitive-a11y-tf <public-cognitive-a11y-tf@w3.org>
Subject: Re: Research for Visual Indicators
Resent-From: <public-cognitive-a11y-tf@w3.org>
Resent-Date: Friday, April 24, 2020 at 9:27 AM

Hello,

(Chair/Facilitator hats off) I've spent a few hours researching this topic further, through straightforward keyword searches but also by tracing similar HCI recommendations back to their original research. One challenge I am running into is that the research on which visual indicators draw attention and help individuals distinguish between different types of objects or determine importance is all grounded in research conducted in the mid-20th century. The newer studies and recommendations all trace back to high-quality research, but the studies often haven't been repeated within a computer interface. That said, I don't really know that they need to be. I believe that the anecdotal evidence of many HCI experts supports the earlier research, creating a body of evidence that is reasonable support for an SC. I've included one example at the bottom of this email so you can see what I mean.

That said, what I have not found is 1) research that shows which visual indicators provide the best support for people with cognitive disabilities, or 2) research that compares the effectiveness of single and combined visual indicators against one another.
I did find research that supports the value of distinct shape, color, location/position, spacing, size, pop-out, and realism in facilitating search/scanning and reducing cognitive processing (again, mostly outside of computer interfaces).

Here are my personal thoughts on this SC as it stands now:

  *   I am hesitant to scope this SC all the way down to only financial and data processes, because the need is wider and once we create an SC with that constraint set, it will be difficult to expand it. (See David MacDonald's email.)
  *   I am hesitant to dictate exactly which visual attributes are needed because while we have research that shows certain attributes support visual scanning and cognition, I haven't been able to find any that states that one visual attribute is better than another or one combination is more effective than another. I would like to put this need into our future research page.
  *   Because of the ambiguity of the research, I think we would need to generalize the SC to something broader such as "Interactive user interface components are visually distinct from non-interactive content" and then discuss the various attributes that can be used in the understanding document. I brought this up in the last meeting and the objection to this approach, which has merit, is that it is so broad it isn't useful. In addition, I think it loses some of the intent of ushering users to the important controls on the page. We could look into something like "Controls needed to progress or complete a process are visually distinct from non-interactive content", but then we circle back to the scoping issue of a process. Also, both of these would require a test that asked the tester to look at the interactive content of the page and list at least one visual attribute (color, location, spacing, size, etc.) that sets the interactive controls apart from static content (a rough sketch of how such a check might look follows below). I think it is a reasonable test but others may disagree.
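For illustration only (not proposed SC or test text): a tester could be partly supported by a small script that samples a few computed-style properties on each control and compares them with the surrounding block of static content. The property list, the selectors, and the function names below are my own assumptions, and the output would still need human judgement:

  // Rough sketch only: flags links/buttons that share every sampled style
  // property with their surrounding block. Run in a browser console; the
  // sampled properties and the choice of "reference" element are assumptions.
  const STYLE_PROPS = [
    'color', 'background-color', 'border-style', 'border-color',
    'font-weight', 'text-decoration-line',
  ] as const;

  function differingProps(control: Element, reference: Element): string[] {
    const a = getComputedStyle(control);
    const b = getComputedStyle(reference);
    return STYLE_PROPS.filter(p => a.getPropertyValue(p) !== b.getPropertyValue(p));
  }

  function flagIndistinctControls(): void {
    const controls = document.querySelectorAll('a[href], button, [role="button"]');
    controls.forEach(control => {
      // Compare against the nearest enclosing block of (mostly static) content.
      const reference = control.closest('p, li, td, div')?.parentElement;
      if (!reference) return;
      const diffs = differingProps(control, reference);
      if (diffs.length === 0) {
        console.warn('No distinguishing visual attribute found for', control);
      } else {
        console.info('Distinguished by:', diffs.join(', '), control);
      }
    });
  }

  flagIndistinctControls();

A script like this only answers "does at least one sampled attribute differ?"; whether that difference is actually perceivable or sufficient is exactly what the tester (and the research) would still have to decide.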
Regards,


Rachael


Example of Research Tracing

The Research-Based Web Design and Usability Guidelines by Koyani, Bailey, and Nall references the following source in this area: "Guidelines for Designing and Evaluating the Display of Information on the Web." Author: Williams, Thomas R. Source: Technical Communication, Volume 47, Number 3, August 2000, pp. 383-396(14). Publisher: Society for Technical Communication.

This source provides a list of design elements, stating "In general, any element in a visual display that contrasts in its visual qualities with other display elements will attract the eye (Kosslyn 1994). The following are some specific perceptual attributes that have been found to draw disproportionate attention (and consequently imply greater importance):

  *   Color: Consider the use of color to call attention to those elements you believe to be most important (Goldsmith 1987). Color is thought to draw attention largely from the contrast it can provide; nevertheless, displaying an element in color will suggest to the viewer that it is more important than elements displayed in black and white.
  *   Position: Consider placing the more important elements in the upper left-hand corner of the screen (Brandt 1945). Western readers typically fixate (look at) visual elements placed in the upper left-hand quadrant early in the processing of a visual display. Sequence, in this context, implies importance.
  *   Size: Make important elements larger than less important display elements (Edwards and Goolkasian 1974). Larger elements are more easily discernible in peripheral vision, which guides subsequent foveal (central vision) fixations. People also typically fixate longer on larger elements in a display (see Figure 5).
  *   Isolation: Surround important elements with lots of white space. Elements surrounded by generous white space are thought to be accorded greater attention. As a result, isolating an element in a display implies that it is more important (Goldsmith 1987).
  *   Complexity: The eye naturally seeks out the most “informative” areas of a visual display; consequently, it spends little time processing predictable contours. In fact, informative areas are typically found and fixated in less than two seconds (Mackworth and Morandi 1967).
  *   Tonal contrast: Important information resides at boundaries demarked by contrasts in tone (darkness or lightness). We use those differences to identify forms and to infer their relative distances; we are consequently conditioned to attend to them psychologically and are, it could be argued, “hard wired” to attend to them physiologically. Neurons in our visual system are fundamentally “difference detectors,” so our eyes are naturally attracted to areas where visual stimuli change. The perceptual principle of “salience” asserts that we are consequently compelled to attend to those areas (Kosslyn 1994)."
The expanded references are below:

  *   Brandt, H. 1945. The psychology of seeing. New York, NY: Philosophical Library.
  *   Goldsmith, E. 1987. “The analysis of illustration in theory and practice.” In The psychology of illustration, H. A. Houghton and D. M. Willows, eds. New York, NY: Springer-Verlag, 2: 53–85.
  *   Kosslyn, Stephen M. 1994. Elements of graph design. New York, NY: W. H. Freeman and Company.
  *   Mackworth, N. H., and A. J. Morandi. 1967. “The gaze selects informative details within pictures.” Perception and psychophysics 2:547–552.

On Thu, Apr 23, 2020 at 6:58 PM Alastair Campbell <acampbell@nomensa.com> wrote:
Hi David,

It is interesting, but these are fairly low-level perceptual experiments. For example, the stimulus from the first (and second) paper is a set of crosses on a (2002-era) screen:
https://www.sciencedirect.com/science/article/pii/S0042698902000160#FIG1


When “All tasks were performed while subjects fixated a small spot in the center of the screen”, these are not equivalent tasks to looking around a screen.

This type of research is working at several layers below interface design; it doesn’t consider:

  *   Comparison of different features, where the task is to find targets in a mixture of surrounding non-targets.
  *   Meaning, where the user is trying to work out what things are actionable, rather than spotting pattern variations.

Design on the web has a bucket-load more variables, so this type of research doesn’t relate easily.

I think what is needed is more at the HCI level of research (interfaces), rather than vision research, but I’m not aware of anything directly applicable.

Kind regards,

-Alastair


From: David Fazio

I’ve compiled some research into this Google Doc: https://docs.google.com/document/d/12Z3qxSk88OPvKAqvCCUEd-Ld2sKiMm4jK2rKq20PKFI/edit?usp=sharing


I’ve added links to the research, pulled out relevant excerpts, and commented on how I feel they apply and what we can extrapolate from them.

From: Alastair Campbell <acampbell@nomensa.com>
Date: Wednesday, April 22, 2020 at 5:56 AM
To: "WCAG list (w3c-wai-gl@w3.org<mailto:w3c-wai-gl@w3.org>)" <w3c-wai-gl@w3.org<mailto:w3c-wai-gl@w3.org>>
Cc: David Fazio <dfazio@helixopp.com<mailto:dfazio@helixopp.com>>
Subject: Research for Visual Indicators

Hi everyone,

On the call yesterday, one of the things identified as needed for Visual Indicators to progress was a solid research basis for the requirement. This is my overview of where I think we are with that need.

To be clear about what we need:

  *   If we have a general “interaction controls should have salience” requirement, we need to define what that means, with lots of examples for different types of control. (A big project in itself.)
  *   If it is restricted to buttons/links, that helps, but we still need to define salience in terms that work across different contexts.
  *   If it takes the approach of listing design attributes (e.g. font, border, background, spacing, etc.), then we need to know what a minimum difference is, AND whether these attributes are equivalent.

Ideally it would say something like “A border of X contrast improved visual acquisition by Y%”, or something from which we could draw that sort of conclusion.
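For the contrast part at least, we could put a number on a “difference” using the relative-luminance and contrast-ratio maths WCAG already uses; the open question is what threshold the research would actually justify. A throwaway sketch, purely illustrative (the example values are just an illustration, not a proposed threshold):

  // Sketch only: reuses the WCAG 2.x relative luminance / contrast ratio
  // formulas to quantify a border-vs-background difference.
  function relativeLuminance([r, g, b]: [number, number, number]): number {
    const lin = (c: number): number => {
      const s = c / 255;
      return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
    };
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
  }

  function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
    const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
    return (hi + 0.05) / (lo + 0.05);
  }

  // e.g. a mid-grey control border (#767676) against a white page background:
  // contrastRatio([118, 118, 118], [255, 255, 255]) ≈ 4.5
  // ...but nothing tells us whether 4.5:1, 3:1, or some other value is "enough".

That gives a measurable attribute, but as above we have no research saying what value improves acquisition by how much.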

The ultimate test is that we test several example sites and the controls which fail are the ones which cause issues for people in practice (and, almost as important, that it does not catch controls which are not an issue).

David Fazio kindly provided an example [1], and there are several in the document from Lisa & the COGA TF [2].

This is the list of things I looked through, and my quick conclusion from each whilst hunting for suitable findings:


  *   https://www.nngroup.com/articles/flat-design/

Gives background to flat design & the (usability) issues, links to the next article.
  *   https://www.nngroup.com/articles/clickable-elements/

This is looking at the right level of attribute (e.g. colour, position). Highlights that location can be as useful as other indicators, and that whether other items are available also has an impact (making this task harder). Arrow icons were not thought to be as useful. Not referencing research (that I could see); experience-based.
  *   https://www.sciencedirect.com/science/article/pii/S0042698902000160

Tested target acquisition performance by making 1 of 12 targets on screen ‘salient’, with either contrast change, movement or an extra ring around the target. It showed (I think) that more change = quicker acquisition.
This article linked to quite a few related research papers giving me more to look at, but mostly from the ‘90s.
  *   https://www.sciencedirect.com/science/article/pii/S0960982210001594

Some things automatically stand out, some we can be attentive to. Deals with real life stimuli rather than interface design.
  *   https://www.sciencedirect.com/science/article/pii/S0141938203000350

Spacing didn’t affect search times for finding an icon within a set, but small icons were harder to identify and took longer to find.

I really struggled to spot anything that would support a visual indicators SC, including from scanning the results of research listings.

However, I’m not a scholar and there are probably better terms to use when searching. If anyone knows of better sources for this information, now would be the time to send them in…

Kind regards,

-Alastair
[1] https://www.scopus.com/record/display.uri?eid=2-s2.0-0027719840&origin=inward&txGid=83d4899571123207f27b0d9343ae40e2


[2] https://docs.google.com/document/d/1U_NVxB-eIljhYSNcW0A7_2aHt9GwNfLnRF5n_stHrbc/edit


--

www.nomensa.com
tel: +44 (0)117 929 7333 / 07970 879 653
follow us: @we_are_nomensa or me: @alastc
Nomensa Ltd. King William House, 13 Queen Square, Bristol BS1 4NT

Company number: 4214477 | UK VAT registration: GB 771727411



--
Rachael Montgomery, PhD
Director, Accessible Community
rachael@accessiblecommunity.org

"I will paint this day with laughter;
I will frame this night in song."
 - Og Mandino

Received on Saturday, 25 April 2020 23:04:46 UTC