Re: Automatically testing Web content for flicker

I was cleaning up some old mail and noticed that this thread may not have
completely ended [1].

Is the question of flicker being a content requirement now in front of the
WCAG WG?  Flicker is addressed in UAAG Guideline 3 [2], so we now have redundant
requirements in WCAG 1.0 and UAAG 1.0.  I couldn't find a related issue in
the WCAG 2.0 issue list [3].

Wendy mentioned:
>In the meantime, I heard back from Professor Harding and there is a system
>that checks for flicker.  It's based on his research and produced by
>Cambridge Research Systems. http://www.hardingfpa.co.uk/
>
>Not sure how much it costs, how easy it is to use, or how well it works on
>web content...but I'll contact CRS to find out.

I couldn't find an update on this.  Perhaps it is in the mail archives, but
since they are broken up by month, it is difficult to search across
multiple months.  Is there any way to archive ER-IG every quarter (3
months) instead of once a month?

and Terje wrote:
>Whether it makes sense to attempt to _test_ for this is a different
>matter.

How to determine whether the content meets some testable criteria _is_ the
issue here.  It is easy to say, from a single author's point of view, that I
should _avoid_ things that _might_ cause flicker.  But WCAG 2.0 owes it to
the author to clearly set the criteria that the author has to meet.  Now
that these Web Content Accessibility Guidelines are part of policy,
regulations, and laws, the requirement needs to be testable - did I meet the
standard or not?

As the evaluation and repair interest group, we need to determine how to
evaluate and repair what's in WCAG 1.0 and influence the 2.0 success
criteria.  Look at it from the viewpoint of someone representing hundreds
or thousands of web authors and pages, for example at a government agency
or large company: how do I evaluate whether any of my thousands of pages
have bad flicker?  Did all my hundreds of authors avoid it or not?  Asking
them all to say yea or nay is not sufficient, because content changes
constantly and the trail of who actually changed or authored a piece of
content at a particular point in time is rarely known.  So we come back to
the question of evaluation - how do we evaluate it? - which raises the
prior question: do we really need to evaluate the bad-flicker requirement
at all?
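To make the evaluation question concrete, here is one direction a bulk
screening tool could take.  This is a minimal, hypothetical sketch in
Python, assuming a tool can already extract per-frame average luminance
values and timestamps from animated content (e.g., an animated GIF frame
sequence); the function names and the 0.1 luminance threshold are
illustrative assumptions of mine, not anything taken from Harding's system
or the WCAG drafts.

```python
# Hypothetical first-pass flicker screen over per-frame luminance samples.
# Assumes luminance is normalized to 0..1 and timestamps are in seconds.

def flash_frequency(luminances, timestamps, threshold=0.1):
    """Estimate the flash rate in Hz from large frame-to-frame
    luminance swings.

    luminances: per-frame average luminance, normalized to 0..1
    timestamps: frame display times in seconds, one per frame
    threshold:  minimum luminance change counted as a flash transition
                (0.1 is an illustrative guess, not a normative value)
    """
    transitions = 0
    for prev, cur in zip(luminances, luminances[1:]):
        if abs(cur - prev) >= threshold:
            transitions += 1
    duration = timestamps[-1] - timestamps[0]
    if duration <= 0:
        return 0.0
    # One full flash cycle is two transitions (light->dark->light).
    return (transitions / 2.0) / duration

def flickers_in_risk_band(luminances, timestamps, low=3.0, high=49.0):
    """Flag content whose estimated flash rate falls in the 3-49 Hz band."""
    freq = flash_frequency(luminances, timestamps)
    return low <= freq <= high

# Example: a 1-second animation alternating dark/light at 20 frames/sec,
# which flashes at 10 Hz - squarely inside the risk band.
lum = [0.0, 1.0] * 10
ts = [i / 20.0 for i in range(20)]
print(flickers_in_risk_band(lum, ts))  # prints True
```

Real photosensitivity analysis, such as the HardingFPA system, also
accounts for how much of the screen area flashes, saturated red
transitions, and spatial patterns; a luminance-only screen like this could
at best be a cheap first-pass filter for triaging thousands of pages.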

To close this thread, I propose that
1.  an issue be opened in WCAG 2.0 to move the success criteria for 2.3
from the minimum and level 2 to level 3 only, retaining the existing
criterion that content does not "visibly or purposely flicker between 3 and
49 Hz", etc.
2. Web examples of "bad flicker" be collected and evaluated so that common
testable characteristics can be identified.  In other words, further the
science here for the benefit of browsers, authors, and especially
evaluation tools testing for compliance.  The work referenced by Wendy -
Professor G. Harding's research and the HardingFPA system from Cambridge
Research Systems - is mostly, if not entirely, about traditional TV
broadcast images and needs to be applied to Web browsers and content.
3. Once 1 & 2 have closed, re-open a work item for the ER interest/working
group.


[1] ER-IG thread
http://lists.w3.org/Archives/Public/w3c-wai-er-ig/2002Jun/0010.html
[2] UAAG Guideline 3
http://www.w3.org/TR/UAAG10/guidelines.html#gl-feature-on-off
[3] WCAG 2.0 issues tracking list http://www.w3.org/2002/09/wcag20-issues

Regards,
Phill Jenkins
IBM Research Division - Accessibility Center
http://www.ibm.com/able



Charles McCathieNevile <charles@w3.org> on 06/10/2002 06:05:11 PM

To:    Nick Kew <nick@webthing.com>
cc:    Terje Bless <link@pobox.com>, Phill Jenkins/Austin/IBM@IBMUS,
       <w3c-wai-er-ig@w3.org>
Subject:    Re: Automatically testing Web content for flicker



Hmmm. The conclusion I would draw is that people should use software that
protects them, and that this is important. However, it should be noted that
this is something that can be a problem, and there is value in avoiding
flicker if it isn't necessary (often, like red-green combinations, it isn't
that it was necessary, and in many cases authors are happy to try and
change something that affects people if it is easy enough to do).

As a web content requirement it should still stand I think, although it
might be a different priority level. (Back to the "until user agents..."
discussion)

Chaals

On Sun, 9 Jun 2002, Nick Kew wrote:


  On Sun, 9 Jun 2002, Terje Bless wrote:

  > should be possible even accidentally. Making sure your content does not
  > contain any such seems eminently suitable for the WCAG, if perhaps a
  > bit obscure.

  But is that really helpful on the Web?  It could only work if _every_
  site is _guaranteed_ to conform; otherwise the epileptic is at risk
  when visiting an unknown site.

  Hence my comment that people affected by this should seek to use client
  software that doesn't expose them to the risk.



--
Charles McCathieNevile    http://www.w3.org/People/Charles  phone: +61 409 134 136
W3C Web Accessibility Initiative     http://www.w3.org/WAI  fax: +33 4 92 38 78 22
Location: 21 Mitchell street FOOTSCRAY Vic 3011, Australia
(or W3C INRIA, Route des Lucioles, BP 93, 06902 Sophia Antipolis Cedex,
France)

Received on Monday, 2 December 2002 11:28:45 UTC