Re: Checkpoint 3.4 again

At 12:38 AM 2001-07-29, Kynn Bartlett wrote:
>
>>Checkpoint: 3.4 Utilise content in a wide range of modalities where
>>possible to assist the users of your content.
>
>Mmmm, it still faces the same problems with "uncheckable" that I'm
>currently harping on, but I guess it's okay.  How is it different from
>Guideline 1, though?
>

AG::

a) how different from Guideline 1 -

     * Guideline 1 - Presentation. Design content that allows
       presentation according to the user's needs and preferences

The core of Guideline 1 is compatibility with diverse presentations.

Checkpoint 3.4 addresses maximizing throughput at the cognitive, or
post-sensory, processing layer.

So long as Guideline 1 says 'presentations,' it is going to be read as
referring to the immediate, actual human:computer interface.  Compare
'exposition,' which connotes more depth.

What we have to capture, somehow, is that the goal line is at the end of a
slalom where barriers must be avoided at multiple stages.

Note:  I am not myself against having somewhere some sort of a deeper
tree where we show 1.1 and 3.4 to be both instances of a common
principle: "utilize redundancy to assure that you don't fall prey to
single-point failure modes, whether in sensory or cognitive
capability."  But that's just me, not our readership, speaking.  Images
fail without vision; words fail without reflexive lexing (whatever it
is that is absent in dyslexia).

Both Guidelines 1 and 3 could be construed as sub-cases of checkpoint
4, if we just back up and put the processing that goes on inside the
person within the system diagram.  But these sweeping unifications
leave many people cold.  As we have long debated, it is important to
have the information available at multiple levels of concreteness and
specificity: to expose both broad principles and nitty-gritties.

This is an example of the redundancy required to satisfy Guideline 3, <dog
food, eh, Gregory?>

I think that a restatement of this question [that might rathole us] is
to ask whether the current phrasing of Guideline 3 isn't at the level
of "effective communication," without any hint of specialization to
communication effectiveness that survives in the presence of
disabilities.  Maximizing comprehensibility is an aspect, approaching
the totality, of effective communication.  The approach of Guideline 3
is therefore _very_ Universal Design in its problem attack.  It talks
about ways to maximize comprehensibility, and, oh, by the way, these
methods will also avoid certain single-point failure modes involving
cognitive disabilities.

Maybe we need to work from the small to the large a bit more, and say
"eliminate single-point failure modes triggered by cognitive disabilities
through redundancy in exposition and media."  Verisimilitude achieved via
visual and auditory imagery is a central technique for avoiding cognitive
gotchas.  Then follow up with "Oh, and by the way, this will increase your
comprehension rate among the general public, because of the endemic diversity
of learning styles and literacy, right brain vs. left brain dominance, etc."

b) Checkability.

Confirming by machine that some pages won't work for dyslexics should
not be hard.  Confirming that they will work probably won't be easy.
One can do better by applying more difficult remedies -- on and on,
more cost, more benefit.  No sharp knee in the curve.
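
To make that concrete, a crude screen along the following lines can
flag pages that probably won't work, while proving nothing about the
pages it passes.  This is a sketch only; the words-per-sentence and
letters-per-word proxies and the thresholds are my illustrative
assumptions, not tested values.

    import re

    def flag_hard_reading(text, max_words_per_sentence=25.0,
                          max_letters_per_word=6.0):
        """Crude readability screen: True means 'probably won't work'."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        if not sentences or not words:
            return False
        words_per_sentence = len(words) / len(sentences)
        letters_per_word = sum(len(w) for w in words) / len(words)
        # Failing either test is evidence against the page; passing
        # both confirms nothing, per the point above.
        return (words_per_sentence > max_words_per_sentence
                or letters_per_word > max_letters_per_word)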

As with the "use quality link text" checkpoint, sometimes there is no way to
check without a person in the loop.  Maybe we are discovering that there are
some checks that for practical purposes require _two people_ in the loop.

But the way I glaze people's eyes with in-group idioms is readily
checkable with a mildly modified spell-checker algorithm.  This is a
gold-plated Web Service waiting to be marketed.  Doing word-use
checking from a checking factory instead of on the personal equipment
means the dictionary can be much larger and can contain buzz phrases
rather than just lexemes: positive recognition of idiomatic phrases, as
opposed to negative results for un-colloquial words alone.
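
The 'mild modification' is just to slide an n-word window over the text
and match it, positively, against a phrase dictionary, instead of
flagging unknown single words.  A sketch, in which the phrase list is a
tiny stand-in for the much larger dictionary a checking factory could
host:

    import re

    BUZZ_PHRASES = {
        ("single", "point", "failure"),
        ("knee", "in", "the", "curve"),
        ("gold", "plated"),
    }
    MAX_PHRASE_WORDS = max(len(p) for p in BUZZ_PHRASES)

    def find_idioms(text):
        """Return (word position, phrase) for each recognized idiom."""
        words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
        hits = []
        for i in range(len(words)):
            for n in range(2, MAX_PHRASE_WORDS + 1):
                if tuple(words[i:i + n]) in BUZZ_PHRASES:
                    hits.append((i, " ".join(words[i:i + n])))
        return hits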

So exotic word use can be checked to a practical degree.

The absence of graphic exposition is trivial to check, since we have
deprecated ASCII graphics.  Checkpoint 3.4 is checkable because it is
about the 'multi' count in multimedia.
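
A sketch of that check; the tag-to-modality mapping here is an
illustrative assumption about the markup, not a normative list:

    from html.parser import HTMLParser

    MODALITY_TAGS = {
        "img": "graphic", "object": "graphic",
        "audio": "auditory", "video": "audio-visual",
        "p": "text", "table": "text",
    }

    class ModalityCounter(HTMLParser):
        def __init__(self):
            super().__init__()
            self.modalities = set()

        def handle_starttag(self, tag, attrs):
            if tag in MODALITY_TAGS:
                self.modalities.add(MODALITY_TAGS[tag])

    def count_modalities(html):
        """Return the set of modalities present; its size is the
        'multi' count."""
        counter = ModalityCounter()
        counter.feed(html)
        return counter.modalities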

The difficulty is that there is no hard and fast way to say "enough"
for cognitive issues.  I fear that is beyond what we can expect of
ourselves.  And it is our checkpoint religion that should give, in this
case, not Guideline 3.  We should not fail to be accurate just because
we can't meet an arbitrary standard of precision.

There are readily described, highly cost-effective checking methods
which contribute significantly to the satisfaction of Guideline 3,
though they are not necessarily sufficient or definitive.

What list of questions should Sean ask himself so that he could, by
himself, conclude that Anne probably means 'elements' as described in
the OED and not as specialized in SGML?  A brute-force trigram
correlation applied to her history of posts to W3C lists, if asked the
head-to-head question "which interpretation is more likely?", should
probably come back with the right answer as well.
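
The brute-force version might look like the following sketch.  The
sense corpora here are stand-ins I made up; a real run would feed
Anne's actual message history.

    from collections import Counter

    def trigrams(text):
        words = text.lower().split()
        return Counter(zip(words, words[1:], words[2:]))

    def which_sense(utterance, sense_corpora):
        """Pick the sense whose sample corpus shares the most
        trigrams with the utterance in question."""
        utt = trigrams(utterance)
        def overlap(corpus_text):
            corpus = trigrams(corpus_text)
            return sum(min(n, corpus[t]) for t, n in utt.items())
        return max(sense_corpora, key=lambda s: overlap(sense_corpora[s]))

    senses = {
        "OED": "an element of risk in every plan we make",
        "SGML": "the content of an element is everything between its tags",
    }
    which_sense("the content of an element", senses)  # -> "SGML"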

Designing for diversity takes a mental step back.  That is especially
hard in the cognitive case, where whatever cognitive processing we do
in the course of synthesis is very unconscious.  There are lots of
self-improvement books selling that are basically checklists that help
one take just such a step back, going back at least to _What Color Is
Your Parachute?_ by Richard Nelson Bolles.  I think that this book is a
precedent we should look to in coming up with methods for Guideline 3
which will help jog the author out of an obsessive lock on a single
cognitive view of what they are creating.  Greeking and lifting link
text out of context are precedents of the same sort: mechanical
transformations that aid in recognizing what the question is that needs
to be addressed.
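
Lifting link text out of context is itself mechanizable.  A sketch:

    from html.parser import HTMLParser

    class LinkTextLifter(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_link = False
            self.current = []
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a" and any(name == "href" for name, _ in attrs):
                self.in_link = True
                self.current = []

        def handle_endtag(self, tag):
            if tag == "a" and self.in_link:
                self.in_link = False
                self.links.append("".join(self.current).strip())

        def handle_data(self, data):
            if self.in_link:
                self.current.append(data)

    def lift_link_text(html):
        lifter = LinkTextLifter()
        lifter.feed(html)
        return lifter.links

Seeing "click here" standing alone in the output is exactly the jog out
of the single cognitive view that the author needs.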

Actually, interactive drills in search of good keywords are another
good exercise by which to discover how to enable authors to create more
robust, cognitive-diversity-ready content.

I second your keywords idea, Kynn, provided the keyword-driven "try
harder" resource is readily available.  [we can push this further in
PF]  One technique is to characterize your content with the metadata
schema [think IMS metadata for subject classification] that is
supported by the corpus of "how things work" multimedia literature
extant on the Web.  That is a good "try harder" thing to try if the
allusion in an image or the denotation in a term is not ringing bells
with the user.  And greasing the skids of the "try harder" request is
one of the generic strategies we want to roll into the recipe.
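
As a sketch of the greased "try harder" path: the registry and URLs
below are hypothetical stand-ins; the real thing would be a Web corpus
indexed by something like the IMS subject classifications.

    HOW_THINGS_WORK = {
        "physics/electromagnetism/induction": [
            "http://example.org/induction-animation",
            "http://example.org/induction-narrated-tour",
        ],
    }

    def try_harder(subject_keywords):
        """Look up alternative expositions of the same subjects."""
        alternatives = []
        for keyword in subject_keywords:
            alternatives.extend(HOW_THINGS_WORK.get(keyword, []))
        return alternatives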

Al

>--Kynn
>
>-- 
>Kynn Bartlett <kynn@reef.com>
>Technical Developer Liaison
>Reef North America
>Accessibility - W3C - Integrator Network
>Tel +1 949-567-7006
>________________________________________
>BUSINESS IS DYNAMIC. TAKE CONTROL.
>________________________________________
>http://www.reef.com/
>  

Received on Sunday, 29 July 2001 12:09:31 UTC