techniques, and a separate checkpoint, for word use

[warning -- this is a little out of order, and mostly at a techniques level. 
But I think we need to understand the techniques to understand where to divide
and where to combine the checkpoints.  At the moment I think that word use and
structural complexity should be separate checkpoints; as the mechanical aids
for dealing with these two components of language complexity are mostly
disjoint.  But y'all can decide that.  I wanted to put some flesh on the
bones.]

At 11:56 AM 2001-08-11 , Gregg Vanderheiden wrote:
>Al wrote
>I think that to eat our own dog food, we may have three short
>checkpoints here, not two long ones.
>
>a) use well-known words [check 'em]
>
>b) use simple constructions [yes, the constructions are
>language-specific]
>
>c) document the exceptions [using markup and/or other forms of metadata]
>
>
>GCV:  I think these are good techniques.  But are they guidelines or
>checkpoints?   

AG::

I don't know.

Actually, I don't think that any of the above are final checkpoint-scopes.

Can we work a bit more on getting the "strategies and techniques" in place and
come back and write the "guidelines and checkpoints" as an executive summary a
little later?

Macro-strategy: manage reading challenges
  Sub-strategy: state audience assumptions
    Sub-strategy: use appropriate assumptions
    Technique: use measurable scales
    Technique: make this information available [how is TBD for now]
  Sub-strategy: manage word use
  Sub-strategy: manage argument [plot, rhetorical] complexity

For this message, may I limit the discussion to word use?  This is getting
long, and the development for managing word use is less speculative and
easier to follow, both mentally and at an implementation level.  I will
return to long sentences and complex arguments a bit later if somebody else
doesn't chime in and fill that in before I get to it.

Sub-strategy: manage word use
Problem space model:  There are two parts to how you use a word: the sign
and the sense.  There are three "discourse contexts" of interest here,
where sign-to-sense associations live: a) your current utterance, b) your
individual [or typical expected] reader's control of this language, and
c) the general distribution of language control among the speakers of this
language.  This gives us a systematic way to set up the separate issues or
breakpoints we want to look at:

-- will your reader recognize the word and bind it to a sense?
-- will your reader bind it to the sense you intend?

which is related to

-- is the sign (the spelling of the word) common or exotic?
-- is the sense the usual, conventional one or a special one?

Based on the two possible answers to each of these questions, we get four
cases (a classification sketch in code follows the list):

Easy vernacular term:  commonly used sign bound to a generally understood sense
Hard vernacular term:  uncommonly used sign bound to a generally understood sense
Technical jargon:  uncommonly used sign bound to a special sense for a special use
Term of Art:  commonly used sign bound to a special sense for a special use
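
Here is a minimal sketch of that classification in code, just to pin the
model down.  COMMON_SIGNS is a hypothetical stand-in for a real
word-frequency list, and sense_is_special is a judgment only the author can
make; none of these names come from any checkpoint.

    # Hedged sketch: COMMON_SIGNS stands in for a real frequency list.
    COMMON_SIGNS = {"run", "set", "table", "window"}

    def classify(sign, sense_is_special):
        """Place a (sign, sense) pair in one of the four cases."""
        common = sign.lower() in COMMON_SIGNS
        if common and not sense_is_special:
            return "easy vernacular"
        if not common and not sense_is_special:
            return "hard vernacular"
        if not common and sense_is_special:
            return "technical jargon"
        return "Term of Art"    # common sign, special sense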

If you are using a Term of Art, then you need to include explicit
documentation of the intended sense, to avert the misunderstanding where
your readers recognize the sign but bind it to a common but unintended
sense.  [Separate discussion on how to minimize your dependency on
conventional sign:sense bindings suppressed.  There will be some left even
after you work to eliminate them.]
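
As one illustration of what "explicit documentation of the intended sense"
could look like at the markup level: the <dfn> element and its title
attribute are standard HTML, but the helper and the example sense text here
are invented for illustration.

    def define_term_of_art(term, sense):
        """Wrap a Term of Art so the intended sense travels with it,
        using HTML's <dfn> element and its title attribute."""
        return '<dfn title="%s">%s</dfn>' % (sense, term)

    # e.g. define_term_of_art("window",
    #          "a rectangular screen region managed by the application")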

If you are using a term that your reader may not recognize, that is a
different matter.  Here there are various approaches to assistive
intervention, ranging from passive reliance on readily available aids,
through various forms of expediting the use of special aids, down to the
explicit provision of explanatory resources much as for Terms of Art coined
in the current context.

Now, of course, the likelihood that your reader will recognize a given term
is a distribution, not a binary question.  But for purposes of managing the
application of assistive interventions, we can reduce it to a binary
question through the useful fiction of reading levels.  Let's assume that
we set two reading-level thresholds defining a band we expect our readers
to fall in.  Terms above the upper threshold always get active treatment.
Terms between the two thresholds get explicitly checked to confirm that
they work with assistive aids such as Atomica; this is passive
accommodation.  Below the lower threshold, we just do our best to give
general guidance on the assumed background through things like keywords,
which will help a student tell a teacher what they need help with to read
this page.  These last are examples, not recommendations.  But I have a
nominal model for the general Web: above a seventh-grade reading level you
should be aware you are suffering some word-control dropouts, and above a
tenth-grade level you had better think about footnotes and such.  For
other, more targeted materials these two thresholds would float up or down.
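
To make the threshold policy concrete, here is a hedged sketch.  The
default grade numbers are the nominal general-web values above, and
term_grade is assumed to come from some word-difficulty metric not
specified here.

    def intervention(term_grade, lower=7.0, upper=10.0):
        """Map a term's estimated reading-grade level to a treatment
        tier, per the two-threshold model above."""
        if term_grade > upper:
            return "active"      # footnote, local glossary entry, etc.
        if term_grade > lower:
            return "passive"     # confirm aids such as Atomica resolve it
        return "background"      # page-level keywords / metadata only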

For the person with a real word block, I think that we want to make sure
that there is a words-free mode that works for the site, even if what one
gets is not everything the site has to offer.  We also need to work on
assistive Web services, such as an Atomica clone that serves a cluster of
pictures where the word being documented is the common thread through all
the pictures, and to make sure that the "try harder" escape to using these
services is really easy.

On the matter of a words-free mode, consider a catalog shopping site such
as Lands' End.  This should work; there are no essential words here.  You
can buy stuff in a 'bricks' establishment just by picking up the goods,
handing them to a clerk at the register, and handing the money over when
the clerk has rung the item up.  At the 'clicks' equivalent there are
pictures of all the goods that are for sale, if you get deep enough into
the site.  The imagery is good at the leaf level.  What we need is a
structure and methods so that the words-free user can grok how to get to
the pictures, some element of seven-league-boots navigation to move to a
whole different department if they are in the wrong product category, and
how to execute a purchase.  The latter may require a wallet in the
software.  But if the "browse by product category" and "shopping cart and
checkout" procedures are obvious from the look of things, it should work.
Someone's essay of literary criticism on Moby Dick, no.  But retail sales
of commodity goods, yes.

In terms of overt actions to aid the correct binding of terms by your
reader, we use a variety of techniques, listed here roughly in order of
increasing invasiveness.

Check that standard references give adequate guidance on the interpretation of
the term, if consulted.

Indicate what domain- or discipline-specific lexicons need to be added to
[optionally, in precedence before] standard vernacular usage in
interpreting a given body of text.  This is, for example, a general note
indicating that terms defined in a professional reference volume will be
used as if vernacular, without further mention.  This can be formalized in
the automation of any glossary that is provided for more tutorial service
or for more peculiar senses.

Mark all uses, or the first use, of an above-threshold-rarity term with
markup which starts one down a path to an explanation [dictionary or
glossary entry] in a standard reference.  There is a middle ground here
where all uses are marked but only the first use is linked; access from the
other instances is by navigating to the glossary and text-searching for the
term.
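
A hedged sketch of that middle ground, assuming the page text is HTML and
the glossary is a simple mapping from lower-case terms to entry URLs (both
hypothetical data shapes, not anything specified by the checkpoint):

    import re

    def mark_terms(html_text, glossary):
        """Link the first use of each glossary term; mark later uses
        with a class only, per the 'first use linked' middle ground.
        glossary maps lower-case terms to glossary-entry URLs."""
        linked = set()
        pattern = re.compile(
            r"\b(" + "|".join(map(re.escape, glossary)) + r")\b",
            re.IGNORECASE,
        )
        def replace(match):
            term = match.group(0)
            key = term.lower()
            if key in linked:
                return '<span class="term">%s</span>' % term
            linked.add(key)
            return '<a class="term" href="%s">%s</a>' % (glossary[key], term)
        return pattern.sub(replace, html_text)

    # e.g. mark_terms("A boojum is rare; the boojum hides.",
    #                 {"boojum": "glossary.html#boojum"})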

Provide a dedicated, scope-bound exposition of the interpretation to be
given a term [a local glossary] and link from the first use, or all uses,
of the term to this.

Deciding where the break-points come in the binding of various
word-difficulty metrics to these levels of intervention is the stuff of a
word-use management policy.  If people have thought through such a policy
and engage in an active program of checking, documenting, and marking to
satisfy it, their material will be more widely accessible at the same level
of 'content' than if they don't.
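
As a closing sketch of what such an active checking program could look
like, this reuses the intervention() threshold function sketched earlier;
estimate_grade() is a purely illustrative stand-in for whatever
word-difficulty metric the policy actually adopts.

    def estimate_grade(term):
        """Illustrative stand-in for a real word-difficulty metric:
        here, longer words simply rate as harder."""
        return min(16.0, 4.0 + 0.8 * len(term))

    def audit_terms(terms, glossary):
        """Report what the word-use policy asks of the author for each
        term; intervention() is the threshold sketch given earlier."""
        report = {}
        for term in terms:
            tier = intervention(estimate_grade(term))
            if tier == "active" and term.lower() not in glossary:
                report[term] = "active: needs a glossary entry or footnote"
            else:
                report[term] = tier
        return report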

Al

Received on Thursday, 16 August 2001 16:54:35 UTC