
Re: Action Item: 3.3 Proposal (Writing Style)

From: Al Gilman <asgilman@iamdigex.net>
Date: Wed, 14 Mar 2001 11:41:26 -0500
Message-Id: <200103141620.LAA6256212@smtp2.mail.iamworld.net>
To: Kynn Bartlett <kynn-edapta@idyllmtn.com>
Cc: <w3c-wai-gl@w3.org>
At 07:35 AM 2001-03-14 -0800, Kynn Bartlett wrote:
>At 06:19 PM 3/13/2001 , Al Gilman wrote:
>>The difficulty with incorporating reading-level checking in what we are doing
>>is that reading-level checking makes sense as a _process requirement_, a
>>required activity in the content development process; but there is no single
>>tool or threshold that makes sense as a _product requirement_ for all web
>>content.
>
>Yes.  Well stated.  I agree.
>

AG:: Thank you.

>I have always told people that web accessibility is a mindset -- a
>methodology, a way of solving problems, a view of the web -- not a
>series of binary checkpoints.
>
>>The product requirements for reading level should indeed float, as people have
>>pointed out.  How to get help from the IRS in preparing your tax return is a
>>topic that has to be explained in very accessible language.  A doctoral
>>dissertation in Physics should still be written as simply as possible, but it
>>may be appropriate to assume a lot more knowledge among the readers than one
>>can get away with in general writing.  Both of these will be more successful
>>in clear writing if they use reading-level measures as a checking tool.
>
>At risk of beating a drum, I really need to emphasize that the above
>description -- which has been used by a number of people to illustrate
>the problem -- is incomplete because it is really only along one
>axis.  There are -numerous- reasons to try to communicate, and applying
>a fog index (e.g.) to -every- form of written or verbal communication
>is improper.  E.g., editorials, advertisements, parodies, humor,
>fiction, and a vast number of other content types are even -harder-
>to apply such a standard to.
>

AG::  I suspect that the difference in assessment here may be based on
different assumptions about how the results of the tool are applied.  I
absolutely agree that an uncritical application of what the tool tells you can
reduce valuable artistry to mindless mush.

Let me just try to communicate my experience.  What I find is that the points
where I actually decide to change how I wrote something are a distinct minority
of the cases where a mechanical checker raises issues.  [With a human editor it
is closer to 50:50.]  But I rarely check a piece of writing without the checker
turning up _some_ warning where I am really glad that I got nudged to change
what I had written.

I guess I am extrapolating from this "change the glaring problems, but only a
minority of the grumble-generating points" behavior in my expectations for the
application of reading level tools.  A reasonably stupid tool, in the hands of
a good writer, can generate perceptible improvement, but the smartest tool we
have won't save a clueless writer.

I suppose I am also in particular extrapolating from my experience using a
grammar critic, not just an overall reading-level score.  I would think that
reading level is most useful as a tuning parameter in a critic tool.  The
critic will flag specific points of pain in the text, not just render a
bottom-line score.  That, I believe, is the scenario where I expect it to be
[not everywhere, but often] an appreciable help.
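[For concreteness, a readability score of the sort under discussion is a simple
mechanical formula.  The sketch below is one common variant, the Gunning fog
index; the function names and the crude vowel-group syllable heuristic are my
own illustration, not anything from a particular checker tool.]

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels.
    (A crude heuristic -- real checkers use dictionaries or better rules.)"""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def gunning_fog(text: str) -> float:
    """Gunning fog index:
    0.4 * (words per sentence + 100 * complex words / words),
    where a 'complex' word has three or more syllables."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100.0 * len(complex_words) / len(words))
```

[Note that a score like this is exactly the "bottom line" I mention above; the
more useful critic tool would point at the specific long sentences and complex
words that drive the number up.]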

Al

>--Kynn
>
>Kynn Bartlett <kynn@reef.com>
>Technical Developer Liaison
>Reef North America
>Tel +1 949-567-7006
>________________________________________
>ACCESSIBILITY IS DYNAMIC. TAKE CONTROL.
>________________________________________
><http://www.reef.com/>http://www.reef.com
>  
Received on Wednesday, 14 March 2001 11:21:12 GMT
