RE: Disability Type Analysis of WCAG 1.0

From: Charles F. Munat <chas@munat.com>
Date: Fri, 24 Aug 2001 15:37:36 -0700
To: "WAI Guidelines WG" <w3c-wai-gl@w3.org>
Message-ID: <LHEGJAOEDCOFFBGFAPKBIEELCJAA.chas@munat.com>
> [C Munat] So what does this mean? It means that Kynn's experiment tells us
> NOTHING.
>
> [Paul] Charles, Your strength is in your command of the concept of
> _understatement_ <smile>. I disagree with your assertion that it tells us
> nothing. The numbers and percentages are the result of one
> person's inquiry
> into the composition of the guidelines. From what I can tell, Kynn is not
> necessarily advocating that we increase or decrease the percentage of
> requirements related to any specific disability (although he may have
> opinions on the issue). Mere percentages do not convey the full meaning of
> the data they are meant to represent. Still, his analysis shows a few
> interesting facts. For example: for whatever reason, there are more
> guidelines which benefit those who are blind than those with other
> disabilities. Whether this is a result of the nature of the
> disability, the
> ease of validation, the amount of knowledge about the disability, the
> advocacy of blind people, or whatever else, this is still something worth
> looking at. In fact, just seeing the numbers causes us to reflect on the
> matter. We may disagree with the findings, the methodology and so forth,
> but, as you have said in other posts, it is important to have this sort of
> information because it makes us think.

Reply:

[Note: some people like to jog, some people like to swim. I prefer mental
gymnastics. So if in my replies I seem to be working up a sweat, remember
that I'm just having fun exercising. In other words, don't take my posts too
seriously. This isn't a flame.]

I don't think that it's something worth looking at. I think it is a
distraction.

It is just as likely that we have not adequately addressed the needs of
blind users and have fully addressed the needs of cognitively disabled users
as the reverse. Your belief that this experiment may lead us to find needs
that haven't been addressed depends on an assumption: that the number of
checkpoints bears some relationship to the degree to which needs have been
addressed. There is no basis for this assumption. Therefore it is just as
likely that a shift in our attention to focus on the needs of the
cognitively disabled may cause us to overlook the needs of blind users.

Thanks for noticing my use of understatement. What I really should have said
is:

It means that Kynn's experiment not only tells us nothing of value, but also
that it may be outright LYING to us.

Don't believe me? Ask any scientist.

The point of any experiment is to test a hypothesis. The experiment succeeds
if it produces evidence against the hypothesis or evidence consistent with it
(in statistical terms, if it lets you reject or fail to reject the null
hypothesis). An experiment is a failure only if it fails to test the
hypothesis at all.

You could say that Kynn's hypothesis is that the distribution of checkpoints
across disabilities served is not uniform. But that's not really Kynn's
hypothesis. Otherwise, so what? Who cares if it is not uniform?

Kynn's actual hypothesis has a subtext: the checkpoints are not uniformly
distributed AND *this has some meaning for accessibility*. Since his
experiment did not control all the variables properly (heck, it didn't
control ANY of them), it fails to test this hypothesis. Therefore it is of
no value.
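Note that the bare statistical half of that hypothesis would be trivial to test. Here is a sketch in Python, using invented checkpoint counts (not Kynn's real numbers) and a chi-square goodness-of-fit test against a uniform distribution. Even a decisive result only answers the "not uniform" part, never the "has some meaning for accessibility" part:

```python
# Hypothetical checkpoint counts per disability group -- these numbers are
# invented for illustration, not taken from Kynn's actual survey.
counts = {"blind": 30, "low vision": 12, "deaf": 8, "motor": 10, "cognitive": 5}

total = sum(counts.values())
expected = total / len(counts)  # uniform distribution: equal count per group

# Pearson's chi-square goodness-of-fit statistic against uniformity
chi2 = sum((observed - expected) ** 2 / expected for observed in counts.values())

# Critical value for alpha = 0.05 with df = 5 groups - 1 = 4
CRITICAL_05_DF4 = 9.488

print(f"chi-square = {chi2:.2f}")
if chi2 > CRITICAL_05_DF4:
    print("Reject uniformity: the counts are measurably non-uniform.")
else:
    print("Fail to reject uniformity.")

# Even a decisive rejection here only tests the statistical claim ("the
# counts are not uniform"). It says nothing about the unstated second half
# of the hypothesis: that the skew means anything for accessibility.
```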

To make the test valuable, it will have to be reformulated to take into
account the many variables, some of which Kynn, Paul, and I have already
mentioned. Barring such a reformulation, the results of the experiment are
just as likely -- perhaps more likely -- to send us off on a wild goose
chase as to bear fruit.

The real question, as I mentioned previously, is ARE THERE SOME NEEDS THAT
WE ARE NOT ADDRESSING? In the absence of data to that effect, there is no
point in us running off on another digression. We have enough work to do
just to clean up all the messes we already have.


> [C Munat] We can't use these results because we haven't designed the
> experiment properly,
> and there may be serious problems with our results. If we depend on these
> results in any way, we could be making a big error.
>
> [Paul] Replication of studies, or alternative versions of studies are the
> usual way of remedying this kind of complaint.

No. Failed studies are failed studies. Why would anyone want to replicate a
failed study? It's not about "alternative" versions of the study, it's about
doing a real study in the first place: one with a clearly stated hypothesis,
carefully designed both to test that hypothesis (and only that hypothesis)
and to produce valid results.

> [C Munat] But the biggest problem with this sort of informal experiment is
> that it may
> fool us into thinking that we DO know something.
>
> [Paul] Good point. We have to be careful about jumping to conclusions.

But you have already jumped to a conclusion, Paul! You've jumped to the
conclusion that this data can tell you something of value, and it cannot. It
may -- by pure accident -- correlate with some true situation, but we have
no way of knowing that. Your eagerness to make something useful out of this
(and I'd be very surprised if you were alone) merely proves my point that
this sort of informal experiment is very dangerous.

> [C Munat] Worse, this sort of experiment gives support to partisanism.
>
> [Paul] If the results are interpreted that way, then yes, you're right.

And they will be. Perhaps not in this group, but on the IG list where Kynn's
message was originally posted.

>
> [C Munat] Here is what I recommend:
>
>  Instead of looking for bias in the WCAG, why don't we look for needs that
> haven't been addressed?
>
> [Paul] It seems to me that this analysis can be used as one of
> many tools to
> do just that.

Except it is not an analysis, it's a survey dressed up as an analysis. And
as a failed experiment, it is not even a tool. So it is of no use in
pointing us at needs that have yet to be addressed (and may point us in the
opposite direction). I can say, "let's find needs that haven't been
addressed" (and I think I did) and get a better result. At least my
suggestion doesn't imply a bias without evidence to support it.

>
> [C Munat] Has it occurred to anyone that it might take more
> checkpoints to address the needs of one group than it does another? (Kynn
> acknowledges this in his comments about photo-epileptics.) Who cares how
> many checkpoints address this group or that group? This isn't a contest to
> see whose is longer.
>
> [Paul] Good point. We should be very careful about using Kynn's
> methodology
> as a litmus test for equality across disability types in the
> guidelines. In
> fact, I can say right now that it is my opinion that we should NOT use his
> methodology for this purpose.

I have a better idea. Reject it out of hand. I'm not saying that Kynn's
intentions weren't good, just that the method he used is faulty. If Kynn can
formulate a real hypothesis and propose an experiment that *will* control
all the variables, then that would be great! I'm all for doing testing to
improve the guidelines. I just want us to avoid another wild goose chase
(and the concomitant flame wars).


> [C Munat] The real question is HAVE WE FAILED TO ADDRESS ANY NEEDS? If we
> have, then
> please state the specific problem, and, if possible, some solutions.
> SOLUTIONS ALONE DON'T CUT IT. . . .
>
> [Paul] This sounds like a good idea to me. Perhaps I can spend
> some time on
> this one. I'd be interested to see others do the same.

I look forward to seeing your results!

Chas. Munat
Received on Friday, 24 August 2001 18:35:18 GMT