RE: Confidential-Draft UA Charter

Kitch::
>> "Establish a mechanism for having users with
>> disabilities test the usability of user agent
>> features."

What is to be tested?  Existing browsers, mockups, paper walkthroughs? Who
will run these tests?

It should be expected that browser developers perform usability testing
of their own products (we do, and I am sure the others do too), and that
such testing include the intended user population. When we hit design
issues we can't resolve ourselves, we turn to outside research. If we can
help explore an issue through quick changes to our code, we are more than
happy to do so.

Al::
>> The WAI may or may not be
>> organizing user testing, but somebody is going to be doing it
>> somewhere, and we probably want to make it part of our plan to
>> find out what they are learning and use that knowledge here.

Having the WAI set up usability testing is not the answer.  Facilitating the
exchange of information among the groups already doing this work is a better
use of resources.

Perhaps we can say in 2.1:  "Identify and develop resources to support the
implementation of the user agent guidelines by developers.  This includes
industry, governmental, and university groups who will participate in the
exchange of information and research findings relevant to guideline
implementation."

If we need to include "evaluation" in 2.1, perhaps it should be based on
6.2, which says the guidelines are:
"... consensus-based, technically sound, and reflect the most current
technology."
This implies that some analysis/evaluation led to the adoption of a given
guideline.

Where we don't have any findings (or where they are unclear), it would be
valuable to have a "hit list" of design issues in need of research that
are not presently addressed by any group.

Mark



> -----Original Message-----
> From: w3c-wai-ua-request@w3.org [mailto:w3c-wai-ua-request@w3.org]On
> Behalf Of Al Gilman
> Sent: Tuesday, July 14, 1998 11:13 AM
> To: w3c-wai-ua@w3.org
> Subject: Re: Confidential-Draft UA Charter
>
>
> Charles::
>
> > >How do we measure success?  Item 2, "improved access to the
> > >WWW by PWD," is really vague and unmeasurable.
>
> There are some techniques people have used to measure this.  At
> the Federal WWW Consortium Seminar on Universal Access last
> Tuesday, Educational Testing Service (the people who brought you
> the SAT) briefed what they have done as part of a quality
> improvement project for the accessibility of their web
> publishing.  They have an approach to consolidating user
> evaluations that seems to make some sense.  There is also some
> work out of Cork, I hear.
>
> I suppose the questions are:  Do we want to listen to actual
> users?  If so, how?
>
> Kitch::
>
> > I agree that measuring "improved access" is vague. The charter
> > also says that we will "Evaluate the usability of accessibility
> > features" under 2.1 scope of work items. What if instead we say
> > something like, "Establish a mechanism for having users with
> > disabilities test the usability of user agent features." I
> > think it would be important to get feedback formally or
> > informally from users who are not directly connected with the
> > WAI activities.
>
> This is one area where it will be good to coordinate with the
> Evaluation and Repair Interest Group.  The WAI may or may not be
> organizing user testing, but somebody is going to be doing it
> somewhere, and we probably want to make it part of our plan to
> find out what they are learning and use that knowledge here.
>
> Al
>
>
