- From: phoenixl <phoenixl@sonic.net>
- Date: Wed, 29 May 2002 19:45:56 -0700
- To: w3c-wai-ig@w3.org
Hi, Al

We've basically used conference calls where everyone is on their computer
looking at the same web page. We wanted to keep the technology somewhat
simple. The main drawback was not being able to test people who didn't
have a direct link to the net and had only one phone.

I may not have been clear in my original posting. The document I'm working
on basically applies the methodology used in one project to a specific set
of other projects. It is not a standard test protocol for all projects
that other people may be working on, though it could probably serve as a
basis for developing protocols for projects outside the ones initially
being considered.

I think the methodology probably cannot be automated, but is probably best
used in combination with automated processes. One of the challenges is
knowing what questions to ask. Some people are great interviewers and
instinctively know when to explore certain areas in depth; others lack
that intuition. In some ways, it is like counseling.

Scott

> Is this an intrinsic limitation on the tasks or an infrastructure
> problem?
>
> For a remote lecture that Jennifer Sutton and I gave to Dan Andresen's
> class at Kansas State [1], we lashed up a collaboration infrastructure
> where the remote class had her computer screen projected and had
> Jennifer, me, and Perfect Paul all on the phone to the lecture room
> together.
>
> This was all done with commodity technology, so long as you count a
> cable modem as commodity technology. Modem speeds were not enough to
> make NetMeeting work fast enough for a lecture/demonstration situation,
> and the speech synthesis had to be done in an external hardware
> synthesizer. But it was doable.
>
> The infrastructure that one needs to do this sort of remote
> collaboration for purposes of usability testing is a subset of what is
> needed for disability access to virtual meetings. The people with
> disabilities have to have enough access to what is happening so that
> they can function as presenter, recorder, or chair [2].
>
> So the infrastructure issues _have to be_ worked.
>
> As usual, getting it running today takes some special tricks of the
> trade. But we're getting closer [3].
>
> Scott:
>
> It may be a little premature to define a standard test protocol for this
> sort of testing, but perhaps defining an infrastructure kit and doing
> some usability testing of the kit with multiple users would get us a
> notch up the ladder. This would usefully constrain the experimental
> protocols that researchers plan to use in this remote mode. Check with
> the folks at ATRC in Toronto, at least, to see what they do with their
> scattered-site testers.
>
> I don't think the answer is known well enough for you to get an
> immediate answer here, but if you
>
> a) put a working body into the Authoring Tools Working Group, where they
>    are working on automation of evaluation techniques,
> b) do the same for the Evaluation and Repair Working Group, for how to
>    blend the usability assertions with the automation results from the
>    rule-checking tools via EARL, and
> c) do some actual usability studies in this mode,
>
> ...you could come up with a solid contribution to the accessibility
> knowledge base.
>
> Al
>
> [1] A dip into accessibility on the WWW (K State lecture notes)
>     http://www.cis.ksu.edu/~dan/cis726/web/lecture_notes/aDip.html
>
> [2] Notes for planning purposes for the Advanced Collaborative
>     Environments Working Group in the Global Grid Forum
>     http://www-unix.gridforum.org/mail_archive/ace-grid/msg00026.html
>
> [3] Modality Translation Services Program
>     http://trace.wisc.edu/world/modtrans/
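[Editor's sketch, not part of the original thread.] Point (b) above suggests blending manual usability assertions with automated rule-checker results, in the spirit of EARL's subject/test/result triples. One minimal way to picture that blend is to merge the two sources of findings keyed by page and checkpoint, flagging where they agree. All names here (`blend_findings`, the finding fields) are illustrative assumptions, not any actual EARL tool or schema.

```python
# Illustrative sketch: merge manual usability assertions with automated
# rule-checking results, keyed by (page, checkpoint). Field names are
# hypothetical, not drawn from the EARL schema itself.

def blend_findings(manual, automated):
    """Merge findings from two sources.

    Each finding is a dict with 'page', 'checkpoint', and 'outcome'
    ('pass' or 'fail'). Returns one record per (page, checkpoint),
    noting what each source said and whether they agree.
    """
    merged = {}
    for source, findings in (("manual", manual), ("automated", automated)):
        for f in findings:
            key = (f["page"], f["checkpoint"])
            merged.setdefault(key, {})[source] = f["outcome"]

    report = []
    for (page, checkpoint), outcomes in sorted(merged.items()):
        report.append({
            "page": page,
            "checkpoint": checkpoint,
            "manual": outcomes.get("manual"),
            "automated": outcomes.get("automated"),
            # Sources "agree" only when both reported the same outcome.
            "agree": (
                "manual" in outcomes
                and "automated" in outcomes
                and outcomes["manual"] == outcomes["automated"]
            ),
        })
    return report


if __name__ == "__main__":
    manual = [{"page": "/home", "checkpoint": "1.1", "outcome": "fail"}]
    automated = [
        {"page": "/home", "checkpoint": "1.1", "outcome": "fail"},
        {"page": "/home", "checkpoint": "2.4", "outcome": "pass"},
    ]
    for row in blend_findings(manual, automated):
        print(row)
```

A real blend would carry richer evidence (who asserted what, with what confidence) as EARL's RDF model does; the point of the sketch is only that the manual and automated streams need a common key before they can be compared.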
Received on Wednesday, 29 May 2002 22:46:33 UTC