- From: Morten Tollefsen <morten@medialt.no>
- Date: Mon, 27 Feb 2017 11:02:01 +0000
- To: Juliette <piazza.juliette@gmail.com>, "w3c-wai-ig@w3.org" <w3c-wai-ig@w3.org>
- Message-ID: <HE1PR07MB1467528495F272051EDFAB47A6570@HE1PR07MB1467.eurprd07.prod.outlook.com>
Hi, Juliette! I think this discussion is important and exciting, and I’ll try to share some of my experiences. I’m a blind programmer and accessibility expert. Another blind friend and I started a company in 1999 (at the moment we are 6 people, and others are hired in when needed). We have managed several research projects in Norway and also participated in some EU projects. We offer accessibility evaluations, user testing and accessibility training.

Some years ago (perhaps 6) I got the same idea: remote testing would be great. There were several reasons, and you have mentioned most of them: travel, problems with technology configurations (e.g. screen reader, speech preferences, colour preferences, browser setup, ...). In short: is it possible to offer high-quality remote testing, both to make tests more cost-effective and to make it easier for disabled persons to participate? Yes, my interest is persons with disabilities; it is much easier to do good testing with persons without disabilities, and others can offer that kind of testing.

I started to build a test team (called Team WWW). I managed to recruit persons with very different skills, disabilities, ages, genders, etc. At the moment the team has about 60 participants, and because Norway is not too large, and because the interest in web user testing including disabled persons is not too big, this team is large enough. Testers get paid, and in the long run I think this is important for quality (at least here in Norway). By quality I mean that users take their time to test, write suitable reports, etc. (I’ll write a little bit more about that below).

Main conclusion: the first remote tests were extremely successful, much more successful than I would have dreamt. And this has continued to surprise me: I’ll argue that sometimes remote testing with disabled persons works better than observation tests. This is of course not always true, but remote testing is a good method for many websites and test scenarios. The reason for these good experiences probably has something to do with performance anxiety: when being observed it feels very important to perform well, and that pressure is much lower when testing remotely.

My method (KOMET): the name was constructed because I wrote a Norwegian book about web accessibility and the publisher wanted me to find a name. Directly translated, KOMET is an abbreviation for something like "Concrete exercises and remote testing". Here are the steps (a little bit simplified):

1: Usually we do a professional accessibility evaluation first, and problems are fixed. Users should not be misused to report bad/missing alt texts, missing labels, no visible keyboard focus, etc. The testers should be used to make a website, service, app or other software better.

2: We figure out what is most important to test. Sometimes this is very obvious, but not always. Our goal is that a single user test should never take more than 2 hours (no rules without exceptions, of course). It is possible to do repetitive testing with the same testers (possible in some cases, not desirable in others).

3: We (MediaLT) and the product owner create a user test. The quality of this test, usually 5 to 10 exercises, is of course important to get valuable and usable feedback. My experience is that you need knowledge about users, assistive technology and accessibility to prepare really good questions for remote testing. In traditional observation tests it is a little bit easier to compensate for bad/imprecise exercises.
In this step we also define the target group (number of participants, disabilities, whether certain knowledge is needed, etc.).

4: Expert evaluation. One or more of our experts take the test and, if needed, tasks are changed or modified. Leaving cognitive disabilities aside, my experience is that blind users and users with very low vision use the most complicated assistive technology. And it should not surprise anybody that for a user interface developed for visual presentation, the largest differences appear when the same interface is presented with Braille and/or speech. Therefore we always perform the tests with a screen reader (sometimes both on PC and mobile).

5: Recruit testers. Usually this is very easy because we use Team WWW. Sometimes we have to recruit others, of course (60 persons is not a very large group, and we do not want to use the same person too often). We have relatively good knowledge about each user in Team WWW: ICT knowledge, type/version of assistive technology, age, and much more. To participate in Team WWW, users with some disabilities need a minimum knowledge level, and we have defined this in quite some detail (and in some cases also checked whether the users actually have the knowledge they think they have). E.g. for blind users we have defined which screen reader functionality they need to know. Even if there can be a point in testing with users who do not have our defined competence level, it is very valuable to know what each user is capable of in order to verify and understand test findings. MediaLT offers training and tests to reach the competence level. The competence level is far from expert knowledge, but again, e.g. for screen readers, a minimum knowledge of how to use the screen reader in web interfaces is needed to get valuable test results.

6: Send the test to the users.

7: Collect test reports. We have explained how the users should try to answer: write down as detailed as possible how they try to solve an exercise (also when they do not succeed, of course). Examples: “I press Ins+F7 to open the link list, write sp to find the link Sport and press Enter” (from a Jaws user); “Sport is easy to locate in the main menu and I click the button” (from a sighted user). These two answers are quite typical: the Jaws user selects a link, but visually the link is styled to look like a button (see the small sketch further down). An expert needs (at least I think so) to collect the answers and write an understandable report for the principal. And to understand the reports I get from my testers, it is important to understand why testers write what they do (as I tried to demonstrate in my example). Sometimes it is not important whether Sport is a link or a button, but it can be quite important when writing help texts, when a blind person gets instructions from a sighted person, etc. The Jaws command for showing a link list will, for example, not show buttons, and if a help text states that you have to press the Sport button, this can be quite misleading for a blind user (jumping to buttons or showing a list of buttons are different screen reader commands). Answers are anonymous to the principal (not to MediaLT). If an answer is not understandable, it is therefore possible for us to figure out what the tester meant. Actually, we do not need to ask very often.

8: Prepare the test report.

I believe remote testing is efficient for relatively simple things which do not require too much specialized knowledge. In other words: test things which are meant to be used by everybody.
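To make the link/button point from step 7 a bit more concrete, here is a rough sketch. The markup and queries are only illustrative (a made-up "Sport" page, not taken from any real test), but they show why "press the Sport button" can mislead a screen reader user when the control is really a link styled as a button:

    // Hypothetical markup: a link styled to look like a button, plus a real button.
    const container = document.createElement("div");
    container.innerHTML =
      '<a href="/sport" class="looks-like-a-button">Sport</a>' +
      '<button type="button">Search</button>';

    // A screen reader's links list (e.g. Jaws Ins+F7) enumerates anchors with an href,
    // roughly what this query returns:
    const linkNames = Array.from(container.querySelectorAll("a[href]"))
      .map((el) => el.textContent);

    // Button navigation only finds elements exposed with a button role:
    const buttonNames = Array.from(container.querySelectorAll('button, [role="button"]'))
      .map((el) => el.textContent);

    console.log(linkNames);   // ["Sport"]  -> the Jaws user finds it in the link list
    console.log(buttonNames); // ["Search"] -> no "Sport" here, so "press the Sport button" misleads

So a help text should use the role the assistive technology actually exposes (here a link, not a button).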
In many development situations remote testing is very difficult or impossible because of security. For example, I work with a large bank which requires very specific PC configurations to be able to log in (VPN client, server aliases, etc.). Logging in from outside the bank actually requires much more knowledge than using the things they develop. It is not realistic to do normal user testing remotely there, but it could be possible once things are published on the bank's public site. I could have written much more about this, but that's off topic.

An international test team is exciting. Some challenges include: language, knowledge about the users, cost, time schedule, ... Perhaps disabled persons in other countries would like to spend their time testing for free, I do not know, but that would have been impossible in Norway if you expect serious results. I as a blind person would not use my spare time for free testing, but of course I'm not representative because of my work. However, this is something to take into consideration.

By the way: language is really important. As you can see, I write far from perfect English, and I've not read through what I've written, so I hope it is understandable :-). The same is true when reading. I had a couple of Swedish testers (on a Norwegian site), and the results were different from what I got from Norwegian testers. The reason was language: the Swedish testers did not understand labels and alternative texts for pictures, and therefore did not succeed in some tasks. And if you do not know it: Norwegian and Swedish are quite similar languages.

BR: Morten Tollefsen
+47 90899305, www.medialt.no<http://www.medialt.no>

From: Juliette [mailto:piazza.juliette@gmail.com]
Sent: Saturday, 25 February 2017 14:19
To: w3c-wai-ig@w3.org
Subject: Remote usability testing with disabled people

Hello,

I launched Inclusight<http://www.inclu-sight/>, a startup that provides disabled participants for user testing. After providing disabled participants for face-to-face user testing for a while, I figured out this was not the best solution. It's not convenient at all for disabled people, as they need to travel and to plan the session a long time in advance. And when they start the testing, they find out they cannot use their own familiar configurations. It's also a pain for user researchers who, on top of that, are not always aware of what it is like to work with disabled people. That's how I came up with the ambition of offering remote usability testing for disabled people.

At this stage, I am looking for professionals willing to share with me their experience in doing remote user testing with vulnerable or disabled people. I want to understand how you could get the most benefit from Inclusight. I am looking forward to hearing from user researchers, web accessibility experts or any other professionals.

Kind Regards,
--
Juliette
Received on Monday, 27 February 2017 11:19:02 UTC