- From: Maritza Johnson <maritzaj@cs.columbia.edu>
- Date: Fri, 19 Jan 2007 17:04:14 -0500
- To: W3 Work Group <public-wsc-wg@w3.org>
- Cc: Mary Ellen Zurko <Mary_Ellen_Zurko@notesdev.ibm.com>
- Message-Id: <FBCA137A-713B-4B97-A1CC-F10A65739471@cs.columbia.edu>
> All, from a process and schedule point of view (keep reading folks!), my assumption (as laid out in assumptions) is that we'll do the user test verification where testing usually occurs, between Candidate Recommendations and Proposed Recommendations. Right now the schedule gives us three months. So we have to either scope the testing to fit, or grow the time estimate. Now is the time for us to put a stake in the ground on what testing we will do. Can this fit in the timeframe? My experience is that the hardest part to control, time wise, is lining up the participants.

How much are we planning to do with the user studies? What I mean is: how much are we planning to iterate on our recommendations, and how much feedback are we trying to gather? Do we have access to participants? If we were to conduct an in-person interview or something similar, do we have the resources?

As for the time frame for the stage between Candidate Recommendations and Proposed Recommendations ... do we have actual dates for this? (If we've talked about it before, I must have spaced out for a few minutes :)

I can see a couple of user studies being applicable to our group. One of the most manageable might be a user study with the goals Mike McCormick suggested a few weeks ago. But I think a study with those goals would be most beneficial to us if we did it prior to drafting the candidate recommendations. If we have access to a group of participants representative of various user groups, and if we decide it's OK to distribute the questionnaire by email, the time-consuming parts would be writing the appropriate questions and drawing conclusions from the responses. I think collecting this information would be useful and may offer some additional insight about our target user group.
I also think this information has the potential to benefit others who are concerned with similar problems (like phishing) - there have been several times when I've wished there was more information about what the average user does and doesn't know about security while browsing the internet. But at the same time, we do have some information about average users from previous user studies, so this particular study isn't an absolute necessity.

Of more direct relevance ... since we're making recommendations and not implementing these solutions (I don't think we are, anyway), we're mostly limited to lo-fi prototyping and to interviewing individuals or holding focus groups. Depending on the number of participants we have, and the types of recommendations we're evaluating, this user verification will be more time consuming. It's tough to say how the user studies will go without knowing how our recommendations will look.

I'm thinking a user study could be conducted where someone draws out what a recommendation would look like in the browser. Then either one person or a small group of people is presented with the drawing and asked, "If you were doing activity X and you were concerned about Y, is the information displayed enough to ease your concerns?" Follow-up questions would then be asked, such as: Is it clear to you what this means? Compared to the information you normally see, does this convey the meaning more clearly? Is there more information you feel you should be shown? Is there anything you consider irrelevant to what you're doing? (followed by more questions as necessary, with possible changes made to the lo-fi prototype based on the participants' suggestions/answers)

Discussing a user study like this assumes we have the resources to conduct an actual study, including the participants, a facility, possibly some means of compensation for the participants, and people to create the lo-fi prototypes and to conduct the studies, revising the prototypes as necessary.
Maybe others are thinking of something completely different in terms of user studies ... thoughts?

- Maritza
Received on Friday, 19 January 2007 22:04:42 UTC