Re: Section 6 - User Test Verification

Can everyone see this kind of font change? (I'm pretty sure the archive 
can't, but if I switch to ">" now, the difference between my original text 
and Maritza's reply will be lost in email as well.)

Like this...

From: Maritza Johnson <maritzaj@cs.columbia.edu>
Date: 01/19/2007 05:04 PM
To: W3 Work Group <public-wsc-wg@w3.org>
Cc: Mary Ellen Zurko <Mary_Ellen_Zurko@notesdev.ibm.com>
Subject: Re: Section 6 - User Test Verification

All, from a process and schedule point of view (keep reading, folks!), my 
assumption (as laid out in the assumptions) is that we'll do the user test 
verification where testing usually occurs, between Candidate 
Recommendation and Proposed Recommendation. Right now the schedule gives 
us three months. So we have to either scope the testing to fit or grow 
the time estimate. Now is the time for us to put a stake in the ground on 
what testing we will do. Can this fit in the timeframe? My experience is 
that the hardest part to control, time-wise, is lining up the 
participants. 

How much are we planning to do with the user studies? 

See http://www.w3.org/2006/WSC/wiki/NoteAssumptions

What I mean is, how much are we planning to iterate on our 
recommendations? See the updated schedule I just sent out (or the original 
milestones in the charter, and insert user testing between CR and PR). And 
how much feedback are we trying to gather? Do we have access to 
participants? That's one of the big questions. Where would they come from, 
and how would we get them? We need to start laying out now what we plan to 
do and what we need, so we can see whether we can get it. If we were to 
conduct in-person interviews or something similar, do we have the 
resources?

For the time frame for the stage between Candidate Recommendation and 
Proposed Recommendation ... do we have actual dates for this? (If we've 
talked about it before, I must have spaced out for a few minutes :) See 
the charter or the recent email (the recent email is more up to date):
http://lists.w3.org/Archives/Public/public-wsc-wg/2007Jan/0190.html



I can see a couple of user studies being applicable to our group ...

One of the most manageable might be conducting a user study with the 
goals Mike McCormick suggested a few weeks ago. The problem with that is 
that it will be hard to draw clear lines between our recommendations and 
the user testing. The closer we can get to actual user testing of the 
recommendations, the better. But I think a user study with these goals 
would be most beneficial to us if we did it prior to drafting the 
candidate recommendations. If we have access to a group of participants 
representative of various user groups, and if we decide it's OK to 
distribute the questionnaire by email, the time-consuming part would be 
writing the appropriate questions and drawing conclusions from the 
responses.

I think collecting this information would be useful and may offer some 
additional insight into our target user group. I also think this 
information has the potential to benefit others who are concerned with 
similar problems (like phishing) - there have been several times when I've 
wished there was more information about what the average user does and 
doesn't know about security while they browse the internet. But at the 
same time, we do have some information about average users from previous 
user studies, so this user study isn't an absolute necessity. I agree. If 
some group or organization were interested and willing to take this on, we 
would definitely want to cooperate with them. That was my takeaway from 
Mike M's last email on the subject; it's likely to be a bigger task than 
we can own. At its broadest, it might be most appropriate in conjunction 
with Pew Internet or a similar organization:
http://www.pewinternet.org/



Of more direct relevance ...

Since we're making recommendations and not implementing these solutions 
(I don't think we are, anyway), we're mostly limited to lo-fi prototyping 
and interviewing individuals or holding focus groups. 
We do actually have people who can implement them: the browser vendors on 
board, and anyone who can write a plugin. 
Depending on the number of participants we have and the types of 
recommendations we're evaluating, this user verification will be more or 
less time-consuming.

It's tough to say how the user studies will go without knowing what our 
recommendations will look like. I'm thinking a user study could be 
conducted where someone draws out what a recommendation would look like in 
the browser. Then either one person or a small group of people is 
presented with the drawing and asked, "If you were doing activity X and 
you were concerned about Y, is the information displayed enough to ease 
your concerns?" Then follow-up questions would be asked, like: Is it clear 
to you what this means? Compared to the information you normally see, does 
this convey the meaning more clearly? Is there more information you feel 
you should be shown? Is there anything you consider irrelevant to what 
you're doing? (Followed by more questions as necessary, with possible 
changes made to the lo-fi prototype based on the participants' suggestions 
and answers.)

Discussing a user study like this assumes we have the resources to 
conduct an actual one: the participants, a facility, possibly some means 
of compensating the participants, and people to create the lo-fi 
prototypes and conduct the sessions, changing the prototypes as necessary. 
We'll need those resources to validate our recommendations. 
 


Maybe others are thinking something completely different in terms of user 
studies ... thoughts?

- Maritza

Received on Monday, 29 January 2007 15:31:17 UTC