Re: ACTION-301: Usability review of Identity Signal

I also want to point out that in testing, we should keep realism in mind. I
went around on Wednesday trying to find sites with EV certs, since we're
talking so much about them and putting them on such a pedestal. I couldn't
find any except for PayPal and VeriSign: not Bank of America, not Wells
Fargo; none of the banks I tried were using EV. IMHO, testing should
reflect that.
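
As an aside on how EV status is checked programmatically: browsers decide a
certificate is EV by matching its certificatePolicies extension against a
per-root-CA table of EV policy OIDs. A minimal sketch of that lookup (the
OID values and the table here are illustrative assumptions, not an
authoritative browser list):

```python
# Sketch: classify a certificate as EV by matching its certificatePolicies
# OIDs against a table of known EV policy OIDs. Real browsers ship a vetted
# per-root-CA table; the entries below are illustrative assumptions.

EV_POLICY_OIDS = {
    "2.16.840.1.113733.1.7.23.6",  # assumed example: a VeriSign EV policy OID
    "2.23.140.1.1",                # assumed example: a generic EV policy OID
}

def is_ev(policy_oids):
    """Return True if any of the cert's policy OIDs is a known EV OID."""
    return any(oid in EV_POLICY_OIDS for oid in policy_oids)

# A non-EV cert typically carries only organization- or DV-level policy OIDs.
print(is_ev(["1.3.6.1.4.1.99999.1"]))  # False
print(is_ev(["2.23.140.1.1"]))         # True
```

In practice the policy OIDs would be pulled from the server's certificate
(e.g. by parsing the certificatePolicies extension), and a match is only
meaningful when the chain validates to the root the table entry belongs to.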

-Ian

On 10/26/07, Dan Schutzer <dan.schutzer@fstc.org> wrote:
>
>
> I think we can break this into two parts:
>
> 1. Part 1 - How effective is the Agent's message: does the user respond
> to the message?
> 2. Part 2 - How effective is the Agent at spotting false sites (false
> alarms and false rejects) compared to the user without the aid of an
> agent? And if the Agent is not very effective (poor performance: high
> false alarm and false reject rates), how does that affect the user's
> reaction to the Agent's message?
>
> -----Original Message-----
> From: public-wsc-wg-request@w3.org [mailto:public-wsc-wg-request@w3.org]
> On
> Behalf Of Rachna Dhamija
> Sent: Friday, October 26, 2007 12:35 PM
> To: Mary Ellen Zurko
> Cc: Johnathan Nightingale <johnath; W3C WSC Public
> Subject: Re: ACTION-301: Usability review of Identity Signal
>
>
> On 10/26/07, Mary Ellen Zurko <Mary_Ellen_Zurko@notesdev.ibm.com> wrote:
> >
> > > I appreciate that "help users understand the identity of sites they
> > > interact with" is a harder testing problem than "prevent phishing
> > > attacks" and I don't actually have a good methodology suggestion.  An
> >
> > I don't see why it is (and I expect kind and informative responses to
> > naivete :-). The testing of understandability of visual icons goes much
> > further back than usability testing around user attacks. I would expect
> that
> > kind of UT would be the most appropriate.
> >         Mez
>
> I agree with Mez.  It is actually easier to test if the scheme helps
> "users understand the identity of sites they interact with" than to
> test if it prevents phishing attacks.
>
> To do this, you need to define what you mean by "understanding
> identity".  What exactly do you want users to know?  E.g. "when a user
> visits the Bank X website, they understand that they are at Bank X and
> not Y", or "when they visit site A that does not have an EV
> certificate they understand that a third party has not verified the
> identity of the site".  Your standard might be higher, e.g. "they might
> be suspicious" in some circumstances, or "they can verify the identity
> even in a phishing attack that spoofs Larry" (I know this is not your
> goal).
> Once you define the goals, we can ask users to use the interface and
> then test them or interview them to see if your goals were met.
>
> We can do this in a lab, by distributing the client to users and then
> interviewing them, or by instrumenting the client. Obviously, you
> can get more accurate answers to behavior questions (e.g. do users
> discover Larry on their own?) if you have a long term study with an
> instrumented client.  However, if you have questions about what users
> *understand*, there is nothing that beats the kind of data you can get
> by showing users the interface and interviewing them face to face.
> Computer scientists really discount the value of this methodology, and
> I think our designs suffer for it.
>
> Rachna
>
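
A note on Dan's Part 2 above: the Agent's false alarm and false reject rates
can be tallied directly from labeled study data. A minimal sketch, reading
"false alarm" as flagging a legitimate site and "false reject" as passing a
false site (the labels could arguably be read the other way, and the trial
data below is hypothetical):

```python
# Sketch: false alarm rate  = legitimate sites the agent flagged
#                             / all legitimate sites
#         false reject rate = false sites the agent passed
#                             / all false sites
# Trial data is hypothetical study output, not from any real test.

def error_rates(trials):
    """trials: list of (is_false_site, agent_flagged) boolean pairs."""
    legit_flags = [flagged for is_false, flagged in trials if not is_false]
    fake_flags = [flagged for is_false, flagged in trials if is_false]
    false_alarm = sum(legit_flags) / len(legit_flags)       # flagged a legit site
    false_reject = sum(not f for f in fake_flags) / len(fake_flags)  # passed a fake
    return false_alarm, false_reject

trials = [
    (False, False), (False, False), (False, True),  # 3 legit sites, 1 flagged
    (True, True), (True, False),                    # 2 false sites, 1 missed
]
fa, fr = error_rates(trials)
print(f"false alarm rate: {fa:.2f}, false reject rate: {fr:.2f}")
```

With rates like these in hand, Dan's follow-on question (how poor agent
performance affects the user's reaction to its message) can be tested by
varying the rates across study conditions.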

Received on Friday, 26 October 2007 17:50:14 UTC