
Re: ISSUE-117 (serge): Eliminating Faulty Recommendations [All]

From: Mary Ellen Zurko <Mary_Ellen_Zurko@notesdev.ibm.com>
Date: Fri, 9 Nov 2007 15:55:54 -0500
To: Web Security Context Working Group WG <public-wsc-wg@w3.org>
Message-ID: <OF42E2CB99.0E6662B9-ON8525738E.006FBA6D-8525738E.0072FB2A@LocalDomain>

Our discussions on baseline success criteria and ISSUE-112 at the f2f 
(face-to-face) provided the input I needed to respond to this. 
[also see the minutes from November 6 on the topic of  ISSUE-112, 
currently members only] 

I would argue for eliminating any recommendation that we believe could not 
get buy-in from the appropriate community (browsers, web app developers, 
web site administrators, users), and whose future uptake we do not believe 
in (see criterion 2). 

I would also argue for eliminating any recommendation that neither captures 
current best practice (criterion 3) nor has WG consensus that it would be 
demonstrably better at aiding trust decisions than the state before the WG 
started (criterion 4). 

The last line of this issue seems to ask about the place of prior user 
studies and literature in this process. I see them feeding into criterion 
4. For any of our recommendations, anyone can challenge whether or not it 
helps in aiding trust decisions. Prior user studies and literature may be 
the reason why (or part of the reason why). We discuss it, along with any 
other information or data on the topic, then see what the group consensus 
is. Other sorts of data may be brought to bear on the topic; see 

I bring up ISSUE-112 here as well because I do not want anyone wasting 
time doing user studies if the results will be discounted by the group 
during discussions. That would be unfair and disrespectful. My advice is 
that for any user study done specifically for this group, we specify ahead 
of time what we're doing, what sort of outcomes might be expected, and how 
those outcomes should influence our recommendation. We then discuss _that_ 
and get group consensus on the trajectory and impact of a user study 
before actually running it. If we can run this process with something 
modest soon, it can provide helpful input to anything more resource 
intensive we do later, and show whether that's a reasonable way to 
integrate user studies into our 

As a side note, since I consider myself an actual expert (for some value 
of expert) on the topic of usable security, I'm likely to want to read the 
data behind prior user studies and literature that people cite. I've tried 
to stay on top of our bookmarks, and will continue to try to stay on top 
of citations used in discussion. I find it deeply irksome when there's a 
reference I can't get to (the ACM Portal being the canonical example for 
me). That doesn't mean that citations there won't have impact, just as 
deployment or product experience based on data not directly available to 
all of us has impact. It means they will be subject to the same sort of 
engagement by WG members, who will try to understand and reason about them 
in the WG context. 

Any other takers on this issue before I put it on a meeting agenda? 
Additional ideas, expectations, suggestions, assumptions, presumptions? 

Web Security Context Working Group Issue Tracker <sysbot+tracker@w3.org>
10/08/2007 12:56 PM
ISSUE-117 (serge): Eliminating Faulty Recommendations [All]

Raised by: Serge Egelman
On product: All

At what point can we say that a recommendation is unlikely to work and 
should be removed from consideration?

For some of these we obviously need user studies to see how effective the 
techniques are.  However, if prior user studies and literature have 
already tested similar concepts, it would be a waste of our time and 
resources to test them again.
Received on Friday, 9 November 2007 20:56:11 UTC