RE: Note Section - Design Principles

I think the original description of the average user was actually pretty
accurate.  We work with a population of close to 10 million consumers
across a broad demographic spectrum, and most of them seem to have
little or no understanding of how the web works, much less what the
security dangers are, what certificates represent, etc.
 
Some survey- or focus-group-based research in this area might prove
illuminating.  Without hard data, we're all just guessing.

Michael McCormick, CISSP 
Lead Architect, Information Security 


 

  _____  

From: public-wsc-wg-request@w3.org [mailto:public-wsc-wg-request@w3.org]
On Behalf Of Brad Porter
Sent: Sunday, December 31, 2006 12:01 PM
To: Maritza Johnson
Cc: W3 Work Group
Subject: Re: Note Section - Design Principles


I'm not quite sure I agree with many of the "Characteristics of the
Average User".  As the IT person for friends and family, I've found that
most of those users do care about the security dialogs, and it isn't
uncommon to get a phone call asking "What do I do with this, am I at
risk?"

I also worry that the "Characteristics of the Average User" as described
below could lead us into the trap of modeling a user as "dumb".  Our
experience at Tellme has been that users are not dumb, but systems are
often notoriously bad at communicating, and they're often especially bad
if they treat the user at a kindergarten level.  Instead, we tend to
follow a design pattern that assumes the user is a capable, intelligent
adult, but the burden is on the system to communicate effectively.

I have some suggested amendments below.  

Maritza Johnson wrote: 

	A list of design principles extracted from the shared bookmarks
in the wiki.
	
	
	General Design Principles:
	
	
	- Dialogues should not contain information which is irrelevant
or rarely needed.
	- The user should be able to conveniently access more
information as required by their level of experience.
	- The cues should be displayed in a consistent location across
sites and browsers, to help prevent spoofing and user confusion.
	- False positive warnings rapidly dilute warning usability.
	- False positives and negatives should be kept to a minimum to
avoid degrading the user's level of confidence in the security cue.
	- The system should speak the user's language, with words,
phrases and concepts familiar to the user, rather than system-oriented
terms. Follow real-world conventions, making information appear in a
natural and logical order.
	- Provide explanations, justifying the advice or information
given.
	- Integrated security aligns security with user actions and
tasks so that the most common tasks and repetitive actions are secure by
default, and provides information about security state and context in a
form that is useful and understandable to the user, in a non-obtrusive
fashion.
	- When possible, safe staging should be used. Safe staging is "a
user interface design that allows the user freedom to decide when to
progress to the next stage, and encourages progression by establishing a
context in which it is a conceptually attractive path of least
resistance."
	- Metaphor tailoring starts with a conceptual model
specification of the security related functionality, enumerates the
risks of usability failures in that model, and uses those risks to
explicitly drive visual metaphors.
	- If a feature or cue is included in the design with the
intention of improving some aspect of usability (learnability, better
functionality through more information ...), it must be clear to the
user that the feature is available, and the action or process expected
of the user must be clear from the way the feature is presented.
	- The visual cues presented to the user must represent the
state/action of the system in a way that is consistent with the actual
state/action of the system to allow the user to create an accurate
conceptual model.
	- The user must be aware of the task they are to perform.
	- The user must be able to figure out how to perform the task.
	- The user should be given feedback when the state of the
security of a page is changed.
	
	
	
	
	Characteristics of the average user (is this what was meant by
the assumptions section?)

I would call this the "naive user" instead of the "average user".


	
	
	- Security is always a secondary goal; it is never the main
focus of a user.
	

"Users are typically task-driven and security is a secondary
consideration."


	- Users lack the knowledge that would help them make security
decisions on the internet. This includes being unaware of security
protocols and concepts, the meaning of current security cues, and the
difference between the web content and the browser chrome.

In my experience, most naive users can differentiate web content from
browser chrome and in many cases do understand the very high-level
concepts but not the mechanisms.  My friends and family typically
understand that a certificate is something provided by a site stating
that it is who it says it is.  They typically understand that encrypted
means the data is not easily read by someone else.  They don't typically
understand public-key, private-key, trust networks, signing authorities,
etc.  I might say:

"Users can make analogies to real-world concepts, but are typically
unfamiliar with the detailed mechanisms involved in implementing web
security."


	- A user has only a single locus of attention: a feature or
object in the physical world, or an idea, about which one is intently
and actively thinking.
	
	- Users can be visually deceived.
	- Users have bounded attention.
	- Users ignore warning signs, or reason them away.

This hasn't been my experience given the number of calls I've answered
saying "Is this OK?  What do I do?".  I think the current security
dialog approach has trained people to do this, but I don't think it is
the default behavior.  


	- Users rely on the content of a web page to make security
decisions. 
	- Users ignore warning signs, or reason them away.

(Duplicate)


	
	
	Can anyone think of any I've left out? Any suggestions for
modifying the ones listed?
	

I think the other thing I would try to add is that typically if a site
is violating a security principle (cross-posting form data, including
HTTP and HTTPS frames on the same page, transitioning from HTTPS to
HTTP, presenting an expired certificate, presenting an unsigned
certificate) the user is unempowered to do anything about it.  If a
valid site is doing something insecure, the user has little recourse.
How do I complain to E*TRADE, Fedex, AOL that their site is presenting
me a security warning?  So the user has only two choices: 1) proceed
anyway, or 2) not use their services until it is fixed (which might mean
never using their services).

Maybe:

- Users are unempowered to request that a site fix its security
problems, and are therefore forced to decide whether to take the risk in
order to complete the task.
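Several of the violations listed above (expired certificates in
particular) come down to checks a browser performs mechanically. As a
minimal sketch of the expired-certificate case, assuming Python and
using only the standard library's `ssl` parser for OpenSSL's notAfter
text format (the function name and dates here are illustrative, not
from this thread):

```python
import ssl
import time

def cert_expired(not_after, now=None):
    """Return True if a certificate's notAfter field (OpenSSL text
    format, e.g. 'Jan  1 00:00:00 2020 GMT') lies in the past."""
    expiry = ssl.cert_time_to_seconds(not_after)
    current = now if now is not None else time.time()
    return current > expiry

# A certificate that expired in 2020 is flagged when checked today:
print(cert_expired("Jan  1 00:00:00 2020 GMT"))
```

The point of the sketch is that the browser can decide this question
definitively; what it cannot decide for the user is whether the risk is
worth taking, which is exactly the dilemma described above.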

--Brad


	
	
	
	- Maritza
	
	
	http://www.cs.columbia.edu/~maritzaj/
	
	
	
	

Received on Tuesday, 2 January 2007 17:27:08 UTC