[fwd] Re: Shared Public Knowledge (from: dan.schutzer@fstc.org)

Forwarding to the WG list; the -request address is for administrative traffic only.
-- 
Thomas Roessler, W3C  <tlr@w3.org>





----- Forwarded message from Dan Schutzer <dan.schutzer@fstc.org> -----

From: Dan Schutzer <dan.schutzer@fstc.org>
To: public-wsc-wg-request@w3.org
Cc: 'Dan Schutzer' <dan.schutzer@fstc.org>
Date: Sun, 15 Apr 2007 11:40:49 +0000
Subject: Re: Shared Public Knowledge

Of course, this is where strong mutual authentication is necessary; otherwise, man-in-the-middle attacks could be used to capture and relay all the strong contextual clues.
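To make that concrete, one way to get strong mutual authentication is TLS with client certificates: a relay that does not hold the client's private key cannot complete the handshake. A minimal sketch in Python follows; the certificate file names and port are placeholders, not any particular deployment.

    # Minimal sketch of mutual TLS: the server refuses any client that cannot
    # present a certificate issued by a trusted CA, so a man-in-the-middle that
    # lacks the client's private key cannot simply relay the session.
    # File names and port number are illustrative placeholders.
    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
    ctx.load_verify_locations(cafile="client-ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED  # require a valid client certificate

    with socket.create_server(("0.0.0.0", 8443)) as server:
        with ctx.wrap_socket(server, server_side=True) as tls_server:
            conn, addr = tls_server.accept()  # handshake fails without a client cert
            print("authenticated client:", conn.getpeercert().get("subject"))
            conn.close()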

 

  _____  

From: public-wsc-wg-request@w3.org [mailto:public-wsc-wg-request@w3.org] On Behalf Of Chuck Wade
Sent: Saturday, April 14, 2007 11:44 AM
To: Mary Ellen Zurko
Cc: public-wsc-wg@w3.org
Subject: Re: Shared Public Knowledge

 

Mez,

The "last login" message is useful as both a means for confirming site authenticity to the user as well as letting a user detect unauthorized use of their account. This convention goes way back to some of the earliest time-sharing systems, and was originally motivated by the desire to detect misuse of expensive computing resources. Later, it became an indirect way to confirm site authenticity, since an impostor generally has no way of knowing the last time someone logged into an account. 

However, this is again only effective if users pay attention to the "last login" message. Bank of America and many other financial sites do use this technique, though it is independent of other authentication measures, such as SiteKey. The key point, though, is that real-world authentication depends on lots of contextual clues that observant participants can use throughout the course of a dialogue. Many financial institutions now monitor numerous aspects of the communications channel with a user, along with user behavior, to determine if there are reasons to suspect improper access. However, users do not have similar resources at their disposal to analyze the sites they communicate with.
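By way of illustration only, that kind of back-end monitoring might reduce a few channel and behavior signals to a rough risk score; the signals, weights, and threshold below are hypothetical and do not describe any actual institution's rules.

    # Hypothetical illustration of server-side risk scoring from channel and
    # behavioral signals; real systems weigh far more factors than this.
    def risk_score(session):
        score = 0
        if session.get("ip_country") != session.get("usual_country"):
            score += 2  # access from an unusual geography
        if session.get("device_id") not in session.get("known_devices", []):
            score += 2  # unrecognized device or browser fingerprint
        if session.get("failed_logins_last_hour", 0) > 3:
            score += 3  # repeated failed attempts preceding this login
        if session.get("transfer_amount", 0) > session.get("typical_max_transfer", 0):
            score += 1  # behavior outside the user's normal pattern
        return score

    def requires_step_up(session, threshold=3):
        # Above the (made-up) threshold, ask for additional authentication.
        return risk_score(session) >= threshold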

Of course, most users probably ignore contextual information, such as a "last login" message. At the same time, this is a no-cost option, and it does get used in some situations that are important to financial institutions. Since a high percentage of the fraud that takes place today is perpetrated by "friends and family," the "last login" clue will be used by someone who suspects that a friend or family member is accessing their account without permission. In other words, it's a clue people use when they have reason to suspect a problem.

Similarly, if someone suspects that they might have just logged into a bogus Web site, they might look at the "last login" message to confirm their suspicion, or to restore confidence so that they can proceed. I doubt if anyone has studied the effectiveness of this measure, but it probably helps in a small percentage of cases. Again, it's a no-cost option, so it delivers a high benefit/cost ratio, even for small benefits.

It is also worth noting that legitimate sites that have a prior relationship with a user tend to present a lot of context to the user that helps them establish confidence in the site. If I access my bank accounts online, I will quickly see a bunch of indicators that could only have come from my bank, such as balances and the arrangement of account information. Similarly, good practices among online retailers involve demonstrating to the customer that they have been recognized, and presenting lots of information that is really unique to that customer (e.g., recently viewed items, order status). 

Now, if you'll forgive the tangential observation, I think it is helpful to recognize that human beings are complex entities that survive through a variety of trained responses to an extremely broad array of contextual clues. Sure, simple interfaces are best, but finding one set of indicators that will work for all users in all transactional settings is a very difficult problem. An alternative approach is to provide a broader set of indicators more tightly integrated into the context of the interaction as a way to trigger both positive and negative confidence assessments on the part of real human users and their robust pattern matching abilities. We recognize who we're dealing with when interacting with other people through an amazingly complex set of pattern matching operations that process enormous amounts of input data through multiple sensory channels. The problem with the cyber world is that digital patterns really are all the same. However, by introducing contextual clues that vary from circumstance to circumstance, we may re-enable human abilities to detect patterns that are either consistent or inconsistent with our expectations. Perhaps this is an alternative approach to user interface design that could lead to more effective user determination of site authenticity.

The real challenges, though, come back to Man-in-the-Middle attacks and the purloining of userids and passwords by bogus sites. Where we most need purely technical countermeasures is in detecting and preventing MitM attacks. However, if a user can quickly detect that something is wrong when they land on a bogus site, then there is both a greater likelihood that corrective actions can be taken in a timely manner, and that the bogus site will be blocked or taken offline. Ultimately, adequate security and safety rely on a "defense in depth" strategy that employs many complementary measures. We just need to find ways to better engage the survival skills that we human beings have evolved over eons as part of the solution.

...Chuck

_____________________________
   Chuck Wade, Principal
   Interisle Consulting Group
   +1  508 435-3050  Office
   +1  508 277-6439  Mobile
   www.interisle.net



Mary Ellen Zurko wrote: 


I disagree. If it makes sense as a site-to-user anti-pattern (and I sense the jury is still out on that), and if there is consensus, we can say something appropriate about what, if anything, should be implied for the other direction (and my going-in position would be that nothing should be implied for the other direction).

What things other than SiteKey use information (secret, public, or shared public) to (attempt to) authenticate the site to the user? Anyone have more examples? Thanks, Chuck, for the SiteKey one. And Chuck, is the last login time _really_ meant to authenticate the site to the user? I thought it was to give the user a hint if the account had been unknowingly used by someone else.

          Mez

Mary Ellen Zurko, STSM, IBM Lotus CTO Office       (t/l 333-6389)
Lotus/WPLC Security Strategy and Patent Innovation Architect




From: michael.mccormick@wellsfargo.com
Date: 04/12/2007 07:34 PM
To: Mary_Ellen_Zurko@notesdev.ibm.com
Cc: public-wsc-wg@w3.org
Subject: RE: Shared Public Knowledge

Thanks for this clarification. But my concern is that if W3C declares SPK-based site-to-user authentication to be an anti-pattern, that certainly implies it should never be used in the other direction either.

  _____  

From: Mary Ellen Zurko [mailto:Mary_Ellen_Zurko@notesdev.ibm.com] 
Sent: Thursday, April 12, 2007 3:17 PM
To: McCormick, Mike
Cc: public-wsc-wg@w3.org
Subject: Re: Shared Public Knowledge


I would like to do a rewind on this thread. Everyone who participated, go back to the proposed recommendation that we discussed:

http://www.w3.org/2006/WSC/wiki/SharedPublicKnowledge

It's about authenticating the server to the user (since that's one of our primary goals). Not the user to the server. 

So I will assume all discussion of the latter was interesting and informative (it was for me), but not about the actual proposal being discussed. Maybe that's because the proposal is about something nobody does or wants to do. That would make it nice and safe for our recommendations :-).

         Mez

Mary Ellen Zurko, STSM, IBM Lotus CTO Office       (t/l 333-6389)
Lotus/WPLC Security Strategy and Patent Innovation Architect


From: michael.mccormick@wellsfargo.com
Sent by: public-wsc-wg-request@w3.org
Date: 04/11/2007 07:47 PM
To: public-wsc-wg@w3.org
Subject: Shared Public Knowledge

I had to drop off the line for a few minutes at the top of the hour during this morning's meeting. Regrettably, that moment came during the Lightning Discussions, just as Chuck Wade was responding to MEZ's presentation on Shared Public Knowledge (SPK). By the time I rejoined, the discussion had moved on to the next topic.

What I would have said given the opportunity is that Chuck is 100% right.  In our industry this battle has been fought many times and I see little good coming from taking a hard line against all online use of SPK. 

Many US companies rely on services provided by the likes of Choicepoint & Acxiom to perform Knowledge-Based Authentication (KBA) or Out-of-Wallet Authentication (OOWA) of consumers in certain situations, especially in cases where no prior business relationship exists between the financial institution (FI) and said consumer.

These KBA systems typically ask a series of randomly chosen multiple-choice questions designed to score a user's knowledge of semi-private information about himself or herself. Examples might include "What model car do you drive?" or "What's the amount of your monthly mortgage payment?". A determined criminal could undeniably obtain this information from public sources, perhaps even use it to impersonate others, but that doesn't mean there is no legitimate use case for KBA.
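As a toy sketch of that flow (the question pool, matching rule, and pass mark below are invented for illustration, not any vendor's actual logic):

    # Toy sketch of a KBA challenge: pick a few questions at random, compare the
    # answers against data on file, and pass only if enough of them match.
    # The question pool and pass mark are invented for illustration.
    import random

    QUESTION_POOL = {
        "car_model": "What model car do you drive?",
        "mortgage_payment": "What is your monthly mortgage payment?",
        "prior_street": "Which of these streets have you lived on?",
    }

    def run_kba(records, ask, num_questions=3, pass_mark=2):
        keys = random.sample(list(QUESTION_POOL), k=min(num_questions, len(QUESTION_POOL)))
        correct = 0
        for key in keys:
            answer = ask(QUESTION_POOL[key])
            if answer.strip().lower() == str(records.get(key, "")).strip().lower():
                correct += 1
        return correct >= pass_mark

Driving it from a console with run_kba(records, input) is enough to see the idea: randomization and a pass mark, rather than any single secret, carry the assurance.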

A blanket prohibition against KBA is unnecessary and would never be accepted. Asking the user enough SPK-based questions is not an unreasonable authentication technique as long as the associated risk is low, or when SPK is only being used to supplement some other credential for extra assurance.

The much-maligned Mother's Maiden Name is an example of weak KBA, but much stronger ones are possible using the enormous databases of personal data that are available from brokers today. So I think the SPK "anti-pattern" would benefit from being softened a bit to acknowledge that there's a place for it under certain conditions.

Thanks, Mike 

Michael McCormick, CISSP
Lead Architect, Information Security Technology 
Wells Fargo Bank 
255 Second Avenue South 
MAC N9301-01J 
Minneapolis MN 55479 
Desk:   612-667-9227
Fax:    612-667-7037
Cell:   612-590-1437
Pager:  612-621-1318
AIM:    michael.mccormick@wellsfargo.com
E-mail: michael.mccormick@wellsfargo.com

"THESE OPINIONS ARE STRICTLY MY OWN AND NOT NECESSARILY THOSE OF WELLS FARGO"
This message may contain confidential and/or privileged information.  If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein.  If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message.  Thank you for your cooperation. 

 

 


----- End forwarded message -----

Received on Sunday, 15 April 2007 17:13:50 UTC