Re: use case: CA acceptance (ACTION-74)

"List" meant mailing list.  Your message had gone to the
administrative address, not to the list address; I was forwarding to
the list.

Cheers,
-- 
Thomas Roessler, W3C  <tlr@w3.org>

On 2007-01-23 14:52:23 -0500, Dan Schutzer wrote:
> From: Dan Schutzer <dan.schutzer@fstc.org>
> To: public-wsc-wg@w3.org,
> 	'Mary Ellen Zurko' <Mary_Ellen_Zurko@notesdev.ibm.com>,
> 	'Bob' <rubell@stevens.edu>, 'Chuck Wade' <Chuck@Interisle.net>,
> 	chris.nautiyal@fstc.org, maritzaj@cs.columbia.edu,
> 	'Thomas Roessler' <tlr@w3.org>
> Date: Tue, 23 Jan 2007 14:52:23 -0500
> Subject: RE: use case: CA acceptance (ACTION-74)
> 
> Could you reconsider or can we present at the meeting?
> 
>  
> 
> -----Original Message-----
> From: Thomas Roessler [mailto:tlr@w3.org] 
> Sent: Tuesday, January 23, 2007 2:47 PM
> To: Dan Schutzer
> Cc: public-wsc-wg@w3.org; 'Mary Ellen Zurko'; 'Bob'; 'Chuck Wade';
> chris.nautiyal@fstc.org; maritzaj@cs.columbia.edu
> Subject: Re: use case: CA acceptance (ACTION-74)
> 
>  
> 
> This one didn't make it to the list...
> 
> -- 
> 
> Thomas Roessler, W3C  <tlr@w3.org>
> 
> On 2007-01-23 17:44:19 +0000, Dan Schutzer wrote:
> > From: Dan Schutzer <dan.schutzer@fstc.org>
> > To: public-wsc-wg-request@w3.org,
> >     'Mary Ellen Zurko' <Mary_Ellen_Zurko@notesdev.ibm.com>
> > Cc: 'Dan Schutzer' <dan.schutzer@fstc.org>, 'Bob' <rubell@stevens.edu>,
> >     'Chuck Wade' <Chuck@Interisle.net>, chris.nautiyal@fstc.org,
> >     maritzaj@cs.columbia.edu
> > Date: Tue, 23 Jan 2007 17:44:19 +0000
> > Subject: RE: use case: CA acceptance (ACTION-74)
> > Old-Date: Tue, 23 Jan 2007 12:43:22 -0500
> > 
> > Hi Mary Ellen,
> > 
> > I have rewritten my use case. There are now two use cases and
> > supporting recommendations.
> > 
> > Best Regards,
> > 
> > Dan Schutzer
> > 
> > ----------------------------
> > 
> > Use Cases:
> > 
> > Use case 1: Alice sees an advertisement from a bank regarding opening
> > an account online and getting a very favorable interest rate. Alice
> > has heard of the bank and goes online to open the account. However,
> > when she gets to the bank's website, she learns that to open the
> > account she has to provide sensitive personal information. Alice wants
> > assurance about the website before providing this information.
> > 
> > Use Case 2: Alice has repeatedly visited her bank's web site. Every
> > time she visits her bank's website she wants to be reassured that she
> > is actually at the bank's website and not a spoofed website. She wants
> > that reassurance to be accurate every time: never telling her it's not
> > her bank when it is, nor telling her that a web site that does not
> > belong to her bank is from her bank. And Alice wants this reassurance
> > even when she is using different machines in different locations
> > (e.g. her laptop in a hotel room). Alice is particularly worried about
> > attacks that have occurred and are in the news and that her friends
> > talk about (though she doesn't always understand what they are).
> > 
> > These use cases are getting at the concern a user has when everything
> > appears valid on the screen they see, but they are not completely
> > comfortable. In other words, they want to take some concrete step to
> > get assurance that they're not being fooled. Even when technology
> > "gets it right," the user should still be given the option of asking
> > for confirmation in a way that is meaningful to a human being. And the
> > confirmation should not make matters worse (e.g. the confirmation or
> > warning should not itself be spoofable).
> > 
> > Recommendation:
> > 
> > There is a class of web services that users are particularly anxious
> > about interacting with in regard to safety. Users aren't particularly
> > concerned when they visit many sites (such as Wikipedia, Google,
> > entertainment sites, and information sites such as electronic
> > newspapers) because these involve their consuming information with
> > little risk of losing sensitive personal information or important
> > credentials that permit access to important resources and assets. But
> > some sites, such as banking sites (this is not restricted to banking,
> > however), which the user trusts to safeguard important resources and
> > assets, and which permit access via the web, do cause users anxiety.
> > They are afraid that if they are tricked into providing sensitive
> > information to sites that are spoofs of the actual bank site, the
> > information collected could be used by a criminal to commit fraud
> > (e.g. take over their account, assume their identity).
> > 
> > Web Service Providers, such as banking web sites, are motivated to
> > take the necessary extra steps to help the browser and service
> > providers unambiguously distinguish between their web pages and an
> > imposter site. This could be accomplished by a combination of things,
> > some outside the domain of the browser:
> > 
> > *   The website is signed by a special class of certificates with
> >     extended validation.
> > *   The web page has attributes that are consistent and kept constant,
> >     e.g. known IP addresses that are registered with that URL.
> > *   Contextual clues - images and information known only to the user
> >     and the real web service provider.
> > *   A strong mutual challenge/response protocol, e.g. making use of
> >     client and server certificates.
> > *   Use of out-of-channel signals (e.g. asking for a security code
> >     provided via an email or voicemail).
> > 
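The challenge/response signal in the list above can be sketched in a few lines. This is a minimal illustration only, assuming a pre-shared key rather than the client and server certificates the message actually proposes; all names are hypothetical:

```python
import hashlib
import hmac
import secrets

# Pre-shared key known to both sides (an assumption for this sketch; the
# message above uses certificates instead of a shared secret).
KEY = secrets.token_bytes(32)

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Browser authenticates the bank: send a fresh nonce, check the answer.
client_nonce = secrets.token_bytes(16)
bank_answer = respond(KEY, client_nonce)
print(hmac.compare_digest(bank_answer, respond(KEY, client_nonce)))  # True

# Bank authenticates the browser symmetrically with its own nonce.
bank_nonce = secrets.token_bytes(16)
browser_answer = respond(KEY, bank_nonce)
print(hmac.compare_digest(browser_answer, respond(KEY, bank_nonce)))  # True
```

Because each side answers a fresh nonce, a spoofed site that merely replays an old transcript fails the check, which is the property the bullet is after.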
> > If the browser checks for a combination of this confirming
> > information, we need indicators that can:
> > 
> > 1. Communicate to Alice that the site is valid, where that
> >    communication cannot be forged by a spoofed site, or ignored in its
> >    absence (this includes preventing invalid pop-ups)
> > 2. Communicate to Alice when the site is not valid, in a way that is
> >    not ignored and cannot be covered up
> > 3. Have close to zero error in this communication (very few to no
> >    instances where the browser communicates false rejects or false
> >    accepts), to prevent a "Chicken Little" effect where the user gets
> >    blocked or warned against going to a real bank site, or gets
> >    assured it is the real bank site when it is not
> > 4. Still work against credible spoofing attacks (e.g.
> >    man-in-the-middle, false links embedded in phishing emails,
> >    picture-in-picture attacks)
> > 
> > Below are examples of solutions that might be considered for each of
> > the two use cases.
> > 
> > First Use Case Recommendations
> > 
> > For the first use case, if the website is signed with an EV
> > certificate, the browser can verify that the website is using an EV
> > certificate, and can unambiguously display the name of the website
> > (contained in the EV cert, and presumably verified to a rigorous
> > degree by the certificate issuing procedure). This might even include
> > a special type of logo (e.g. a bank-type logo). This case would
> > correspond to a situation where there is no prior trusted relationship
> > between the user and the website, but where the user wants assurance
> > about the website before providing sensitive information. It is not
> > perfect, but if done correctly, along with other out-of-channel
> > checks and balances, it should suffice.
> > 
> > So let's say you open your browser and are viewing XYZ Bank's webpage.
> > No sensitive information is requested on this page, so you might be in
> > ordinary browsing mode. Now you click a link that takes you to another
> > page for opening a new account. Your browser senses the EV cert, and
> > opens this page under the Safe Browsing tab. The name of the bank is
> > clearly displayed. The user verifies that he/she is in Safe Browsing
> > Mode by noting the colored or marked tab, and verifies the name of the
> > bank. The user then provides personal information for opening a new
> > account. [If the bank chooses, every page on its website could be
> > associated with an EV cert, so the user could only access the bank's
> > site in Safe Browsing Mode.]
> > 
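The "browser senses the EV cert" step comes down to checking the certificate's policy OIDs against a list the browser trusts. A sketch of that decision, with a deliberately tiny and illustrative OID set (real browsers ship a vetted per-CA list):

```python
# Classify a certificate as EV if its certificatePolicies extension
# asserts a policy OID the browser recognizes as an EV policy.
# This sample set is illustrative, not a complete vetted list.
KNOWN_EV_POLICY_OIDS = {
    "2.23.140.1.1",            # CA/Browser Forum EV policy
    "2.16.840.1.114412.2.1",   # one CA's own EV policy OID (example)
}

def is_ev(cert_policy_oids) -> bool:
    """True if any policy asserted in the cert is a known EV policy."""
    return any(oid in KNOWN_EV_POLICY_OIDS for oid in cert_policy_oids)

print(is_ev(["2.23.140.1.1"]))    # True: browser may show the verified name
print(is_ev(["2.23.140.1.2.2"]))  # False: organization-validated only
```

Only when this check passes would the browser open the page under the Safe Browsing tab and display the organization name from the cert.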
> > Second Use Case Recommendations
> > 
> > The second use case corresponds to a situation where there exists a
> > prior trusted relationship between the user and the website. Here we
> > need strong mutual authentication to take place between your computer
> > and the website, so that both sides have assurance of who the other
> > is. This would be the case where the FI website uses an EV cert, and
> > where there is also a client-side certificate and private key on the
> > user's side (possibly on the computer's HD or TPM or a USB token). An
> > effective approach would be an easy way to allow the user to put the
> > browser in a Safe mode (trusted path) where only certain classes of
> > websites, and/or user-specified sites that can pass a strong mutual
> > authentication protocol or other such tests, can be viewed. The user
> > can test this by attempting to access a web site that is not on the
> > trusted list while in the trusted mode. An example would be a user
> > accessing a bank's webpage only after placing the browser in a Safe
> > Browsing Mode that allows existing banking customers to access their
> > accounts. If there is a client-side certificate and private key on the
> > user's side, mutual authentication between the bank website and the
> > user's computer can take place. If the mutual authentication fails,
> > the web page will not be displayed. I would envision that browsers
> > could be redesigned so that when a user initially opens the browser,
> > two tabs are opened by default. One tab would be for ordinary
> > browsing, and this is where the user would be when the browser first
> > opens. A second tab, with some special tab color or indicator, would
> > be the Safe Browsing Tab. In the Safe Browsing Tab, only webpages that
> > conform to the Safe Browsing criteria would be viewable.
> > 
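The two-tab gating rule, including Alice's self-test of trying an untrusted site in the safe tab, can be modeled in a few lines. The site names and the trusted set are purely illustrative; in a real browser the set would be the sites that passed the EV and mutual-authentication checks:

```python
# Toy model of the two-tab design: the Safe Browsing tab renders a page
# only if the site met the Safe Browsing criteria; the ordinary tab
# renders anything. "bank.example" is a hypothetical trusted site.
TRUSTED_SITES = {"bank.example"}

def viewable(host: str, safe_tab: bool) -> bool:
    """In the Safe Browsing tab, only sites meeting the criteria show."""
    return (host in TRUSTED_SITES) if safe_tab else True

print(viewable("bank.example", safe_tab=True))    # True
print(viewable("spoof.example", safe_tab=True))   # False - the self-test
print(viewable("spoof.example", safe_tab=False))  # True in the ordinary tab
```

The middle case is exactly the test the paragraph describes: if an arbitrary site renders in the safe tab, the user knows the tab is not really in safe mode.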
> > If you already have an account at the bank, and have a client-side
> > certificate, you might again initially access the bank website in
> > ordinary browsing mode, but when you click on a link for accessing
> > your account, a new webpage would be opened under the Safe Browsing
> > tab. You now put in your User ID, and the bank realizes that your
> > computer should have a client-side cert for mutual authentication. If
> > the bank's site can now authenticate your computer via the cert, you
> > are then prompted for your password. You recognize that you are in
> > Safe Mode because of the special tab, and recognize the bank's name,
> > so you provide the password. Or maybe you have bookmarked the webpage
> > where account access takes place. That bookmark would automatically
> > open this account access page under the Safe Browsing tab. One can
> > always check that one is in the safe browsing mode by testing whether
> > one can access a page that has not been specially designated and
> > authenticated.
> > 
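The client-side-certificate mutual authentication described here maps directly onto TLS with client certificates. A minimal configuration sketch using Python's standard `ssl` module; the certificate file names are hypothetical and the `load_*` calls are commented out since the files don't exist here:

```python
import ssl

# Bank (server) side: refuse any client that cannot present a valid
# certificate chaining to the bank's customer CA.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.verify_mode = ssl.CERT_REQUIRED       # demand a client cert
# server_ctx.load_cert_chain("bank.pem", "bank.key")      # hypothetical
# server_ctx.load_verify_locations("customer-ca.pem")     # hypothetical

# Alice (client) side: the default context already verifies the server's
# chain and hostname; her own cert would come from the HD, TPM, or token.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# client_ctx.load_cert_chain("alice.pem", "alice.key")    # hypothetical

print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(client_ctx.check_hostname)                    # True
```

With `CERT_REQUIRED` set, the TLS handshake itself fails when either side cannot prove its identity, which gives the "page will not be displayed" behavior the paragraph asks for without relying on the user to notice anything.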
> > If there is a client-side certificate/private key residing on Alice's
> > computer HD or TPM for mutual authentication, but Alice is going to be
> > using a different computer at an internet cafe, there will be a
> > problem. But that's the same problem you have even if you're not using
> > a client-side cert, but are using the HD "signature" as the SYH
> > (something-you-have) authentication factor, as in the Passmark case.
> > Alice would then need to answer challenge questions, or authenticate
> > via an out-of-band phone call. If client-side certs and private keys
> > are used in the mutual authentication process, the most portable
> > solution would seem to be for Alice to carry around a USB token on her
> > keychain with the cert/private key. If Alice wants to use the Safe
> > Browsing Mode even at an internet cafe, one approach is for Alice's
> > USB token to also carry a properly-configured browser, in addition to
> > the client-side cert. One incentive for having banking customers carry
> > around USB tokens for these purposes might be if the token also stored
> > every password that Alice needs for all her financial transactions
> > over the web. The advantage to Alice would be that by carrying around
> > such a token, she would not only have better security, but wouldn't
> > have to keep track of all these other passwords. She would only have
> > to remember one password, the password that unlocks the token.
> > 
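The "one password unlocks the token" incentive can be sketched as a toy vault: a master password gates access to the stored site passwords. This is only a model of the usability property (all names, sites, and parameters are illustrative); a real token would encrypt the vault with a key derived from the master password rather than merely gate it:

```python
import hashlib
import hmac
import secrets

# Derive a comparison key from the master password (PBKDF2 from the
# standard library; the salt and iteration count are illustrative).
SALT = secrets.token_bytes(16)

def vault_key(master: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", master.encode(), SALT, 100_000)

stored_key = vault_key("correct horse battery staple")   # set at enrollment
passwords = {"bank.example": "s3cret", "broker.example": "0ther"}

def unlock(master: str):
    """Return the site passwords only for the right master password."""
    if hmac.compare_digest(vault_key(master), stored_key):
        return passwords
    return None

print(unlock("wrong guess") is None)                           # True
print(unlock("correct horse battery staple")["bank.example"])  # s3cret
```

Alice remembers one phrase; the token remembers the rest, which is the trade the paragraph proposes.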
> > Another approach would be a solution that combines "under the covers"
> > OCSP verification with a "click here to check this site" badge. The
> > latter is easier for the user to understand and relate to, but is
> > vulnerable, since it depends on the web page to provide the
> > appropriate hooks/links. The former may be effective, but it doesn't
> > address the concern that a person might be fooled about what is
> > displayed. However, if a browser provided a button contained within
> > the chrome to check the validity of the web site that is currently
> > displayed (i.e., the URL in the location bar), then third-party
> > validation would be less vulnerable, and it would become a more
> > consistent option for users to invoke. IE7 already has such an option,
> > but it is contained within special web pages that get displayed when a
> > user attempts to visit a questionable site.
> > 
> > _____
> > 
> > From: public-wsc-wg-request@w3.org [mailto:public-wsc-wg-request@w3.org]
> > On Behalf Of Mary Ellen Zurko
> > Sent: Wednesday, January 17, 2007 5:52 PM
> > To: dan.schutzer@fstc.org
> > Cc: 'WSC WG'
> > Subject: RE: use case: CA acceptance (ACTION-74)
> > 
> > Hi Dan,
> > 
> > This seems like a hybrid between a use case and a proposed
> > recommendation. The use case would be:
> > 
> > Alice has repeatedly visited her bank's web site. Every time she
> > visits her bank's website she wants to be reassured that she is
> > actually at the bank's website and not a spoofed website. And she
> > wants that reassurance to be accurate every time; neither telling her
> > it's not her bank, nor telling her a web site that does not belong to
> > her bank is her bank. She is particularly worried about attacks that
> > have occurred and are in the news and that her friends talk about
> > (though she doesn't always understand what they are).
> > 
> > The rest of your mail would be the core of a proposed recommendation.
> > 
> > If you agree, you could write up both the use case (for the Note) and
> > draft a proposed recommendation (since we'll start discussing our
> > recommendations in less than two weeks).
> > 
> >           Mez

Received on Tuesday, 23 January 2007 19:57:15 UTC