- From: Bob Pinheiro <Bob.Pinheiro@FSTC.org>
- Date: Fri, 24 Aug 2007 13:54:54 -0400
- To: Serge Egelman <egelman@cs.cmu.edu>, public-wsc-wg@w3.org
You seem to be saying that if a user's email client is smart enough to determine whether an email *may* be from a bank, then the email client can be made smart enough to actually distinguish between real banking emails and fake ones. I'm not sure why one would necessarily follow from the other. Do you have any papers/links that would explain how this works? Thx.

At 01:21 PM 8/24/2007, Serge Egelman wrote:

> The issue is, if the software is intelligent enough to think that the message *may* be from a bank (and do this with a low enough false positive rate that users don't ignore it), then why not just automatically filter out the phishing message? Based on studies we've done with phishing detection, if a message can be categorized as being bank-related (either from a bank or a phishing message) or all other mail, it's then fairly straightforward to make a distinction between real bank messages and phishing messages. At that point we can alert the user to the phishing message fairly effectively. This is why I don't think the SBM mode is practical. [A minimal sketch of this two-stage idea appears after the thread below.]
>
> serge
>
> Bob Pinheiro wrote:
> > Yes, there well may be an issue with users invoking SBM before clicking a link in their email. That's why I proposed that one alternative might be to remove that issue by making the user's computer (email client? browser?) "smart" enough so that when an email might potentially be from a bank, the browser could prompt the user and ask if SBM should be invoked. So I am assuming some sort of "intelligent" link between the email client and the browser, with the email client triggering the browser to invoke a procedure for prompting the user to invoke SBM based on some keywords or phrases in the email header. But is that so wrong? It may not exist today; all I am suggesting is that it might be one avenue to consider (and not necessarily by this group) as a way to prevent users from visiting fraudulent banking sites by clicking on email links if they haven't first invoked SBM. But this is getting off the beaten track, I guess...
> >
> > At 11:55 AM 8/24/2007, Ian Fette wrote:
> >> This is going to rapidly take me down a divergent path, but I shall follow said path anyways.
> >>
> >> One of the biggest problems I have with SBM is invocation. You can't really expect users to invoke SBM before clicking a link in their email, because when they're reading their email their browser might not even be open (except for all the wonderful gmail users out there ;-). But seriously, when you click on a link in Thunderbird or Outlook or Lotus Notes or whatever it is that you use to read email, that email program just knows that it's supposed to open that link in a browser (sometimes... if it has no clue, it might just shellexecute the URL and let the OS figure out what to do with it). Either way, unless the default browser is set to "Browser with SBM Mode Turned On", links from email are going to get loaded in non-SBM mode.
> >>
> >> So, let's now go back to your response. Let's say that the user is educated enough to understand that SBM should be invoked before visiting any banking websites. (I personally find this a troublesome assumption, but let's run with it.) Is the user then supposed to start a web browser, enter SBM mode, and then cut and paste the link from their email? That's a usability disaster, and I doubt anyone would actually figure out that those steps were required.
> >> Even if a user opens a browser and starts SBM, clicking on a link in an email program would very likely just start a new browser window (probably without SBM enabled... and when a user is in SBM mode, do you really want links from external programs to be able to clobber the current window?). In my mind, we're heading for a usability disaster here.
> >>
> >> Further, in your use case below, you're assuming a strong tie-in between a user's MUA (email client) and their browser, which is often not the case. In some cases the two are strongly tied together, but in many cases when an email client gets a URL and the user clicks on it, it just throws the URL to the operating system and says "deal with it". [A sketch of this hand-off appears after the thread below.] And we're already well down the path of suggesting extensions to MUAs (email clients) to do machine learning to detect possible bank-like emails, and I fear this is getting way out of scope of the WG...
> >>
> >> On 8/24/07, Bob Pinheiro <Bob.Pinheiro@fstc.org> wrote:
> >>
> >> I think there may be a tie-in here with Safe Browsing Mode. Suppose the user is educated enough to understand that SBM should be invoked before visiting any banking websites. Then upon seeing the email, the user should invoke SBM before clicking on the apparent banking link. If that is done, then instead of displaying the Error 404 message, the user should see whatever is displayed by SBM when the user attempts to visit a non-safe website.
> >>
> >> But if it is true that "education does not consistently produce the results desired", then there may be numerous times when even users who are aware of SBM do not actually invoke it when they should; that is, before visiting banking websites. So a question worth asking might be: can a user's browser be made "smart" enough to sense that a website the user wants to visit might possibly be a banking website? The user can easily sense this because the use case says that the email claims to be from the user's bank. If the user's computer can somehow "read" the email header, it might display a message saying "I sense that you are attempting to visit a possible banking website. However, it is possible that this is a fraudulent website. Would you like me to invoke Safe Browsing Mode to prevent you from visiting a fraudulent site?" The user could respond Yes or No.
> >>
> >> Some sort of artificial intelligence that could read and interpret email headers might be needed, possibly triggered by certain banking-like keywords or phrases in an email header. I don't know if such a thing exists, or if it does, whether it is "ready for prime time" and would produce reliable results. But it might be one possible answer to the dilemma of needing to educate users to do certain things to protect themselves online.
> >>
> >> At 08:25 AM 8/24/2007, Mary Ellen Zurko wrote:
> >>
> >>> We have two sections in wsc-usecases that touch on education:
> >>>
> >>> http://www.w3.org/TR/wsc-usecases/#learning-by-doing
> >>> http://www.w3.org/TR/wsc-usecases/#uniformity
> >>>
> >>> The first says that experience shows that while users learn, education does not consistently produce the results desired.
> >>>
> >>> The second cites one study that shows that education does not impact susceptibility to phishing.
> >>> It's possible that Brustoloni's latest shows that as well:
> >>>
> >>> http://cups.cs.cmu.edu/soups/2007/proceedings/p88_sheng.pdf is more hopeful, but shows no transfer to "realistic" behavior, in a study or in the wild.
> >>>
> >>> I gather from the discussions with the usability evaluation folks that they believe they can address education.
> >>>
> >>> Personally, I'm not a believer in direct education, mostly because no one's brought up a single data point where users were directly educated to do something, and did it, even when they had options that were more attractive for some reason (e.g. more familiar, easier). All the promising anti-phishing research makes sure that the secure option is the most attractive (or at least comparably attractive).
> >>>
> >>> On the other hand, I do believe that in circumscribed organizations, like the military and large companies, a system of education, reward, and punishment can be (and is) set up to change user behavior. I would again refer to http://www.acsa-admin.org/2002/papers/7.pdf as showing an upper bound on how successful that can be when the option is not the most attractive (on the order of 30% of the overall population).
> >>>
> >>> I would be more comfortable with an education use case if we said more somewhere about how we'll come to terms with it. Do the usability evaluation folks know how we'll do that?
> >>>
> >>> Mez
> >>>
> >>> New Use Case for W3C WSC
> >>> Dan Schutzer to: public-wsc-wg
> >>> 08/24/2007 07:52 AM
> >>> Sent by: public-wsc-wg-request@w3.org
> >>> Cc: "'Dan Schutzer'"
> >>>
> >>> ------------------------------------------------------------------------
> >>>
> >>> I'd like to submit a new use case, shown below, that several of our members would like included. It looks for recommendations on how to educate customers who have fallen for a phishing email, and on how to improve the type of response customers generally get today when they try to access a phishing site that has been taken down. I hope this is not too late for consideration.
> >>>
> >>> Use Case
> >>>
> >>> Frank regularly reads his email in the morning. This morning he receives an email that claims it is from his bank, asking him to verify a recent transaction by clicking on the link embedded in the email. The link does not display the usual URL that he types to get to his bank's website, but it does have his bank's name in it. He clicks on the link and is directed to a phishing site. The phishing site has been shut down as a known fraudulent site, so when Frank clicks on the link he receives the generic Error 404: File Not Found page. Frank is not sure what has occurred.
> >>>
> >>> Destination site: prior interaction, known organization
> >>> Navigation: none
> >>> Intended interaction: verification
> >>> Actual interaction: was a phishing site that has been shut down
> >>> Note: Frank is likely to fall for a similar phishing email. Is there some way to educate Frank this time, so that he is less likely to fall for the phishing email again?
> --
> /*
> Serge Egelman
>
> PhD Candidate
> Vice President for External Affairs, Graduate Student Assembly
> Carnegie Mellon University
>
> Legislative Concerns Chair
> National Association of Graduate-Professional Students
> */
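A minimal sketch of the two-stage triage Serge describes above, assuming a crude keyword heuristic for stage one and a link-domain allowlist for stage two. Everything here is illustrative: the keyword list, KNOWN_BANK_DOMAINS, and the function names are hypothetical placeholders, and the studies he refers to presumably used trained classifiers rather than anything this simple.

```python
# Hypothetical two-stage triage: (1) is the message bank-related at all,
# (2) among bank-related mail, do its links point somewhere other than a
# known bank domain? Keyword list and allowlist are placeholders.
import re
from email.message import EmailMessage
from urllib.parse import urlparse

BANK_KEYWORDS = {"bank", "account", "verify", "transaction", "statement"}
KNOWN_BANK_DOMAINS = {"examplebank.com"}  # placeholder allowlist

def _text_of(msg: EmailMessage) -> str:
    # Subject plus the plain-text (or HTML) body, lowercased for matching.
    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content() if body else ""
    return f"{msg.get('Subject', '')} {text}".lower()

def looks_bank_related(msg: EmailMessage) -> bool:
    """Stage 1: bank-related (real or fake) vs. all other mail."""
    text = _text_of(msg)
    return sum(kw in text for kw in BANK_KEYWORDS) >= 2

def looks_like_phish(msg: EmailMessage) -> bool:
    """Stage 2: flag links whose host is not a known bank domain."""
    for url in re.findall(r"https?://[^\s\"'>]+", _text_of(msg)):
        host = urlparse(url).hostname or ""
        if host.startswith("www."):
            host = host[4:]
        if host and host not in KNOWN_BANK_DOMAINS:
            return True
    return False

def triage(msg: EmailMessage) -> str:
    if not looks_bank_related(msg):
        return "other"
    return "suspected-phish" if looks_like_phish(msg) else "bank"
```

A mail client could run `triage()` on incoming mail and, per Bob's suggestion, prompt for SBM on "bank", or, per Serge's, warn on or filter out "suspected-phish".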
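And a sketch of the hand-off Ian describes: the mail client either asks the default browser to open the URL or hands it straight to the platform launcher, and in neither case does any notion of "SBM mode" travel with the link. The function names are hypothetical and are not taken from any particular MUA.

```python
# Hypothetical illustration of "throw the URL at the OS": nothing about
# Safe Browsing Mode accompanies the URL, so the default browser opens
# in whatever mode it is configured to use.
import os
import subprocess
import sys
import webbrowser

def open_link_via_default_browser(url: str) -> None:
    # Python's webbrowser module picks the system default browser.
    # There is no parameter for "open this in SBM".
    webbrowser.open(url)

def open_link_via_platform_launcher(url: str) -> None:
    # Roughly what ShellExecute / open / xdg-open amount to.
    if sys.platform.startswith("win"):
        os.startfile(url)                      # uses ShellExecute
    elif sys.platform == "darwin":
        subprocess.run(["open", url], check=False)
    else:
        subprocess.run(["xdg-open", url], check=False)
```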
Received on Friday, 24 August 2007 17:57:24 UTC