- From: Serge Egelman <egelman@cs.cmu.edu>
- Date: Thu, 26 Jul 2007 13:41:47 -0400
- To: Stephen Farrell <stephen.farrell@cs.tcd.ie>
- CC: Web Security Context WG <public-wsc-wg@w3.org>
Hmm, good question. I'm really not sure. We engineered the attacks for maximum
effect (since we were only interested in the reaction to the warnings, we
needed participants to be taken in by the messages in order to actually see
the warnings). I would suspect (again, no experimental data for this question)
that the effect would last up until they actually receive the order in the
mail. Though testing this would be a real pain in the ass (they'd have to
check the phishing sites from home, and we'd have to have a group for every
day up until the orders are received).

It would be interesting to contrast this effect with sending phishing messages
to account holders of various institutions (i.e., how much stronger is the
effect when there's a pending order and the phishing message can be construed
to reference that order, versus the effect of receiving a phishing message
claiming to be from your bank).

There have been studies that estimate phishing success rates, though these
have been for general phishing attacks (e.g. when spoofing eBay, it is assumed
that only a fraction of the recipients have eBay accounts; one would assume
that recipients who do not have eBay accounts will not fall for it). This
paper
(http://www.cs.bell-labs.com/cm/cs/who/pfps/temp/web/www2007.org/papers/paper620.pdf)
estimates that number at 0.4%, though obviously we would expect a targeted
attack to be orders of magnitude higher. This article
(http://online.wsj.com/public/article/SB112424042313615131-z_8jLB2WkfcVtgdAWf6LRh733sg_20060817.html?mod=blogs)
also presents a study at West Point where 80% of cadets fell for a targeted
attack. This study
(http://www.indiana.edu/~phishing/social-network-experiment/phishing-preprint.pdf)
found that 72% responded to a targeted attack.

So yeah, looking at contextual information may be a good direction. The
difficulty is recognizing the site. We built a filter that does something
like this using text analysis. You can read about it here:
http://portal.acm.org/citation.cfm?doid=1242572.1242659
(a rough sketch of the general idea follows at the end of this message)

serge

Stephen Farrell wrote:
>
>
> Hi Serge,
>
> Good stuff. A possibly silly idea occurs to me:
>
> Would you have any speculation on whether or how the delay
> between the transaction and the spear-phishing mail would
> affect the outcome?
>
> Were the delay to be significant, that might hint (again)
> that paying attention to the dynamics of the user's
> interactions might be a way to try to improve things, e.g. if
> the browser was more paranoid for a period following a
> transaction with a given site.
>
> Partly I guess I'm assuming that a delayed spear-phish
> attempt would be easier/more likely, say if some DB
> leaks, and that it'd be less common to see an immediate
> attempt, since the bad actor would probably have to be
> on-path to act so quickly. (In which case, they can
> probably affect the initial transaction.)
>
> S.
>
> Serge Egelman wrote:
>> We conducted a study of active phishing indicators found in current web
>> browsers by simulating spear phishing attacks. Active phishing
>> indicators differ from passive indicators in that they interrupt the
>> user's primary task, forcing a decision to be made. Previous studies
>> (no doubt you've read the Shared Bookmarks, right?) have shown that
>> passive indicators often go unnoticed, and when they are noticed, are
>> not trusted, because users place more trust in the look and feel of the
>> destination web page. Both IE7 and Firefox 2 include active phishing
>> warnings.
>>
>> Participants came to our lab under the guise of an online shopping
>> study.
>> Purchases were made from Amazon and eBay using the participants' own
>> information. Upon the completion of a purchase, participants were sent
>> phishing messages from these sites and were told to check their email
>> accounts to make sure that their orders were confirmed. Participants
>> were then observed interacting with the phishing websites. Participants
>> were placed in one of four groups: 12 users of Firefox 2
>> (http://switchersblog.com/files/firefox-phishing-protection.png), 10
>> users of IE7 who were shown the passive warning
>> (http://www.itwriting.com/images/localphishing.gif), 10 users of IE7 who
>> were shown the active phishing warning
>> (http://www.billp.com/blog/images/ie7phishing.jpg), and a control group
>> (10 users) that was shown no phishing warnings. The purpose of the
>> control group was to determine whether participants would enter personal
>> information in the absence of a warning.
>>
>> Of the 42 participants, all but two individuals (one in the control
>> group, one in the active IE7 group) clicked at least one of the phishing
>> URLs. The 9 participants in the control group who clicked the URLs all
>> entered login information at the phishing sites. Nine participants in
>> the passive IE7 group entered login information (1 participant obeyed
>> the warnings). Participants ignored the passive warnings for two
>> reasons: habituation to popup messages, and a lack of choices in the
>> dialog (some participants read the warnings, but since there were no
>> options, they were unsure of what to do, and thus dismissed the warnings
>> and proceeded). Additionally, some participants were so focused on the
>> primary task (entering login information on the phishing websites) that
>> they did not notice the warnings appear in the first place.
>>
>> Among those shown the active warnings, all of the Firefox users obeyed
>> the warnings. In the active IE7 warning group, all but two participants
>> obeyed the warnings; however, there was no statistically significant
>> difference between these two groups. Of the two who ignored the
>> warnings, one blamed habituation, and the other was fooled by the
>> message coinciding with the purchase. This shows both that the IE7
>> warning is designed too similarly to other warnings in IE (e.g. the 404
>> page), and that there will always be some users who fall for phishing
>> attacks, regardless of the strength of the warnings.
>>
>> Overall, the active warnings were effective because they interrupted the
>> users' primary tasks ("attention switch") and they forced the users to
>> make a choice in order to dismiss them ("attention maintenance"). These
>> properties were lacking in the passive indicators. Additionally, when
>> visiting the eBay site, users were shown the EV certificate indicator
>> (i.e. the green address bar) in IE7. None of the 42 users noticed the
>> green address bar, much less its absence when visiting the phishing
>> sites. Thus, it is unreasonable to expect users to be warned by the
>> absence of an indicator.
>>
>> We also found that prior experience with phishing had no correlation
>> with falling for a phishing attack in our study. One third of the
>> participants claimed to have fallen for a phishing attack, had
>> credentials stolen, or been the victim of credit fraud in the past.
>> These individuals were just as likely as the other participants both to
>> click on the URLs and to ignore the warnings.
>> Additionally, participants who could define the term "phishing" were no
>> more likely to obey (or ignore) the warnings than participants who could
>> not. Finally, when asked how they believed the phishing messages got to
>> them, participants could not answer. They understood that the websites
>> were fraudulent, yet they still trusted the email messages. This shows
>> that there is a huge disconnect in users' mental models of phishing.
>>
>> Overall, we concluded that warnings within the phishing context need to
>> interrupt the user's primary task to be effective. These warnings must
>> present clear recommendations on how to proceed. To prevent habituation,
>> these warnings should be designed differently from other dialogs and
>> need to be presented rarely (i.e. only when there's a high probability
>> of immediate danger). Finally, warnings about high risks need to fail
>> safely for the cases where users do become habituated. One participant
>> in this study who was exposed to the active IE7 warning did not read it
>> (or the options it presented), and thus clicked the red 'X' in the
>> corner to dismiss it (thus closing the browser window). She went back to
>> the original email, clicked the link again, and again closed the window.
>> She repeated this process five times before finally giving up, and was
>> thus prevented from giving away information to the phishing website
>> despite the fact that she never read any part of the warning.
>>
>> If you have any questions, feel free to ask. I'm still working on the
>> paper.
>>
>>
>> serge
>>
>>
>
--

/*
Serge Egelman

PhD Candidate
Vice President for External Affairs, Graduate Student Assembly
Carnegie Mellon University

Legislative Concerns Chair
National Association of Graduate-Professional Students
*/
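
A rough sketch of the text-analysis idea mentioned above, just to make it
concrete: pull the most distinctive terms out of a page and check whether the
host actually serving the page turns up when you search for those terms. This
is only an illustration of the general approach, not the filter described in
the linked paper; the term weighting, the search_top_domains() helper, and the
result cutoff are placeholder assumptions.

    # Hypothetical sketch only: a content-based phishing check in the spirit
    # of "use text analysis to recognize which site a page is really about."
    # The crude term weighting, the search_top_domains() helper, and the
    # cutoff are assumptions for illustration, not the filter from the paper.
    import re
    from collections import Counter
    from urllib.parse import urlparse

    def top_terms(page_text, n=5):
        # Stand-in for proper term weighting: the most frequent
        # non-trivial words on the page.
        words = re.findall(r"[a-z]{4,}", page_text.lower())
        return [w for w, _ in Counter(words).most_common(n)]

    def looks_like_phish(url, page_text, search_top_domains, top_k=30):
        # Flag the page if its own host never appears among the top search
        # results for its most distinctive terms.
        host = urlparse(url).netloc.lower()
        query = " ".join(top_terms(page_text))
        results = search_top_domains(query, top_k)  # caller-supplied search API wrapper
        return host not in {urlparse(r).netloc.lower() for r in results}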
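
Along the same lines, a minimal sketch of Stephen's suggestion in the quoted
message that the browser could be more paranoid for a period following a
transaction: remember which hosts the user recently completed a purchase with,
and for a window afterwards escalate any suspect page that invokes one of
those brands but is served from elsewhere, from a passive indicator to an
active, task-interrupting warning. The 72-hour window, the brand matching, and
all of the names here are assumptions for illustration only.

    # Hypothetical sketch of a "paranoid period after a transaction" heuristic.
    # The 72-hour window, the brand-matching rule, and all names are illustrative.
    import time
    from urllib.parse import urlparse

    PARANOIA_WINDOW_SECS = 72 * 3600   # arbitrary: 3 days of heightened scrutiny
    recent_transactions = {}           # host -> timestamp of last completed purchase

    def record_transaction(url):
        recent_transactions[urlparse(url).netloc.lower()] = time.time()

    def warning_level(url, page_text):
        # Return "active" (interrupt the task) or "passive" for a suspect page.
        now = time.time()
        host = urlparse(url).netloc.lower()
        for site, ts in recent_transactions.items():
            brand = site.split(".")[-2] if "." in site else site
            recently_used = (now - ts) < PARANOIA_WINDOW_SECS
            if recently_used and brand in page_text.lower() and host != site:
                return "active"   # page invokes a recently-used brand but lives elsewhere
        return "passive"

In a real browser this logic would of course sit behind the existing
anti-phishing checks rather than standing alone.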
Received on Thursday, 26 July 2007 17:42:06 UTC