- From: Amir Herzberg <herzbea@macs.biu.ac.il>
- Date: Tue, 28 Nov 2006 18:27:13 +0200
- To: Mike Beltzner <beltzner@mozilla.com>
- CC: public-wsc-wg@w3.org
Mike Beltzner wrote:
>
> On 27-Nov-06, at 10:45 AM, Amir Herzberg wrote:
>
>> Browser security should make it harder for spoofers/phishers to trick
>> users into believing false site identification. The challenge is that
>> users look mostly at the content of the site, which can present fake
>> identification (tokens, etc.). Same for email... Which is why
>> identification indicators, like the secure letterhead (or TrustBar,
>> or PetName...) are useful.
>
> I can't let an opportunity to sing my usual song go by, and I can't
> remember if I've sung it lately, so here it goes:
>
> Why not create indicators of *in*security and *non*-trusted
> identification instead of indicators of security?

Excellent question!! Unfortunately, there is an answer :-)

Answer: unfortunately, most sites are insecure (in not using SSL/TLS, which is easily detected, and in fact often also in other ways which the browser can't detect, e.g. XSS, or can't reliably detect, e.g. various malicious scripts...). Even a mild warning on these can cause user confusion and mistrust, as we found out [early versions of TrustBar presented the mild warning `This site is not protected`, or something like that... so we know].

Furthermore, I think _users_do_not_want_ indicators of *in*security and *non*-trusted identification!! In fact, if we know a site is not trustworthy... why present it at all?? Users (have the right to) expect defense mechanisms to _block_ malicious stuff, not to show it with a `this (file) can cause damage to your computer` or `this (drug) can cause damage to your brain` sign. (Cigarette, anybody? How effective are those warning labels!)

> Recent studies on user behaviour show that many users don't look for
> indicators of security, and those who do are easily fooled by simple
> spoofing techniques[1].

The results of these studies need to be understood carefully, to avoid drawing wrong, pessimistic conclusions.
We have found that with good security indicators, detection rates improve dramatically. Did you read our results (and our rationale for the apparent - and not real - conflict with the study by Wu et al. that seems to show indicators do not help)? We are now running a much larger, more realistic experiment to confirm and refine the findings.

Another direction my group works on is automated _blocking_ of suspected malicious content (sites, email). Blocking, not warning!! And of course I'm not talking about blacklisting, the fool's gold of computer security, adopted by browsers... See [2].

So obviously I do agree with the statements below; however, our research shows that proper identification indicators can result in pretty high detection rates. BTW, another direction I believe in, and have prototyped, is improved password management - avoiding some of the most common attacks, again without depending on users to notice stuff.

> Users are often focused on the task they're trying to complete (ie:
> "my profile needs to be updated!") not checking around them for
> indicators of whether or not the website is "secure". Further,
> training users to look for indications of safety means that we need to
> train them to detect the absence of such signals to infer non-safety,
> which is a harder thing for humans who are predisposed to singular
> evaluation approaches[2].
>
> Phishers and spoofers have had an easier time of things because it is
> easy for them to copy the look and feel of a website, or of browser
> chrome. So instead of giving them indicators which they can copy and
> spoof, why not create indicators which they have no incentive to copy?
> Make the message to the user be "Hey! This isn't safe, don't do this",
> not "You're happy and secure to keep doing what you're doing." It also
> makes it easier for us to put this message in front of users at the
> point of the task.
> The only design challenge left for us is to avoid
> click-through fatigue (which, sadly, I fear will be exacerbated by
> well-meaning security UI in the upcoming Windows Vista OS release).

Yes - I think we need to forget about avoiding the `click-through syndrome`; which is exactly why I prefer blocking over warnings.

Best, Amir

> cheers,
> mike
>
> [1]: "Why Phishing Works", Dhamija, Tygar & Hearst
> (http://people.deas.harvard.edu/~rachna/papers/why_phishing_works.pdf)
> [2]: "Phishing Tips and Techniques", Gutmann
> (http://www.cs.auckland.ac.nz/~pgut001/pubs/phishing.pdf)
Received on Tuesday, 28 November 2006 17:01:25 UTC