- From: Michael(tm) Smith <mikes@opera.com>
- Date: Thu, 30 Nov 2006 16:19:12 +0900
- To: public-wsc-wg@w3.org
Amir Herzberg <herzbea@macs.biu.ac.il>, 2006-11-28 18:27 +0200:

> Furthermore, I think _users_do_not_want_ indicators of *in*security and
> *non*-trusted identification!! In fact, if we know a site is not
> trustworthy... why present it at all?? Users (have the right to) expect
> defense mechanisms to _block_ malicious stuff, not to show it with `this
> (file) can cause damage to your computer` or `this (drug) can cause
> damage to your brain` sign. (Cigarette, anybody? How effective are these
> warning labels!)

A good number of users actually also feel they have the right to go ahead and access sites that browsers know are not trustworthy. Outright blocking of a particular site would be acceptable to users only if it is blocked outright by all browsers the user has access to. If browser X does the right thing and blocks users from accessing the site, but browser Y lets users go ahead and access it, I can tell you that some user is going to complain to the vendor of browser X and demand to be given the option to override the browser's blocking and access the site.

And there is a big difference between a site that is untrustworthy and one that is malicious. All users would be happy to have browsers block them from accessing known phishing sites that have the explicit purpose of doing something malicious, and many browsers already have mechanisms for outright blocking of that class of site. But what exactly constitutes an untrustworthy site? I personally would judge a site that was, say, using a revoked SSL certificate to be an untrustworthy site. But I know that there are users who would insist on being able to access such a site anyway.

> Another direction my group works on is automated _blocking_ of suspect
> malicious content (sites, email). Blocking, not warning!! And of course
> I'm not talking about blacklisting, the fool's gold of computer
> security, adopted by browsers... See [2].
> So obviously I do agree with the statements below; however, our research
> shows that proper identification indicators can result in pretty high
> detection rates.

But isn't such a protection system going to be judged by users as effective only if, in addition to how well it detects content that is genuinely malicious, it also has zero false positives? Or at least zero false positives that the user is aware of. If a user ever discovers that the system has misidentified non-malicious content as malicious and completely blocked access to that content, the user is no longer going to trust the protection system and will disable it. That is, the user will disable it if the application it's deployed in allows it to be disabled. If not, the user will probably start to consider using an alternative application that doesn't misidentify content.

--Mike
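[Editor's note: the malicious-vs-untrustworthy distinction in the reply above can be sketched as a tiny policy function. This is a hypothetical illustration, not any browser's actual logic; the input signals `on_phishing_blocklist` and `cert_revoked` are assumptions standing in for whatever real checks a browser performs.]

```python
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()
    WARN_OVERRIDABLE = auto()   # untrustworthy: user may choose to proceed
    BLOCK = auto()              # malicious: no override offered

# Hypothetical signals; a real browser would consult a phishing
# blocklist service and the certificate's revocation status.
def judge_site(on_phishing_blocklist: bool, cert_revoked: bool) -> Verdict:
    if on_phishing_blocklist:   # explicit malicious purpose: block outright
        return Verdict.BLOCK
    if cert_revoked:            # untrustworthy, but some users insist on access
        return Verdict.WARN_OVERRIDABLE
    return Verdict.ALLOW

print(judge_site(False, True).name)  # WARN_OVERRIDABLE
```

The point of the middle case is exactly the browser-X-vs-browser-Y complaint: if any mainstream browser offers an override for merely untrustworthy sites, a browser that blocks them outright will face user pressure to do the same.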
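[Editor's note: the "zero false positives that the user is aware of" requirement is stricter than it may sound, because exposure compounds over many page loads. A rough sketch, under the simplifying assumption that each page load is an independent check with false-positive rate p (the rate and load count below are illustrative, not measured values):]

```python
# Probability a user encounters at least one wrongly blocked page,
# assuming independent checks with per-load false-positive rate p.
def p_any_false_block(p: float, loads: int) -> float:
    return 1 - (1 - p) ** loads

# Even a seemingly tiny 0.1% rate, over 5000 page loads:
print(round(p_any_false_block(0.001, 5000), 3))  # 0.993
```

In other words, almost every active user of such a system would eventually see a false block, and by the argument above would then be inclined to disable or abandon it.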
Received on Thursday, 30 November 2006 07:19:27 UTC