- From: Ian Fette <ifette@google.com>
- Date: Thu, 13 Sep 2007 15:45:01 -0700
- To: "Close, Tyler J." <tyler.close@hp.com>
- Cc: "WSC WG" <public-wsc-wg@w3.org>
- Message-ID: <bbeaa26f0709131545ne05cc99vc2c1549df8a1b9ec@mail.gmail.com>
Tyler,

The use cases say "This is something we should consider looking at." It doesn't say "This will make it into the rec", "this has to be done", or "this is the best thing since sliced bread."

You say we need to find something that:

- users can understand
- can be implemented without introducing new protocols
- can be made easy and pleasant to use

So let's talk about those three first, since that's the first point you raise.

Firefox 2 already uses blacklists for phishing. You might find them distasteful, messy, whatever, but the reality is that they work very well. They're protecting users. I'd argue that they can be understood, but frankly I don't even know if that matters as long as they keep people from getting to the phishing site.

Firefox 3 will include anti-malware support, also via blacklists. So far we've seen a great response to the phishing protection, and I hope that the malware protection will be similar. We feature similar warnings on google.com - i.e. if you get a page in the search results that links to a site we know to be distributing malware, we give you an interstitial dialog. People seem to understand it.

It's implemented without any new protocols. (We distribute the lists for free. If you're using Firefox 2, you're using the protocol.) But you seemed to be saying that we shouldn't dictate how these things are implemented (heuristics, blacklists, etc.), and I would tend to agree. So why do you care how it's implemented?

Can it be made easy and pleasant to use? Well, I'd say it's pretty easy. Today, if you browse to a phishing site, you get a warning. That's pretty easy. As for pleasant... well, if you browse to a malware site, I'd say just about anything is more pleasant than actually letting your browser go ahead and get owned. YMMV.

I also find it interesting how you aren't even willing to give this a chance - "I suspect it is highly susceptible to being misconstrued, or poorly applied, by users."
We haven't even gotten to talking about how it could be implemented, or how it should look. If I said "Browsers must do X and present Y", then yes, I could see you being worried about it being misconstrued, or poorly applied. In my revised version, I haven't said at all how the browser is supposed to get this information, be it from 3rd-party blacklists, real-time lookup, whatever. I just tried to say "it has knowledge." You've gone and more or less dictated exactly how the PII bar should work, look, and behave when no browsers are using anything close to the PII bar, and yet you're worried about "presupposing new things" when the malware stuff is already out there, and virtually identical things are already being done for phishing in both IE and FF? Please.

As for phishing blacklists: so people misconstrued them. Wow, hold the press. Depending on how you set up a study, you can get people to misconstrue just about anything. Do I think that because some people didn't exactly understand what was going on, we should stop trying to protect users? No. The reality is that these things do a heck of a lot better than anything else out there. Are they perfect? No, of course not. But that doesn't mean they have no value. And by the way, I specifically took out blacklists. I'm merely saying "The browser has knowledge."

I think you're setting a real double standard here, and it bothers me, Tyler. You say "Before recommending a user interface to a bad site list, we must first show that the interaction can be a helpful one for users." We're not recommending a UI for a blacklist - we're saying that we should consider what to do if the browser has knowledge that a site is malicious. It's not saying blacklist, and it's not saying a particular UI. It's saying that we as a group should look into this and see if there's anything that we can recommend.
On the other hand, your PII bar text is exactly recommending a UI, and we haven't shown that the interaction is helpful rather than frustrating for users. And yet you're somehow fine with that. You're dictating UI; I'm trying to say "Hey, we should figure out if there are UI guidelines that would help in this case." There's something a bit off here. You haven't explained any significant dangers. All I'm saying is "Hey, let's see if there's something we can do to help here."

On 9/13/07, Close, Tyler J. <tyler.close@hp.com> wrote:
>
> Ian Fette wrote:
> > A second concern was seemingly
> > deeper, more fundamental, raised by Tyler in the call and in multiple
> > emails (I don't think I can really re-state it in a way that everyone
> > would agree with, so I will simply say that there were other concerns
> > raised by Tyler and leave it there).
>
> To arrive at a resolution that reflects these concerns, I think we need
> to come to a shared understanding of them. To that end, I'm making
> another attempt at documenting them. I've tried to state them as clearly
> and succinctly as I can, so please give the text below a careful read.
>
> I want us to make something that will actually work and so help users
> make trust decisions on the Web. It's a tall order. To succeed, we have
> to find an interaction that users can understand, that we can implement
> (without introducing new protocols), and that can be made easy and
> pleasant to use. Those are really three separate hurdles, and we have to
> make it over all of them. When evaluating a recommendation proposal we
> must show that it passes all three of these tests before recommending
> it. For example, it is not enough to find something that we can
> implement and that we can make a nice user interface for. If users don't
> understand the interaction, we haven't actually helped them make better
> trust decisions.
>
> The Note use-cases should provide us with a variety of situations to
> consider when trying to figure out whether or not a particular
> recommendation proposal passes these three tests. We need to be careful
> that we don't structure the use-cases such that we are led to ignore one
> of these tests. Falling into this trap could lead us to recommending
> something that won't actually help.
>
> To me, Ian's proposed use case presupposes an interaction where the user
> agent keeps track of bad sites and notifies the user when they are about
> to visit one. As I've explained above, I think this kind of use case is
> a bad idea, just on principle. However, in this case, I am further
> concerned that the presumed interaction could very well be a poor one.
> The concept of a not quite accurate list of bad sites is a tricky one to
> work with. I suspect it is highly susceptible to being misconstrued, or
> poorly applied, by users. For example, users might assume the list is
> more comprehensive than it actually is. I am not aware of any studies
> that show this concept is a helpful one for users. On the other hand, we
> do know that the Jackson study, listed in our shared bookmarks, showed
> that users did in fact misunderstand the IE7 Phishing Filter and made
> more bad trust decisions because of it.
>
> Before recommending a user interface to a bad site list, I think we must
> first show that the interaction can be a helpful one for users. We don't
> know yet that it is, and have reason to believe it is not. It would be
> unfortunate if we recommended something because it satisfied Ian's
> use-case, but we hadn't actually made the case that users are being
> helped.
>
> It is appropriate for members of this WG to further investigate bad site
> lists and make recommendation proposals for them. I think the existing
> use-cases provide sufficient scenarios for examining the effectiveness
> of such proposals. I haven't seen anyone claim that the existing
> use-cases provide an insufficient basis for studying bad site lists.
> Absent such a claim, I think it is prudent to avoid the significant
> dangers I have explained in this email.
>
> --Tyler
>
> http://usablesecurity.org/papers/jackson.pdf
Received on Thursday, 13 September 2007 22:45:38 UTC