- From: Close, Tyler J. <tyler.close@hp.com>
- Date: Thu, 13 Sep 2007 21:43:42 -0000
- To: "WSC WG" <public-wsc-wg@w3.org>
Ian Fette wrote:
> A second concern was seemingly deeper, more fundamental, raised by
> Tyler in the call and in multiple emails (I don't think I can really
> re-state it in a way that everyone would agree with, so I will simply
> say that there were other concerns raised by Tyler and leave it there).

To arrive at a resolution that reflects these concerns, I think we need to come to a shared understanding of them. To that end, I'm making another attempt at documenting them. I've tried to state them as clearly and succinctly as I can, so please give the text below a careful read.

I want us to make something that will actually work and so help users make trust decisions on the Web. It's a tall order. To succeed, we have to find an interaction that users can understand, that we can implement (without introducing new protocols), and that can be made easy and pleasant to use. Those are really three separate hurdles, and we have to make it over all of them. When evaluating a recommendation proposal, we must show that it passes all three of these tests before recommending it. For example, it is not enough to find something that we can implement and that we can make a nice user interface for. If users don't understand the interaction, we haven't actually helped them make better trust decisions.

The Note use-cases should provide us with a variety of situations to consider when trying to figure out whether or not a particular recommendation proposal passes these three tests. We need to be careful that we don't structure the use-cases such that we are led to ignore one of these tests. Falling into this trap could lead us to recommending something that won't actually help.

To me, Ian's proposed use case presupposes an interaction where the user agent keeps track of bad sites and notifies the user when they are about to visit one. As I've explained above, I think this kind of use case is a bad idea, just on principle. However, in this case, I am further concerned that the presumed interaction could very well be a poor one. The concept of a not-quite-accurate list of bad sites is a tricky one to work with. I suspect it is highly susceptible to being misconstrued, or poorly applied, by users. For example, users might assume the list is more comprehensive than it actually is. I am not aware of any studies that show this concept is a helpful one for users. On the other hand, we do know that the Jackson study, listed in our shared bookmarks, showed that users did in fact misunderstand the IE7 Phishing Filter and made more bad trust decisions because of it.

Before recommending a user interface for a bad site list, I think we must first show that the interaction can be a helpful one for users. We don't know yet that it is, and we have reason to believe it is not. It would be unfortunate if we recommended something because it satisfied Ian's use-case when we hadn't actually made the case that users are being helped.

It is appropriate for members of this WG to further investigate bad site lists and make recommendation proposals for them. I think the existing use-cases provide sufficient scenarios for examining the effectiveness of such proposals. I haven't seen anyone claim that the existing use-cases provide an insufficient basis for studying bad site lists. Absent such a claim, I think it is prudent to avoid the significant dangers I have explained in this email.

--Tyler

http://usablesecurity.org/papers/jackson.pdf