- From: Johnathan Nightingale <johnath@mozilla.com>
- Date: Mon, 9 Jul 2007 10:25:04 -0400
- To: Thomas Roessler <tlr@w3.org>
- Cc: W3C WSC Public <public-wsc-wg@w3.org>
On 6-Jul-07, at 6:35 PM, Thomas Roessler wrote:

> There is of course a part to that formula which is based on factors
> that cannot be triggered by the attacker (assuming, e.g., that an
> attacker can't produce an EV certificate with chosen information,
> seems safe).
>
> There is, however, another part that relies on information that can
> (and will) be tuned by the attacker. This part will need to be
> adapted as attacks evolve -- or might even turn out to be useless at
> the end of the day, or best used through an interactive service.
> ...
> Leaving the concerns as to whether or not these kinds of advanced
> heuristics are actually in our scope aside for the moment, I'd say
> that the "tuned by the attacker" inputs better shouldn't show up in
> that formula. I'd suspect that it would then turn into a set of
> basic profiles of using existing security technology that lead to
> certain user communication.
>
> EV certificates and letterheads are actually examples of that
> approach.
>
> I wonder if a security score really has much to add over these kinds
> of approaches when you leave out the possibly attacker-chosen
> inputs...

This is probably a better way of expressing my concerns with the proposal than I had originally offered. If we provide a score, and give it any emphasis whatsoever, we create a game. And the problem with this game is that the range of scores attackers can produce by gaming the indicator overlaps strongly with the range of scores legitimate businesses can be expected to produce.

Of course, we can reduce the game-ability of the indicator by restricting the score to a few key variables that attackers can't game (e.g. EV certs, the user's own browsing history). My contention before was that this set is small enough, and its components meaningful enough to users, that an aggregate score doesn't facilitate understanding *in this instance*, the way one might expect it to in general.
Clearly, as the number of indicators rises, the appeal of an aggregate becomes more compelling, but there is not, as far as I can see, a wide and varied selection of indicators at our disposal that do not allow for gaming. The question, to my mind, is whether we want to invent a new SCI here which we know a priori will be a tempting target for attackers, and which does not meaningfully safeguard against them (unless we constrain the inputs so tightly that we end up with a pretty weak aggregation).

Having said that, I share Tim's excitement about the idea of a marketplace of security indicators. That would be great, and in a less specific way, we see some of that with the add-ons people develop for Firefox. But I don't know that this recommendation will create that commons, and even if it did, I'm not sure that's the right reason for us to write a rec we fear to be pretty critically flawed.

I should also mention here, in case I'm seeming harsher than I intend, that I am really not the type to let perfect stand in the way of good. Anti-phishing blacklists have both demonstrable and theoretical weaknesses in terms of false positives and false negatives, but I will defend them because they have a net-positive (and significant) impact. If my concern were just that the formula is wrong, then Mike M would be right to point out that it was a get-the-discussion-rolling formula. My concern is that, given the ability to game so many of the things we might rely on when calculating the score, and the need for that score to adapt to changing conditions, I'm not convinced that this rec has a net-positive impact. I can easily envision worlds, indeed, where it has a net-negative impact, particularly in the short term, in terms of user confusion and user deception if they trust hackable versions of the scoring formula.

Cheers,

J

---
Johnathan Nightingale
Human Shield
johnath@mozilla.com
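[Editorial illustration, not part of the original message.] The restricted-inputs argument above can be made concrete with a toy sketch: a score built only from the two hard-to-game inputs the message names (an EV certificate and the user's own browsing history). The weights, cap, and function name here are all invented assumptions for illustration; the point is that with so few inputs, the aggregate tells the user little more than the components themselves would.

```python
# Toy sketch of a security score restricted to non-gameable inputs.
# The weights (60 for an EV cert, up to 40 for familiarity) are
# arbitrary assumptions, not anything proposed in the thread.

def security_score(has_ev_cert: bool, visits_in_history: int) -> int:
    """Aggregate the two non-gameable inputs into a 0-100 score."""
    score = 0
    if has_ev_cert:
        score += 60                       # assumed weight for an EV cert
    # Familiarity: 4 points per prior visit, capped at 40.
    score += min(visits_in_history, 10) * 4
    return score

# With only two inputs, every score maps back to an obvious combination
# of the components -- which is the critique: the aggregate adds little.
print(security_score(True, 25))    # EV cert + very familiar site -> 100
print(security_score(False, 0))    # neither indicator -> 0
```

Any attacker-tunable input added to such a formula (page content, domain age, and so on) reintroduces the gaming problem the message describes, which is why the sketch stops at these two.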
Received on Monday, 9 July 2007 14:25:43 UTC