- From: Thomas Roessler <tlr@w3.org>
- Date: Thu, 15 Jun 2006 10:10:36 +0200
- To: George Staikos <staikos@kde.org>
- Cc: "Undisclosed.Recipients": ;, public-usable-authentication@w3.org
On 2006-06-15 00:55:02 -0400, George Staikos wrote:
> Excellent points. I also realized that some of us are
> talking about different things here. Some of us are
> talking about protecting users, others are talking about
> preventing successful phishes. I think they're both
> excellent goals, and are not identical. We should make it
> hard to phish, and that will make it hard to harm users.
> We should attempt to protect users, at least the most
> vigilant to start, and that will make it hard to phish
> them. They are complementary things but may require
> slightly different approaches. Blacklists don't make it
> hard to phish, just annoying. They do go a long way
> toward protecting users though. On the other hand,
> closing software security holes doesn't directly protect
> all users, but it does make it harder to phish since there
> are fewer vectors and probably more tedious ones left. We
> need to tackle both of these things, and find effective
> ways to do it, especially without confusing the two too
> much.
Excellent analysis.
The things that I think would be most useful to do (in the
sense of chartering a working group to work on them) in order
to meet the goal of helping vigilant ("suspicious", or whatever
we call them) users:
- define a baseline set of security context information that
will be presented consistently across browsers, e.g., "pick
these elements from your X.509 certs", "add that information
from whateversecurityprotocolcomesnext" (a rough sketch of what
the cert part could look like follows after this list);
- define best practices for how to present them nicely,
non-scarily and usably;
- define requirements that list precisely what browsers should
not let content do to user interface elements, in particular
those that are used to present security-relevant context.
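
To make the first bullet a little more concrete: below is a
minimal, purely illustrative sketch of what "pick these elements
from your X.509 certs" might boil down to. It assumes the Python
"cryptography" library and a PEM-encoded certificate file called
server.pem, and the particular selection of fields (subject CN
and organization, issuer CN, validity period) is just my guess at
a plausible baseline, not anything we have agreed on.

    # Illustrative sketch only: pull a hypothetical baseline of
    # security context information out of a PEM-encoded X.509
    # certificate. Field selection is an assumption, not an
    # agreed-upon baseline.
    from cryptography import x509
    from cryptography.x509.oid import NameOID

    def first_value(name, oid):
        # Return the first attribute value for `oid` in `name`, or None.
        attrs = name.get_attributes_for_oid(oid)
        return attrs[0].value if attrs else None

    def baseline_context(pem_bytes):
        cert = x509.load_pem_x509_certificate(pem_bytes)
        return {
            "subject_common_name": first_value(cert.subject, NameOID.COMMON_NAME),
            "subject_organization": first_value(cert.subject, NameOID.ORGANIZATION_NAME),
            "issuer_common_name": first_value(cert.issuer, NameOID.COMMON_NAME),
            "valid_from": cert.not_valid_before.isoformat(),
            "valid_until": cert.not_valid_after.isoformat(),
        }

    if __name__ == "__main__":
        with open("server.pem", "rb") as f:  # assumed file name
            print(baseline_context(f.read()))

Whatever fields browsers end up displaying, the point of the
first bullet is that the selection would be the same across
implementations, so users see one consistent set of indicators.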
Comments welcome.
Regards,
--
Thomas Roessler, W3C <tlr@w3.org>