- From: Jeffrey Altman <jaltman@secure-endpoints.com>
- Date: Fri, 14 Apr 2006 09:48:57 -0400
- To: George Staikos <staikos@kde.org>
- CC: public-usable-authentication@w3.org
- Message-ID: <443FA849.4080808@secure-endpoints.com>
George Staikos wrote:
> On Monday 10 April 2006 13:30, Thomas Roessler wrote:
>
>> This kind of work would cover best practices in terms of what
>> sites should or should not be able to control in a browser's
>> user interface, and, possibly, a switching mechanism between a
>> rich and a safe browser mode, as discussed at various occasions
>> in New York.
>
> For those who have been advocating this approach, what do you envision in
> this mode? What would make it "safe"?
George:
I came away with the following from the discussions I held at the
workshop:
(1) Secure Chrome is only secure if the user is able to distinguish
the "secure" from the "insecure".
Suggestions have included having the browser display secure chrome
whenever it encounters a form that contains a password field.
When operating system support is available, clicking within the
secure chrome would switch the user to a separate desktop, on which
the user would be shown information about whom the data is being sent
to, as extracted from a certificate (logos, common name, URL, etc.),
a distinguishing identifier selected by the user (perhaps a photo),
and the form to be filled in.
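To make that concrete, here is a rough sketch of the password-field
trigger in TypeScript. None of this is specified anywhere; in particular,
enterSecureChrome() is a purely hypothetical placeholder for whatever
trusted, OS-protected UI the browser itself would switch to.

    // Sketch only: watch the page for forms containing a password field and
    // ask the browser to take over rendering of that form in its trusted UI.
    function watchForPasswordForms(onFound: (form: HTMLFormElement) => void): void {
      const scan = () => {
        document
          .querySelectorAll<HTMLInputElement>('input[type="password"]')
          .forEach(input => {
            const form = input.form;
            if (form && !form.dataset.secureChromeRequested) {
              form.dataset.secureChromeRequested = "true"; // avoid repeat triggers
              onFound(form);
            }
          });
      };
      scan();                                   // forms present at load time
      new MutationObserver(scan)                // forms injected later by script
        .observe(document.documentElement, { childList: true, subtree: true });
    }

    watchForPasswordForms(form => {
      // enterSecureChrome() is an invented placeholder, not a real API; the
      // real hand-off would have to be done by the browser itself, outside
      // the reach of page script.
      (window as any).enterSecureChrome?.(form.action);
    });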
(2) Attackers must not be able to determine what the secure chrome
looks like on any particular system.
This requires that scripts not be able to display the real
web site on the screen, take snapshots of the visual representation
of the chrome, and then redirect users to the attacker's site, where
the same chrome would be faked.
Nor should it be possible for the secure chrome to be guessed
by the attacker. Even if the browser provided twenty random choices of
secure chrome, a randomly chosen fake would still match one user in
twenty, leaving approximately a 5% chance that a random attack against
that browser release would be successful.
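To make the arithmetic and the selection mechanism concrete, a minimal
sketch in TypeScript (the variant count and the selection scheme are my
assumptions, not anything agreed at the workshop):

    // Sketch only: the variant must be picked with a cryptographically strong
    // RNG so that an attacker cannot predict it from, e.g., install time.
    function pickChromeVariant(variantCount: number): number {
      const buf = new Uint32Array(1);
      crypto.getRandomValues(buf);
      return buf[0] % variantCount;   // modulo bias is negligible at N = 20
    }

    const N = 20;
    console.log(`this installation uses variant ${pickChromeVariant(N)}`);
    console.log(`chance a randomly faked variant matches: ${100 / N}%`); // 5%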
(3) Another idea that was discussed was a hardware indicator of the
use of secure chrome. This would require that the indicator be
protected by the operating system and that the secure chrome itself
could be triggered by the operating system.
One of the concerns I have about the notion of secure chrome is that
even if we were able to prevent an attacker from displaying a site that
emulates the appearance of another site, complete with lock icons,
certificate properties dialogs, and so on, the secure chrome itself can
still be exploited by the attacker. The secure chrome is only as secure
as the certificate validation behind it. Therefore, if the user clicked
on a link and was directed to
https://www.paypa1.com
and the attackers provided a certificate that claimed to be:
www.paypa1.com
PayPal Inc.
Information Systems
(including the PayPal logo)
signed by a fake intermediary certificate:
VeriSign Class 3 Secure Server CA
VeriSign, Inc.
VeriSign Trust Network
(including the VeriSign logo)
the behavior as seen by the user would be no different than if that user
attempted to access a misconfigured server with Firefox, such as
https://www.chillinforamillion.com/. The user wants to access the site
and will therefore click "accept the certificate for this session", and
the secure chrome will then be displayed with all of the fake information
provided by the attacker. The end user will believe they are safe
because the secure chrome has appeared. This would leave us essentially
where we are today, except for one subtle difference:
    VeriSign would now be a party to the attack, and the use of
    trademarked logos would allow international treaties governing
    trademarks to be applied in cracking down on the attackers.
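To illustrate that the dependency is on validation rather than on the
chrome itself, here is a minimal TypeScript sketch using Node's standard
tls module (the hostnames are simply the ones from the example above,
and this is not meant as a prescription for how a browser would do it):
if the client refuses any chain that does not validate, instead of
offering an "accept for this session" override, the paypa1.com scenario
never reaches the secure chrome at all.

    import * as tls from "node:tls";

    function connectStrictly(host: string): void {
      const socket = tls.connect({
        host,
        port: 443,
        servername: host,          // SNI plus hostname check against the certificate
        rejectUnauthorized: true,  // refuse any chain that does not validate to a trusted root
      });

      socket.on("secureConnect", () => {
        console.log(`${host}: chain validated; safe to display secure chrome`);
        socket.end();
      });

      socket.on("error", err => {
        // No "accept anyway" path is offered, so the fake-certificate attack
        // described above becomes nothing more than a failed connection.
        console.log(`${host}: refused (${err.message})`);
      });
    }

    connectStrictly("www.paypal.com");   // expected to validate against public roots
    connectStrictly("www.paypa1.com");   // hostname from the example above; refused
                                         // unless it presents a valid chain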
I believe the use of secure chrome is a good idea, but it certainly
would not be a cure-all. It would simply raise the bar for attacks, and
only in cases where users can be trained not to accept certificates that
do not validate.
Jeffrey Altman