Re[2]: Secure Chrome and Secure MetaData (correction)

Hi Jeff,

You wrote:-

sf> We need to determine techniques which are unspoofable, such as
sf> personalization known only to the user ...

This is why I am trying to eradicate the word "Chrome" - since you're
suggesting non-chrome things as solutions here, the continued use of
that word only gives everyone the wrong impression and points
newcomers down known dead-end paths.

This entire debate has been going round-and-round in circles for some
months now.  I think it's time to start documenting problems, and
posing some solutions.

Here again is my threat table.  How about you turn your suggestions
1 through 7 into a "solutions and techniques" table and add it to the
end of my stuff.

We can then look through my list of problems, work out which ones your
list of solutions solves, and get a good idea about what's missing.

I like where you're heading with your thoughts - although I can't get
my head around lost passwords, or how you can block the theft of
"personalization" *and* passwords.  It seems you can protect one or
the other, but not both: you can either alert users to fake sites
(after they've potentially already given their password to a spoof
one, which can then go ahead and impersonate them anyhow...), or you
can show random strangers what the user's "personalization" is (which
does potentially block users from giving away passwords to spoof
sites ... until the spoof sites wise up and fetch the
"personalization" info from the legitimate site in real time...)

Kind Regards,
Chris Drake


########################################
###  Authentication Threat List 1.0  ###
########################################

1. Confidence Tricks

   1.1. phishing emails
    1.1.1. to lure victims to spoof sites
    1.1.2. to lure victims into installing malicious code
    1.1.3. to lure victims towards O/S vulnerabilities to inject
           malicious code
    1.1.4. to lure victims into revealing information directly via
           reply or via embedded FORMS within the email

   1.2. telephone phishing
    1.2.1. to directly extract auth info
    1.2.2. to direct victim to spoof site
    
   1.3. person-to-person phishing / situation engineering
    1.3.1. to directly extract auth info (ask)
    1.3.2. to direct victim to spoof site
    1.3.3. shoulder surfing (aka 4.5.2)
    1.3.4. physical attack - see 4.7

   1.4. typographic attacks
    1.4.1. spoofing (eg: paypa1.com - using a number 1 for a little L)
    1.4.2. direct download of malicious code
    1.4.3. browser exploit injection

   1.5. online phishing
    1.5.1. pop-up/pop-behind windows to spoof sites
    1.5.2. floating <DIV> or similar elements (eg: emulating an entire
           browser UI)


2. Remote Technical Tricks

   2.1. spoof techniques
    2.1.1. vanilla fake look-alike spoof web sites
    2.1.2. CGI proxied look-alike web site (server CGI talks to real
           site in real time - "man in the middle attack")
    2.1.3. popup windows hiding the address bar (3.6.1/3.6.2)
    2.1.4. <DIV> simulated browsers (1.5.2)

   2.2. iframe exploits (eg: 1.5.1/1.1.3) (spammers buy iframes to
        launch 1.5 and 1.4 attacks)
   2.3. p2p filesharing publication of products modified to
        remove/limit protection - PGP, IE7, Mozilla, ...
   2.4. DNS poisoning (causes correct URL to go to spoof server)
   2.5. traffic sniffing (eg: at ISP, telco, WiFi, LAN, phone tap...)
   2.6. proxy poisoning (correct URL returns incorrect HTML)
   2.7. browser exploits (correct URL returns incorrect HTML)
   2.8. targeted proxy attack
    2.8.1. directs to vanilla spoof web site (2.1.1)
    2.8.2. uses CGI re-writing to proxy legitimate site (eg: convert
           HTTPS into HTTP to activate traffic sniffing) (2.1.2)
    2.8.3. activates 5.7
   2.9. Authorized exploitation - see 3.5.


3. Local Technical Tricks

   3.1. Software vulnerabilities (aka exploits - eg - 1.1.3)
    3.1.1. Known
    3.1.2. Unknown

   3.2. Browser "toolbars" (grant unrestricted DOM access to SSL data)
   
   3.3. Trojans
    3.3.1. Standalone modified/hacked legitimate products (eg: PGP or
           MSIE7) with inbuilt protection removed/modified.
    3.3.2. Bogus products (eg: the anti-spyware tools manufactured by
           the Russian spam gangs)
    3.3.3. Legitimate products with deliberate secret functionality
           (eg: warez keygens, sony/CD-Rom music piracy-block addins)
    3.3.4. Backdoors (activate remote control and 3.4.1/3.4.2)

   3.4. Viruses
    3.4.1. General - keyloggers, mouse/screen snapshotters
    3.4.2. Targeted - specifically designed for certain victim sites
           (eg paypal/net banking) or certain victim actions (eg:
           password entry, detecting typed credit card numbers)

   3.5. Authorized exploitation (authority (eg: Microsoft WPA/GA,
        Police, ISP, MSS, FBI, CIA, MI5, Feds...) engineer a Trojan or
        Viral exploit to be shipped down the wire to local PC,
        potentially being legitimately signed/authenticated software.)
   
   3.6. Visual tricks
    3.6.1. browser address bar spoofing
    3.6.2. address bar hiding

   3.7. Hardware attacks
    3.7.1. keylogger devices
    3.7.2. TEMPEST
    3.7.3. malicious hardware modification (token mods, token
           substitution, auth device substitution/emulation/etc)

   3.8. Carnivore, DCS1000, Altivore, NetMap, Echelon, Magic Lantern,
        RIPA, SORM...

4. Victim Mistakes

   4.1. writing down passwords
   4.2. telling people passwords
    4.2.1. deliberately (eg: friends/family)
    4.2.2. under duress (see 4.7)
   4.3. picking weak passwords
   4.4. using same passwords in more than one place
   4.5. inattentiveness when entering passwords
    4.5.1. not checking "https" and padlock and URL
    4.5.2. not preventing shoulder surfing
   4.6. permitting accounts to be "borrowed"
   4.7. physical attack (getting mugged)
    4.7.1. to steal auth info
    4.7.2. to acquire active session
    4.7.3. to force victim to take action (eg: xfer money)
   4.8. picking weak lost-password "questions"/answers (see also 5.8)

   
5. Implementation Oversights

   5.1. back button
   5.2. lost password procedures
   5.3. confidence tricks against site (as opposed to user)
   5.4. insecure cookies (non-SSL session usage)
   5.5. identity theft - site trusts user's lies about identity
   5.6. trusting form data
   5.7. accepting auth info over NON-SSL (eg: forgetting to check
        $ENV{HTTPS} is 'on' when performing CGI password checks - see
        the sketch after this list)
   5.8. allowing weak lost-password "questions"/procedures
   5.9. replay
   5.10. robot exclusion (eg: block mass password guessing)
   5.11. geographical exclusion (eg: block logins from Korea)
   5.12. user re-identification - eg - "We've never seen you using
         Mozilla before"
   5.13. site-to-user authentication
   5.14. allowing users to "remember" auth info in browser (permits
         local attacks by unauthorised users)
   5.15. blocking users from being allowed to "remember" auth info in
         browser (facilitates spoofing / keyloggers)
   5.16. using cookies (may permit local attacks by unauthorised
         users)
   5.17. not using cookies (blocks site from identifying malicious
         activity or closing co-compromised accounts)

   
6. Denial of Service attacks

   6.1. deliberate failed logins to lock victim out of account
   6.2. deliberate failed logins to acquire out-of-channel subsequent
        access (eg: password resets)

        
7. Please contribute to this document!

   7.1. on-list - just reply
   7.2. off-list - send to christopher@pobox.com
   

Contributors:  Chris Drake
v.1.0 - July 2, 2006
#########################################
###  /Authentication Threat List 1.0  ###
#########################################
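
PS: for item 5.7, here's a minimal sketch of the check in Python CGI
terms (the Perl equivalent is testing $ENV{HTTPS}).  mod_ssl-style web
servers set the HTTPS environment variable to "on" for SSL requests:

    import os
    import sys

    def require_https():
        """Refuse to process auth info unless the request arrived
        over SSL - call this BEFORE reading any password field."""
        if os.environ.get("HTTPS", "").lower() != "on":
            sys.stdout.write("Status: 403 Forbidden\r\n")
            sys.stdout.write("Content-Type: text/plain\r\n\r\n")
            sys.stdout.write("Auth info is only accepted over SSL.\n")
            sys.exit(0)

    require_https()  # then proceed with the CGI password check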





Thursday, July 6, 2006, 2:46:28 AM, you wrote:


>> Chris Drake wrote:
>>  > The word "Chrome" is so cool that nobody wants to put
>>  > it back on the shelf where it belongs!

sf> I don't think the concept of secure chrome needs to be entirely
sf> abandoned, just redefined.  The problem is with chrome which is static
sf> and spoofable.  By secure chrome, we mean "unspoofable chrome".

sf> Historical implementations assume that anything in the chrome is
sf> trusted, since an attacker can't control the chrome.  However, the
sf> picture in picture attack demonstrates that the chrome is spoofable,
sf> even when it's trusted.

sf> http://guardpuppy.com/BrowserChromeIsDead.gif

sf> We need to determine techniques which are unspoofable, such as
sf> personalization known only to the user or OS layer features, such as
sf> dimming the desktop.

sf> Suppose we did have a set of techniques that proved to be effective -
sf> what form would a standard take?  We'll have to specify something
sf> like the outline below.

sf> For personalization, I suspect the rough outline would be something like

sf> 1) User can set some personalization.
sf> 2) Personalization must be determined based on some secret known to the
sf> user in a sufficiently large key space, eg. a large set of pictures,
sf> visual hashes, or words.
sf> 3) Personalization must be integrated with authentication flows.
sf> 4) After authentication, personalization must be presented as proof of
sf> mutual authentication.
sf> 5) Personalization may be presented when requesting other sensitive
sf> information.
sf> 6) Personalization may be presented at any time during the session to
sf> prove the session is not spoofed or taken over.
sf> 7) Personalization must not be retrievable or usable by third party sites.

sf> I'm not sure if we should promote the "may" in (5) and (6) to "must".

sf> Also, this assumes user training and recognition.  Solutions which
sf> don't train the user to use personalization and to recognize spoofing
sf> will remain spoofable.

sf> Thoughts?

sf>  - Jeff
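
PS: for what it's worth, here's a rough Python sketch of how your
steps 1 through 4 might hang together on the server side.  All names
are invented, and note it does nothing about the real-time relay
problem I raised above:

    # Step 1: user picks a secret picture at enrolment, drawn from a
    # large picture space (step 2).  The site shows it mid-login
    # (step 3) and again after login as proof of mutual
    # authentication (step 4).

    PICTURES = {}    # username -> chosen picture id
    PASSWORDS = {}   # username -> password (stand-in for a real store)

    def enroll(user, password, picture_id):
        PASSWORDS[user] = password
        PICTURES[user] = picture_id

    def login_step1(user):
        """Username only - the site answers with the user's picture.
        NOTE: anyone who submits the username gets the picture too,
        which is exactly the relay weakness discussed above."""
        return PICTURES.get(user)

    def login_step2(user, password, user_confirmed_picture):
        # The user should only reach this step after recognising
        # their own picture from login_step1.
        if not user_confirmed_picture:
            return "abort - possible spoof site"
        if PASSWORDS.get(user) == password:
            return "authenticated - redisplay picture (step 4)"
        return "login failed"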

Received on Wednesday, 5 July 2006 17:13:55 UTC