- From: Anthony Y. Fu <ayf@MIT.EDU>
- Date: Sun, 2 Jul 2006 12:03:48 -0400
- To: <public-usable-authentication@w3.org>
- Message-ID: <000301c69df1$1aea9c20$6f06fa12@IBM118CD3BB50F>
Your phishing trick list is great. Perhaps the Unicode attack (a kind of homograph
attack) could be listed as one of the typographic attacks?
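For illustration, here is a minimal Python sketch (my own example, not taken
from the list) of how a single Cyrillic letter produces a domain that renders
almost identically to paypal.com; converting to Punycode, or simply checking
for non-ASCII code points, exposes the substitution:

    # Illustrative sketch only: a homograph of "paypal.com" using the
    # Cyrillic letter "a" (U+0430) in place of the Latin "a".
    genuine = "paypal.com"
    spoof = "p\u0430ypal.com"  # contains Cyrillic U+0430

    print(genuine == spoof)                  # False - different code points
    print(spoof.encode("idna"))              # Punycode form, b'xn--pypal-4ve.com'
    print(any(ord(c) > 127 for c in spoof))  # True - crude non-ASCII check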
PhAshing (faked applications, e.g. faked web browsers and faked
anti-phishing applications) is also an important and interesting topic. We
demonstrate that no place on the screen can be guaranteed to be secure. Demo video:
http://www.mit.edu/~ayf/my_free_sw/activeXAttack.zip
We also propose methods that can probably stop PhAshing (see Context Sensitive
Password.pdf and irb-fu.pdf in the attachments).
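As a rough illustration of the PhAshing point (a toy sketch of my own, not the
ActiveX demo from the video above), the following Python/Tkinter snippet shows
that any ordinary, unprivileged application can paint a convincing "address
bar", padlock and https URL on screen, so the usual visual checks are not
trustworthy by themselves:

    # Toy illustration only: an unprivileged application drawing fake
    # browser chrome. Any program can paint a padlock and an https URL.
    import tkinter as tk

    root = tk.Tk()
    root.title("PayPal - Log In")              # fake window title
    root.geometry("480x180")

    # Fake "address bar" with a padlock glyph and an https URL.
    addr = tk.Frame(root, bd=1, relief="sunken", bg="white")
    addr.pack(fill="x", padx=6, pady=6)
    tk.Label(addr, text="\U0001F512 https://www.paypal.com/signin",
             bg="white", anchor="w").pack(fill="x")

    # Fake login form that simply captures whatever the victim types.
    tk.Label(root, text="Email:").pack(anchor="w", padx=10)
    email = tk.Entry(root, width=40)
    email.pack(anchor="w", padx=10)
    tk.Label(root, text="Password:").pack(anchor="w", padx=10)
    pw = tk.Entry(root, width=40, show="*")
    pw.pack(anchor="w", padx=10)
    tk.Button(root, text="Log In", command=root.quit).pack(pady=8)

    root.mainloop()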
Yours Sincerely,
Anthony Y. Fu
========================
ayf@mit.edu
http://www.mit.edu/~ayf
========================
> -----Original Message-----
> From: cups-friends-bounces@CUPS.CS.CMU.EDU [mailto:cups-friends-
> bounces@CUPS.CS.CMU.EDU] On Behalf Of Lorrie Cranor
> Sent: Sunday, July 02, 2006 8:58 AM
> To: cups@CUPS.CS.CMU.EDU; cups-friends@CUPS.CS.CMU.EDU
> Subject: [Cups-friends] Authentication threat list
>
> Begin forwarded message:
>
> Resent-From: public-usable-authentication@w3.org
> From: Chris Drake <christopher@pobox.com>
> Date: July 2, 2006 12:28:25 AM EDT
> To: public-usable-authentication@w3.org
> Cc: dix@ietf.org, idworkshop@googlegroups.com, ietf-http-
> auth@lists.osafoundation.org
> Subject: Comprehensive list - known Threat and Protection table
>
>
> Hi All,
>
> Has anyone attempted to document the threats and/or what protection
> we're trying to provide to users ?
>
> If so - please point me - if not - please add-to or amend my list:
>
> ########################################
> ### Authentication Threat List 1.0 ###
> ########################################
>
> 1. Confidence Tricks
>
> 1.1. phishing emails
> 1.1.1. to lure victims to spoof sites
> 1.1.2. to lure victims into installing malicious code
> 1.1.3. to lure victims towards O/S vulnerabilities to inject
> malicious code
> 1.1.4. to lure victims into revealing information directly via
> reply or via embedded FORMS within the email
>
> 1.2. telephone phishing
> 1.2.1. to directly extract auth info
> 1.2.2. to direct victim to spoof site
>
> 1.3. person-to-person phishing / situation engineering
> 1.3.1. to directly extract auth info (ask)
> 1.3.2. to direct victim to spoof site
> 1.3.3. shoulder surfing (aka 4.5.2)
> 1.3.4. physical attack - see 4.7
>
> 1.4. typographic attacks
> 1.4.1. spoofing (eg: paypa1.com - using a number 1 for a little L)
> 1.4.2. direct download of malicious code
> 1.4.3. browser exploit injection
>
> 1.5. online phishing
> 1.5.1. pop-up/pop-behind windows to spoof sites
> 1.5.2. floating <DIV> or similar elements (eg: emulating an entire
> browser UI)
>
>
> 2. Remote Technical Tricks
>
> 2.1. spoof techniques
> 2.1.1. vanilla fake look-alike spoof web sites
> 2.1.2. CGI proxied look-alike web site (server CGI talks to real
> site in real time - "man in the middle attack")
> 2.1.3. popup windows hiding the address bar (3.4.1/3.4.2)
> 2.1.4. <DIV> simulated browsers (1.5.2)
>
> 2.2. iframe exploits (eg: 1.5.1/1.1.3) (spammers buy iframes to
> launch 1.5 and 1.4 attacks)
> 2.3. p2p filesharing publication of products modified to
> remove/limit protection - PGP, IE7, Mozilla, ...
> 2.4. DNS poisoning (causes correct URL to go to spoof server)
> 2.5. traffic sniffing (eg: at ISP, telco, WiFi, LAN, phone tap...)
> 2.6. proxy poisoning (correct URL returns incorrect HTML)
> 2.7. browser exploits (correct URL returns incorrect HTML)
> 2.8. targeted proxy attack
> 2.8.1. directs to vanilla spoof web site (2.1.1)
> 2.8.2. uses CGI re-writing to proxy legitimate site (eg: convert
> HTTPS into HTTP to activate traffic sniffing) (2.1.2)
> 2.8.3. activates 5.7
> 2.9. Authorized exploitation - see 3.5.
>
>
> 3. Local Technical Tricks
>
> 3.1. Software vulnerabilities (aka exploits - eg - 1.1.3)
> 3.1.1. Known
> 3.1.2. Unknown
>
> 3.2. Browser "toolbars" (grant unrestricted DOM access to SSL data)
>
> 3.3. Trojans
> 3.3.1. Standalone modified/hacked legitimate products (eg: PGP or
> MSIE7) with inbuilt protection removed/modified.
> 3.3.2. Bogus products (eg: the anti-spyware tools manufactured by
> the Russian spam gangs)
> 3.3.3. Legitimate products with deliberate secret functionality
> (eg: warez keygens, Sony CD-ROM music piracy-block add-ins)
> 3.3.4. Backdoors (activate remote control and 3.4.1/3.4.2)
>
> 3.4. Viruses
> 3.4.1. General - keyloggers, mouse/screen snapshotters
> 3.4.2. Targeted - specifically designed for certain victim sites
> (eg paypal/net banking) or certain victim actions (eg:
> password entry, detecting typed credit card numbers)
>
> 3.5. Authorized exploitation (authority (eg: Microsoft WPA/GA,
> Police, ISP, MSS, FBI, CIA, MI5, Feds...) engineer a Trojan or
> Viral exploit to be shipped down the wire to local PC,
> potentially being legitimately signed/authenticated software.)
>
> 3.6. Visual tricks
> 3.6.1. browser address bar spoofing
> 3.6.2. address bar hiding
>
> 3.7. Hardware attacks
> 3.7.1. keylogger devices
> 3.7.2. TEMPEST
> 3.7.3. malicious hardware modification (token mods, token
> substitution, auth device substitution/emulation/etc)
>
> 3.8. Carnivore, DCS1000, Altivore, NetMap, Echelon, Magic Lantern,
> RIPA, SORM...
>
> 4. Victim Mistakes
>
> 4.1. writing down passwords
> 4.2. telling people passwords
> 4.2.1. deliberately (eg: friends/family)
> 4.2.2. under duress (see 4.7)
> 4.3. picking weak passwords
> 4.4. using same passwords in more than one place
> 4.5. inattentiveness when entering passwords
> 4.5.1. not checking "https" and padlock and URL
> 4.5.2. not preventing shoulder surfing
> 4.6. permitting accounts to be "borrowed"
> 4.7. physical attack (getting mugged)
> 4.7.1. to steal auth info
> 4.7.2. to acquire active session
> 4.7.3. to force victim to take action (eg: xfer money)
> 4.8. allowing weak lost-password "questions"/procedures
>
>
> 5. Implementation Oversights
>
> 5.1. back button
> 5.2. lost password procedures
> 5.3. confidence tricks against site (as opposed to user)
> 5.4. insecure cookies (non-SSL session usage)
> 5.5. identity theft? site trusts user's lies about identity
> 5.6. trusting form data
> 5.7. accepting auth info over NON-SSL (eg: forgetting to check
> $ENV{HTTPS} is 'on' when performing CGI password checks)
> 5.8. allowing weak lost-password "questions"/procedures
> 5.9. replay
> 5.10. robot exclusion (eg: block mass password guessing)
> 5.11. geographical exclusion (eg: block logins from Korea)
> 5.12. user re-identification - eg - "We've never seen you using
> Mozilla before"
> 5.13. site-to-user authentication
> 5.14. allowing users to "remember" auth info in browser (permits
> local attacks by unauthorised users)
> 5.15. blocking users from being allowed to "remember" auth info in
> browser (facilitates spoofing / keyloggers)
> 5.16. using cookies (may permit local attacks by unauthorised
> users)
> 5.17. not using cookies (blocks site from identifying malicious
> activity or closing co-compromised accounts)
>
>
> 6. Denial of Service attacks
>
> 6.1. deliberate failed logins to lock victim out of account
> 6.2. deliberate failed logins to acquire out-of-channel subsequent
> access (eg: password resets)
>
>
> 7. Please contribute to this document!
>
> 7.1. on-list - just reply
> 7.2. off-list - send to christopher@pobox.com
>
>
> Contributors: Chris Drake
> v.1.0 - July 2, 2006
> #########################################
> ### /Authentication Threat List 1.0 ###
> #########################################
>
> Kind Regards,
> Chris Drake
>
>
>
>
> _______________________________________________
> Cups-friends mailing list
> Cups-friends@CUPS.CS.CMU.EDU
> http://CUPS.CS.CMU.EDU/mailman/listinfo/cups-friends
Attachments
- application/pdf attachment: Context_Sensitive_Password__clean_.pdf
- application/pdf attachment: irb-fu.pdf
Received on Monday, 3 July 2006 00:18:22 UTC