Re: [XHR] Open issue: allow setting User-Agent?

> I've had trouble writing extensions and user scripts to work around
> backend sniffing, due to being unable to simply set User-Agent for a
> specific script-initiated request and get the "correct" content. As I've
> attempted to explain to Anne, I think this experience is relevant to
> scripts using CORS, because they also want to interact with backends the
> script author(s) don't choose or control.

If the backend sniffs out (all or some) browsers, that's the backend's
choice. CORS has been specified so that you NEED a cooperative backend.
Unlock one header and some other means of sniffing you out will be found and
used :/

> Interacting, in a sane way, with a backend that does browser sniffing is a
> *very* compelling use case to me.


The thing is, CORS already makes it mandatory to handle things both
client-side AND server-side anyway. I don't like this any more than you do
(I'd prefer a fully client-side approach, but security snipers would
probably shoot me on sight ;)).
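
To illustrate what "cooperative backend" means in practice, here is a rough
sketch of the server-side half, assuming a plain Node.js backend (the
allowed origin and the port are made up for the example):

// Minimal sketch of the server-side half of CORS (Node.js core http module).
// The whitelisted origin and the port are invented for illustration.
var http = require( "http" );

var ALLOWED_ORIGIN = "https://example.org";

http.createServer( function( req, res ) {
    // The backend has to opt in explicitly; without these headers the
    // browser refuses to expose the cross-origin response to the script.
    if ( req.headers.origin === ALLOWED_ORIGIN ) {
        res.setHeader( "Access-Control-Allow-Origin", ALLOWED_ORIGIN );
    }

    if ( req.method === "OPTIONS" ) {
        // Preflight request: answer with the allowed methods/headers and stop.
        res.setHeader( "Access-Control-Allow-Methods", "GET, POST" );
        res.setHeader( "Access-Control-Allow-Headers", "Content-Type" );
        res.end();
        return;
    }

    res.end( "hello" );
} ).listen( 8080 );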


> The changed User-Agent will of course only be sent with the requests
> initiated by the script, all other requests sent from the browser will be
> normal. Hence, the information loss will IMO be minimal and probably have
> no real-world impact on browser stats.

// Keep a reference to the native constructor, then replace it so the
// spoofed header gets attached on every send() (if User-Agent were settable).
var XHR = window.XMLHttpRequest;

window.XMLHttpRequest = function() {
    var xhr = new XHR(),
        send = xhr.send;
    xhr.send = function() {
        xhr.setRequestHeader( "User-Agent", "OHHAI!" );
        return send.apply( this, arguments );
    };
    return xhr;
};
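
In other words, once a script wraps the global constructor like this, every
XHR subsequently created on the page carries the spoofed value, not just the
requests initiated by that one script.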


> If your backend really relies on User-Agent header values to avoid being
> "tricked" into malicious operations you should take your site offline for a
> while and fix that ;-). Any malicious Perl/PHP/Ruby/Shell script a hacker
> or script kiddie might try to use against your site can already fake
> User-Agent


Oh, I agree entirely. Except that checking User-Agent is a quick and painless
means to protect against malicious JavaScript. I don't like the approach any
more than you do, but we both know it's used in the wild.


> A malicious ad script would presumably currently have the user's web
> browser's User-Agent sent with any requests it would make to your site, so
> unless you want to guard yourself from users running
> HackedMaliciousEvilWebBrowser 1.0 I don't see what protection you would
> lose from allowing XHR-set User-Agent.


The malicious script can trick the server into accepting a request the
backend expects to be able to filter out by checking a header which the
standard says is set by the browser and cannot be changed by user scripts.
Think of a painless DoS mounted with a simple piece of JavaScript. I'm not
saying the assumption is stupid, I'm just saying we'll break a lot of things
and may cause harm in the end (by not acknowledging how bad people are at
security).

Yes, sniffing is stupid, and no, it's not secure (in the sense that a server
can already pretend to be a browser... but the use case is really to prevent
browsers from masquerading as servers). If the header is checked against a
whitelist (which, when you think about it, is "good" in all this "badness"),
then a backend not considering this or that browser is a pain. But, just like
CORS itself, it's something you need to handle server-side.
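
To make the whitelist idea concrete, here is roughly what I mean on the
backend (a Node.js sketch; the browser patterns are invented for the example,
and a non-browser client can obviously still fake them):

// Rough sketch of server-side User-Agent whitelisting (Node.js).
// The patterns below are invented for the example.
var BROWSER_UA_PATTERNS = [ /Firefox\//, /Chrome\//, /Safari\//, /Opera\// ];

function looksLikeABrowser( req ) {
    var ua = req.headers[ "user-agent" ] || "";
    return BROWSER_UA_PATTERNS.some( function( pattern ) {
        return pattern.test( ua );
    } );
}

// ...and in the request handler:
// if ( !looksLikeABrowser( req ) ) { res.statusCode = 403; res.end(); return; }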

Now, my Gmail dinged like crazy while I wrote this, so I guess I missed some
other back-and-forths :P

Received on Tuesday, 9 October 2012 14:34:38 UTC