Re: [XHR] Open issue: allow setting User-Agent?

From: Julian Aubourg <j@ubourg.net>
Date: Thu, 11 Oct 2012 17:35:50 +0200
Message-ID: <CANUEoetE_34XZx+Yaa2DvNZ7YcbaZFb5Q7B2JsreCHfQx_JtMw@mail.gmail.com>
To: public-webapps@w3.org
> Are you really saying that backend developers want to use User-Agent to
> limit the number of requests accepted from Firefox?  (Not one user's
> Firefox, but all Firefox users, at least of a particular version,
> combined.)  That doesn't make sense at all.  If that's not what you mean,
> then please clarify, because I don't know any other way the User-Agent
> header could be used to limit requests.

A more likely scenario is a URL that only accepts a specific user agent
that is not a browser (a backend client). If user script can change the
User-Agent, it can request this URL repeatedly. Given that it runs in the
browser, a shared resource (like an ad provider or a CDN) becomes a very
tempting point of failure.
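For context, the XHR spec currently blacklists User-Agent in
setRequestHeader(): the browser silently drops the attempt. A minimal
sketch of that check, assuming an abridged forbidden-header list and a
hypothetical setHeader helper (not an actual browser API):

```javascript
// Abridged subset of the forbidden request headers from the XHR spec;
// the real list is longer (Accept-Charset, Host, Referer, etc.).
const FORBIDDEN_HEADERS = new Set([
  "connection", "content-length", "cookie", "host",
  "origin", "referer", "user-agent",
]);

// Hypothetical sketch of the check a browser performs inside
// XMLHttpRequest.setRequestHeader(): forbidden names are dropped.
function setHeader(headers, name, value) {
  if (FORBIDDEN_HEADERS.has(name.toLowerCase())) {
    return false; // silently ignored, per the current spec
  }
  headers[name.toLowerCase()] = value;
  return true;
}

const headers = {};
setHeader(headers, "User-Agent", "MyBackendClient/1.0"); // dropped
setHeader(headers, "X-Custom", "ok");                    // allowed
console.log(headers); // only x-custom survives
```

Unblacklisting User-Agent would amount to removing it from that list,
which is exactly what this thread is weighing the abuse potential of.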

AFAIK, you don't have the same problem with PHP libs, for instance: the
request isn't issued by a script served from a third party, so that third
party can't become a vector of attack.

I'm not saying it's smart (from either the hacker's POV or the backend's
POV), but I'm just being careful and trying to see if there is potential
for abuse.

On 11 October 2012 16:22, Glenn Maynard <glenn@zewt.org> wrote:

> On Thu, Oct 11, 2012 at 8:09 AM, Julian Aubourg <j@ubourg.net> wrote:
>>
>> > I still don't fully understand the scenario(s) you have in mind.
>>
>> You're confusing the script's origin with the site's origin. XHR requests
>> from within a script are issued with the origin of the page that the script
>> is included into.
>>
>> Now, read back your example but suppose the attack is to be pulled
>> against cnn.com. At a given time (say cnn.com's peak usage time), the
>> script issues a gazillion requests. Bye-bye server.
>>
>
> I'm confused.  What does this have to do with unblacklisting the
> User-Agent header?
>
>> That's why I took the ad example. Hack a single point of failure (the ad
>> server, a CDN) and you can DOS a site using the resource from network
>> points all over the net. While the frontend dev is free to use scripts
>> hosted on third-parties, the backend dev is free to add a (silly but
>> effective) means to limit the number of requests accepted from a browser.
>> Simple problem, simple solution and the spec makes it possible.
>>
>
> Are you really saying that backend developers want to use User-Agent to
> limit the number of requests accepted from Firefox?  (Not one user's
> Firefox, but all Firefox users, at least of a particular version,
> combined.)  That doesn't make sense at all.  If that's not what you mean,
> then please clarify, because I don't know any other way the User-Agent
> header could be used to limit requests.
>
> --
> Glenn Maynard
>
>
>
Received on Thursday, 11 October 2012 15:36:21 GMT

This archive was generated by hypermail 2.3.1 : Tuesday, 26 March 2013 18:49:55 GMT