Re: [XHR] Open issue: allow setting User-Agent?

From: Glenn Maynard <glenn@zewt.org>
Date: Thu, 11 Oct 2012 09:22:37 -0500
Message-ID: <CABirCh-J+1f+braM8ZCRTy4JCxmGgxeHQxZxbYCGoCXPCCGyCQ@mail.gmail.com>
To: Julian Aubourg <j@ubourg.net>
Cc: public-webapps@w3.org
On Thu, Oct 11, 2012 at 8:09 AM, Julian Aubourg <j@ubourg.net> wrote:
>
> > I still don't fully understand the scenario(s) you have in mind.
>
> You're confusing the script's origin with the site's origin. XHR requests
> from within a script are issued with the origin of the page that the script
> is included in.
>
> Now, read back your example, but suppose the attack is pulled against
> cnn.com. At a given time (say cnn.com's peak usage time), the script
> issues a gazillion requests. Bye-bye server.
>

I'm confused.  What does this have to do with unblacklisting the User-Agent
header?

> That's why I took the ad example. Hack a single point of failure (the ad
> server, a CDN) and you can DoS a site using the resource from network
> points all over the net. While the frontend dev is free to use scripts
> hosted on third-parties, the backend dev is free to add a (silly but
> effective) means to limit the number of requests accepted from a browser.
> Simple problem, simple solution and the spec makes it possible.
>

Are you really saying that backend developers want to use User-Agent to
limit the number of requests accepted from Firefox?  (Not one user's
Firefox, but all Firefox users, at least of a particular version,
combined.)  That doesn't make sense at all.  If that's not what you mean,
then please clarify, because I don't know any other way the User-Agent
header could be used to limit requests.
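For concreteness, the kind of per-User-Agent counter Julian seems to be describing might look like the hypothetical sketch below (the class name, threshold, and User-Agent string are all illustrative, not from the thread):

```python
from collections import defaultdict

class UserAgentLimiter:
    """Hypothetical 'silly but effective' backend limit: count requests
    per User-Agent value and reject once a threshold is exceeded."""

    def __init__(self, max_requests=100):
        self.max_requests = max_requests
        self.counts = defaultdict(int)  # User-Agent string -> request count

    def allow(self, user_agent):
        """Return True if a request with this User-Agent is accepted."""
        self.counts[user_agent] += 1
        return self.counts[user_agent] <= self.max_requests

limiter = UserAgentLimiter(max_requests=2)
ua = "Mozilla/5.0 (X11; Linux x86_64) Firefox/16.0"
print(limiter.allow(ua))  # first request from this User-Agent is accepted
```

Note that such a counter is keyed on the full User-Agent string, so it lumps together every user of a given browser version, which is exactly the objection above.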

-- 
Glenn Maynard
Received on Thursday, 11 October 2012 14:23:09 GMT
