Re: [cors] Review

On Tue, Jun 16, 2009 at 8:09 PM, Mark Nottingham <mnot@mnot.net> wrote:
> On 17/06/2009, at 12:27 AM, Anne van Kesteren wrote:
>>>
>>> Indeed. Were other approaches to applying policy to multiple URLs
>>> considered?
>>
>> I and others have certainly tried to think of ways to make this work, but
>> so far we have not found anything satisfactory.
>
> If you rule out the well-known location solution, I agree it's a very hard
> problem to solve.

One solution is:

1. Don't add any client credentials to requests.
2. Allow the script to use whatever HTTP method, headers, and request
entity it wants, while restricting a small set of sensitive headers,
such as Referer.

This leaves vulnerable those resources that rely solely on a firewall
for authentication. There are a number of good heuristics the browser
can apply to determine which resources are behind the firewall. For
example: if an HTTP proxy is configured, did the request go through
the proxy, or directly to the host server? Does the server have a
non-routable IP address? Since these are resources living behind a
firewall, there is also likely a corporate IT department that controls
the distribution and configuration of client software, and so can
apply appropriate client configuration.

3. In the default configuration, prohibit cross-domain requests to
resources suspected of being behind the firewall. Provide good
configuration options for the corporate IT department to define the
set of restricted servers.
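As a rough illustration of the non-routable-address heuristic above (this
sketch is mine, not from the original proposal, and the function names are
hypothetical), a client could check whether a target host resolves only to
addresses outside the publicly routable space, using Python's standard
ipaddress module:

```python
# Sketch of the "non-routable IP address" heuristic: treat a host as
# suspected of being behind the firewall if none of its addresses are
# publicly routable (RFC 1918 ranges, loopback, link-local, etc.).
# Function names here are illustrative, not part of any specification.
import ipaddress
import socket


def non_routable(ip: str) -> bool:
    """True if the address is not publicly routable (private, loopback, ...)."""
    return not ipaddress.ip_address(ip).is_global


def suspected_behind_firewall(host: str) -> bool:
    """True if every address the host resolves to is non-routable."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # unresolvable; other heuristics would have to apply
    return all(non_routable(info[4][0]) for info in infos)


print(suspected_behind_firewall("localhost"))  # loopback only, so True
```

In a real deployment this check would be only one signal among several
(proxy routing being another), combined with the IT-department configuration
described in point 3.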

People have also been worried about distributed password guessing
attacks. There are other, better solutions to this problem than
restricting use of cross-domain requests.

With this approach, it should be possible to protect all existing
resources, while not impairing the use of HTTP and webarch.

--Tyler

-- 
"Waterken News: Capability security on the Web"
http://waterken.sourceforge.net/recent.html

Received on Wednesday, 17 June 2009 05:42:20 UTC