Re: [access-control] non-GET threat model and authorization choreography

On Tue, 16 Oct 2007, Bjoern Hoehrmann wrote:
> >
> >Method-Check is needed to distinguish invocations intended to just 
> >check whether a POST (or whatever) is safe, from invocations intended 
> >to get the actual resource. If a resource would do a lot of work on a 
> >GET normally, we don't want it to have to do a lot of work when the UA 
> >is going to ignore all but the headers and the prologue.
> 
> Why can't you use Referer-Root for this purpose?

Referer-Root gets sent with every request; how would it be used to 
distinguish the two?
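To make the distinction concrete, here is a rough sketch of the two 
request shapes as I understand them. The header names ("Method-Check", 
"Referer-Root") are taken from this thread and should be treated as 
assumptions, not final spec syntax:

```python
# Sketch of the two cross-site request shapes under discussion.
# Header names ("Method-Check", "Referer-Root") come from this
# thread and are assumptions, not a definitive spec syntax.

def preflight_headers(origin, method):
    """Authorization-check request: the UA only wants the policy,
    so the server should not do the real work of a GET."""
    return {
        "Referer-Root": origin,
        "Method-Check": method,   # e.g. the "POST" being probed
    }

def actual_headers(origin):
    """The real request: Referer-Root alone cannot signal that this
    is only a check, which is the point above."""
    return {"Referer-Root": origin}

pf = preflight_headers("https://example.org", "POST")
rq = actual_headers("https://example.org")
# Both carry Referer-Root; only the check carries Method-Check.
```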


> Clearly though this will lead to authors sending the Allow header only 
> if you specify the Method-Check or Referer-Root header in the request, 
> so you would have to deal with a number of caching problems to get this 
> right.

Could you elaborate on this? Some examples might be helpful as I'm not 
sure I follow. I thought the proposal was to have a separate cache 
(non-HTTP) for the pre-flight test requests.
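For what it's worth, such a separate cache could be as simple as a map 
keyed on (requesting origin, target URI); the structure below is 
entirely hypothetical, since the draft would have to define the actual 
keys and expiry rules:

```python
# Minimal sketch of a non-HTTP cache for pre-flight results, keyed
# on (requesting origin, target URI). Hypothetical structure; the
# draft would define the real keys and expiry semantics.
import time

class PreflightCache:
    def __init__(self):
        self._entries = {}  # (origin, uri) -> (allowed_methods, expiry)

    def store(self, origin, uri, allowed_methods, max_age=300):
        self._entries[(origin, uri)] = (set(allowed_methods),
                                        time.monotonic() + max_age)

    def allows(self, origin, uri, method):
        """True/False from a fresh cached check; None means no fresh
        result, so the UA must issue a new pre-flight request."""
        entry = self._entries.get((origin, uri))
        if entry is None or time.monotonic() > entry[1]:
            return None
        return method in entry[0]

cache = PreflightCache()
cache.store("https://a.example", "https://b.example/api", ["GET", "POST"])
```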


> You'd be better off if, say, you specify a redirect to the resource 
> you'd want to send out for GET requests (but see below), or if you use 
> OPTIONS.

Redirects seem inefficient, requiring two network accesses and 
corresponding access checks, and are also far more prone to 
inconsistencies due to the extra checks. It's unclear to me why you think 
this would make us better off.


> >We can't use OPTIONS because Apache returns
> >
> >   Allow: GET,HEAD,POST,OPTIONS,TRACE
> >
> >...by default, which would basically mean that out of the box, any 
> >resource that supports cross-site GET would automatically support 
> >cross-site POST.
> 
> My understanding is that you don't think the check is necessary for 
> POST, so this issue would seem moot?

It has been pointed out that there are attack vectors I had not considered 
with POST, so my initial optimism regarding POST was misplaced. In any case 
there are other methods at issue here as well.


> You are also mistaken about the default; Apache does this only for 
> static files, and sending a POST in that case would have no effect.

Actually the "Allow" line above is directly copied and pasted from the 
response headers sent in response to an OPTIONS to this URI:

   http://software.hixie.ch/utilities/cgi/test-tools/echo

...which is a CGI script.


> If you want to handle the posted data, you would use methods like CGI 
> and PHP scripts to do that, and in that case Apache won't send any Allow 
> header on its own, so you might get incorrect information in some cases, 
> but nothing bad will happen.

My experiments suggest otherwise.


> >Also, OPTIONS doesn't return a body, which is useful to authors who 
> >want to include the cross-domain rights in XML PIs rather than HTTP 
> >headers.
> 
> I am not sure why OPTIONS does not return a body? Some servers might not 
> send one in some cases, but e.g. if you are using Apache and the PHP 
> module, you'll actually have to stop it from sending a body. HTTP 
> certainly allows sending a body. But I don't see the point either way, 
> you have to set the Allow header correctly, so you can use the header.

Again, my experiments with the script cited above indicate that no body is 
returned.

It is possible that with work, I could change Apache's behaviour. However, 
I do not think the bar should be put higher than writing a simple script 
and being done with it. In general I think we want to design our APIs to 
be easy to write for, but in _particular_ I think it is absolutely 
imperative that we make it as trivial as possible to get security APIs 
correct even when the author is trying to shoot himself in the foot.


> Why then would you use the processing instruction instead, and why is 
> providing this convenience more important than ensuring that the check 
> completes quickly and wastes few resources? It seems the draft considers 
> it out of scope to define how you'd deal with redirects for the
> GET request, so we'd have per-specification rules for that. Why does 
> this convenience option warrant the additional complexity?

We certainly shouldn't be leaving redirects out of scope. Could you 
elaborate on what it is that the spec leaves undefined, so it can be 
fixed?

The XML PIs are an important convenience because they work in all other 
cases for this API and consistency here is important (consistency in 
security APIs is critical -- anything surprising in security APIs will 
almost always lead to security holes).
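For reference, reading such a policy out of the document prologue is a 
small amount of code. The PI name and pseudo-attribute below 
("access-control", allow=...) are my assumptions about the draft's 
syntax, not a definitive form:

```python
# Sketch: reading a cross-domain policy from an XML processing
# instruction in the document prologue, as an alternative to HTTP
# headers. The PI name and pseudo-attribute ("access-control",
# allow=...) are assumptions, not a definitive syntax.
from xml.dom import minidom

def extract_access_control_pi(xml_text):
    """Return the data of the first access-control PI, or None."""
    doc = minidom.parseString(xml_text)
    for node in doc.childNodes:
        if (node.nodeType == node.PROCESSING_INSTRUCTION_NODE
                and node.target == "access-control"):
            return node.data
    return None

policy = extract_access_control_pi(
    '<?access-control allow="example.org"?><payload/>')
# policy now holds the PI's pseudo-attribute string
```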


> >We need the server to have access to the source's "origin" 
> >(scheme-host- port) so that if the host has an ACL that is longer than 
> >conveniently transmittable via headers (or, for that matter, if the 
> >list is somewhat sensitive, like a list of paying customers) the host 
> >doesn't have to send the entire list each time, and can instead just 
> >check the header for which host is being tested, just returning that 
> >one.
> 
> >Passing the origin information in the URI on a per-origin basis would 
> >be technically possible but has much higher costs in terms of author 
> >education (instead of being able to copy-and-paste code from site to 
> >site, authors would have to actually change each URI being fetched). 
> >This doesn't scale well in environments like common libraries using 
> >this kind of feature.
> 
> You can do all of this using the Referer header and other methods (like
> passing origin information in the ultimate request).

The Referer header, as previously indicated, cannot be used because it is 
stripped in HTTPS->HTTP requests. It also cannot be used because, due to 
its potential inclusion of private data in path components, it is often 
stripped by "privacy" proxies.

Sending the information in the "ultimate request" seems like it would make 
it somewhat difficult for the original request to have the information, 
which is required, as I explained in my last e-mail.


> This has a certain convenience factor and success rate. What I don't 
> understand is when you need something slightly better and why disclosing 
> this information needs to be the default.

The server needs the origin information so that it can include it in the 
access-control headers of the first response in the case where it cannot 
practically send all possible allowed origins in its first request.
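Concretely: rather than enumerating every allowed origin, the server 
checks the origin named in the request and echoes just that one back. 
A hedged sketch, in which the request and response header names are my 
assumptions:

```python
# Sketch of the per-origin ACL check described above: the server
# holds a long (possibly sensitive) list of allowed origins and,
# instead of sending the whole list, checks the origin the request
# names and grants only that one. Header names are assumptions.

ALLOWED_ORIGINS = {          # e.g. a long list of paying customers
    "https://customer-a.example",
    "https://customer-b.example",
}

def access_control_response_headers(request_headers):
    origin = request_headers.get("Referer-Root")
    if origin in ALLOWED_ORIGINS:
        # Echo back just the one origin being tested, not the list.
        return {"Access-Control": "allow <%s>" % origin}
    return {}  # no grant: the UA must not expose the response

hdrs = access_control_response_headers(
    {"Referer-Root": "https://customer-a.example"})
```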


> The service provider will always have some amount of unwelcome requests 
> it has no choice but to service (automated requests spoofing the Referer 
> header, for example)

We are only concerned with stopping cross-origin client-side XSS attacks 
on a third party and cross-network-boundary attacks using a client as a 
bridge into an intranet for information smuggling, so server-to-server 
two-party scenarios on the public Web are out of scope (as you point out, 
any request can be sent in such a situation).


> so you might as well deal with Referer-less requests as if they had an 
> accepted Referer.

It would seem highly undesirable and the height of irony to make all 
financial information of users who have privacy-protecting proxies 
available to all sites.


> That embedding the service fails for certain sites for all clients 
> sending a proper Referer would seem to be a sufficient deterrent.

I don't understand what you mean by this sentence.

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Received on Tuesday, 16 October 2007 09:46:58 UTC