Re: Design issues for access-control

Hi all,

Sorry for the late posting of this; I've been heads down in Firefox 3 
beta work the past week. I had hoped to get this to you well before 
Monday's meeting, but unfortunately it ended up being just a few hours 
before.

What I'm not thrilled about in the current spec, and I think Thomas 
touched on this in this thread, is that we're mixing server-side and 
client-side checks when authorizing non-GET requests.

On one hand we're sending both the requesting domain (in Referer-Root) 
and the requested method (in Method-Check?) to the server. This is 
enough data for the server to simply send back a yes/no reply.
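
To illustrate, a purely server-side check would only need something 
like this (a rough Python sketch; the header names are the ones from 
the current draft as I understand it, everything else, including the 
allow list, is made up):

    # Rough sketch of a server-side yes/no decision. ALLOWED and the
    # handler shape are made up for illustration.
    ALLOWED = {("http://example.org", "POST")}

    def server_decides(request_headers):
        origin = request_headers.get("Referer-Root")
        method = request_headers.get("Method-Check")
        # The server only needs to answer yes or no.
        return (origin, method) in ALLOWED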

But then we're also letting the server send back both a set of allowed 
domains (in Access-Control/<?access-control?>) and a set of allowed 
methods (in Allow). This data, too, would be enough on its own for the 
client to make a yes/no decision about whether to authorize the non-GET 
request.
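
In other words the client could make the very same decision on its 
own, roughly like this (again just a Python sketch; I'm glossing over 
the actual Access-Control/Allow syntax, so the naive comma-splitting 
below is an assumption):

    # Rough sketch of the mirror-image client-side decision, assuming
    # the response lists allowed domains and allowed methods. The
    # parsing here is deliberately naive.
    def client_decides(response_headers, requesting_domain, method):
        domains = [d.strip() for d in
                   response_headers.get("Access-Control", "").split(",")
                   if d.strip()]
        methods = [m.strip().upper() for m in
                   response_headers.get("Allow", "").split(",")
                   if m.strip()]
        return requesting_domain in domains and method.upper() in methods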

Why do we solve the problem twice?

I have heard arguments that the site might not want to broadcast who it 
is authorizing. However, this could still effectively be figured out by 
brute-force testing all interesting domains and methods directly from 
an evil server against the target server. No browser is involved; 
simply send HTTP requests containing Referer-Root and Method-Check 
headers.
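
Something along these lines would do it (illustrative Python only; the 
host names and path are made up):

    # No browser involved: send direct HTTP requests carrying guessed
    # Referer-Root/Method-Check values and look at what comes back.
    import http.client

    def probe(target_host, path, domain, method):
        conn = http.client.HTTPConnection(target_host)
        conn.request("GET", path, headers={"Referer-Root": domain,
                                           "Method-Check": method})
        response = conn.getresponse()
        response.read()
        conn.close()
        # Whatever policy the server broadcasts comes straight back.
        return (response.getheader("Access-Control"),
                response.getheader("Allow"))

    for domain in ("http://partner.example", "http://intranet.example"):
        print(domain, probe("target.example", "/data", domain, "POST"))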

Another thing that occurred to me: do HTTP caches take the full set of 
request headers into account when caching? If not, it could be directly 
harmful to include the Referer-Root and Method-Check headers. A cache 
might store an "authorize" reply when the request is made for 
Referer-Root A and then wrongly serve the same document when a check is 
made for Referer-Root B.
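
If caches only take a request header into account when the response 
explicitly names it in a Vary header, the spec would presumably have 
to require servers to send something like this (a minimal sketch):

    # Minimal sketch: tell caches to partition cached entries on the
    # two request headers from the current draft.
    def add_cache_partitioning(response_headers):
        response_headers["Vary"] = "Referer-Root, Method-Check"
        return response_headers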

/ Jonas

Anne van Kesteren wrote:
> 
> On Wed, 31 Oct 2007 16:59:43 +0100, Thomas Roessler <tlr@w3.org> wrote:
>> On 2007-10-31 16:26:38 +0100, Anne van Kesteren wrote:
>>> XMLHttpRequest POST allows more than <form> POST.
>>
>> Please elaborate.
> 
> The scenario was doing cross-site XML POST, as that might hurt SOAP 
> servers. I seem to recall someone saying this is possible with <form> 
> POST as well, though I'm not sure exactly how; but only with 
> XMLHttpRequest would the Content-Type header be an XML MIME type.
> 
> 
>>> Servers will have to deal with cross-site <form> POST, but
>>> probably don't deal with cross-site XMLHttpRequest POST. As such,
>>> XMLHttpRequest POST is not guaranteed to be as "safe" as
>>> cross-site <form> POST is.
>>
>> Please explain the differences from the perspective of the site that
>> needs to handle these requests, and explain how they are relevant
>> for the discussion at hand.
> 
> <form> POST is not relevant to the discussion at hand. XMLHttpRequest 
> POST follows the model with Method-Check, etc.
> 
> 
>>> Also, this makes it work for arbitrary method names, not just POST.
>>
>> Fair point.  One question is, then, whether cross-site XHR
>> should be limited to GET and POST.
> 
> It should not.
> 
> 
>>> Method-Check is done by the client.
>>
>> The If-Method-Allowed (or Method-Check) header is *set* by the
>> client.  That presumably happens so the server can evaluate it and
>> do something interesting.  If you don't expect any server-side
>> processing, please drop the header.
> 
> If the server gets that header it knows this is an authentication 
> request and can give an appropriate reply.
> 
> 
>>> Allow is done by the server.
>>
>> Allow is set by the server, yes, and becomes part of the client's
>> decision.  That actually adds new meaning to this header; we might
>> want to check the interaction with possible other uses.
> 
> If that's the case we can simply introduce a new header. Although 
> "simply" may be optimistic; I believe Firefox might be shipping soonish :-(
> 
> 
>>> Non-GET requests are indeed more difficult, but since non-GET is
>>> already more complicated than just sending a reply (you have to
>>> do some more "advanced" processing on the server as a result of
>>> the request) I don't see this as a problem.
>>
>> The main use case here is POST, which is deployed in existing
>> servers.  The additional header needs to be dealt with when it
>> occurs on a GET request.
>>
>> Requiring special server-side processing for an existing method
>> means a significant change in terms of deployment scenario.
> 
> Well, I assume you'd have a single resource on the server that takes 
> care of both the GET and POST responses. My point was that if the author 
> of the server is going to handle POST he/she already needs to do a 
> certain amount of coding rather than just putting a data source online. 
> Handling the additional request isn't that much more complicated, then.
> 
> 
>>>> In particular, with the current model, and currently-deployed
>>>> servers, if a GET request for a resource returns an XML document
>>>> that includes an access-control processing instruction, then any
>>>> policy included in that document will spill over to permitting POST
>>>> requests for the same resource; mitigating that requires a change to
>>>> server behavior.
>>
>>> No, because such content would not include an Allow HTTP header
>>> that allows that.
>>
>> With the currently-specified use of the Allow header, such content
>> could include that header.  See RFC 2616, section 14.7.
> 
> Ok, but that content wouldn't have Access-Control/<?access-control?>.
> 
> 
>>>> Meanwhile, we also have a Referer-Root header of which we don't
>>>> say what it is supposed to mean or do.
>>
>>> It allows you to not expose all the sites you make your content
>>> available to by just emitting the value from the Referer-Root
>>> header if you indeed allow that site.
>>
>> So this is the second HTTP header that we expect to influence the
>> result of a GET request that isn't really a GET request?
> 
> My magic eight ball says yes.
> 
> 
>>> This is what using Allow solves. It has been suggested to use a
>>> new HTTP header for that purpose in case some servers have this
>>> header by default.  Given that you also need
>>> Access-Control/<?access-control?> I'm not sure if that's really
>>> worth it, but I'm open to feedback that suggests otherwise.
>>
>> See above; the combination of Allow's current definition and the
>> processing instruction makes this a nasty trap.
> 
> As far as I can tell it's only used for OPTIONS requests, but ok. What 
> do other people think?
> 
> 
>>> I hope the above clarifies the ideas.
>>
>> It clarifies some of the ideas, but it doesn't make the current spec
>> good.
> 
> The current spec doesn't reflect the ideas.
> 
> 
>>> I also hope to find some time soonish to rewrite the draft.
>>
>> Maybe wait with that till we're through this discussion. ;-)
> 
> So far the only potential change is renaming the Allow header. But sure, 
> I can wait.
> 
> 
