
Re: Design issues for access-control

From: Anne van Kesteren <annevk@opera.com>
Date: Wed, 31 Oct 2007 16:26:38 +0100
To: "Thomas Roessler" <tlr@w3.org>
Cc: "WAF WG (public)" <public-appformats@w3.org>
Message-ID: <op.t02iiojb64w2qv@annevk-t60.oslo.opera.com>

Thanks for the reply. I'll address some points below.

On Wed, 31 Oct 2007 14:07:31 +0100, Thomas Roessler <tlr@w3.org> wrote:
> However, XHR is hardly the only source of unauthorized POST requests
> -- the submit() method is another one --, so there is a question
> here to what extent we're perpetuating a safety fiction for no
> particular benefit.  Or are we going out on a crusade to fix
> submit()?

XMLHttpRequest POST allows more than <form> POST does: arbitrary request
headers and arbitrary entity bodies, for instance. Servers already have to
deal with cross-site <form> POST, but probably do not deal with cross-site
XMLHttpRequest POST. As such, cross-site XMLHttpRequest POST is not
guaranteed to be as "safe" as cross-site <form> POST is.

Also, this approach makes the mechanism work for arbitrary method names, not
just POST.
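To make the distinction concrete, here is a small sketch (mine, not from the
draft) that classifies whether a cross-site request could already be produced
by an ordinary <form> submission today; anything outside that set is a new
capability that cross-site XMLHttpRequest would add and therefore needs
explicit server opt-in. The function name and sets are illustrative.

```python
# Methods a plain <form> can use cross-site today, with no opt-in at all.
FORM_METHODS = {"GET", "POST"}

# Content types a <form> POST can produce.
FORM_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def form_could_send(method, content_type=None, custom_headers=()):
    """Return True if an ordinary <form> could already make this request."""
    if method.upper() not in FORM_METHODS:
        return False  # e.g. PUT, DELETE: only XMLHttpRequest can send these
    if custom_headers:
        return False  # a <form> cannot set arbitrary request headers
    if method.upper() == "POST" and content_type not in FORM_CONTENT_TYPES:
        return False  # e.g. application/xml bodies: XMLHttpRequest-only
    return True
```

Requests for which this returns False are exactly the ones the server cannot
be assumed to handle safely without a new authorization mechanism.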


> 2. What are our deployment scenarios?
>
> The current specification on the one hand aims at being lightweight
> on the server (therefore, the processing instruction; therefore, the
> deny clause); on the other hand, the processing model for non-GET
> requests involves a new HTTP header (Method-Check or whatever it's
> called today) which conveys a critical piece of information in an
> initial GET request.  That's just plain inconsistent.

Method-Check is sent by the client; Allow is sent by the server. Non-GET
requests are indeed more difficult, but since handling non-GET is already
more complicated than just sending a reply (the server has to do some more
"advanced" processing as a result of the request), I don't see this as a
problem.
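As I understand the model under discussion, the client announces the non-GET
method it wants to use (via Method-Check on an initial GET) and only proceeds
if the server's Allow response header lists that method. A minimal sketch of
the client-side half of that check, with an illustrative function name:

```python
def client_may_send(desired_method, allow_header):
    """Client-side check: is the desired method listed in the server's
    Allow response header (a comma-separated list of method names)?"""
    allowed = {m.strip().upper() for m in allow_header.split(",") if m.strip()}
    return desired_method.upper() in allowed

# e.g. after the initial GET returns "Allow: GET, POST", the client may
# proceed with a cross-site POST but not with a DELETE.
```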


> In particular, with the current model, and currently-deployed
> servers, if a GET request for a resource returns an XML document
> that includes an access-control processing instruction, then any
> policy included in that document will spill over to permitting POST
> requests for the same resource; mitigating that requires a change to
> server behavior.

No, because such content would not be accompanied by an Allow HTTP header
that permits POST, so the policy in the document would not spill over to
non-GET requests.


> Meanwhile, we also have a Referer-Root header of which we don't say
> what it is supposed to mean or do.

It allows you to avoid exposing the full list of sites you make your content
available to: the server can simply echo the value of the Referer-Root
header in its policy if it does indeed allow that site.
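The echoing trick can be sketched as follows (my illustration, not normative:
the exact policy syntax in the draft is not shown here, only which single
origin the server would emit). The names are hypothetical.

```python
# The server's full allow list stays private on the server side.
PRIVATE_ALLOW_LIST = {"http://example.org", "http://example.com"}

def policy_origin_to_emit(referer_root):
    """Decide what to echo back for this request: only the requesting
    site when it is allowed, and nothing at all otherwise."""
    if referer_root in PRIVATE_ALLOW_LIST:
        return referer_root  # echo just this one origin in the policy
    return None              # emit no policy; the rest of the list stays hidden
```

A denied requester thus learns nothing about which other sites are allowed.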


> So, do we aim at minimum deployment impact on the server side,
> leaving all interpretation to the client?  If that's the case, and
> if we opt to cover protection goal (b) above, then the language
> would need to be extended to include per-method policies.

This is what using Allow solves. It has been suggested to use a new HTTP
header for that purpose, in case some servers emit an Allow header by
default. Given that you also need Access-Control/<?access-control?> anyway,
I'm not sure whether that's really worth it, but I'm open to feedback that
suggests otherwise.
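The concern about default Allow headers can be illustrated as follows (my
sketch; the example default header is hypothetical): if a server already
emits Allow as a matter of course, reusing it for cross-site authorization
could opt that server in unintentionally, whereas a purpose-specific header
could never pre-exist.

```python
# Hypothetical server that emits an Allow header by default, without ever
# having considered cross-site requests.
DEFAULT_RESPONSE_HEADERS = {"Allow": "GET, POST, OPTIONS"}

def unintentionally_opted_in(response_headers, method):
    """Would a client treat this method as authorized, based only on a
    header the server may be sending for unrelated reasons?"""
    allow = response_headers.get("Allow", "")
    return method.upper() in {m.strip().upper() for m in allow.split(",")}
```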


> Or do we assume that a certain amount of lightweight server-side
> modification is in order, and that the access-control header is the
> vehicle of choice?  If that's the case, I'd advocate dropping the
> processing instruction and the deny clause along with it.  If (b) is
> a goal, I'd also advocate to include per-method policies in the
> language, and to drop the Method-Check (or whatever it's called
> today) header.
>
> (Incidentally, this choice has a side effect on the additional
> exposure to some cross-site scripting vulnerabilities.)
>
> I'm looking forward to talking more about this at TPAC.

I hope the above clarifies the ideas. I also hope to find some time  
soonish to rewrite the draft.


-- 
Anne van Kesteren
<http://annevankesteren.nl/>
<http://www.opera.com/>
Received on Wednesday, 31 October 2007 15:26:39 GMT
