Design issues for access-control (Re: Lightning talk about access-control)

On 2007-10-25 11:45:17 +0200, Anne van Kesteren wrote:

> Could you guys give an update on this? I want to start working on
> the new proposal, but I want some amount of certainty I don't
> have to redo it again soon after.

Sorry for being slow on this.  These comments are against the latest
Working Draft:

  http://www.w3.org/TR/2007/WD-access-control-20071001/

Stepping back a bit, I think there are at least two design decisions
that we have been dancing around for a while, and that are worth
looking at explicitly as such.


1. What's our protection goal?

There are at least two goals that we're mixing:

  (a) Prevent leakage of the data that have been returned, and
  authorize certain data flows.

  (b) Prevent unauthorized requests from occurring; authorize certain
  requests.

Protection goal (a) is the one that current environments by and
large uphold, and whose violation is considered a serious flaw.

The fact that protection goal (b) isn't achieved on the web today is
a source of endless troubles for web application developers; it's
the source of cross-site request forgeries.

For GET, at least, only (a) is our goal.  We don't care about (b),
since GET requests are all over the place.

For POST and friends, both (a) and (b) seem to be our goals.  The
current processing model achieves (a), and attempts to achieve (b)
as far as XMLHttpRequest is concerned.

However, XHR is hardly the only source of unauthorized POST requests
-- the submit() method is another one -- so there is a question here
of to what extent we're perpetuating a safety fiction for no
particular benefit.  Or are we going out on a crusade to fix
submit()?
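For illustration, this is the kind of unauthorized POST that goal
(b) would have to prevent, but that the current model leaves
untouched -- a cross-site form submission needs no XHR at all
(hostnames made up):

```html
<!-- Hosted on attacker.example.org; fires a POST at the victim
     with no user interaction and no XMLHttpRequest. -->
<form id="f" method="post" action="http://victim.example.com/transfer">
  <input type="hidden" name="to" value="mallory">
  <input type="hidden" name="amount" value="1000">
</form>
<script>
  // submit() issues the cross-site POST, cookies and all.
  document.getElementById("f").submit();
</script>
```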


2. What are our deployment scenarios?

The current specification on the one hand aims at being lightweight
on the server (therefore, the processing instruction; therefore, the
deny clause); on the other hand, the processing model for non-GET
requests involves a new HTTP header (Method-Check or whatever it's
called today) which conveys a critical piece of information in an
initial GET request.  That's just plain inconsistent.
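As I read the draft, the non-GET dance looks roughly like the
exchange below; I'm writing the header syntax from memory, so treat
the details as illustrative:

```
GET /resource HTTP/1.1
Host: service.example.com
Method-Check: POST
Referer-Root: http://requester.example.org

HTTP/1.1 200 OK
Access-Control: allow <requester.example.org>
```

Note that the Access-Control response header in the second half is
exactly the server-side change the processing instruction was
supposed to make unnecessary.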

In particular, with the current model, and currently-deployed
servers, if a GET request for a resource returns an XML document
that includes an access-control processing instruction, then any
policy included in that document will spill over to permitting POST
requests for the same resource; mitigating that requires a change to
server behavior.
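Concretely, imagine a currently-deployed server that returns this
document for GET (PI syntax illustrative, content made up):

```xml
<?xml version="1.0"?>
<?access-control allow="requester.example.org"?>
<status>
  <!-- The operator meant only to permit cross-site *reading*. -->
  <balance>42</balance>
</status>
```

Under the current processing model, that allow rule would also end
up authorizing POST requests to the same URI, even though the
operator never said anything about POST.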

Meanwhile, we also have a Referer-Root header, and we don't say
anywhere what it is supposed to mean or do.

So, do we aim at minimum deployment impact on the server side,
leaving all interpretation to the client?  If that's the case, and
if we opt to cover protection goal (b) above, then the language
would need to be extended to include per-method policies.
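To be clear, nothing like this exists in the current draft; a
per-method extension to the policy language might look something
like the following purely hypothetical syntax:

```
Access-Control: allow <requester.example.org> methods GET POST
```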

Or do we assume that a certain amount of lightweight server-side
modification is in order, and that the access-control header is the
vehicle of choice?  If that's the case, I'd advocate dropping the
processing instruction and the deny clause along with it.  If (b) is
a goal, I'd also advocate including per-method policies in the
language, and to drop the Method-Check (or whatever it's called
today) header.

(Incidentally, this choice also affects how much additional
exposure to cross-site scripting vulnerabilities we take on.)

I'm looking forward to talking more about this at TPAC.

Cheers,
-- 
Thomas Roessler, W3C  <tlr@w3.org>

Received on Wednesday, 31 October 2007 13:07:40 UTC