
Re: [AC] Access Control Algorithm

From: Jonas Sicking <jonas@sicking.cc>
Date: Thu, 03 May 2007 06:33:12 -0700
Message-ID: <4639E498.1030900@sicking.cc>
To: Anne van Kesteren <annevk@opera.com>
Cc: "WAF WG (public)" <public-appformats@w3.org>

Anne van Kesteren wrote:
> On Thu, 03 May 2007 13:24:01 +0200, Jonas Sicking <jonas@sicking.cc> wrote:
>> I know, but I propose we change that, since with the current 
>> algorithm it is hard to see what results it produces, as you 
>> described in the initial mail in this thread.
> With the algorithm you are proposing now that is true as well, fwiw. 

What is true?

> Because even though it can say deny= in the processing instruction that 
> isn't actually true for same-origin requests for instance.

That is going to be true for any solution we are building, so I don't 
see how that is an argument for or against any algorithm.

> And for non 
> same-origin requests the default is deny. Therefore the allow / exclude 
> mechanism makes sense.

Just using allow/exclude will not cater for all the use cases I brought 
up in my initial proposal, i.e. being able to use headers to deny access 
to all files from all remote servers, or from a set of them.

> It also caters for:
>   allow <*.example.org> exclude <*.public.example.org>
>   allow <webmaster.public.example.org>
> I'm not really convinced we should throw that away in favor of deny=.

Throw what away?

I'm getting very confused about what the arguments for each side are at 
this point, actually :)

The things I don't like about the original proposal you made in this 
thread are:

1. Given:
    allow <*.bar.com> exclude <foo.bar.com>
    allow <*.bar.com>

It is very easy, IMHO, to misinterpret that as meaning that foo.bar.com 
should not get access, even though it does.
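To make the surprise concrete, here is a minimal sketch of one plausible reading of the allow/exclude semantics being discussed: a host is granted access if any rule's allow pattern matches it and that same rule's exclude pattern (if present) does not. The function and pattern-matching choice (`fnmatch`) are illustrative assumptions, not the spec's actual processing model.

```python
import fnmatch


def access_granted(host, rules):
    """Hypothetical allow/exclude evaluation: access is granted if any
    rule's allow pattern matches the host and that rule's exclude
    pattern (if any) does not. Not the actual spec algorithm."""
    for allow, exclude in rules:
        if fnmatch.fnmatch(host, allow):
            if exclude is None or not fnmatch.fnmatch(host, exclude):
                return True
    return False


rules = [
    ("*.bar.com", "foo.bar.com"),  # allow <*.bar.com> exclude <foo.bar.com>
    ("*.bar.com", None),           # allow <*.bar.com>
]
print(access_granted("foo.bar.com", rules))  # True: the second rule matches
```

Under this reading the first rule excludes foo.bar.com, but the second rule has no exclude and matches, so access is granted anyway, which is exactly the result a casual reader is likely to get wrong.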

2. It does not allow a server administrator to block access from all 
remote servers to all files on the server. Quoting my mail to Thomas 

"However, once I deploy access control PIs/headers they allow other 
sites to read data from my server. But if I then realize that I have put 
erroneous access control information in my files, for example not having 
restrictive enough deny/exclude lists, or putting the PIs in too many 
files, it would be very useful to immediately be able to block evil.com 
or any other site from reading any of the files on the server.

Another scenario is a server administrator for a server behind a 
corporate firewall who wants to make sure that no data is accidentally 
leaked, even though the employees are responsible for putting files on 
the server. The administrator could then add an access control header 
that denies all external servers from reading any data."

It would be great if you could specify in detail what you don't like 
about my proposal. A short summary of it below:

Have "allow", "deny" and "default". There is no "exclude". Order is 
important. If the headers say "deny", immediately deny. If the headers 
say "allow" or "default", check the PIs. If the PIs say "deny", deny. If 
the PIs say "allow", allow. If the PIs say nothing and the headers said 
"allow", allow. Otherwise deny.
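The steps above can be sketched as a small decision function. This is only an illustration of the summary in this mail: each source (headers, PIs) is modeled as having already resolved to a single directive, with `None` meaning that source says nothing; the function name and this simplification are my own assumptions.

```python
def decide(header, pi):
    """Combine a header directive and a PI directive per the proposal
    summarized above. Each argument is "allow", "deny", "default", or
    None (the source says nothing). Sketch only, not spec text."""
    if header == "deny":
        return "deny"       # headers saying deny win immediately
    if pi == "deny":
        return "deny"       # otherwise the PIs are consulted
    if pi == "allow":
        return "allow"
    # PIs say nothing (or "default"): fall back to the headers
    if header == "allow":
        return "allow"
    return "deny"           # the overall default is deny


print(decide("deny", "allow"))    # deny: header deny is final
print(decide("allow", None))      # allow: PIs silent, header allows
print(decide("default", None))    # deny: nothing granted access
```

Note that a PI saying "default" behaves exactly like a PI saying nothing here, which matches the point below that "default" in PIs is useless but consistent.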

Whether or not we allow "default" in PIs doesn't really matter to me. In 
the end it is useless there, but it would be consistent.

/ Jonas
Received on Thursday, 3 May 2007 13:33:27 UTC
