Re: P3P - Feedback on Access Control

On Thu, 24 Jan 2008, Mark Nottingham wrote:
> > On Thu, 24 Jan 2008, Mark Nottingham wrote:
> > > 
> > > The heart of the issue is how policy is discovered; the current ED 
> > > uses a per-resource OPTIONS, while almost every other solution in 
> > > this space uses a well-known-location.
> > 
> > robots.txt is a per-domain policy (to prevent a host from being 
> > overwhelmed); there are per-resource ways of controlling spiders as 
> > well.
> 
> So why not take that approach? E.g., HTTP headers / PI for safe methods, 
> well-known location (or an addition to robots.txt) for unsafe methods.

Because this is a security-related API, and every additional iota of 
complexity is another opportunity for a bug. We need to keep the attack 
surface of this specification as small as humanly possible while still 
fulfilling our requirements.


> What does that really mean? Whether or not a spider can access 
> something, whether or not a privacy policy applies, and what metadata is 
> associated is also a "per-resource concern."

Right, and all three have per-resource mechanisms, just like Access 
Control should (and does).
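To make the spider analogy above concrete, here is an illustrative sketch (not from any spec; all names and rules are simplified for the example) of how a per-domain check like robots.txt differs from a per-resource signal such as an X-Robots-Tag response header:

```python
# Illustrative sketch only: contrasts a per-domain policy (robots.txt-style
# disallowed path prefixes) with a per-resource policy (a header carried on
# the individual response). Rule handling is deliberately simplified.

from urllib.parse import urlparse

def domain_allows(disallowed_prefixes, path):
    """Per-domain check: one policy file governs the whole host."""
    return not any(path.startswith(prefix) for prefix in disallowed_prefixes)

def resource_allows(response_headers):
    """Per-resource check: the answer travels with each response."""
    return "noindex" not in response_headers.get("X-Robots-Tag", "")

def may_index(disallowed_prefixes, url, response_headers):
    """A spider may index only if both levels of policy permit it."""
    path = urlparse(url).path
    return domain_allows(disallowed_prefixes, path) and resource_allows(response_headers)
```

The point of the sketch is that both mechanisms coexist: the per-domain file answers a host-wide question, while the per-resource header answers a question only the individual resource can answer.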


> > > The decision to Recommend a new mechanism for discovering policy 
> > > shouldn't be taken lightly.
> > 
> > I hardly think that HTTP headers and "OPTIONS" can be called a "new 
> > mechanism". After all, every per-resource policy mechanism uses them 
> > already!
> 
> Which other per-resource policy mechanism uses OPTIONS (discounting 
> WebDAV, which is a stretch here)?

If you want to go back to GET for what we currently use OPTIONS for, I'm 
certainly happy to do so. However, the change to OPTIONS was done upon the 
request of members of the HTTP working group, so presumably it's the right 
thing to do.


> I have no quarrel with using HTTP headers / PIs for GET responses; it's 
> the per-resource authorisation request (whether OPTIONS or GET) that is 
> problematic.

I don't understand why a per-resource authorisation request is problematic 
given that the nature of the problem is resource-specific.
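As I understand the flow under discussion, policy for safe methods rides on the response itself, while an unsafe method triggers a per-resource authorization request first. A minimal sketch of that client-side decision (function names are mine, not the spec's):

```python
# Hedged sketch of the per-resource authorization flow being debated:
# safe methods get their policy from the response they already made,
# while non-GET methods cost an extra OPTIONS request per resource.

SAFE_METHODS = {"GET", "HEAD"}

def requests_needed(method, url):
    """Return the sequence of HTTP requests a client would issue."""
    if method in SAFE_METHODS:
        # Policy arrives in headers / PIs on the normal response.
        return [(method, url)]
    # Per-resource authorization request precedes the actual request.
    return [("OPTIONS", url), (method, url)]
```

This is what makes the dispute resource-specific: the extra request is incurred once per distinct URI, not once per host.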


> [unresolved issues are:]
> * Inability to cache OPTIONS, and the resulting problems for scaling 
> this mechanism by caching policy in anything but the client

While I agree that the spec doesn't do what you are requesting here, I 
don't accept that your feedback hasn't been taken into account. It's just 
that not everyone agrees with you.
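Since the bullet above is about where caching can happen at all, here is a minimal sketch (names and structure are illustrative assumptions, not from the spec) of the only cache the design permits — one held by the client itself, keyed per URL and bounded by a max-age:

```python
# Minimal sketch of client-side caching of per-resource authorization
# results. Under the design being discussed, intermediaries never see a
# cacheable representation, so this per-client cache is the only option;
# every miss or expiry costs another OPTIONS round trip.

import time

class PolicyCache:
    def __init__(self):
        self._entries = {}  # url -> (allowed, expires_at)

    def store(self, url, allowed, max_age, now=None):
        now = time.time() if now is None else now
        self._entries[url] = (allowed, now + max_age)

    def lookup(self, url, now=None):
        """Return the cached decision, or None if absent/expired."""
        now = time.time() if now is None else now
        entry = self._entries.get(url)
        if entry is None or entry[1] <= now:
            return None
        return entry[0]
```

Note that nothing here helps a second client, or a shared intermediary, which is precisely the scaling concern being raised.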


> * per-resource OPTIONS requests are too chatty, don't scale to large 
> numbers of resources, eventually causing developers to come up with 
> workarounds such as boxcarring messages

See my comments below on this topic, but basically, again, I don't think 
your feedback has been ignored; it's just that not everyone agrees with 
you. Sometimes specs get conflicting feedback, and it's then not possible 
to make the spec agree with everyone. There are aspects of the spec that I 
don't agree with either, for instance. However, that's different from the 
feedback being ignored.


> * Access-Control syntax is still suboptimal

As far as I'm aware, Anne has fixed all the issues that were raised on the 
syntax; if you have any specific concerns, I'd recommend re-raising them.



On Thu, 24 Jan 2008, Close, Tyler J. wrote:
> > >
> > > I think Mark raises an important point here. Anne's response that 
> > > the authorization request can be cached does not mitigate this 
> > > performance problem, since the application may only issue a single 
> > > request to a series of distinct resources.
> >
> > This only applies when you're doing many non-GET requests. Can you 
> > describe a case in which you'd be doing that enough that the extra 
> > round trips would matter?
> 
> I dispute your implied argument that it should be up to me to disprove 
> this assumption, rather than up to you to substantiate it, but I'll list 
> some plausible use cases anyway.

I don't see how I could prove the lack of something.


> This kind of web interaction is likely in any application that populates 
> a URI namespace operated by a server, such as:
> 
> 1. any web application that uses the Atom Publishing Protocol

As far as I can tell, the number of distinct resources that will regularly 
receive requests with atom-pub is small, and the frequency with which a 
client will send requests to different resources is also small. I don't 
see that the extra round trip per URI is a problem here.


> 2. a web application that puts a new GUI on another web application, 
> such as skinning an auction site, or email application
>
> 3. a content authoring web application that stores user created content 
> in a data store provided by another web application, such as one 
> operated by Amazon.

Again, the number of URIs involved in this kind of case is limited, so I 
don't find the extra round trip per resource especially worrying.
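The disagreement above is partly quantitative, so a back-of-the-envelope count may help. Assuming N non-GET requests to N distinct resources with cold caches (my assumptions, not either party's figures):

```python
# Rough request counts for N non-GET requests to N distinct resources,
# comparing the two discovery approaches in this thread. Cold caches
# assumed; these are illustrative formulas, not measurements.

def per_resource_options(n):
    """One OPTIONS authorization request plus the actual request, per URI."""
    return 2 * n

def well_known_location(n):
    """One policy-document fetch for the host, then the actual requests."""
    return n + 1
```

Whether doubling N matters then comes down to how large N actually is in practice — which is exactly where the two sides of this thread differ.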


> Are you suggesting that the recommendation document add to its list of 
> assumptions one stating that web applications don't do lots of non-GET 
> requests to distinct resources?

I don't think this needs to be listed explicitly, but I do think it is an 
accurate portrayal of likely use cases for APIs using this mechanism.

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Received on Friday, 25 January 2008 02:11:51 UTC