RE: Access Control Draft

2 questions and an idea.

1)  Isn't LDAP the standard (or about to become the de facto standard) for verifying who an agent is? Shouldn't we use LDAP (call it X.500), especially in conjunction with X.509, for this task?
2)  Isn't the cookie methodology the way for clients to tell servers "more about themselves" (in the broad sense)?

I think we should leverage these two existing technologies: a client would obtain an X.500 cookie from an X.500/LDAP server and then pass that cookie, as identification credentials, to any server lying within the domain of the X.500 server. Even if we don't restrict the access control model to this approach, it should at least allow for it to be implemented this way.
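To make the proposal concrete, here is a minimal sketch of the flow being suggested, in Python. Everything here is illustrative and assumed, not part of any existing protocol: the directory server is modeled as a function that issues an HMAC-signed "cookie" binding the client's distinguished name, and any server sharing the domain secret can verify it without contacting the directory again.

```python
import hashlib
import hmac

# Assumed for illustration: a secret shared by all servers in the
# X.500 server's domain. Real deployments would use public-key
# mechanisms (e.g. X.509), not a shared symmetric key.
DOMAIN_SECRET = b"shared-by-all-servers-in-the-x500-domain"

def issue_cookie(distinguished_name: str) -> str:
    """Directory server: bind the client's DN into a verifiable token."""
    mac = hmac.new(DOMAIN_SECRET, distinguished_name.encode(), hashlib.sha256)
    return f"{distinguished_name}|{mac.hexdigest()}"

def verify_cookie(cookie: str):
    """Any server in the domain: check the token, recover the DN (or None)."""
    dn, _, tag = cookie.rpartition("|")
    expected = hmac.new(DOMAIN_SECRET, dn.encode(), hashlib.sha256).hexdigest()
    return dn if hmac.compare_digest(tag, expected) else None

cookie = issue_cookie("cn=Dylan,o=Example Corp,c=AU")
assert verify_cookie(cookie) == "cn=Dylan,o=Example Corp,c=AU"
assert verify_cookie(cookie + "tampered") is None
```

The point of the sketch is only the shape of the exchange: authenticate once against the directory, then present the resulting credential to every server in the domain.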

Cheers
Dylan

----------
From:  Gregory J. Woodhouse[SMTP:gjw@wnetc.com]
Sent:  Thursday, 15 May 1997 19:09
To:  Jon Radoff
Cc:  w3c-dist-auth@w3.org
Subject:  Re: Access Control Draft

On Thu, 15 May 1997, Jon Radoff wrote:

> 
> Permissions are essentially a "server" technology.  It is unclear
> to me what information regarding permissions would ever be
> transacted between the client and the Web server, other than
> perhaps a denial-of-access response (which is already handled
> in the HTTP 1.0 spec).  In addition, the predominant leaning so
> far from everyone has been that a spec that governs the creation
> and management of permissions is out of scope.  That leaves us with
> the issue of resolving what a particular user can or can't do.
> 

I think there's a distinction that needs to be made here. If permission
refers to the ability of processes or threads on the server to access
local resources, then I agree that is out of scope.  On the other hand, if
it refers to access to resources across a network, then I believe it
is very much in scope. As far as I can see, the only tricky part is
assigning different access rights to multiple agents (such as processes)
on the same host. To be a bit more concrete, suppose host A is home to two
processes, p1 and p2, and host B is home to resource r. Suppose further
that p1 should be able to access r and p2 should not. Historically, this
has been handled by means of credentials. If a process (and, yes, I know
I'm using OS-specific language) possesses the proper credentials, it can
obtain access to resource r. This basically comes down to an
authentication problem. If B accepts a TCP connection from A, how does it
know that the connection was initiated by p1 and not p2? Authentication is
not an issue that falls outside the scope of the IETF. It would be
inappropriate for an IETF protocol to use OS-specific mechanisms such as
process IDs to differentiate between p1 and p2, but that is another
matter, and there is certainly nothing wrong with having an abstract
number space, like TCP/UDP ports, that can be mapped to OS-specific
objects like processes. This is what we do now.
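The credential idea in the paragraph above can be sketched in a few lines of Python. This is a toy model with invented names: host B keeps a table of credentials it has granted for resource r, and admits or refuses a connection based only on the presented credential, never on anything OS-specific like a process ID.

```python
# Credentials host B has granted for resource r (illustrative values).
AUTHORIZED = {"cred-p1"}

def request_resource(credential: str) -> bool:
    """B's side of the connection: grant access iff the credential is known.

    B never learns (and never needs to learn) which process on host A
    initiated the connection; the credential alone carries that distinction.
    """
    return credential in AUTHORIZED

assert request_resource("cred-p1") is True   # p1 holds a granted credential
assert request_resource("cred-p2") is False  # p2 does not
```

The abstraction matters more than the mechanism: p1 and p2 are distinguished by what they can present, not by who the operating system says they are.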

To be honest, I am not entirely satisfied with the approach taken by, say,
NFS. NFS simply includes the uid and gid of the requesting process in each
RPC (because it, like HTTP, is stateless). This is obviously not secure
because any process could claim whatever uid and gid it wanted. As a
result, an approach referred to as SecureNFS has been developed which uses
cryptographic methods to prevent rogue processes from claiming uids or
gids not rightfully theirs. But realistically, any server allowing PUTs is
going to have to require something like digest authentication, anyway, so
maybe this isn't so bad. The concepts of user ID and group ID are
UNIX-specific, but I don't see why we couldn't use a similar approach with
an abstract number space as I suggested above.
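The contrast drawn above between plain NFS and the cryptographic approach can be shown side by side. This is a hedged sketch, not the actual Secure NFS protocol: the plain request carries a bare uid the server must simply believe, while the signed request binds the uid to a per-user key (held here in an assumed server-side table) that a rogue process does not possess.

```python
import hashlib
import hmac

# Illustrative server-side key table: uid -> secret key. Secure NFS
# actually uses DES-based public-key exchange; HMAC stands in here
# only to show the shape of the idea.
USER_KEYS = {1000: b"key-known-only-to-uid-1000"}

def plain_request(claimed_uid: int) -> int:
    """Classic stateless NFS RPC: the claimed uid is taken at face value."""
    return claimed_uid  # any process can claim any uid

def signed_request(uid: int, payload: bytes, key: bytes) -> bool:
    """Server checks the signature against the key registered for uid."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    expected = hmac.new(USER_KEYS[uid], payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

# A rogue process claiming uid 1000 succeeds trivially in the plain scheme...
assert plain_request(1000) == 1000
# ...but fails verification in the signed scheme without uid 1000's key.
assert signed_request(1000, b"READ /export/r", b"wrong-key") is False
assert signed_request(1000, b"READ /export/r", USER_KEYS[1000]) is True
```

The same pattern would carry over to an abstract number space in place of UNIX uids/gids: the identifier is arbitrary, and the cryptographic binding is what makes the claim trustworthy.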


---
Gregory Woodhouse
gjw@wnetc.com    /    http://www.wnetc.com/home.html
If you're going to reinvent the wheel, at least try to come
up with a better one.

Received on Friday, 16 May 1997 04:33:00 UTC