
Re: butlers, secretaries, robots and access control

From: Henry Story <henry.story@bblfish.net>
Date: Sat, 9 Apr 2011 23:35:59 +0200
Cc: foaf-protocols@lists.foaf-project.org, WebID XG <public-xg-webid@w3.org>
Message-Id: <DF2E518C-766F-4657-9AC3-DAFA05D11832@bblfish.net>
To: Peter Williams <home_pw@msn.com>

On 9 Apr 2011, at 19:48, peter williams wrote:
> Though you dressed up the lower class butler in upper class finery (as
> befits his social standing),

Perhaps the idea of a secretary is better. Like Hillary Clinton, Secretary of State - keeper of secrets.
We can have butlers and secretaries.

> at the end of the day you are advocating that a
> server hold the private key of a user, yes? And, it can just spoofs the user
> to the resource server audit logs, no? 

If you assume that everyone who is working for you is spoofing you then I can see that you would come to this misanthropic and paranoid conclusion. But that is not the case, and what I was proposing does not boil down to this.

I was looking at the logical question of how you can get a server to do work for you. If you want your FreedomBox to be able to fetch information for you, such as your friends' protected foaf files, so that it can then make access control decisions based on that information, you need to:

  - give that agent a WebID
  - create a relation between you and it so that other agents will know to trust your butler when it makes requests for information
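
In the Turtle style of the examples further down, and assuming a hypothetical :butler relation (none of these terms are settled vocabulary), those two steps might look like:

  # 1. the butler agent has its own WebID and its own public key
  she:b1 cert:publicKey [ .... ] .

  # 2. the owner publishes a relation to her butler, so that other
  #    agents know to trust it when it fetches resources for her
  she:anne :butler she:b1 .

A remote server receiving a request authenticated as she:b1 could then dereference she:anne's profile, find the :butler link, and decide to give the butler the access it would give Anne herself.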

Let us consider two applications:

1. Freedom Box, Cell phones

In the case of the FreedomBox or cell phone, where there is one user per device, all is easy: there is a one-to-one correlation between the user and the box. So there is in fact no need to give the box a different WebID from its owner's. The box is really acting on behalf of its user and could be thought of as a mereological extension of her.

One could even have the user and the device use the same public key, which I suggested could in fact be a signal to people that this is a FreedomBox: i.e. the user owns the device. So I suppose we all agree on the FreedomBox case, and we agree that in that case it can work for you without "spoofing" you, and that this is very different from someone maliciously taking over your computer. I mean, we don't want to talk of your computer spoofing you whenever you browse the web, right? And yet your computer holds your private keys for you - I assume you don't mentally remember each one of them and type them out.
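
To make that shared-key idea concrete, in the style of the Turtle examples below (she:juliette, she:box and she:k1 are made up for illustration):

  # owner and device publish the very same public key,
  # signalling that the device belongs to, and acts for, its owner
  she:juliette cert:publicKey she:k1 .
  she:box      cert:publicKey she:k1 .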

2. Organisations

Where a company, church, school, medical institution or other larger organisation is serving profiles and other resources, there is the issue of the relation between the user and the organisation that I was pointing out. Here I pointed out that there are two tools one can use:

 A. have the robot/secretary/butler make the request as himself
 B. have the robot/secretary/butler make the request as you

There is in fact a third option I did not look at:

 C. Create a secretary agent for each user, give the secretary a WebID, and have it make the request for the user. Perhaps this would be the :secretary relation (keeper of secrets)

  she:anne :secretary she:d1 .
  she:alice :secretary she:d2 .

A. is very honest, but it makes it difficult for people to distinguish between what they want to tell different members of an organisation, as I pointed out. (So if Facebook made a request it would only make a request as one user.)

C. is also very honest. It comes with the notion of a contract such that talking to :d1 is like talking to :anne and no one else, and talking to :d2 is like talking to :alice and no one else. Perhaps this is better.

B. could seem like the robot is pretending to be you, but how different is that from the FreedomBox acting on its owner's behalf?

In the above cases the server does keep private keys - but not all the private keys. It can hold its own key, which it does anyway to get TLS started, and those of each of the secretaries: one for each user.
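
A sketch of that key inventory, reusing the :secretary examples above (she:org and the she:k… names are hypothetical):

  # the organisation's own key, used to start TLS
  she:org cert:publicKey she:k0 .

  # one key per secretary, held by the server on each user's behalf
  she:d1  cert:publicKey she:k1 .
  she:d2  cert:publicKey she:k2 .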

The problem for any agent communicating with a member of an organisation is that cases A, B, and C are all possible. In any case the owner of the service has to be taken into account in all communication with any of its members, since he sees, or could if he cared see, everything. (And he could deny any communication if he wished to.) That is why we distinguish people who talk as employees, as church members or as citizens. This oversight is also what makes us sometimes give more weight to people who speak from their organisational role, since they presumably follow rules which many members are interested in maintaining.

So these are in part just observations on how organisations work. We always should take into account who we are talking to and what their affiliations are.

> This concept is evidently a webby-variant  variant of what one of the major
> firewall vendors does today, using their SSL MITM proxy based on web proxy
> CONNECT technology.

It is similar, but not identical. In both cases it is true that an agent works on behalf of another. But it is important to be clear about the differences here, in order to avoid confusing things, which can then be fodder for a FUD (Fear, Uncertainty and Doubt) campaign.

In the case of the FreedomBox the box works for you. In the case of an employee behind the firewall, the firewall works for the company (and so does the employee).

In the case of the firewall all the users' connections go through the firewall: it has a monopoly, it is a bottleneck. In the case of the FreedomBox the user can connect directly to every site he wants to. The FreedomBox is his identity holder, but his communication need not proceed only through it. He can go to his friends' web sites and edit something there that his FreedomBox may never see.

Let us be even clearer, because your talk of a MITM (Man in the Middle) proxy is bound to lead people to think of the man-in-the-middle attacks one would worry about in a café. Could the café or your competitor play the same game and set up a MITM proxy for you? If that were possible then TLS would of course be completely doomed. And it is not possible, because to do that they would need access to your hardware.

> Having received a downstream resource server's request
> for SSL client authn sig and cert (e.g. a resource server within the server
> farm), the firewall acting as MITM asks the browser (or upstream CONNECT
> proxy, even) for the same (and duly obtains it, on that browser-firewall ssl
> channel). The MITM agent then ephemerally-mints keying/credentials for that
> user (given access to the unsigned foaf card, or directory record), and uses
> them to signs the client authn signature - prepared for the resource
> server's usage. It presents its own cert ActingFor the user, recently minted
> on the fly of course - which points to its own copy of the directory record
> (or foaf card).

That only works for employees of the company, identified by WebIDs the company owns.
So that is OK.

> This works well in reverse proxy mode by an array of firewall nodes
> implementing extranets and SAAS tenants - being on the inbound path to an
> protected enclosure of sensitive resources. Why? Because the resource
> servers use that very same agent (on their outbound path to the web) as
> their caching source, and thus may obtain the recently "modified foaf card"
> in response when checking webid protocol. This is the (unsigned, no
> integrity) card modified above, to add to the user's list of pubkeys the key
> ephemerally generated, to spoof the user to anyone will to use THIS copy of
> the (no integrity) foaf card. Today, this is done in practice by modifying
> the object in the firewall maintained replica of the activeDirectory record,
> rather than a foaf card - a replica which is trusted as authoritative (think
> DNS authoritive zone copy) by the resource servers relying on trusted agents
> . Trust in this sphere is usually defined as: you let ME do the authority
> checking (and I may lie, in your wider interests, since Im better positioned
> to know corporate policy, being the guardian of end-end policy).

Again, we had a long discussion on this before. I believe this only works if the
company has access to your hardware. You never denied this, as far as I can tell.

> This is all very classical. Its spoofing by poisoning the caches, and duping
> the relying party into believing they do have an end-end path that WOULD
> counter agent spoofing of keys (which they don't have, in reality, due to
> authority subversion).

It is not spoofing if it is part of your contract that the company has access to all your communication - which is why they were able to place their CA certificate on the computer they gave you. By the way, this can be helpful, in that for example the company could log all transactions, allowing you to later prove your innocence in case of some issue.

> Usually, its accompanied by a trust projection, in
> which "trustworthiness" is assigned to only those resource servers that
> willingly participate in this regime.

Sometimes you go on so long that I have difficulty seeing how what you are saying relates
to what is at hand. Are you still talking about the issue on this thread? 

> Commercially its sensible. If one recalls the VISA/Master card SET protocol
> (that I got to work on, at the pinnacle of my career in 1997) it failed as
> an end-end design. Just nobody adopted (as it implied Americans would have
> to carry $20 crypto cards merely to use the web's e-commerce (and then gain
> access to ISPs, was the plan), much like those planned for the national id
> card - now $30-35 per person, of course).

I don't understand your argument. Some scheme you worked on failed because people had to carry
crypto cards. There was no talk of crypto cards here.

> So, folks innovated - tring to
> leverage what worked, but had been socially rejected. Soon, there were
> server-side wallets and software client credentials (vs
> javacards/visacards), that would retain the user's keying material and certs
> remotely - and actFor the AUTHENTICATED user, on demand. To access these
> agents from a browser... one used commodity SSL :-) to establish one's
> identity. 

Yes, one can use commodity SSL to establish one's identity, and one can have robot
butlers and secretaries with their own identities.

Btw, if crypto cards did take off, and people kept them long enough for this identity to be stable, then one could use them to distinguish the butlers and secretaries from the person they work for. This could be done by having friends publish links to one and also to one's long-term public key:

she:anne foaf:knows [ = he:bob;
                      cert:publicKey [ .... ] ] .
Thinking long term like this would suggest that we distinguish between the person and the secretaries. But that brings too many new issues into play, issues we don't have enough experience with to decide on, given that we don't even yet have a test suite for the simple WebID protocol.


> -----Original Message-----
> From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org]
> On Behalf Of Henry Story
> Sent: Saturday, April 09, 2011 9:43 AM
> To: WebID XG
> Subject: butlers and robots and access control
> I have been  working on  Clerezza (zz) which can now host very basic and
> uninteresting profiles and allow users to create WebIDs. I started to work
> on the ability for the server to fetch remote resources *for* its users, ie.
> possibly protected ones. Perhaps I started this a bit too early. But here is
> an outline of what I discovered while working on this.
> The most obvious way to proceed is for a zz server to create an additional
> Public/Private key pair for each of its users and build SSL client
> connections with that when it fetches information that may be under access
> control. Any server zz connects to can return representations just by
> considering the WebID in the request and if they should or should not have
> access.

Social Web Architect
Received on Saturday, 9 April 2011 21:36:42 UTC
