
RE: issue of initiating client auth for parallel SSL sessionids

From: peter williams <home_pw@msn.com>
Date: Tue, 1 Mar 2011 04:31:51 -0800
Message-ID: <SNT143-ds12F53D2E6FF2CD865B33E692C10@phx.gbl>
To: "'Henry Story'" <henry.story@bblfish.net>
CC: <public-xg-webid@w3.org>
 

The Machine is an agent. It can either have its own WebID, with a
relation between the user and the machine, or the machine can create its
own public/private key pair and act as the user it is representing.

 

The best would be to distinguish the two, and to add the following to your
profile:
 

 :me userAgent :robot .

 

Web sites should allow userAgents asserted to be trusted to gain access to
protected data.

 

I don't like that bit (on first impression). It's setting up a two-tier world,
where certain agents are more trusted than others. This introduces the
notion of a privilege - used widely in trusted operating systems, of course,
where some accounts are more privileged than others and can circumvent the
reference monitor. That is admittedly a strong theory to base a design on
(being so classical).

 

How the privilege model from the world of the monolithic single-system case
(e.g. a Unix security kernel) translates into networked distributed systems
(e.g. a US NCSC Red Book trusted "network") has been a study topic in the
security world for 30+ years. Like any design theory, it has its lovers (US
DoD) and haters (UK DERA). The distributed "trusted computing base" from the
US model is getting very much closer to reality, because most personal PCs
have a TPM control chip on the motherboard with trusted crypto, a trusted
store and trusted root keys, which allows the unit not only to optionally do
a trusted boot but also to participate in a trusted communication network
(logical IPsec nets, overlaying internet packets). Obviously, that is all a
bit "political" - though it allows a good theoretical security model based on
remote privilege execution/evaluation.

 

Surely it doesn't have to have a custom script, knowing about each
particular foaf agent's programming of a URI, that fires off the login
button's event handler!

 

No, of course not. The server that crawls the web uses its own https
library, with its own certificate containing its own WebID. The protocol
is recursive. The same machine can at times be a client and at others a
server. The web is a peer-to-peer protocol.
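
For concreteness, a minimal sketch of such a crawler call, assuming a Python
client built on the requests library; the URL and the key/cert file paths are
placeholders, not anything from this thread:

    import requests

    # The crawler's own WebID certificate and private key (placeholder paths).
    resp = requests.get(
        "https://example.org/people/alice",
        cert=("/etc/crawler/webid-cert.pem", "/etc/crawler/webid-key.pem"),
        headers={"Accept": "text/turtle"},
    )
    print(resp.status_code, resp.headers.get("Content-Type"))

Note that the cert pair is only actually sent if the server asks for a
certificate during the TLS handshake, which is exactly the signalling problem
raised next.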

 

But how does the crawler invoke https so that SSL has it present its client
cert? Remember, SSL is not peer-to-peer, it's client-server. Only the server
can induce the client authn procedure to happen (a client can merely
indicate unwillingness to continue a previous session id). The server has to
know to do that, by SOME signal. In the IDP case it's easy (the user presses
the login button, the event handler fires, the SSL server invokes the client
authn demand, and session ids fix themselves to suit the
ClientHello/ServerHello negotiation(s)).
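
A minimal sketch of that server-side signal, using Python's standard ssl
module (the PEM file names and port are assumptions): the demand for a client
certificate is configured on the server's TLS context before the handshake
runs.

    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain("server-cert.pem", "server-key.pem")
    context.load_verify_locations(cafile="accepted-issuers.pem")
    # CERT_OPTIONAL sends a CertificateRequest but lets an anonymous client
    # continue; CERT_REQUIRED aborts the handshake if no client cert is offered.
    context.verify_mode = ssl.CERT_OPTIONAL

    with socket.create_server(("", 8443)) as listener:
        with context.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()      # TLS handshake happens here
            print(addr, conn.getpeercert())         # empty/None without a client cert
            conn.close()

Whether that demand is issued on the initial handshake, or only on a
renegotiation triggered by the login button's event handler, is precisely the
question of where the signal comes from.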

 

By default, going to the foaf card's http endpoint just gets one the public
profile (over https for authenticity and integrity), with no client authn
requirements.
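
That default, anonymous dereference might look like the following sketch,
assuming the card is published as Turtle and lists its keys with the
http://www.w3.org/ns/auth/cert# vocabulary (the WebID URI is a placeholder):

    import rdflib

    webid = "https://example.org/people/alice#me"

    # Plain https GET of the public profile document; no client authn involved.
    g = rdflib.Graph()
    g.parse(webid.split("#")[0], format="turtle")

    # Pull out whatever public key material the card asserts for that WebID.
    q = """
        PREFIX cert: <http://www.w3.org/ns/auth/cert#>
        SELECT ?modulus ?exponent WHERE {
            <%s> cert:key ?key .
            ?key cert:modulus ?modulus ;
                 cert:exponent ?exponent .
        }
    """ % webid
    for row in g.query(q):
        print(row.modulus, row.exponent)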

 

Now, I suppose, if the world were such that no anonymous browsing existed any
longer (and EVERY interaction were over https with client authn), then the
privilege model works, as does https (since the crawler's ID is always
present, even on public access). At that point, one is in Gilmore paranoia
land (and I'll probably join him).

 

 
Received on Tuesday, 1 March 2011 12:32:25 GMT
