Re: issue of initiating client auth for parallel SSL sessionids

On 28 Feb 2011, at 12:43, peter williams wrote:

> [snip, stuff about webid losing its user-centric flavor, and how WebID is interested in RESTful web architecture solutions]

yes, Web Architecture is not an accidental aspect of the success of the web. 

> Here, with this initiative coming from the foaf project, folks probably want to see “some” of the user-centric flavor be retained. I don’t feel folks are exactly on anti-corporate rants; but they are concerned about the “politics of control.” Should a lowly user

"lowly?"

> thus create an RDF graph (in a hand-written foaf card) that cites lots of https URIs including URIs of other foaf cards (quite typical!), what does it mean for a machine to crawl the file’s URI pointers - and thus de-reference the URIs - all 100 of them, say; on different domains, different ports, different URI stems?

So if you connect to someone's social web server - let's say a friend of a friend of yours - that server presumably already crawls the web of its owner's friends, in order to keep up to date on what they are doing, where they are, who their new friends are, etc... 

> Assume the user is largely clueless technically, and might well not follow best security practice. Perhaps, he is 16, web savvy, lives in the Sudan, and has the education of an 8 year old, in the US, with similar access to funds (being the Sudan where folks earn $2 a day).

Who is the user? The developer of the site? Or the user using the site: the friend? Your argument relies a lot on this confusion of roles. I don't see an end user writing a foaf file by hand. Developers do. So let's assume that a developer is building this site. He's already pretty clued in, if he is developing his foaf file. Kudos to him for being on the leading edge.

>  
> If I click on a webid, what do I expect to happen? Do I expect a webid protocol flow to occur?

Yes, here is a more recent version:
http://bblfish.net/tmp/2011/02/23/index-respec.html#authentication-sequence

Notice that at this point the server you are connecting to would not need to crawl the web. It already has the foaf files.
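To make the server's side of that sequence concrete, here is a hedged Python sketch of the verification step: the server takes the WebID claimed in the client's certificate, dereferences it (mocked here as a lookup table rather than a real HTTP GET plus RDF parsing), and accepts the claim only if the certificate's public key appears in the profile. All URIs, names and key values below are illustrative, not part of any real library.

```python
# Hedged sketch of the server-side WebID check; fetch_profile is a
# stand-in for dereferencing the URI over HTTPS, and the keys are toys.

def fetch_profile(webid):
    """Pretend to dereference a WebID URI, returning the (modulus,
    exponent) pairs that its foaf profile lists for that agent."""
    profiles = {
        "https://example.org/joe/card#me": [(0xCAFEBABE, 65537)],
    }
    return profiles.get(webid, [])

def verify_webid(claimed_webid, cert_modulus, cert_exponent):
    """The WebID claim holds when the public key presented in the TLS
    client certificate appears in the dereferenced profile."""
    return (cert_modulus, cert_exponent) in fetch_profile(claimed_webid)
```

Note that no password or prior account is involved: possession of the private key plus a matching public key in the profile is the whole proof.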

> Well, if we look at foaf.me, that is NOT what happens. It shows public elements of the foaf card, no login required. If one does a modal login (with the infamous login button), it will then show the public/private elements of the same card. Its login button happens to be a websso demo – that could be talking openid auth protocol to myopenid (if myopenid was webid powered as an IDP).

Yes, foaf.me has not evolved much in the last year and a half; it works only with public foaf files.
A more confidentiality-respecting Social Web server would show minimal information to the public, and link to restricted information accessible only to authenticated and authorized clients. 

I am building a demo that shows how this works. I hope others are too.

>  Now, let’s say I’m a foaf group crawler, a machine setup to crawl and then cache (in a “trusted” cache using Lampson’s theories about secure channels) all my friends PRIVATE cards. I can gain access to the good private stuff, because I’m authorized to do so as a particular foaf group member and because of following/follower relationship between foaf cards. The authorization quality is good, but access enforcement is not military or commercial grade (not needing to be). It’s webby; I expect you to honor the no-trespassing sign and the symbolic fence; please don’t hack through it (though obviously you can, if you bring a chainsaw).
>  
> How do we accomplish this, using webid protocol? It’s a machine consumer acting as a foaf person, not a human person full of energy and vigour (after working 8 hours for $2). Perhaps the machine is a server, acting for a user; OAUTH like (or proxy cert like).
>  
> How does the machine UA invoke webid protocol, so as to get access rights to the private graphs and then pull them – simply to act as a foaf card crawler and cacher?

The machine is an agent. It can either have its own WebID, with a relation asserted between the user and the machine, or it can create its own public/private key pair and act as the user it is representing.
The best would be to distinguish the two, and to add the following to your profile:

 :me userAgent :robot .

Web sites should allow user agents asserted to be trusted in this way to gain access to protected data.
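For illustration, a fuller profile fragment might look like the following Turtle. The userAgent relation is the proposal above, not a published vocabulary term, and the exact cert vocabulary varies between spec drafts; the URIs and key values are placeholders.

```turtle
@prefix cert: <http://www.w3.org/ns/auth/cert#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix :     <https://example.org/joe/card#> .

# Joe declares his crawler as a trusted user agent.
# "userAgent" is the relation proposed above, not an existing term.
:me userAgent :robot .

# The robot has its own public key, so it can authenticate with its
# own certificate (cert vocabulary details differ between drafts).
:robot cert:key [
    a cert:RSAPublicKey ;
    cert:modulus "cafebabe"^^xsd:hexBinary ;
    cert:exponent 65537
] .
```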

>  
> Surely, it doesn’t have to have a custom script, knowing about each particular foaf agents programming of a URI, that fires up off login button’s event handler!

No, of course not. The server that crawls the web uses its own https library, with its own certificate containing its own WebID. The protocol is recursive: the same machine can at times be a client, at others a server. The web is a peer-to-peer protocol.
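As a sketch of the crawler's side, using Python's standard ssl module: the crawler loads its own certificate into the TLS context it uses for outgoing connections, so the certificate is presented automatically whenever a server requests client authentication during the handshake. The file names are placeholders for wherever the crawler keeps its WebID certificate and key.

```python
import ssl

def crawler_tls_context(certfile, keyfile):
    """Build a client-side TLS context that presents the crawler's own
    WebID certificate whenever a server requests one during the
    handshake.  certfile/keyfile are illustrative placeholder paths."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx
```

The resulting context can be passed to, e.g., urllib.request.urlopen(url, context=ctx); the same process can equally run an HTTPS server with its own server-side context, which is the recursion described above.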

Received on Tuesday, 1 March 2011 11:47:12 UTC