- From: Henry Story <henry.story@bblfish.net>
- Date: Wed, 22 Jun 2011 18:53:42 +0200
- To: William Waites <ww@styx.org>
- Cc: Kingsley Idehen <kidehen@openlinksw.com>, public-lod@w3.org
On 22 Jun 2011, at 17:14, William Waites wrote:

> * [2011-06-22 16:00:49 +0100] Kingsley Idehen <kidehen@openlinksw.com> écrit:
>
> ] explain to me how the convention you espouse enables me confine access
> ] to a SPARQL endpoint for:
> ]
> ] A person identified by URI based Name (WebID) that a member of a
> ] foaf:Group (which also has its own WebID).
>
> This is not a use case I encounter much. Usually I have some
> application code that needs write access to the store and some public
> code (maybe javascript in a browser, maybe some program run by a third
> party) that needs read access.
>
> If the answer is to teach my application code about WebID, it's going
> to be a hard sell because really I want to be working on other things
> than protocol plumbing.

So you're in luck. https is shipped in all client libraries, so you just need to get your application a WebID certificate. That should be as easy as one POST request to get it. At least for browsers it's a one-click affair for the end user, as shown here:

  http://bblfish.net/blog/2011/05/25/

It would be easy to do the same for robots (a rough sketch of what that looks like for a robot client follows at the end of this mail). In fact that is why Bruno Harbulot and Mike Jones at the University of Manchester are using WebID for their Grid computing work: it makes access control to the grid so much easier than any of the other top-heavy technologies available.

> If you then go further and say that *all* access to the endpoint needs
> to use WebID because of resource-management issues, then every client
> now needs to do a bunch of things that end with shaving a yak before
> they can even start on working on whatever they were meant to be
> working on.

You can be very flexible there. If users have a WebID you give them a better service, which seems a fair deal. You don't need your whole site to be WebID enabled either. You could use cookie auth on http endpoints, and for clients that don't have a cookie, redirect them to an https endpoint where they can authenticate with WebID. If they don't have one, ask them to authenticate with something like OpenID. I'd say pretty soon your crawlers and users will be a lot happier with WebID.

> On the other hand, arranging things so that access control can be done
> by existing tools without burdening the clients is a lot easier, if
> less general. And easier is what we want working with RDF to be.

All your tools are probably already WebID enabled. It's just a matter now of giving a foaf profile to yourself and your robots, getting a cert with the WebID in there, and getting going. That seems to me a lot easier than building crawlers, or semweb clients, or semweb servers, or pretty much anything else.

Henry

>
> Cheers,
> -w
>
> --
> William Waites                <mailto:ww@styx.org>
> http://river.styx.org/ww/     <sip:ww@styx.org>
> F4B3 39BF E775 CF42 0BAB 3DF0 BE40 A6DF B06F FD45

Social Web Architect
http://bblfish.net/
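To make the robot case above concrete, here is a minimal sketch in Python of the client side, assuming the robot's WebID certificate and matching private key live in robot-cert.pem and robot-key.pem and the protected endpoint is https://example.org/sparql (all of these names are illustrative, not taken from this thread). The only plumbing the client does is present its certificate during the TLS handshake; verifying the WebID against the foaf profile and applying any foaf:Group based access rules happens on the server.

    # Illustrative sketch: a robot querying a WebID-protected SPARQL endpoint.
    # File names, host name and query are placeholders, not from the thread.
    import http.client
    import ssl
    import urllib.parse

    context = ssl.create_default_context()
    # Present the robot's WebID certificate during the TLS handshake.
    context.load_cert_chain(certfile="robot-cert.pem", keyfile="robot-key.pem")

    conn = http.client.HTTPSConnection("example.org", context=context)
    params = urllib.parse.urlencode(
        {"query": "SELECT * WHERE { ?s ?p ?o } LIMIT 10"})
    conn.request("GET", "/sparql?" + params,
                 headers={"Accept": "application/sparql-results+json"})
    response = conn.getresponse()
    print(response.status, response.read().decode())

If the server does not recognize the WebID in the presented certificate, it can still fall back to the cookie or OpenID route described above.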
Received on Wednesday, 22 June 2011 16:54:15 UTC