Re: responsibility in LDP, was Re: TWO PROPOSALs involving Prefer - volunteering for the army -> ldp:index PROPOSAL

On 3 Mar 2014, at 15:55, Roger Menday <roger.menday@uk.fujitsu.com> wrote:

> 
> For what it's worth, on the Web, 'high-stakes' activities would probably be part of a wizard of steps with a number of "Are you sure you want to do this ?" questions along the way. 
> 
> I can see how this pattern could be provided for Robot consumers too, i.e. 
> 
> POST to Army, to create a Joining resource
> Followed by POST to Joining to create some other kind of resource confirming the commitment. 
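> 
> A rough sketch of that exchange in HTTP (all URLs and the vocabulary are
> invented for illustration, prefixes omitted):
> 
>    POST /army/ HTTP/1.1
>    Content-Type: text/turtle
> 
>    <> a :JoiningRequest ; :applicant <http://jane.example/#me> .
> 
>    -> 201 Created, Location: /army/joining/42
> 
>    POST /army/joining/42 HTTP/1.1
>    Content-Type: text/turtle
> 
>    <> a :Confirmation ; :confirms </army/joining/42> .
> 
>    -> 201 Created, Location: /army/joining/42/commitment
> 
> Only the second POST would carry the actual commitment; the first is cheap
> to undo.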

Completely agree. But at some point there is going to be one POST that is a commitment, and in the
current spec that would probably be a POST to an LDP-IC.

Now the question that was never fully addressed is whether we really need that. Can a client get the same
thing by:
 1) POSTing to an LDP-BC
 2) PATCHing some other resource with one relation

If so, then Sandro would be correct in arguing that the LDP-DC and LDP-IC may not be needed in LDPv1.
I have a feeling that LDPCs with effects are going to be pretty useful. But a proof would be nice.
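
To make the question concrete, the two steps would look roughly like this
(the resource names are invented, and SPARQL Update is just one candidate
PATCH format):

   1) POST /moves/ HTTP/1.1
      Content-Type: text/turtle

      <> a :ChessMove ; :notation "e2-e4" .

      -> 201 Created, Location: /moves/move1

   2) PATCH /game HTTP/1.1
      Content-Type: application/sparql-update

      INSERT DATA { </game> :hasMove </moves/move1> . }

One obvious difference is that the two requests are not atomic, whereas a
single POST to an LDP-DC is. Any proof of equivalence would have to deal
with that.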


> 
> Roger
> 
> 
> 
> On 3 Mar 2014, at 14:47, henry.story@bblfish.net wrote:
> 
>> 
>> On 2 Mar 2014, at 04:21, Sandro Hawke <sandro@w3.org> wrote:
>> 
>>> On 03/01/2014 03:58 PM, henry.story@bblfish.net wrote:
>>>> On 1 Mar 2014, at 18:03, Kingsley Idehen <kidehen@openlinksw.com> wrote:
>>>> 
>>>>> On 3/1/14 4:23 AM, henry.story@bblfish.net wrote:
>>>>>> But the rules are not excessive. The rule is that the client understand the meaning of the membership triples that will be added as a consequence of the POST. It can gain this understanding by asking a human agent of course (as browsers currently do with an html form), or the client can be specialised in specific vocabularies (a meeting organiser, for example), or it can work with relations that are well understood to have few dangerous implications, such as ldp:contains. And since there exists an ldp:BasicContainer that clients can work with without danger, those that do not wish to take risks should use only that.
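>>>>>> 
>>>>>> The triples a client needs to understand are visible in the container's own representation. A minimal sketch (the :enlisted relation and the URLs are invented):
>>>>>> 
>>>>>>    </army/> a ldp:DirectContainer ;
>>>>>>        ldp:membershipResource <http://army.example/#it> ;
>>>>>>        ldp:hasMemberRelation :enlisted .
>>>>>> 
>>>>>> A successful POST there results in the server adding <http://army.example/#it> :enlisted <new-resource>, so a client that does not understand :enlisted should not POST.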
>>>>> It's impractical to expect this of Web clients.
>>>> I don't see how it is impractical to require of Web clients that don't want to use Direct or Indirect Containers that they not use them. That's pretty simple to do: they can just use DirectContainers.
>>>> 
>>> 
>>> I assume you mean BasicContainers at the end of that sentence.
>> 
>> yes, thanks.
>> 
>>> 
>>>> It is also not impractical to require of a Web client that does wish to use Direct or Indirect Containers that it understand the membership predicates of a particular container, and that if it does not, it not POST there. Crawlers and user agents of all types come across HTML forms that can POST things to resources, and they don't just arbitrarily POST things there. So this habit of not POSTing blindly to any resource that accepts POSTs is already widespread and central to the current web architecture.
>>> 
>>> As in my room-reservation example [1], there may be ways to know what the container is for without ever GETing it.  In fact, that example showed how the alternate ways are more like the current Web and more reliable.
>>> 
>>> Henry, you want to make the client responsible for the triples the server adds to the container, and I think I'd agree that if the client knows the triples are going to be added it bears some responsibility for the consequences of them being added.  But what I think Kingsley and I are saying is that on the open Web, this Working Group doesn't have the power to compel clients to obtain that knowledge, and without that knowledge they no longer bear the responsibility.   I don't believe us saying in the spec that they are responsible will make it so.
>> 
>> Things are more complex than that as I think you show below. Responsibility can be shared.
>> 
>>> 
>>> I think the solution is to be clear that clients bear responsibility for the triples they POST, and if you want them to be responsible for their membership triple, then have the server require them to include it in the POST.   (That is, membership triples become just a kind of often-inlined member triple.)
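>>> 
>>> Concretely, for something like the room-reservation case, the client would then POST a body that already contains the membership triple (a sketch; the vocabulary and URLs are invented):
>>> 
>>>    POST /reservations/ HTTP/1.1
>>>    Content-Type: text/turtle
>>> 
>>>    </room12> :reservedBy <> .
>>>    <> a :Reservation ; :start "2014-03-10T09:00" .
>>> 
>>> The server would reject the POST if the required triple were missing, rather than silently adding it on the client's behalf.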
>>> 
>>> But ... there's a lot more going on here.  Maybe we do need to figure it out.  I'll start by briefly summarizing how I think responsibility works on the Web:
>>> 
>>> 1.  Each Web Resource (aka Web Page) is a social entity, with social behavior.  Basically, it says things.  Those things might be lies, they might be out of date, they might be useless.  Hopefully they're true and interesting.  When the Resource is a really good one, if we act on the assumption that what it says is true, good things will happen for us.  If it's a bad one and we trust it, bad things will probably happen.  That's a kind of loose definition of trustworthiness.
>>> 
>>> 2.  We need ways to figure out which resources are trustworthy, which ones we can trust to lead us in the right direction.  But there are a lot of resources, far more than we could ever keep track of.  And we move between them a lot, using many of them only once, so we have no chance to learn whether they're trustworthy.
>>> 
>>> 3.  The general solution is to cluster resources, with one taking responsibility for a set of others.   Sometimes it's partial responsibility.   Every domain owner has some responsibility for the content served with URLs in that domain.  Domain certificate holders have some additional responsibility for https content signed with that certificate.   On my business card, I have some URLs for which I'm taking responsibility.    Those URLs convey some level of trust onto other Resources by linking to them without a big warning (or rel=nofollow).
>>> 
>>> 4.  We probably need some good Linked Data ways to handle this.  It looks feasible, but I haven't seen it done.  Basically, I'm pretty strongly responsible for the content served from my WebID, and I should be able to take responsibility for other resources chaining out from there.  By doing so, I signal that trust in me should be to some degree linked to trust in all these resources.  Some of those resources might be human, too.  (This sounds a lot like PageRank, of course.)  I guess owl:imports is the predicate we have that signals complete trust; we probably want more control than that.
>>> 
>>> So what happens during POST?  My proposal is that we be clear that clients doing a POST are responsible for the content of the POST.  To a first approximation, if my client certificate is used for the POST, then I'm responsible for the posted content.  Looking closer, there's probably some software that's responsible too; in some situations that can be made explicit.  (This is like the "Posted via ..." links in Facebook and Twitter, which show which client software/app was used to make the posting.  This is important when those screw up or might be malicious.  I had some Android malware once, posting spam as me, but it was easy to fix because of the client software attribution and access control.  I guess using the Origin header might address this for a class of webapps.)
>>> 
>>> When the server gets the POST and makes it available to others, I expect it to keep the responsibility pointed at whoever did the POST.  This might be done cryptographically, but I'm expecting it to be more like rel=nofollow.
>>> 
>>> Considering only general LDP servers for now, I imagine client A creates container AC and controls its configuration.  It sets things up so client B can POST, but not edit AC's configuration.  Now B uses POST to create container BC, for which only it controls the configuration.  It allows C to do a POST there, which creates URL CP.
>>> 
>>> Someone comes along and sees the post CP.  Do A and B have any responsibility for the triples in CP?  Does it matter what the configurations of AC and BC are?
>>> 
>>> It seems like a good model to say simply that clients are responsible for the triples they POST.  LDP servers use HTTP headers (and maybe some special resources, named by the headers) to tell clients which other clients (or entities) are responsible for content.
>>> 
>>> Alternatively, I guess one could say that by granting write access, one is giving complete trust.  That's probably the model in most existing RWW (read-write Web) applications.  My sense is that's too naive to work for my apps.
>> 
>> I agree with a lot of what you say here, but this is really a question of how an agent chooses what to trust. Web user agents leave this
>> to humans, who tend to rely on web sites. Web sites try to use things like TLS to increase trust, and there are a number of other signals that
>> can be used. Humans expect some coherence from one site, and site owners try to keep their sites coherent.
>> 
>> Robots that are following the web will also need to decide which links they should follow and how much to trust them. So if
>> </> describes </container> as an ldp:IndirectContainer with the consequence of creating the next move in a chess game, and the robot trusts
>> the site to be consistent, so be it. If it understands the relation of creating a next chess move, then it knows what it should POST there.
>> Sites that are inconsistent or misleading will probably end up with a lot of legal problems, and so won't last long.  Other robots might
>> feel that for certain transactions they would like to be more sure about what they are doing, and only do a conditional POST. What I am arguing here
>> is that conditional POSTs should be possible and thought about at the outset. Putting the empty container triples in another LDPR does not help
>> that use case.
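>> 
>> For the chess case, the description in </> might look something like this
>> (a sketch: chess:nextMove is invented, the ldp: terms are from the spec):
>> 
>>    </container> a ldp:IndirectContainer ;
>>        ldp:membershipResource </game> ;
>>        ldp:hasMemberRelation chess:nextMove ;
>>        ldp:insertedContentRelation foaf:primaryTopic .
>> 
>> A client that understands chess:nextMove knows exactly what a POST there
>> commits it to; one that does not should stay away.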
>> 
>> So I still think that we'd be better off with an ldp:index relation.
>> 
>> Henry
>> 
>> 
>>> 
>>>    -- Sandro
>>> 
>>> [1] http://lists.w3.org/Archives/Public/public-ldp/2014Feb/0012.html
>>> 
>>> 
>>>>> Yes, RDF is about machine discernible and comprehensible semantics, but none of that means that a client MUST possess any such capabilities. In my eyes, RDF comprehension resides in vocabularies, never in the behavior of a client that's using HTTP to interact with content. i.e., a majority of HTTP clients will not exploit all the semantic implications expressed in a vocabulary or ontology.
>>>> When reading RDF this is not a problem, because of the monotonicity requirement of RDF implication: you still have a true graph if you remove statements that you do not understand. Every graph implies its subgraphs.
>>>> 
>>>> But when POSTing a graph to a Direct or Indirect Container you
>>>> 1) create a new resource containing your POSTed graph
>>>> 2) create a new relation on top of the ones contained in the POSTed graph
>>>> 
>>>> RDF does _not_ say that a subgraph implies every supergraph.
>>>> 
>>>> Take for example a graph A = { c a Car }. It is compatible with each one of
>>>> 
>>>>  B = { c a Car; unicolor blue }
>>>>  R = { c a Car; unicolor red }
>>>>  W = { c a Car; unicolor white }
>>>> 
>>>> But given the right definition of unicolor, B, R and W are not compatible with one another. There is no possible world where the car is all three.
>>>> So a client knows that when posting to a direct or indirect container it needs to agree with the graph AND the extra triple that
>>>> the protocol very clearly lays out as being created.
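>>>> 
>>>> To make that concrete, suppose unicolor is declared single-valued (this declaration is mine, for illustration):
>>>> 
>>>>    :unicolor a owl:FunctionalProperty .
>>>>    :blue owl:differentFrom :red . :blue owl:differentFrom :white .
>>>>    :red owl:differentFrom :white .
>>>> 
>>>> Each of B, R and W is then consistent on its own, but the union of any two of them entails, e.g., :blue owl:sameAs :red, contradicting the differentFrom triples. One extra triple added by the server on top of a POSTed graph can push a consistent graph into an inconsistent one, which is why the client must understand that triple in advance.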
>>>> 
>>>>> This (I think) is the point Sandro is trying to relay with regard to his concerns about the above. We have to understand that (fundamentally) the Web's strength lies in its tolerance of the good, the bad, and the ugly during client and server interactions.
>>>>> 
>>>>> On the Web (or any network with heterogeneous clients and servers) you could inadvertently sign up for the Army, but that signup will never stand up in the real world :-)
>>>> I'd be pretty pissed off with my software if it inadvertently signed me up for the army, even if I was then able to go to court and win my case.
>>>> There were times when you needed much less than that to get signed up to the army (see http://en.wikipedia.org/wiki/King's_shilling).
>>>> 
>>>> I purposefully took an extreme case to get people thinking, but you can take many more realistic ones.
>>>> 
>>>> If you go to eBay and POST a bid and win, you are liable to have to spend money.
>>>> If a microtransaction system is set up with LDP and you POST to it by mistake, then you'd still need to UNDO your POST later.
>>>> 
>>>> In each of these cases you may be able to UNDO harm, but the harm still happened. Undoing the harm is one extra event on top
>>>> of the initial harmful event.
>>>> 
>>>>> I've always seen comprehension of entity-relation semantics as a feature that clients and servers use to distinguish themselves competitively, but never the basis for MUST requirements in specs.
>>>> LDP is new in the space of RDF usage. So expect things here to be a bit different :-)
>>>> 
>>>> 
>>>>> --
>>>>> 
>>>>> Regards,
>>>>> 
>>>>> Kingsley Idehen
>>>>> Founder & CEO
>>>>> OpenLink Software
>>>>> Company Web: http://www.openlinksw.com
>>>>> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
>>>>> Twitter Profile: https://twitter.com/kidehen
>>>>> Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
>>>>> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> Social Web Architect
>>>> http://bblfish.net/
>>>> 
>>>> 
>>>> 
>>> 
>> 
>> Social Web Architect
>> http://bblfish.net/
>> 
>> 
> 

Social Web Architect
http://bblfish.net/
