Re: Named graphs etc

On Mar 16, 2004, at 04:06, ext Pat Hayes wrote:

>>> I fail to see how the use of vocabulary IN a graph can POSSIBLY 
>>> constitute a signature or warrant. Anyone can write anything into a 
>>> graph.
>>
>> Yes and no. If the signature includes a checksum of some sort by which
>> the contents of the graph can be (to some degree) verified, then it
>> becomes harder to create fraudulent graphs -- and those 
>> agents/publishers
>> which have much to lose from fraud (e.g. banking services) will invest
>> more time/effort in checksumming than others.
>
> We are looking at different ends of the arrow. I'm not worried about 
> making sure that the reference to the asserted graph is OK. I agree 
> that can be checked in various ways. I'm talking about how we check 
> the reference to the agent who is supposed to be asserting the graph.

I consider the vocabulary to allow folks to simply make claims
about the authority and signature of a graph -- and whether those
claims turn out to be true/valid is based on extra-RDF determinations.

The URI denoting the supposed authority should in some way be
relevant to the testing of the signature.

We don't have to say how. That's part of the PKI or other machinery.

All we have to say is that, given certain bits of information:

1. The URI denoting a graph
2. The URI denoting an authority
3. The signature associated with a graph

we have what we need to authenticate that graph per that authority, and
check if they said what the graph expresses (regardless of whether they
assert it).

If the PKI machinery cannot conclude, given the above information,
that the graph is authentic per that authority (for whatever reason,
maybe a server was down or a signature expired, etc.) then that is too
bad for the particular agent trying to verify a graph, but doesn't
invalidate the basic model.

All that matters is that we have the identity of a graph, the identity
of an authority, and some signature to test their valid relationship.

Right?
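To make that division of labour concrete, here is a rough Python sketch. Everything in it is made up for illustration: an HMAC with a shared key stands in for whatever real PKI machinery does the signature test, the "canonicalization" is just sorted triples, and the key registry keyed by authority URI is purely hypothetical.

```python
import hashlib
import hmac

# Stand-in for key discovery via the authority URI (hypothetical).
AUTHORITY_KEYS = {
    "ex:Patrick": b"patrick-secret-key",
}

def canonicalize(triples):
    """Deterministic serialization of a graph (naive: sorted triples)."""
    return "\n".join(sorted(" ".join(t) for t in triples)).encode("utf-8")

def sign(triples, authority):
    """What a publisher would do; hmac stands in for real signing."""
    return hmac.new(AUTHORITY_KEYS[authority],
                    canonicalize(triples), hashlib.sha256).hexdigest()

def authenticate(triples, authority, signature):
    """Given (graph, authority URI, signature), test authenticity."""
    key = AUTHORITY_KEYS.get(authority)
    if key is None:
        return False  # e.g. key server down: too bad for this agent
    expected = hmac.new(key, canonicalize(triples),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

graph_a = {("some:resource", "some:property", "some:value")}
sig = sign(graph_a, "ex:Patrick")
print(authenticate(graph_a, "ex:Patrick", sig))                      # True
print(authenticate(graph_a | {("x", "y", "z")}, "ex:Patrick", sig))  # False
```

The point of the sketch is only the interface: given those three bits of information, the test either succeeds or it doesn't, and a failure (unknown authority, tampered graph) leaves the basic model intact.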

>
>>
>> So, a receiving agent can validate/verify a graph to different 
>> degrees.
>>
>> It may simply take the statement about the authority at face value
>> and believe it.
>
> That sounds like a VERY poor idea.

I didn't say it was a good idea, or recommended. I simply said it *may*.

> Think of the mindset of spammers. Suppose one could generate, that 
> easily and that rapidly, things that looked just like purchase orders 
> to be processed by software, and ask yourself how long before there 
> were so many of them that you wouldn't be able to find the real 
> purchase orders in the world-wide pile of rubbish.

Agreed. I was simply pointing out that this model allows different
degrees of validation/trust and each agent is free to decide just how
strict it wants to be.

Free
  ^      Accept all graphs as valid, presume all are asserted
  |      Accept graphs with explicit authorities identified, presume
         all are asserted
  |      Accept authenticated graphs, and presume all are asserted
  |      Accept authenticated graphs, and explicitly asserted graphs
         (inter- or intra-graph)
  |      Accept authenticated graphs, and only first-party (intra-graph)
         assertions
  v      Accept authenticated graphs, and only trusted assertions
Strict

etc...

and that last strictest level of course then can be further broken
down into all sorts of different trust models and trust policies.

The present-day SW seems, though, to work at the first, freest
level ;-)
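For concreteness, an agent might encode that ladder as a cumulative policy check, something like the Python sketch below. The level names and the claim flags are my own invention; the idea is just that each stricter level adds one more requirement on top of the previous ones.

```python
from enum import IntEnum

# Hypothetical policy ladder, freest (0) to strictest (5).
class Policy(IntEnum):
    ACCEPT_ALL = 0           # presume everything asserted
    REQUIRE_AUTHORITY = 1    # explicit authority identified
    REQUIRE_AUTHENTIC = 2    # signature verified
    REQUIRE_ASSERTION = 3    # explicit assertion (inter- or intra-graph)
    REQUIRE_FIRST_PARTY = 4  # only intra-graph assertions
    REQUIRE_TRUSTED = 5      # assertion by a trusted authority

TRUSTED = {"ex:Pat"}

def accepts(policy, graph):
    """graph: the claims an agent has already established about a graph."""
    checks = [
        lambda g: True,
        lambda g: g.get("authority") is not None,
        lambda g: g.get("authenticated", False),
        lambda g: g.get("asserted", False),
        lambda g: g.get("asserted_first_party", False),
        lambda g: g.get("authority") in TRUSTED,
    ]
    # A graph is accepted only if it passes every level up to the policy.
    return all(check(graph) for check in checks[: policy + 1])

g = {"authority": "ex:Pat", "authenticated": True,
     "asserted": True, "asserted_first_party": True}
print(accepts(Policy.REQUIRE_TRUSTED, g))                        # True
print(accepts(Policy.REQUIRE_AUTHENTIC, {"authority": "ex:X"}))  # False
```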

>>
>
> OK, just as long as we then do NOT claim that a graph containing this 
> vocabulary is thereby automagically authenticated as in any way 
> authoritative, just because of the vocabulary it uses.

Not insofar as the RDF or OWL MTs are concerned, no.

But we need to have *some* extra/special MT which DOES claim that
graphs having particular terms from this vocabulary should be
interpreted in special ways (special, meaning beyond that defined
by the RDF/OWL MTs).

Thus, insofar as the RDF/OWL MTs are concerned, some graph where

    ?graph ( ?graph rdfg:assertedBy ?authority .
             ?graph rdfg:signature  ?signature . )

will not be automagically asserted or authenticated.

But those statements can provide the basis for extra-RDF bootstrapping
assertion and authentication machinery which can provide the grounding
for a suitable trust architecture (an architecture which extends, or
builds upon, or works along side of RDF/OWL).
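The bootstrapping step itself is mechanically trivial: read the rdfg:assertedBy / rdfg:signature triples out of the graph, then hand them to the (external) authentication machinery. A Python sketch, with the graph as a set of triples and the predicate names taken from the examples in this thread (the rdfg: vocabulary is of course still hypothetical):

```python
# Extract a graph's first-person claims about its own assertion,
# for hand-off to the extra-RDF authentication machinery.

RDFG_ASSERTED_BY = "rdfg:assertedBy"
RDFG_SIGNATURE = "rdfg:signature"

def bootstrap_claims(graph_uri, triples):
    """Return (authority, signature) claimed by the graph about itself."""
    authority = signature = None
    for s, p, o in triples:
        if s == graph_uri and p == RDFG_ASSERTED_BY:
            authority = o
        elif s == graph_uri and p == RDFG_SIGNATURE:
            signature = o
    return authority, signature

g = ":B"
triples = {
    (g, RDFG_ASSERTED_BY, "ex:Pat"),
    (g, RDFG_SIGNATURE, "..."),
    ("some:resource", "some:property", "some:value"),
}
print(bootstrap_claims(g, triples))  # ('ex:Pat', '...')
```

Nothing here conflicts with the RDF/OWL MTs; the special interpretation only happens in what the agent then *does* with that pair.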

>>>> Insofar as this latter question is concerned, I don't see one
>>>> graph specifying the assertiveness of another graph as practical.
>>>
>>> Well, I disagree. It is practical because if first-party references 
>>> can be made safe, then so can third-part ones; and it seems to me to 
>>> be extremely useful as a tool for providing warrants and so on.
>>
>> Agreed. I misspoke. Sorry.
>>
>> What I meant was that I failed to see as practical a system
>> in which *every* assertion was essentially third-party.
>
> I wasn't intending to propose that. If a graph can assert (by virtue of 
> being asserted by a signed agent and saying that it asserts by that 
> agent) then it can assert itself or can assert something else, either 
> way. The checking arises from the coincidence between the real agent 
> and the claimed agency of the assertion. What is asserted can be 
> anything.

I think then we are in agreement.

>
>>
>> Without being able to terminate those assertion chains at graphs
>> which have within themselves the terminal, bootstrapping statements
>> such that the agent need not look to yet another graph to determine
>> the assertiveness/authenticity/trustworthiness of that graph
>
> But you can't get that assurance from the graph alone. We  MUST have 
> some way to check that the agent is real: otherwise I can publish 
> graphs which assert that you assert things that you don't even know 
> about. And that's where the termination happens, at the signed 
> confirmation of the real agent coinciding with the claimed agent. That 
> has nothing to do with the graph being first- or third-person relative 
> to what is asserted

I think we agree here, but are having a disconnect of focus.

A few examples may help:

:A ( some:resource some:property some:value . )
:B ( :A rdfg:assertedBy ex:Patrick .
      :B rdfg:assertedBy ex:Pat .
      :B rdfg:signature "..." . )

Now, in this case, an agent that trusts Pat and has
authenticated graph B can take both graph A and B as
asserted, and can accept the claim that Patrick has
asserted graph A.

But that agent cannot confirm that Patrick has actually
asserted graph A, since no verifiable claims about A
by Patrick are available. The agent is operating based
on "hearsay" insofar as graph A is concerned.

If, however, the agent also had access to

:C ( :A rdfg:assertedBy ex:Patrick .
      :C rdfg:assertedBy ex:Patrick .
      :C rdfg:signature "..." . )

and if graph C is verified as authentic, then the agent can
consider graph A as authentic since we have a verified
first-person claim about the authority of A by that same
authority.

So, on the one hand, we have certain claims being expressed in
the various graphs. Some of those claims/statements provide some
information by which the authenticity of those claims can be
tested. Since we are interpreting those claims as valid/asserted
claims in order to actually test those claims, it is a form
of "bootstrapping".

Ultimately, if the tests fail, then we reject those claims as
invalid or untrustworthy -- essentially as not being claims at
all, just noise.

Yes?
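The hearsay rule in the :A/:B/:C examples above can be stated in a few lines of Python. All the data here is hypothetical, and I assume the containing graph's own authority and authentication status have already been established by the machinery sketched earlier:

```python
# A claim that authority X asserted graph G is confirmable only when it
# appears in an authenticated graph whose own authority is X.

def claim_status(claim, containing_graph):
    """claim = (graph URI, claimed authority); containing_graph carries
    its own authority and an already-established 'authenticated' flag."""
    graph_uri, claimed_authority = claim
    if not containing_graph["authenticated"]:
        return "untested"
    if containing_graph["authority"] == claimed_authority:
        return "confirmed"  # first-person: the same authority vouches
    return "hearsay"        # third-party: accepted only on trust

graph_b = {"authority": "ex:Pat", "authenticated": True}
graph_c = {"authority": "ex:Patrick", "authenticated": True}
claim = (":A", "ex:Patrick")

print(claim_status(claim, graph_b))  # hearsay
print(claim_status(claim, graph_c))  # confirmed
```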

>
>> , you
>> simply go on forever and ever without ever properly grounding your
>> trust model.
>>
>> That's what I meant.
>>
>> Yes, absolutely, third party assertions are useful, but not
>> sufficient in themselves.
>
> We agree.

Woohoo! ;-)

>
>>
>>>
>>>>>>
>>>>>> Restraining the boostrapping machinery to each graph prevents
>>>>>> folks from speaking on behalf of others.
>>>>>
>>>>> You don't speak on behalf of others by using their words to make 
>>>>> an assertion that they havn't made. If you SAY that they have made 
>>>>> an assertion that they havn't in fact made, or if you pretend to 
>>>>> be them, then you are lying: and we need to be able to check up on 
>>>>> liars and detect the lies quickly and reliably.
>>>>
>>>> How? If the publisher of a graph says nothing about whether the 
>>>> graph
>>>> is asserted or not, how can anyone disagree with me if I say it is?
>>>
>>> People can say whatever they like. Why should anyone believe them, 
>>> is the question. Ultimately, the only firm authority for a claim 
>>> that A asserted something is an actual assertion by A. If we can 
>>> check an assertion by A to the effect that A asserts a first-person 
>>> graph, then we can just as easily, using the same mechanism, check 
>>> an assertion by A that A asserts a third-person graph. Asserting 
>>> doesn't have to have an implicit 'this graph' in it in order to be 
>>> checkable.
>>
>> True. And my recent counterexamples to Chris's reflect this.
>>
>> The point was that you *have* to have at some point a first-person
>> assertion or else your trust model is not grounded and is just
>> floating in space with nothing but guesses and uncertainty at
>> its periphery.
>
> The grounding comes from a connection between the claimed agent and 
> the actual agent of the graph, not from whether the graph asserts 
> itself or some other graph. If G is signed by Bill and says that Bill 
> asserts H, then whether H is the same as G is irrelevant: Bill asserts 
> H. If H = G then that is fine, and if G=/= H that is fine also.

I think perhaps it would be useful to separate "bootstrapping",
(whether the graph is asserted or not based on statements in the
graph itself), from "grounding", (whether the claims expressed in
the graph are authentic for a given authority based on an
authority and signature specified in the graph itself).

Both require special extra-RDF interpretation/machinery.

>
>>
>>>
>>>> Having to rely on other (potentially infinite number of) other 
>>>> graphs
>>>> to determine the assertiveness of one particular graph seems to
>>>> introduce an horrifically inefficient and burdensome bootstrapping
>>>> mechanism.
>>>
>>> Nobody is proposing that. The only way to check whether any graph is 
>>> asserted is to confirm who said it. You, the reader who is trying to 
>>> figure out who is asserting what,  have to be able to trace a triple 
>>> of the form "A asserts..." back to a graph authored by A (whatever 
>>> exactly "authored" means). I think we agree on this, by the way. The 
>>> only thing that we disagree on is whether or not those three dots 
>>> have to refer to the graph that contains that triple, and I see no 
>>> good reason for that restriction. It doesn't provide a graph-ish way 
>>> to check true assertion unless you can check graph authorship, in 
>>> any case.
>>
>> As I've said elsewhere, ultimately one has to rely on some special
>> extra-RDF mechanism to terminate such inter-graph assertion chains.
>>
>> You say all you have to be able to do is confirm that "A asserts ..."
>> but if the only machinery you have are RDF statements and the RDF
>> MT, you can *never* get there
>
> Indeed. But we can extend the MT to give you a real place to 
> terminate. I thought that was what you wanted me in on the project to 
> do :-)

Naahhh. We were just bored and wanted some excitement... ;-)

Really, though, what we do want is *some* MT (either distinct from or
an extension to the RDF MT) which provides for the special intra-graph
interpretations needed to bootstrap the assertion and authentication
per statements in the graph itself.

Then we get past the termination/grounding problems and can employ
both inter-graph and intra-graph assertions to make determinations
about trust.

Right? Jeremy? Chris?

>>> I agree about KISS, but inserting self-referential constructions 
>>> which break (put severe strain on) the semantics and have to be 
>>> handled by an OWL-incompatible new layer of processing doesn't seem 
>>> KISSish to me.
>>
>> I have yet to see an example that shows that the "bootstrapping
>> interpretation" I propose for authenticating graphs is
>> OWL-incompatible. In fact, I assert that it is not. Every statement
>> relevant to that bootstrapping interpretation/test remains true and
>> valid per both the RDF and OWL MTs.
>>
>> It appears that you see dragons that don't exist and which I've
>> never proposed to exist.
>>
>> If you like, please take any of the examples I've provided, and
>> show how OWL breaks.
>
> Well it was that layer of preprocessing stuff that seemed problematic, 
> for the reasons I suggested. Suppose to take a very simple example, 
> you have OWL statements that a class C has cardinality one and that 
> ex:thisURI and ex:thatURI are both in it and that ex:thisURI is the 
> name of a graph, and that ex:thatURI is asserted. It follows that the 
> graph is asserted, but you won't know that by inspecting the URIs 
> unless you are very OWL-savvy. Now suppose that the graph doesn't have 
> the cardinality info in it but you discover it a month later. Now make 
> the reasoning arbitrarily more complicated.

Right. OK.

So different agents will be able to make different determinations about
certain graphs depending on their ability/inability to do OWL reasoning.

But is that really breaking anything (as opposed to simply making things
more complicated for certain agents -- which OWL does anyway)? ;-)

>>
>>
>> Defining the interpretation/testing of that special information,
>> expressed as statements in the graph, need not intersect nor impact
>> the RDF or OWL MTs.
>
> The issue is how to STOP it being involved with those MTs. I don't see 
> how that would be possible.

Well, my original idea was that agents would be able to consider
graphs in terms of a specialized, narrower MT than RDF/OWL which
was just sufficient to allow them to make determinations about
assertion and authenticity per the special vocabulary.

I.e. the special MT wouldn't presume the full RDF/OWL MTs.

Sort of like having a zoom lens on a camera. To test
assertion/authenticity, you zoom in to apply a narrow specialized MT,
and then for the rest of your processing (if satisfied with the tests
of assertion/authenticity) you zoom out to apply the wider RDF/OWL MTs.

The statements you zoomed in on for the narrow shot are still there
in the wider shot, but some "special" detail may simply not be visible
from the wider view.

Just a thought...
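Mechanically, the "zoom lens" could be as simple as filtering the graph down to the special vocabulary before running the assertion/authenticity tests. A Python sketch, using the hypothetical rdfg: predicates from this thread:

```python
# Narrow view: only the special bootstrapping statements are visible.
SPECIAL_VOCAB = {"rdfg:assertedBy", "rdfg:signature"}

def zoom_in(triples):
    """The narrow shot: just the statements the special MT interprets."""
    return {t for t in triples if t[1] in SPECIAL_VOCAB}

def zoom_out(triples):
    """The wide shot: the whole graph, special statements included."""
    return set(triples)

g = {
    (":B", "rdfg:assertedBy", "ex:Pat"),
    (":B", "rdfg:signature", "..."),
    ("some:resource", "some:property", "some:value"),
}
narrow = zoom_in(g)
print(len(narrow), len(zoom_out(g)))  # 2 3
# The statements in the narrow shot are still there in the wide shot.
assert narrow <= zoom_out(g)
```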

>
> But OK, let's stop quarreling and agree that we need to do an MT job on 
> this stuff. I'll try to do one, OK?

OK.

Patrick

--

Patrick Stickler
Nokia, Finland
patrick.stickler@nokia.com

Received on Tuesday, 16 March 2004 04:20:00 UTC