RE: questions on assertion

> In this case, I am trying to figure out how the RDF model theory would
> cope with expressing the following.
> 
> 1. my car is red

<rdf:Description rdf:about="urn:autos:my-car" rdf:ID="Statement1">
  <ex:Color>Red</ex:Color>
</rdf:Description>
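
(Putting rdf:ID on the property element both asserts the triple and names
the reified statement, so "#Statement1" below refers to the statement
"my car is red" rather than to the car.  Spelled out, the reification part
is roughly the following; the predicate URI assumes ex: expands to the
made-up namespace http://example.org/schema#:)

<rdf:Statement rdf:ID="Statement1">
  <rdf:subject rdf:resource="urn:autos:my-car"/>
  <!-- hypothetical URI: whatever ex:Color actually expands to -->
  <rdf:predicate rdf:resource="http://example.org/schema#Color"/>
  <rdf:object>Red</rdf:object>
</rdf:Statement>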


> 3. X is not true.

<rdf:Description rdf:about="#Statement1" rdf:ID="Statement3">
  <ex:Veracity>False</ex:Veracity>
</rdf:Description>

> 4. my car has four wheels

<rdf:Description rdf:about="urn:autos:my-car" rdf:ID="Statement4">
  <ex:WheelsCount>4</ex:WheelsCount>
</rdf:Description>

> 6. X is an assertion made by P

<rdf:Description rdf:about="#Statement1" rdf:ID="Statement6">
  <dc:Author>P</dc:Author>
</rdf:Description>

> 7. Y is an assertion made by Q

<rdf:Description rdf:about="#Statement4" rdf:ID="Statement7">
  <dc:Author>Q</dc:Author>
</rdf:Description>

> 1. If we interpret an assertion to mean "I believe 'my car is red' is
> true."

More like "Someone asserted that ('my car is red' is true)".

> "I believe ["I believe 'my car is red' is true"] is false"
> Which is a paradox.

Someone asserted that (asserting ('my car is red' is true) is false).

> So the problem I am getting at is how one can say, without creating a
> logical inconsistency, that one believes a statement in rdf data is false?

Just because you have two conflicting assertions does not mean that you
have chosen to believe either one of them.

> This is in my view a real problem for applications involved in
> reputation and trust.

Actually, I think that trust in metadata depends on people being able to
make statements like number 3.  This is exactly what is needed to allow
you to choose what assertions to trust.  For example, assume that your
list has a few more assertions:

8. Statement 3 is an assertion made by R
9. Statement 6 is true
10. Statement 7 is false
11. Statement 8 is true
12. Statement 9 is made by your tamper-proof digital signature checker
13. Statement 10 is made by your tamper-proof digital signature checker
14. Statement 11 is made by your tamper-proof digital signature checker
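
In the same RDF/XML style as 6 and 7, statements 8, 9, and 12 could look
something like this (the literal names for R and the signature checker
are just placeholders):

<!-- 8: Statement 3 is an assertion made by R -->
<rdf:Description rdf:about="#Statement3">
  <dc:Author rdf:ID="Statement8">R</dc:Author>
</rdf:Description>

<!-- 9: Statement 6 is true -->
<rdf:Description rdf:about="#Statement6">
  <ex:Veracity rdf:ID="Statement9">True</ex:Veracity>
</rdf:Description>

<!-- 12: Statement 9 is made by your signature checker -->
<rdf:Description rdf:about="#Statement9">
  <dc:Author>my signature checker</dc:Author>
</rdf:Description>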

Now, if you can determine that R is someone you routinely trust, you can
discard assertion #1, and store some internal information about person P
so that you know to be suspicious of him in the future.  Else, if *you*
happen to be person P, you can disregard R and put a ding against him in
your reputation database.  And if P and R turn out to be the same
person, you can just discard both assertions, since they are coming from
a schizophrenic (or you could choose to take the most recent one,
assuming the guy changed his mind or repainted the car).
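
(If you wanted to keep that "ding" in RDF rather than in a private
reputation database, a sketch using a made-up ex:TrustLevel property and a
made-up URI for P might be:)

<rdf:Description rdf:about="urn:people:P">
  <!-- hypothetical property: remember to treat P's assertions with suspicion -->
  <ex:TrustLevel>Suspect</ex:TrustLevel>
</rdf:Description>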

> 2. If rdf statements implicitly carry assertion, how can I specify the
> author of the assertion? That is - does the assertion implied by 1. also

As I showed in 6 and 7, assuming your subsystem can assert 9-14 for you.

Also, note that it is not necessary to decorate every assertion like
this.  You could wrap assertions in collections -- this is how Klyne
Contexts work.
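
For instance, the old M&S 1.0 bagID shorthand gives you a Bag containing
the reified statements of a Description, and you can then attribute or
judge that Bag as a unit.  This is only a rough illustration of the
collection idea, not Klyne's actual context syntax, and "CarClaims" is a
made-up name:

<!-- rdf:bagID names a Bag whose members are the reified statements below -->
<rdf:Description rdf:about="urn:autos:my-car" rdf:bagID="CarClaims">
  <ex:Color>Red</ex:Color>
  <ex:WheelsCount>4</ex:WheelsCount>
</rdf:Description>

<!-- one statement about the whole collection -->
<rdf:Description rdf:about="#CarClaims">
  <dc:Author>P</dc:Author>
</rdf:Description>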

Received on Monday, 8 July 2002 18:39:49 UTC