Re: rdf as a base for other languages

>In message <20010601130755B.pfps@research.bell-labs.com> you wrote:
>
> >For example, suppose that you wanted to represent propositional formulae
> >within RDF.  You might do something like:
> >
> ><rdf:type x OR>
> ><component x y>
> ><component x z>
> >
> ><rdf:type y rdf:Statement>
> ><rdf:subject y John>
> ><rdf:predicate y loves>
> ><rdf:object y Mary>
> >
> ><rdf:type z rdf:Statement>
> ><rdf:subject z John>
> ><rdf:predicate z loves>
> ><rdf:object z Susan>
> >
> ><loves Bill Susan>
> >
> ><rdf:type Bill Person>
> ><rdf:type John Person>
> ><rdf:type Susan Person>
> ><rdf:type Mary Person>
> >
> >You understand this collection of RDF triples to mean that Bill loves
> >Susan and John loves either Mary or Susan, and that they are all people.
>
>Well, not really.  I had to think about which kind of OR you meant.
>Did you mean to just declare a relation (r=y OR z) or did you mean
>to assert something (true=y OR z)?  I (with help from Eric
>Prud'hommeaux looking over my shoulder) made a closed-world
>assumption, noting the absence of <result x r>, and decided you meant
>the latter.
>
>The two kinds of OR are exactly like my two kinds of robot actions:
>does it jump when I tell it about a jump, or does it wait until I
>specifically ask it to perform the jump?
>
>The point is that some RDF vocabulary terms need to be defined as
>"operational" or "performative" for particular agents.  Your OR was a
>performative OR, where the operation was to add a disjunction to the
>knowledge base obtained by reading the text.  That operation could
>only be performed by an agent which understands disjunction, of
>course.

No, no, no. Logical meaning has nothing to do with performatives. 
(This seems to be a common misunderstanding which has surfaced on 
rdf-logic several times.) The basic idea of making assertions is not 
that these will *mean* some kind of inferential behavior in the 
listener (such as adding a disjunction to something). Logic is not 
about programming something to draw conclusions. The idea is that the 
logical assertions express truths about some domain. An agent which 
gets these assertions can utilise the sentences in any of a variety 
of ways, as long as those ways conform to the intended semantics. In 
order to conform to the logical meaning, the receiving agent only has 
to be constrained to perform *some* kind of valid operations on the 
sentences. It might draw conclusions, decide to behave in a certain 
way, check consistency against a model, or indeed just do nothing 
(doing nothing is always valid in a monotonic logic); the logical 
meaning makes no stipulation among these or a host of other 
possible actions. It refers only to truth, and requires only that 
truth be preserved.
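
To put that constraint in symbols (a sketch only, in ordinary
entailment notation, nothing RDF-specific): whatever set KB' of
sentences an agent ends up treating as true after receiving KB, a
valid operation is one where

    KB |= S   for every sentence S in KB'

Taking KB' = KB satisfies this trivially, which is why doing nothing
is always safe.
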
So, to return to Peter's example: if OR is supposed to be what it 
started out as being (before Peter encoded the logical disjunction 
into RDF), then something needs to know what its truth-conditions 
are: how the truth of a disjunction depends on the truth values of its 
component subexpressions. But in the RDF encoding, that information is 
not provided as part of the RDF model. So this encoding is not a 
translation of a disjunction into RDF.
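
To make the missing ingredient concrete, here is a toy sketch (in
Python, purely illustrative; every name in it is mine, and the
evaluator is precisely the extra machinery that the RDF model theory
does not supply) of what an agent would need to bring along in order
to read the OR node the intended way:

# Toy evaluator (hypothetical, not part of RDF): gives Peter's OR node
# truth conditions by brute force. Tuple order follows his
# <predicate subject object> notation; 'facts' is an assumed state of
# the world.

triples = [
    ("rdf:type", "x", "OR"),
    ("component", "x", "y"),
    ("component", "x", "z"),
    ("rdf:type", "y", "rdf:Statement"),
    ("rdf:subject", "y", "John"),
    ("rdf:predicate", "y", "loves"),
    ("rdf:object", "y", "Mary"),
    ("rdf:type", "z", "rdf:Statement"),
    ("rdf:subject", "z", "John"),
    ("rdf:predicate", "z", "loves"),
    ("rdf:object", "z", "Susan"),
]

facts = {("loves", "John", "Susan")}   # suppose this is what is true

def reified(node):
    # Recover the (predicate, subject, object) a reified statement denotes.
    def part(p):
        return next(o for (pred, s, o) in triples if pred == p and s == node)
    return (part("rdf:predicate"), part("rdf:subject"), part("rdf:object"))

def true_in(node, facts):
    # The missing truth conditions: an OR node is true iff some component
    # is; a reified atomic statement is true iff it is among the facts.
    if ("rdf:type", node, "OR") in triples:
        return any(true_in(c, facts)
                   for (p, s, c) in triples if p == "component" and s == node)
    return reified(node) in facts

print(true_in("x", facts))   # True: John loves Susan

Nothing in the triples themselves forces this reading: an RDF
processor that has never heard of OR is perfectly entitled to treat
the first three triples as just more graph.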

You refer to a closed-world assumption, but I fail to see how a CWA 
can specify the truth conditions for disjunction (or anything else).
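
For reference, the textbook formalization is roughly

    CWA(KB) = KB + { not-P : P a ground atom and KB does not prove P }

which licenses *negative* conclusions from the absence of assertions;
it says nothing at all about when a sentence of the form (A or B) is
true.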

Pat Hayes

---------------------------------------------------------------------
IHMC					(850)434 8903   home
40 South Alcaniz St.			(850)202 4416   office
Pensacola,  FL 32501			(850)202 4440   fax
phayes@ai.uwf.edu 
http://www.coginst.uwf.edu/~phayes
