Re: Social Meaning and RDF

>Pat,
>
>Wow, that's a mouthful ... glad you said it ... especially the part about 
>   "Think of RDF as more like a simple, formalized,
>    sharply defined  "natural language" for software agents, .."
>
>.. which is something that I have believed from the very start ... :-)

Hey, we agree on something!

>
>However I continue to be troubled by: 
>       And this is a real constraint, not just a form of words:
>       for example, RDF really is monotonic, and that imposes
>       some nontrivial conditions on *any* notion of RDF
>       meaning, social or otherwise."
>I can't seem to wrap my pea brain around the idea that this 
>constraint is useful in a nonmonotonic social world where truths are 
>always popping in and out of existence.

Well, it would be nice if there were a way to handle this in-and-out 
complexity, but I don't see one on the horizon at the moment, and the 
ones that I think will be immediately useful will in fact fit within 
a monotonic overall framework (e.g. time/date stamping, so that one 
can say: OK, this was true on Tuesday, and it's now Wednesday; and 
then whether to believe it or not is a judgement call which has to be 
made outside the formal MT.)
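
For instance, one could date-stamp an assertion using RDF 
reification - a rough sketch only, in N3, with made-up example URIs 
(ex:assertedOn is a hypothetical property, not part of any standard 
vocabulary):

    @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
    @prefix ex:  <http://example.org/> .

    # Reify "the door is open" and record when it was asserted.
    ex:stmt1 rdf:type rdf:Statement ;
             rdf:subject   ex:door ;
             rdf:predicate ex:status ;
             rdf:object    ex:open ;
             ex:assertedOn "2003-02-04"^^xsd:date .

The claim "this was asserted on Tuesday" stays true on Wednesday and 
forever after; whether to still believe the door is open today is 
exactly the judgement call that lies outside the formal MT.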

>  From a layman's perspective could you elaborate on what this 
>constraint really entails?

The basic point is that you can't rely on some piece of information 
being *missing* in order to conclude that it is false. So if I tell 
you that I have a left arm, you shouldn't conclude that I don't have 
a right arm, just because I didn't happen to mention it. There have 
been some discussions recently on rdf-comments about the container 
vocabulary, for example, that are relevant to this.
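
To make that concrete, suppose a graph says only this (made-up URIs 
again, in N3):

    @prefix ex: <http://example.org/> .

    # All the graph says about me:
    ex:pat ex:hasLimb ex:leftArm .

Nothing about ex:rightArm follows from this, one way or the other; 
and, this being monotonic, every conclusion drawn from this graph 
remains valid if someone later adds

    ex:pat ex:hasLimb ex:rightArm .

New triples can only add conclusions, never retract them.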

Now, in richer languages like OWL, you can say things like: this is 
the class of all my limbs, and it's got a left arm in it but no right 
arm, and then you can conclude that I don't have a right arm. But the 
point is that you didn't infer this just from my failure to mention 
it: the description of me actually asserted positively that these 
were all my limbs, and OK, then you can conclude monotonically that 
if something isn't listed, then it's not there. In other words, you 
can express closed worlds, but you can't make the closed world 
*assumption*. In fact you (the reasoner) can't make *any* assumptions 
that aren't sanctioned by the model theory. (By 'can't' here I don't 
mean that it's impossible or forbidden, but that the meaning spec 
says that if you do that, then YOU are making the assumption, not the 
logic itself.) The person writing the information has to be explicit 
about saying that some list is a complete list, or that some class 
has all the elements listed (e.g. using owl:oneOf).
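
A rough sketch of such a closure in OWL (hypothetical URIs; and note 
that OWL makes no unique-name assumption, so the differentFrom 
triples are needed to rule out ex:rightArm being one of the listed 
limbs under another name):

    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    @prefix ex:  <http://example.org/> .

    # Explicitly close the class: these, and only these, are my limbs.
    ex:MyLimbs a owl:Class ;
       owl:oneOf ( ex:leftArm ex:leftLeg ex:rightLeg ) .

    ex:rightArm owl:differentFrom ex:leftArm , ex:leftLeg , ex:rightLeg .

Now a reasoner can conclude, quite monotonically, that ex:rightArm is 
not in ex:MyLimbs - not from my silence, but from what the 
description positively asserted.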

>How should we think about this as we are reading and writing RDF assertions ?
>I'm looking for answers to cases like this one:
>
>Assume:
>AgentA publishes an ontology where "Birds type

subClassOf is what you need here.

>FlyingThings".
>AgentB publishes an ontology where "Penguins type Birds" and 
>"Penguins type NotFlyingThings".
>AgentC reads both ontologies, adding the entailment "Penguins type 
>FlyingThings" according to the MT. 
>How is AgentD to remove the contradiction by communicating to the 
>aggregated ontology in RDF?

Well, not sure what you mean by 'remove'. As you have set it up, 
there IS a contradiction (assuming that NotFlying really means not 
Flying, which is strictly speaking beyond the RDFS MT, but OK, we can 
do this kind of thing in OWL). D can always choose not to believe 
everything it is told, of course: for example, D might have a 
strategy which says, always believe the more specific fact and reject 
the general. Or it might have some more sophisticated strategy, such 
as making a judgement call about the relative trustworthiness of the 
sources. I anticipate that much of this will be done by relatively 
simple algorithms most of the time, and that contradictory ontologies 
will cause trouble, and that social pressures will tend either to 
eliminate the contradictions or to lead to the emergence of subgroups 
who mutually trust each other. But we will see.
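
Spelling your example out (with subClassOf in place of type, as 
above, and made-up URIs), the aggregated graph would contain 
something like:

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix ex:   <http://example.org/> .

    # From AgentA:
    ex:Birds rdfs:subClassOf ex:FlyingThings .

    # From AgentB:
    ex:Penguins rdfs:subClassOf ex:Birds ;
                rdfs:subClassOf ex:NotFlyingThings .
    ex:NotFlyingThings owl:complementOf ex:FlyingThings .

    # Any penguin now falls into both FlyingThings and its
    # complement, which an OWL reasoner can flag as inconsistent:
    ex:tweety a ex:Penguins .

Detecting that clash is mechanical; deciding which source to reject 
is D's strategy question, outside the logic itself.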

>If this is impossible (and I believe that it is), then how can RDF 
>even be used for aggregating knowledge?

Look, RDF isn't a miracle cure. It can be used to transmit and store 
simple information and to draw simple conclusions, that is all. As a 
consequence, it can be used to DETECT some contradictions. OK, not 
much, but even that would be very useful. Actually fixing the 
detected contradictions is a research problem which ultimately gets 
us into AI and God knows what kinds of complexities; we can't solve 
those problems - right now we don't even know clearly what problems 
there will be of this general kind, in fact. But what we can do is 
set up a basic infrastructure that at least enables the process of 
social testing to get started, so that people can start experimenting 
and playing with these new ways of using information on the web. It 
doesn't have to be perfect in order to be damn useful: just being 
able to do a little bit of automatic inference from partly-reliable 
but heterogeneous sources would be a huge step forward.

Pat

>... my previous readings on this topic are  here:
>http://robustai.net/papers/Monotonic_Reasoning_on_the_Semantic_Web.html
>
>Seth Russell
>http://radio.weblogs.com/0113759/
>
>
>--- in response to this context  ----
>
>pat hayes wrote:
>
>>
>>Peter, you and I both have a background in AI/KR, so I think I know 
>>where you are coming from. We both have been steeped in the need to 
>>avoid the gensym fallacy and the concomitant dangers of thinking 
>>there is more in one's KR than there really is there, and the use 
>>of an MT to provide the needed rigor to resist such errors. But 
>>that is all to do with modelling belief: representing the private 
>>mental state of a believing agent. The SW really is a different 
>>situation. RDF isn't just going to be used by agents to think 
>>private thoughts with, it's not a Fodorian Language of Thought; if 
>>anything, it's more like a language for agents to talk to one 
>>another with. You know the classic 'grounding problem' for formal 
>>KR semantic theories? Well, RDF in use is grounded by its 
>>surrounding context of use, and it may be only a small part of 
>>something much larger, which is representing other information in 
>>other ways. Think of RDF as more like a simple, formalized, sharply 
>>defined "natural language" for software agents, something whose 
>>chief function is for communication, not for thinking with; and 
>>then observe that the software agents are also working in a context 
>>which involves human and social 'agents'. We really do not know 
>>what aspects of meaning might arise in the uses of RDF in such 
>>contexts, and we don't really need to know: but we DO need to say, 
>>normatively, that whatever they are, they ought to at least 
>>*respect* the minimal constraints on meaning described by the 
>>formal MT, so that the use of inference processes which depend on 
>>these constraints does not destroy or distort these social or 
>>contextual aspects of meaning. And this is a real constraint, not 
>>just a form of words: for example, RDF really is monotonic, and 
>>that imposes some nontrivial conditions on *any* notion of RDF 
>>meaning, social or otherwise.
>>
.....
-- 
---------------------------------------------------------------------
IHMC					(850)434 8903 or (650)494 3973   home
40 South Alcaniz St.			(850)202 4416   office
Pensacola              			(850)202 4440   fax
FL 32501           				(850)291 0667    cell
phayes@ai.uwf.edu	          http://www.coginst.uwf.edu/~phayes
s.pam@ai.uwf.edu   for spam

Received on Thursday, 6 February 2003 15:55:59 UTC