- From: Carlos Viegas Damásio <cd@di.fct.unl.pt>
- Date: Mon, 29 Aug 2005 16:40:26 +0100
- To: "'Michael Kifer'" <kifer@cs.sunysb.edu>, "'Sandro Hawke'" <sandro@w3.org>
- Cc: "'Gerd Wagner'" <wagnerg@tu-cottbus.de>, <public-rule-workshop-discuss@w3.org>, <analyti@ics.forth.gr>, <antoniou@ics.forth.gr>
> > > Yes, it is a weaker kind of negation. My point is that it doesn't use
> > > NAF and is suitable for cases like the pharmacy example in the charter
> > > where you might not want to jump to conclusions.
> >
> > But the use case requires coming to life-and-death conclusions based
> > on accessing only parts of the KB, which seems to require
> > monotonicity. Maybe there can be two kinds of conclusions - strong
> > and weak/defeasible - drawn by two overlapping languages? So you
> > might come to strong conclusions (these two drugs are safe together)
> > or you might come to a weak conclusion (there is no evidence so far
> > that there is any harmful interaction between these drugs). This is
> > like having NEG and NAF, but any conclusions coming from NAF have to
> > remain tainted by weakness. (I'm sure this is all obvious and simple
> > stuff to some of you; please bear with me and others as we learn how
> > to put it together.)

Yes, this is precisely what I said. The weak classical negation does the
trick here; it is monotonic (it is what is called NEG in RuleML).

Just to complement: the semantics and coexistence of both forms of negation
have been well understood in the LP community since the 90s; inference
systems exist, they do not add computational complexity, and they can be
smoothly integrated into the Semantic Web (see my previous posting
http://lists.w3.org/Archives/Public/public-rule-workshop-discuss/2005Aug/0128.html).

Sorry for the annoying line-breaks :-(

Best regards,

Carlos
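[Editor's illustration: the distinction the thread draws between the two negations can be sketched in a few lines of Python. This is not RuleML or any LP system; the predicate names and the drug pairs are hypothetical, chosen only to mirror the pharmacy example. NEG is modeled as explicitly stored negative facts, while NAF is modeled as absence of evidence - so NEG conclusions are monotonic (adding facts never retracts them) and NAF conclusions are defeasible.]

```python
# Sketch of the two negations discussed above (hypothetical example, not RuleML):
# - NEG: explicit classical negation, stored as negative facts (monotonic)
# - NAF: negation-as-failure, inferred from absence of evidence (defeasible)

# Positive facts: known harmful drug interactions.
facts = {("interacts", "aspirin", "warfarin")}

# Explicitly negated facts (NEG interacts): pairs known to be safe.
neg_facts = {("interacts", "aspirin", "vitamin_c")}

def strongly_safe(d1, d2):
    """Strong (monotonic) conclusion: explicit evidence of safety.
    Adding more facts to the KB can never retract this answer."""
    return ("interacts", d1, d2) in neg_facts

def weakly_safe(d1, d2):
    """Weak (NAF) conclusion: no evidence of harm *so far*.
    A later fact ("interacts", d1, d2) would retract this answer,
    so it stays "tainted by weakness" in Sandro's sense."""
    return ("interacts", d1, d2) not in facts

print(strongly_safe("aspirin", "vitamin_c"))  # True: explicit NEG fact
print(strongly_safe("aspirin", "ibuprofen"))  # False: no evidence either way
print(weakly_safe("aspirin", "ibuprofen"))    # True, but defeasible
print(weakly_safe("aspirin", "warfarin"))     # False: known interaction
```

A life-and-death decision would rely only on `strongly_safe`, while `weakly_safe` could at most flag "no known interaction" - exactly the two overlapping conclusion languages proposed in the quoted message.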
Received on Monday, 29 August 2005 15:41:02 UTC