- From: Paola Di Maio <paola.dimaio@gmail.com>
- Date: Mon, 29 Aug 2011 12:43:18 +0100
- To: Enrico Franconi <franconi@inf.unibz.it>
- Cc: semantic-web at W3C <semantic-web@w3c.org>
It's been a while since I studied artificial intelligence, but I remember writing fact-checking routines implemented with rules; at the time they were pretty basic stuff.

The way I did it then was to model the fact-checking routines that humans carry out (some professions have specific rules/protocols for fact checking, such as the legal or forensic professions; others just follow their common sense), and all of these have their limitations, of course. I am sure the concept can be refined ad libitum.

I will send you a link to the paper, and would welcome input/feedback.

P

On Mon, Aug 29, 2011 at 12:17 PM, Enrico Franconi <franconi@inf.unibz.it> wrote:

> On 29 Aug 2011, at 11:44, Paola Di Maio wrote:
>
> > ha ha, no - the reasoner (or the ontology) would need to check its facts
> > via a simple routine I have built before it spews its outcome
>
> This simple routine being?
> --e.
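Just to make the idea concrete, here is a minimal sketch of what such a rule-based check could look like (Python with rdflib; the file name, namespace and the two rules are purely illustrative assumptions, not the routine from the paper):

    from rdflib import Graph, Namespace

    # Hypothetical namespace and input file, for illustration only.
    EX = Namespace("http://example.org/")

    g = Graph()
    g.parse("facts.ttl", format="turtle")

    # Each check encodes one human-style sanity rule as a SPARQL ASK query
    # that matches a *violation*; output is withheld if any violation is found.
    checks = {
        "no individual is its own parent":
            "ASK { ?x ex:hasParent ?x }",
        "an employee must have an employer":
            "ASK { ?e a ex:Employee . FILTER NOT EXISTS { ?e ex:worksFor ?o } }",
    }

    def facts_pass(graph):
        for label, ask in checks.items():
            if graph.query(ask, initNs={"ex": EX}).askAnswer:
                print("fact check failed:", label)
                return False
        return True

    if facts_pass(g):
        print("facts look consistent; safe to publish inferences")

The point is only that the "simple routine" sits between the knowledge base and whatever the reasoner publishes, encoding the kinds of checks a human would make.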
Received on Monday, 29 August 2011 11:43:46 UTC