Re: fact checking for semantic reasoners

On 29 Aug 2011, at 14:31, Paola Di Maio wrote:

> I guess that your 'fact checking routines' are what are normally called ontologies.
> 
> Nope. An ontology is not a routine, to my knowledge (or at least, that's the first time I hear this definition; can you please point me to a reference otherwise?), however it can be referenced by a routine.

The routine - as I said below - is the consistency check of the data wrt the ontology.
 
> The checking part would be the consistency check of the available data with the ontologies themselves.
> 
> sure, so the consistency checks in ontologies - back to my question I guess - are they done internally (internal consistency) or using other resources on the web? I suppose it's easy to validate a fact (as you note) against an internal schema; it's when this schema is compared to different schemas that the consistency vacillates. That's what I asked; let me rephrase:
> 
> are the consistency checks in ontologies validated by supporting evidence, or how?
> 
> I suppose I am thinking of the situation where the reasoning spans many ontologies with conflicting axioms

You are now mixing apples and oranges, but let's consider this latest question.
There are a few aspects here:
1) Is the ontology faithful? To understand that, you should resort to all possible methodologies for ontology design, and borrow all the ideas from the quite old and consolidated discipline of information systems design via conceptual models (e.g., ER, UML, ORM, etc.).
2) Is your data consistent with your ontologies? This is a standard service provided by ontology systems (ABox consistency); see the sketch after this list.
3) How to repair the data or the ontologies in case of inconsistency? In this case, there is plenty of recent work in the ontology field on explanation, repair, etc.
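
To make (2) concrete, here is a minimal sketch of an ABox consistency check. It assumes the Python owlready2 library (which bundles the HermiT reasoner); the IRI, the classes and the individual are made-up toy examples, not a real vocabulary:

    # Assert an ABox fact that contradicts the TBox and let the reasoner
    # flag the inconsistency. Needs owlready2 plus a Java runtime for HermiT.
    from owlready2 import (Thing, AllDisjoint, get_ontology, sync_reasoner,
                           OwlReadyInconsistentOntologyError)

    onto = get_ontology("http://example.org/toy.owl")   # hypothetical IRI

    with onto:
        class Person(Thing):  pass
        class Company(Thing): pass
        AllDisjoint([Person, Company])    # TBox: Person and Company are disjoint

        acme = Company("acme")
        acme.is_a.append(Person)          # ABox fact contradicting the TBox

    try:
        sync_reasoner()                   # run HermiT over TBox + ABox
        print("data is consistent with the ontology")
    except OwlReadyInconsistentOntologyError:
        print("ABox inconsistency: the data contradicts the ontology")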

> Not to mention that it is impossible to check the consistency of all the available data in the semantic web with the ontologies.
> 
> it's called due diligence; we need to fact-check everything all the time, and we do it 'manually'.

Nope, it is ABox checking.

> it would be nice to have such a service

we do.

> you may have heard that there is a concern that 'there is a lot of rubbish on the internet'; hopefully something can be done to increase the confidence in web-based information.

As I said, the tools are there. The point is whether they are the right tools. I guess that the notion of global consistency is unachievable, and therefore you need a different notion of "global consistency", perhaps à la peer-to-peer. On the other hand, the above tools are perfect for checking local consistency.
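
For instance (again a made-up sketch, with the same hypothetical owlready2/HermiT setup as above): two sources that are each perfectly consistent in isolation can become inconsistent once their triples are merged, which is exactly why the global notion needs rethinking:

    # Local vs. global consistency: each "peer" ontology is harmless on its
    # own, but the union of their triples is not.
    from owlready2 import (Thing, AllDisjoint, get_ontology, sync_reasoner,
                           OwlReadyInconsistentOntologyError)

    peer_a = get_ontology("http://example.org/peer-a.owl")  # hypothetical IRIs
    peer_b = get_ontology("http://example.org/peer-b.owl")

    with peer_a:                      # peer A: a small TBox plus one fact
        class Person(Thing):  pass
        class Company(Thing): pass
        AllDisjoint([Person, Company])
        acme = Company("acme")

    with peer_b:                      # peer B: a single fact, fine on its own...
        acme.is_a.append(Person)      # ...but it contradicts peer A's TBox

    try:
        sync_reasoner()               # a global check over both peers' triples
        print("globally consistent")
    except OwlReadyInconsistentOntologyError:
        print("each peer is locally fine, but together they are inconsistent")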

cheers
--e.

> 
> 'it's impossible' is not how we got this far (on the web, same as in space)
> 
> ...rather 'can do, must work out how to' 
> 
> which probably means I am not expecting a collaboration on this project with you or your research group for the moment ... :-)  oh well, hopefully some other time .....
> 
> 
> cheers
> 
> P 
> --e.
> 
> On 29 Aug 2011, at 13:43, Paola Di Maio wrote:
> 
>> It's been a while since I studied artificial intelligence, but
>> I remember writing fact-checking routines implemented with rules; at the time
>> they were pretty basic stuff
>> 
>> The way I did it at the time was to model the fact-checking routines
>> that humans carry out (some professions have specific rules/protocols for fact checking, such as the legal or forensics professions, others just follow their common sense),
>> and all have their limitations, of course
>> 
>> 
>> I am sure the concept can be refined ad libitum
>> 
>> will send you a link to the paper, and would welcome input/feedback
>> 
>> 
>> P
>> 
>>  
>> 
>> On Mon, Aug 29, 2011 at 12:17 PM, Enrico Franconi <franconi@inf.unibz.it> wrote:
>> 
>> On 29 Aug 2011, at 11:44, Paola Di Maio wrote:
>> 
>> > ha ha, no - the reasoner (or the ontology) would need to check its facts via a simple routine I have built, before it spews out its outcome
>> 
>> This simple routine being?
>> --e.
>> 
> 
> 

Received on Monday, 29 August 2011 12:46:51 UTC