- From: Xiaoshu Wang <wangxiao@musc.edu>
- Date: Mon, 18 Sep 2006 11:25:01 -0400
- To: "'Miller, Michael D (Rosetta)'" <Michael_Miller@Rosettabio.com>, <public-semweb-lifesci@w3.org>
Michael,

> > Well, how can a computer know my intention about the parts that I
> > don't "use/disagree" with? But, I think, if I disagree with one
> > portion of the ontology, I certainly would not use the other part of
> > the ontology at all, since if I make one contradicting statement, it
> > will invalidate the entire model.
>
> Consider an effort that creates an ontology to wrap the
> English language (or any other language) so that it could be
> reasoned over. This seems a noble objective.

Noble indeed, but I doubt it is ever possible. :-)

> Now if it truly captured the 'essence' of the language, which
> many people only understand overlapping parts of, others,
> perhaps those in a particular scientific domain, have a
> specialized knowledge of a part of the language that others
> don't, different reasoners ought to be able to be created
> that can duplicate this ability of humans to (mostly)
> communicate together at different levels of understanding.

What makes a human language different from a machine language is that the former is semipolymorphic whereas the latter is not. Human language favors creativity, and it communicates through experience. No two people will appreciate the same poem in the same way, because our experiences of life differ. Machine language is different: it must be precise and unambiguous. I remember John Madden once said that SNOMED actually has a larger vocabulary than an English dictionary. I think that illustrates the problem.

> If we can't, I believe this points out a current weakness in
> how we express ontologies and write reasoners. It's
> obviously possible to do, we do it as people all the time.

I think we want machines to be predictable and reliable. Thus, it is not a "weakness". It is our intention, right?

Xiaoshu
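[Editorial aside: the point that "one contradicting statement will invalidate the entire model" is the classical principle of explosion — under classical entailment, an inconsistent set of statements entails every formula. A minimal propositional sketch, using only hypothetical names (`kb`, `entails`, the `p53` literals) and string literals with `~` for negation, not any real OWL reasoner API:]

```python
def negate(lit):
    """Return the negation of a literal: "p" <-> "~p"."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def is_consistent(facts):
    """A set of literals is consistent iff no literal appears
    together with its negation."""
    return all(negate(f) not in facts for f in facts)

def entails(facts, query):
    """Trivial entailment check for literal queries: a consistent
    set entails exactly the literals it contains, while an
    inconsistent set entails everything (ex falso quodlibet)."""
    if not is_consistent(facts):
        return True  # a contradiction makes every query follow
    return query in facts

# A consistent toy knowledge base (names are illustrative only).
kb = {"protein(p53)", "gene(p53)"}
assert is_consistent(kb)
assert not entails(kb, "enzyme(p53)")

# Add one contradicting statement: the whole model is now useless,
# since every query, sensible or not, becomes entailed.
kb.add("~gene(p53)")
assert not is_consistent(kb)
assert entails(kb, "enzyme(p53)")
```

This is why disagreeing with one portion of an ontology while asserting its negation is not a local problem: classical reasoners cannot quarantine the contradiction.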
Received on Monday, 18 September 2006 15:26:03 UTC