Re: RE: Evidence

Hi Matthias, Vipul, Alan,
just a few comments on your nice idea of acceptance tests, which fits very nicely into my approach to ontology evaluation and assessment (see e.g. "Modelling Ontology Evaluation and Validation", ESWC 2006): acceptance tests can be seen as sets of criteria (based on domain coverage, task adequacy, and project sustainability), and they can greatly change how measures of ontologies are used to select them.

> Hello Vipul,
>
> > I do lack knowledge of the nuances of BFO, but from a 10,000 ft level it is not clear to me the value of modeling a process as an occurrent.
> [...]
> > it fails your acceptance test: understandability...

Dear Vipul, I can understand you here: unusual names do not serve the case for reusable ontologies well. In order to overcome this problem, we have produced a totally renamed, lightly axiomatized version of the DOLCE library, called DOLCE-Ultralite: http://www.loa-cnr.it/ontologies/DUL.owl. Another approach we are following is creating a lattice of small ontologies, called "content ontology design patterns".

> [...]
> > It also fails another acceptance test, doesn't help me to represent the notion of a computational process easily....
>

A computational process can be an occurrent, if conceptualized as something occurring in a computing machine. If you conceptualize it as a process type, to be implemented in a programming language, you are probably thinking of something else, which I would represent as an EventStructure, or a Workflow.
See also the Core Ontology of Software by Daniel Oberle (http://cos.ontoware.org), designed by reusing an older version of the DOLCE-Lite-Plus library.
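To make the type/occurrence distinction concrete, here is a minimal Python sketch. The class names (ProcessType, ProcessOccurrence) are illustrative only, not DOLCE or BFO identifiers: the point is that the workflow is a reusable specification, while each run of it is an occurrent located in time and on a machine.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessType:
    """A reusable specification, e.g. a Workflow or an EventStructure."""
    name: str
    steps: list[str] = field(default_factory=list)

@dataclass
class ProcessOccurrence:
    """An occurrent: one run of a ProcessType on a machine, in time."""
    of_type: ProcessType
    machine: str
    start: float  # seconds since some epoch, for illustration only
    end: float

    def duration(self) -> float:
        return self.end - self.start

# One type, many possible occurrences:
sort_workflow = ProcessType("quicksort", steps=["partition", "recurse"])
run = ProcessOccurrence(sort_workflow, machine="host-1", start=0.0, end=2.5)
print(run.duration())  # 2.5
```

The same ProcessType can have zero or many occurrences, which is exactly why collapsing the two notions into one class causes the confusion discussed above.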

> I have to disagree with you here. The distinction BFO (and also DOLCE) makes between continuants and occurrents, and especially the distinction between processes and objects, is relatively easy to understand; you just need to look at the documentation and the examples for each a bit more. I also think that the distinction between processes and objects is quite practical. The main point in this discussion so far was that the domains and ranges of some relations might be perceived to be unnecessarily restricted to one or the other. Again, this might be true on some occasions, but I don't see too many problems here at the moment.
>

Can you point at some case of unnecessary domain/range restrictions?  
I am very interested in this kind of feedback.

> I also think that you put too much emphasis on the definitions of  
> terms like 'process' in computer science. I have the impression  
> that many people with a strong background in computer science  
> easily mix up such terms from computer science with the general  
> meaning of these terms. Sometimes it appears that it would be  
> better if they would temporarily forget their education in computer  
> science while creating ontologies, because it does more harm than  
> good.
>

Agreed. And this is true of any domain: lawyers will tend to force the meaning of "Norm" on the legal side, physicists will tend to engage everyone with physical forces, etc. However, very reusable ontologies should take care to provide readability and extended documentation, examples, etc.

> 'Process' in the context of software systems has a very special  
> meaning that does not have much to do with what people in general  
> are thinking of as a process. The 'processes' in BFO are much  
> closer to the intuitive understanding of most people. In other  
> words, it is more likely that the diction of software developers  
> fails the acceptance test of understandability, and *not* BFO!
>

Probably so, if the domain is everyday life; but still, we need to be  
clear and assist domain experts to find their own way and utility in  
using reference ontologies.

> By the way, this discussion was triggered by my statement that  
> things like 'binding assay result' are not subclasses of something  
> called 'evidence', but instead 'evidence' is more of a role they  
> play in certain contexts. My emphasis actually was that we should  
> NOT make a bogus subclass statement. I don't think we have to deal  
> with 'evidence roles' at the moment, so we can just omit them. Of  
> course, the discussion we are having right now is still valuable,  
> but I did not intend to trigger this discussion when I made that  
> statement.
>

If you need such roles after all, see my reply to the evidence thread  
with the comment raised by Pat Hayes.
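The role-versus-subclass point can be sketched in a few lines of Python. The names below (BindingAssayResult, EvidenceRole, supports_claim) are hypothetical, chosen only to illustrate the pattern: 'evidence' is modeled as a role an entity plays with respect to some claim, not as a superclass of the entity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BindingAssayResult:
    """The entity itself: just an assay result, nothing about evidence."""
    assay_id: str
    value: float

@dataclass(frozen=True)
class EvidenceRole:
    """The role: this result, taken as evidence for a specific claim."""
    played_by: BindingAssayResult
    supports_claim: str

result = BindingAssayResult("assay-42", 0.87)
role = EvidenceRole(result, supports_claim="protein X binds ligand Y")

# The result is NOT a kind of evidence; it merely plays an evidence role,
# possibly in several contexts, possibly in none.
assert not isinstance(result, EvidenceRole)
print(role.supports_claim)
```

The same result can play distinct evidence roles for different claims without any change to the result itself, which is what the subclass modelling cannot express.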

> cheers,
> Matthias Samwald
>
> ----------
>
> Yale Center for Medical Informatics, New Haven /
> Section on Medical Expert and Knowledge-Based Systems, Vienna /
> http://neuroscientific.net
>
> >
> > And I suspect similar utility issues will arise with the others as well, DOLCE, OpenCyc, ...
> >

See above.

> > > I'm totally with you on us all agreeing, durably. This has worked for BFO in OBI so far.
> >
> > [VK] Yes, it may have, but then the clinical types in HCLS would view it as a siloized approach and issues start coming up when we try to use BFO.
> >
> > > This one I find harder to evaluate, for some reason. As I've said, my bias is that I think distinctions are usually good and we need more rather than less of them. I'm worried that absent having them it's too easy to say things that don't have a consequence, or for which the consequence is not clear. I'd say this is an area that I need to learn more about.
> >
> > [VK] I guess then the acceptance test would be: Are the entailments that are inferred as a result of having these distinctions useful? Hope that makes it clearer ... :)

This is a very good acceptance test, best known as "competency fitness". I support the idea that only the competency-fit parts of a reference ontology should be bought, and this is the idea we are pursuing with the patterns project.

> >
> > > > Acceptance Test 2: Ability to express my information needs without jumping through hoops?
> > > don't know what this means
> >
> > [VK] I had the use case of "computational process" in mind.... Had to go through some gyrations there. I guess the issue there was a lack of clear methodology to apply these constructs.
> >

As said above, we need clear names, very good comments, examples, and  
possibly use cases.

> > > > Acceptance Test 3: Ability to align with other "standards"
> > > not necessarily interesting. Depends on what you think of the other standards.
> >
> > [VK] What I meant here is that there are existing standards within Healthcare, e.g., HL7 - RIM, Snomed, LOINC, etc. So alignment would be good. Also, mis-alignment would be good if it exposes gaps and weaknesses in these existing standards, in that they do not support some use cases.
> >

Standards are not so different from ontologies, at least from the  
assessment viewpoint: they need to pass the same competency tests :)

> > > Ideally you evaluate these by having some problem to solve, then
> > > trying to use the system to solve the problem and then seeing how
> > > well it did. This is hard work and I don't know of any shortcut.
> >
> > [VK] Alan, you have been the driving force of the HCLS demo, and in some sense you are best positioned to come up with an interesting use case from the biological world, probably related to evidence. And then we can work it through.
> >
> > I can put forward a use case from the clinical world. It is centered around the aspect of judgements/assessments a nurse makes in the process of nursing care and the pieces of evidence he/she requires to make that assessment.
> >
> > > Maybe we can bring this back to the main subject: What problems are we trying to solve by recording evidence? What are the ways we would know that we've made a mistake?
> >
> > [VK] IMHO, the key issue is that the process of assessment/judgement be predictable, i.e., given the same set of evidence and the same context, one should be able to reproduce the same conclusions.
> >
> > The other requirement would be the ability to explain why a particular assessment was made. Most statistical reasoning systems are weak in this regard....
> >
> > Some others may come up with other requirements.
> >
> > ---Vipul
> >


Assessment and evidence-based medicine need quite sophisticated modelling, because you need to decouple the observations from the way you frame them into a context, and you can have different contexts, rationales for the assessments, etc. And you may want to reason over all those entities together, not just over observations. Having just roles, objects, and events is not enough; you also need some expressivity for contexts of observations (e.g. a notion of "Situation"), as well as for contexts of interpreting those observations.

A design pattern that I have been using for several years in cases like these (e.g. in clinical trials) is D&S, and it is embedded e.g. in DOLCE-Ultralite (see above). Logically speaking, it is a more complex variety of the N-ary relations pattern published by the SWBPD working group, but it has an explicit axiomatization, and features the two layers required to decouple observations from interpretations.
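A rough Python sketch of this two-layer idea may help. The names below (Observation, Description, Situation) echo the D&S vocabulary but are only illustrative, not the actual DOLCE-Ultralite classes: observations sit untouched in a ground layer, while a Situation selects some of them and frames them under a Description carrying the rationale.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Observation:
    """Ground layer: what was actually measured, never modified."""
    patient: str
    parameter: str
    value: float

@dataclass
class Description:
    """Interpretive schema, e.g. a clinical-trial protocol or guideline."""
    name: str
    rationale: str

@dataclass
class Situation:
    """Context layer: a selection of observations framed by a description."""
    description: Description
    settings: list[Observation] = field(default_factory=list)

obs = [Observation("p1", "systolic_bp", 150.0),
       Observation("p1", "heart_rate", 95.0)]

hypertensive = Situation(
    Description("hypertension-assessment", "BP above protocol threshold"),
    settings=[obs[0]])

# Two situations can frame the same observation under different rationales,
# and we can reason over observations, situations, and descriptions together.
print(len(hypertensive.settings))  # 1
```

This is the n-ary-relation flavour of the pattern: the Situation reifies the link between observations, context, and rationale, so different assessments of the same data can coexist and be compared.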

I wish you good work,
Aldo


_____________________________________

Aldo Gangemi

Senior Researcher
Laboratory for Applied Ontology
Institute for Cognitive Sciences and Technology
National Research Council (ISTC-CNR)
Via Nomentana 56, 00161, Roma, Italy
Tel: +390644161535
Fax: +390644161513
aldo.gangemi@istc.cnr.it

http://www.loa-cnr.it/gangemi.html

icq# 108370336

skype aldogangemi

Received on Thursday, 21 June 2007 21:09:34 UTC