Re: Evidence

On Jun 20, 2007, at 12:07 PM, Kashyap, Vipul wrote:

> For instance, if I have to do workarounds for > 50% of the
> classes/properties I represent using BFO/DOLCE/OpenCyc, then there
> is a problem..

Makes sense. Do you have some examples? I would hope each would
include a statement of the problem, the proposed solution, and the
necessary workarounds.

As I elaborated in a previous message, I don't buy the claim that a
workaround is necessary in the case of treating evidence, a process,
as a role. The statement simply doesn't make sense in BFO, and my
call for a definition of what it would mean went unanswered.
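To make the clash concrete, here is a minimal sketch in Python with
rdflib. The IRIs are placeholders, not the official BFO ones, and the
hand-rolled check below stands in for what a DL reasoner would report
as an inconsistency:

    # Sketch of why "a process that is a role" clashes with BFO's
    # top-level split. All IRIs are illustrative placeholders.
    from rdflib import Graph, Namespace, RDF, RDFS, OWL

    BFO = Namespace("http://example.org/bfo#")
    EX = Namespace("http://example.org/ex#")

    g = Graph()
    g.add((BFO.Continuant, OWL.disjointWith, BFO.Occurrent))
    g.add((BFO.Role, RDFS.subClassOf, BFO.Continuant))    # roles are continuants
    g.add((BFO.Process, RDFS.subClassOf, BFO.Occurrent))  # processes are occurrents

    # The proposal, read literally: one evidence item typed as both.
    g.add((EX.evidence1, RDF.type, BFO.Process))
    g.add((EX.evidence1, RDF.type, BFO.Role))

    def superclasses(cls):
        """All (reflexive, transitive) superclasses asserted in the graph."""
        found = {cls}
        for sup in g.objects(cls, RDFS.subClassOf):
            found |= superclasses(sup)
        return found

    # Flag any pair of evidence1's types whose superclasses are disjoint.
    types = list(g.objects(EX.evidence1, RDF.type))
    for i, t1 in enumerate(types):
        for t2 in types[i + 1:]:
            for x in superclasses(t1):
                for y in superclasses(t2):
                    if (x, OWL.disjointWith, y) in g or (y, OWL.disjointWith, x) in g:
                        print("clash:", x, "disjointWith", y)

Running this prints the Continuant/Occurrent clash, which is the sense
in which I say the statement doesn't make sense in BFO.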

> The other issue is what value does this workaround bring us, a  
> question you
> address later, see my response below.
>
>> I don't know what "ontologically sound" means. I would offer that a
>> "best practice" would be to make sure that part of our "acceptance
>> tests" for agreeing that something is useful is that many of us
>> understand what is meant by a construct.


On Jun 20, 2007, at 12:37 PM, Pat Hayes wrote:
> Fair enough, though I would suggest strengthening it and making it
> more empirical: that many of y'all understand *and all agree* what
> is meant by a construct. So a recurrent need to have discussions
> about whether or not a construct applies to a new case may be a
> sign that it is not as well mutually understood as one initially
> thought.

I'm totally with you on us all agreeing, durably. This has worked for
BFO in OBI so far. IMO it has worked better than other cases where
I've been involved in collaborative ontology building, though my
experience doing so is not extensive. You are right about what such
recurring discussion might indicate, but there are other explanations
- that it's work to figure out what we mean when we say something,
that we are navigating the alignment of different people's views of
the same word, or that we are overloading a word and are in the
process of pulling apart the different senses. I don't recall
discussions about continuant versus occurrent, but we are having them
about plan versus process. The jury's out on that one - we decided to
mainly play in process and see what it looks like after we have more
content.

> Another acceptance test I would urge on y'all is to ask, of each  
> construct, what utility it might be. For example, of a proposed  
> distinction, is making this distinction useful (for what?), or does  
> it simply make a distinction, which could be ignored?

This one I find harder to evaluate, for some reason. As I've said, my
bias is that distinctions are usually good and we need more of them
rather than fewer. I'm worried that without them it's too easy to say
things that have no consequence, or whose consequence is unclear. I'd
say this is an area where I need to learn more.

> [VK] I think this is a good start. What would be a set of  
> acceptance tests for
> something like BFO/DOLCE/OpenCyc...?
>
> Acceptance Test 1: Understandability?
Yes.

> Acceptance Test 2: Ability to express my information needs without
> jumping through hoops?
I don't know what this means.

> Acceptance Test 3: Ability to align with other "standards"
Not necessarily interesting. It depends on what you think of the
other standards.

Ideally you evaluate these by picking some problem to solve, trying
to use the system to solve it, and then seeing how well it did. This
is hard work, and I don't know of any shortcut.
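As one hedged sketch of what I mean, again in Python with rdflib:
pick a concrete information need, phrase it as a query, and see
whether the representation can answer it. The evidence terms, sample
data, and the question here are all made up for illustration:

    # Toy acceptance test: encode one information need as a competency
    # question and check that the candidate representation answers it.
    # All IRIs and the sample data are illustrative.
    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/ex#")

    g = Graph()
    g.add((EX.assay1, RDF.type, EX.Experiment))
    g.add((EX.assay1, EX.supports, EX.hypothesis1))

    # Competency question: which experiments support hypothesis1?
    q = """
    PREFIX ex: <http://example.org/ex#>
    SELECT ?exp WHERE { ?exp a ex:Experiment ; ex:supports ex:hypothesis1 . }
    """
    rows = list(g.query(q))
    assert rows, "acceptance test failed: the question cannot be answered"
    print([str(r.exp) for r in rows])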

Maybe we can bring this back to the main subject: What problems are  
we trying to solve by recording evidence? What are the ways we would  
know that we've made a mistake?

(I suspect that there will be a variety of answers to this, and I'm
very curious to hear what people think.)

-Alan

Received on Wednesday, 20 June 2007 18:51:33 UTC