Re: process to discover and adopt/adapt relationships

Dear John


In our research at VUB STARLab the focus has always been on community-based
ontology management methodologies, with the intention of creating flexible,
reusable, bounded semiotics for very diverse computational needs in
communities and for an unlimited range of pragmatic purposes.

Our Community/Business Semantics Management approach is described  
here, including examples:

http://deleenheer.wordpress.com/business-semantics-management/

As you will read, the basic principle is a repository of reusable semantic
patterns that can be assembled and constrained according to the needs of the
committing applications. By committing to the same semantic patterns, the
applications establish semantic interoperability among themselves. The
semantic patterns are elementary fact types (cf. NIAM/ORM), so their
representation lies close to natural language, though they are fully
machine-interpretable. The same goes for the language we use to express
constraints, called O-RIDL.
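
To give this a concrete flavour, here is a minimal, purely illustrative
sketch in Python (not our actual tooling or syntax; the class, field and
domain names such as FactType, "Sensor" and "Temperature" are hypothetical)
of how an elementary fact type can be read as two terms linked by a role and
a co-role, so that it stays close to natural language:

from dataclasses import dataclass

@dataclass(frozen=True)
class FactType:
    """An elementary fact type (lexon-like pattern): two terms linked by a role/co-role pair."""
    context: str      # community or domain context in which the terms get their meaning
    head_term: str
    role: str
    co_role: str
    tail_term: str

    def as_sentence(self) -> str:
        # Reads close to natural language, which is the point of fact-orientation.
        return (f"In '{self.context}': {self.head_term} {self.role} {self.tail_term} / "
                f"{self.tail_term} {self.co_role} {self.head_term}")

# A small repository of reusable patterns that applications can commit to.
repository = [
    FactType("Oceanography", "Sensor", "measures", "is measured by", "Temperature"),
    FactType("Oceanography", "Platform", "carries", "is carried by", "Sensor"),
]

for ft in repository:
    print(ft.as_sentence())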

The underlying storage framework is based on the so-called DOGMA ontology
framework, which has several distinguishing characteristics that set it apart
from traditional ontology approaches: (i) fact-orientation (cf.
NIAM/Object-Role Modelling); (ii) its grounding in linguistic representations
of knowledge; (iii) the explicit separation of the conceptualisation (i.e.,
the lexical representation of concepts and their inter-relationships,
materialised by so-called lexons) from its axiomatisation (i.e., semantic
constraints); and (iv) its independence from any particular representation
language. The goal of this separation, referred to as the "double
articulation" principle, is to enhance the potential for reuse and design
scalability, and to find a practical balance between ontology reusability and
usability.
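
To make the separation more tangible, here is a deliberately simplified,
self-contained sketch (again with hypothetical names; this is not O-RIDL nor
our actual DOGMA implementation): the shared lexon base carries no
constraints, while each application's commitment selects lexons and adds its
own semantic constraints on top, so the conceptualisation stays reusable.

from typing import Callable, Dict, List, Tuple

# Shared conceptualisation: constraint-free lexon-like tuples of the form
# (context, head term, role, co-role, tail term).
Lexon = Tuple[str, str, str, str, str]

lexon_base: List[Lexon] = [
    ("Oceanography", "Sensor", "measures", "is measured by", "Temperature"),
    ("Oceanography", "Platform", "carries", "is carried by", "Sensor"),
]

class Commitment:
    """Axiomatisation layer: an application's selection of lexons plus its own constraints."""
    def __init__(self, name: str):
        self.name = name
        self.selected: List[Lexon] = []
        self.constraints: List[Callable[[Dict], bool]] = []

    def select(self, lexon: Lexon) -> None:
        self.selected.append(lexon)

    def constrain(self, rule: Callable[[Dict], bool]) -> None:
        self.constraints.append(rule)

    def accepts(self, instance: Dict) -> bool:
        # An instance is acceptable to this application only if every constraint holds.
        return all(rule(instance) for rule in self.constraints)

# Two applications commit to the same shared lexon but constrain it differently;
# the underlying conceptualisation is never touched.
buoy_service = Commitment("buoy-data-service")
buoy_service.select(lexon_base[0])
buoy_service.constrain(lambda i: -5.0 <= i["temperature_celsius"] <= 40.0)

lab_archive = Commitment("lab-archive")
lab_archive.select(lexon_base[0])
lab_archive.constrain(lambda i: "temperature_celsius" in i)

print(buoy_service.accepts({"temperature_celsius": 12.3}))  # True
print(buoy_service.accepts({"temperature_celsius": 99.0}))  # False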

Currently, we are valorizing our BSM technology and methodology in a
spin-off company: http://www.collibra.com

If you have any questions, don't hesitate to ask.
Pieter


On 11 Mar 2009, at 02:39, John Graybeal wrote:

> I have a question of 'best practice' (uh oh).
>
> When you need an ontology for a purpose (like creating a controlled  
> set of terms to describe a domain area, let's say for  
> authoritatively populating a drop-down list), there are two stages  
> of work: [1] Find what exists. [2] If what exists doesn't fit the  
> need, subset or expand it.
>
> For step [1], I go to Watson and Swoogle and Google-('.owl' only),  
> enter some appropriate search terms, and try to weed through the  
> morass of sources that result, eliminating mail lists and other  
> irrelevancies.
>
> What else should I be doing to have a reasonable shot at finding the  
> almost perfect, already existing ontology?
>
> [2] Now, inevitably, there are many ontologies that have some piece  
> of what I want, and a few that have way more than what I want.  Now  
> what?  I can (a) piece together parts of each ontology (means  
> importing them all?), (b) use one of the mother-of-all-ontologies or  
> vocabularies (cyc, wordnet, others?) as is (means importing the  
> whole thing?), (c) create a new ontology that associates concepts to  
> those in other ontologies (either sameAs or more subtle  
> relationships), or (d) some combination of the above.
>
> It looks to me like if I want to provide a specific list of terms,  
> that don't overlap, have clear definitions, are unambiguous, and  
> fill the domain space, I will almost always have to create that  
> entire list on my own (then I can map it to other concepts if I want  
> to be a good boy).
>
> Even if I find a very solid ontology that meets these criteria,  
> inevitably it has more or fewer concepts than I want to show the  
> users of my ontology. So presenting just the right variation of the  
> ontology requires...another ontology.  (I guess extension can be  
> done by importing, and adding the few extra terms. But subsetting  
> seems awkward, unless one can import and _deprecate_ a few terms?)
>
> Is there something fundamental I've missed in the best practices and  
> technologies that people are using for this use case?  Or are we  
> inevitably in a world full of duplications, possibly with some  
> extensions and specializations?
>
> John
>
> --------------
> John Graybeal   <mailto:graybeal@mbari.org>  -- 831-775-1956
> Monterey Bay Aquarium Research Institute
> Marine Metadata Interoperability Project: http://marinemetadata.org
>
>

Pieter De Leenheer

Semantics Technology & Applications Research Laboratory
Vrije Universiteit Brussel

T +32 2 629 37 50 | M +32 497 336 553 | F +32 2 629 38 19

See my blog about community/business semantics management: http://www.pieterdeleenheer.be

Received on Thursday, 19 March 2009 12:41:10 UTC