Re: Normalize Ontologies?

--- Eli Israel <Eli@SemanticWorld.Org> wrote:
> I work designing Semantic Models (ontologies) for corporate clients. 
> 
> 
> Often, in the course of training, they ask for best practices in
> modeling - and I have a collection of best practices that I have
> personally come to.  Many of the best practices of data modeling
> carry over, but ontology development is different enough from data
> modeling, and the uses are different enough, that I pause before
> declaring that the best practices can be imported.
> 
> Particularly, I am thinking about normalization.  
> 
> Has any thought been put into normalizing ontologies?

On one level, the proof engines can be used to support such a
normalization. On another level, what would such a normalization look
like?

Let's look at redundant types and the factoring of attributes.
Say I have 50 types and 100 properties in some form of
lattice: what normalizations are applicable?

1. A new base class is introduced to merge two types with the same
properties.
2. A class is subdivided into a base class and two new subclasses,
based on the fact that the two are disjoint.
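The first operation can be sketched in a few lines. This is only an illustration, not anyone's actual tooling: classes are modeled as plain sets of property names, and the class and property names ("Car", "hasEngine", etc.) are hypothetical.

```python
# Sketch of normalization 1: factor the properties shared by two
# classes into a new common base class. Classes are modeled simply
# as sets of property names; all names here are made up.

def factor_base_class(classes, a, b, base_name):
    """Move the properties shared by classes a and b into a new base
    class, and return the new subclass-of relationships."""
    shared = classes[a] & classes[b]
    classes[base_name] = shared
    classes[a] -= shared
    classes[b] -= shared
    return {a: base_name, b: base_name}

classes = {
    "Car":   {"hasWheels", "hasEngine", "hasOwner"},
    "Truck": {"hasWheels", "hasEngine", "hasPayload"},
}
subclass_of = factor_base_class(classes, "Car", "Truck", "Vehicle")
print(classes["Vehicle"])   # the shared properties, now on the base class
print(subclass_of)
```

The second operation is the inverse move: splitting a class once you believe two subsets of its instances are disjoint.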

But how do we determine that two classes are disjoint? By having
enough representative data sets? But that is not a proof that a new
record won't come along and break the model.
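The empirical check being questioned here can be sketched as follows: two classes look disjoint in a sample if no individual is typed with both. The data and names are invented for illustration; as the text says, this is only evidence from the sample, never a proof.

```python
# Empirical disjointness check: two classes appear disjoint in a data
# set if no individual is an instance of both. This is evidence from
# the sample, not a proof -- a later record can still break it.

def appear_disjoint(instances, a, b):
    """True if no individual in the sample belongs to both classes."""
    return all(not ({a, b} <= types) for types in instances.values())

instances = {
    "thing1": {"Car"},
    "thing2": {"Truck"},
    "thing3": {"Car", "RedThing"},
}
print(appear_disjoint(instances, "Car", "Truck"))      # True in this sample
print(appear_disjoint(instances, "Car", "RedThing"))   # False: thing3 is both
```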

This brings me to the point of statistics, data mining, and Bayesian
modelling: is anyone here working on statistical tools for
classifying RDF data sets and extracting prototypical ontologies from
them?
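A crude version of that extraction idea can be sketched by grouping subjects by the exact set of properties they use; each group becomes a candidate prototype class. Real statistical approaches would relax the exact-match requirement (clustering, Bayesian models), and the triples here are hypothetical.

```python
# Crude sketch of extracting "prototypical" classes from RDF-like
# instance data: group subjects by the exact set of properties they
# use. Each signature group is a candidate class. Data is made up.

from collections import defaultdict

def prototype_classes(triples):
    """Collect each subject's property signature, then group subjects
    that share a signature into one candidate class."""
    props = defaultdict(set)
    for s, p, o in triples:
        props[s].add(p)
    groups = defaultdict(list)
    for s, signature in props.items():
        groups[frozenset(signature)].append(s)
    return dict(groups)

triples = [
    ("a", "hasWheels", "4"), ("a", "hasEngine", "v8"),
    ("b", "hasWheels", "6"), ("b", "hasEngine", "diesel"),
    ("c", "hasWings", "2"),
]
for signature, members in prototype_classes(triples).items():
    print(sorted(signature), sorted(members))
```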

Or did I miss your point completely?

mike




=====
James Michael DuPont
http://introspector.sourceforge.net/


Received on Wednesday, 13 August 2003 10:18:15 UTC