
Re: Defining subsets of existing OWL / RDF-S vocabularies in another vocabulary?

From: Bijan Parsia <bparsia@cs.man.ac.uk>
Date: Fri, 28 Sep 2007 17:44:15 +0100
Message-Id: <F6D83510-79E0-4EB1-880B-71CFD54B4787@cs.man.ac.uk>
Cc: "Semantic-Web@W3.Org" <semantic-web@w3.org>
To: mhepp@computer.org, Martin Hepp <martin.hepp@deri.org>

(Trimmed follow ups a bit.)

On 27 Sep 2007, at 19:21, Martin Hepp wrote:

> Dear all:
> Is it valid to locally define a subset of an existing OWL / RDF-S  
> vocabulary in your own vocabulary in order to
> a) avoid ontology imports or
> b) make it simple for annotation tools to display only a relevant  
> subset of that external vocabulary?
> In other words, can I declare some FOAF or Dublin Core vocabulary  
> elements, which are relevant for my annotation task, locally in my  
> new domain vocabulary, instead of adding an import statement for  
> the whole vocabulary in the ontology header?

I'm not sure what you mean by "valid". Or rather there are a number  
of meanings.

There is no restriction on the use of URIs in an OWL or RDF document  
based on the form of the URI. That is, it is perfectly legal to have  
a document accessible at http://ex.org/myCoolOnt
that contains absolutely no classes, properties, etc. with URIs that  
begin with "http://ex.org/myCoolOnt". They could easily all begin  
with "http://ex.org/notMyCoolOnt", or any arbitrary mix. There is no  
requirement that you import the root of the URIs that you use in your  
document (though some people have sometimes said things that might  
lead one to believe they consider this a best practice; I think  
it's a nutty practice to enforce :)).

So, go to town.
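As a sketch of what this looks like in practice (the ontology URI and the choice of FOAF terms here are illustrative, not from the original message), a document served from http://ex.org/myCoolOnt might re-declare just the external terms it needs, with no owl:imports at all:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://ex.org/myCoolOnt> a owl:Ontology .
# Note: no owl:imports of the FOAF document.

# Locally re-declare only the FOAF terms this vocabulary uses,
# keeping their original URIs so the data stays interoperable.
foaf:Person a owl:Class .
foaf:name   a owl:DatatypeProperty .
```

An annotation tool loading only this document then sees exactly the two FOAF terms that are relevant, rather than the whole external vocabulary.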

> If that was okay, it would make it easier to prepare pre-composed  
> blends of relevant ontologies that can be directly used for form- 
> based instance data creation.
> However, I fear that defining an element that is residing in  
> someone else's URI space is not okay, since I (e.g. http:// 
> www.heppnetz.de) have no authority of defining the semantics of an  
> element

Eh. I don't believe anyone has authority over how I use terms in my  
own documents! If I can't reuse, extend, change the meaning in a  
variety of ways and investigate how these changes would affect data  
(for example) then your vocabulary is of much less use to me.

> that is within
> http://xmlns.com/foaf/0.1/, even if what I am saying is  
> consistent with the authoritative definition of the given  
> vocabulary element.
>
> I am assuming that I duplicate the very same specification of the  
> element, i.e., I would assure that my definition just replicates a  
> subset of the official vocabulary. I also abstract from semantic  
> dependencies, i.e., whether it is possible to specify a consistent  
> subset of a given vocabulary (this may not be trivial for an  
> expressive DL ontology, but should be feasible for lightweight RDF- 
> S or OWL vocabularies). Also, the legal point of view (whether I am  
> allowed to replicate an existing specification) is less relevant  
> for me at the moment. I just want to know whether this is an  
> acceptable practice from a Web Architecture perspective.

I am not authoritative on web architecture (I personally think no one  
is and that it's a worry much overplayed).

I think the most that should be said is that you have to balance the  
virtues of reusing existing terms in somewhat different ways with the  
harms. I think in a lot of cases the harms are much exaggerated and  
the virtues greater.

(I would argue that to be used is to potentially alter the meaning  
anyway. There's no truly "safe" reuse. E.g., every assertion that bar  
is dc:creator of foo is incompatible with saying that bar is NOT  
dc:creator of foo. We don't sweat these at all :))

> Any feedback would be very much appreciated!

From a technical perspective, the tricky bit is if you want to  
isolate the meaning of a term you are extracting (i.e., the set of  
axioms which affect entailments involving that term). There is a  
lot of very good work happening here, see:

Usable software in this area is coming within the next 9 months.

These techniques have already been used to accelerate incremental  
classification. We are working on how to use it for ontology  
engineering and integration.

At the moment, these support *monotonic* extensions, i.e., when you  
want to *refine* the meaning. *Altering* the meaning in a variety of  
structured ways is another interesting problem. The module extraction  
stuff can help there, I believe, even now.
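To illustrate the distinction with a hypothetical my: namespace: a monotonic refinement only adds axioms, so every entailment the original vocabulary licensed still holds afterwards:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dc:   <http://purl.org/dc/elements/1.1/> .
@prefix my:   <http://example.org/myVocab#> .

# Monotonic refinement: every my:illustrator is also a dc:creator.
# Nothing previously entailed about dc:creator is lost.
my:illustrator rdfs:subPropertyOf dc:creator .
```

Altering the meaning, by contrast, would mean retracting or overriding some of the original axioms, which is where the structured, non-monotonic problem begins.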

Hope this helps.

Received on Friday, 28 September 2007 16:43:03 UTC
