
Re: INSEE releases OWL ontology and RDF data for geographical entities

From: Dan Connolly <connolly@w3.org>
Date: Fri, 04 Aug 2006 17:20:27 -0500
To: Alan Ruttenberg <alanr@mumble.net>
Cc: Eric van der Vlist <vdv@dyomedea.com>, Bernard Vatant <bernard.vatant@mondeca.com>, semantic-web@w3.org, public-xg-geo@w3.org, Franck Cotton <franck.cotton@insee.fr>
Message-Id: <1154730027.30621.89.camel@dirk.w3.org>

On Fri, 2006-08-04 at 16:27 -0400, Alan Ruttenberg wrote:
> In thinking about this I too have been leaning towards not using the  
> hash form. In my case I was initially ignorant of the fact that stuff  
> after the hash isn't sent to the server, and I think the server,  
> which both knows much more about the domain, and often has more  
> computational power available to it ought to have some more control  
> over what is returned for such a query.

Well, just know that you're buying a lot of complexity when
you go that way. Make sure it's justified.

I'm pretty sure it's not justified for the case of
the INSEE geo ontology. One static OWL document is plenty.

(By the way... these things change from year to year,
but currently, there's a lot more compute power per
transaction on the client end of a typical web transaction.
The user does one click every few seconds or minutes;
a popular server does thousands of transactions per minute).

> Moreover, it is hard to figure out, at the outset, whether an  
> ontology will be "big" or "small", or what the "natural" chunk size  
> would be.

Perhaps I'm missing something, but everything I can see suggests
that this INSEE geo ontology (the one marked up a la
xmlns:geo="http://rdf.insee.fr/geo/" )
has a dozen terms or so; maybe 50 or 100.
At that size, anything other than a static OWL/RDF file is almost
certainly overkill.

I don't think it's anything like the sort of ontology with thousands
and thousands of terms where the packaging issues arise.

> Another issue is what sort of work the client needs to do in order to  
> extract the part referred by the fragment identifier from the whole  
> ontology. Is this obvious?

Maybe I don't understand your question, but if I do, then yes,
it's obvious what clients do to deal with URI fragment
identifiers. It doesn't involve extraction: the client retrieves
the whole document and resolves the fragment locally, just like
what a loader does when resolving symbols in a library.
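As a sketch of the point above (using Python's standard urllib, which is not part of the original thread): the fragment is split off on the client side and never appears in the HTTP request, so the server only ever sees the document URI.

```python
from urllib.parse import urldefrag, urlsplit

# A hash-style term URI in the INSEE geo namespace
# (the specific term name "Region" is illustrative).
term = "http://rdf.insee.fr/geo/#Region"

# The client strips the fragment before making the request;
# only the document URI goes over the wire.
doc_uri, fragment = urldefrag(term)
print(doc_uri)   # http://rdf.insee.fr/geo/
print(fragment)  # Region

# The HTTP request line is built from the path alone -- no fragment.
parts = urlsplit(term)
print(parts.path)      # /geo/
print(parts.fragment)  # Region
```

This is why the server "never sees" what comes after the hash: resolution of the fragment is entirely the client's job, against the representation it already retrieved.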

> Regarding the issue of round trip, i think that with OWL there is a  
> clear way to provide both.
> When returning just the definition of a specific class or property,  
> in order for it to make sense in OWL, it needs to specify an  
> owl:imports of the full ontology. I would suggest that the full  
> ontology be served at that URI. Clients who retrieve a single  
> definition can then decide whether they want to dereference items one  
> at a time, or to retrieve the whole ontology  by dereferencing the  
> import.

One question remains: once I have imported a "full ontology"
at X, how do I know, for some term T, whether the lookup
of X suffices?

The Dublin Core ontology uses rdfs:isDefinedBy links to
connect each such T with X. There has been some
discussion (e.g. http://esw.w3.org/topic/DereferenceURI )
about adopting this as a norm. We've considered implementing
it in tabulator (http://dig.csail.mit.edu/2005/ajar/ajaw/tab ).

It would be awkward to use owl:imports to relate T to X,
since the domain of owl:imports is owl:Ontology, and not
every such T is an owl:Ontology.
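A minimal sketch of the Dublin Core pattern in Turtle, using a hypothetical term from the INSEE namespace (the class name geo:Region is illustrative, not taken from the actual vocabulary): each term T carries an rdfs:isDefinedBy link back to the ontology document X, so a client that has already fetched X knows no further lookup is needed.

```turtle
@prefix geo:  <http://rdf.insee.fr/geo/#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .

# Hypothetical term T, linked to the ontology document X:
geo:Region a owl:Class ;
    rdfs:isDefinedBy <http://rdf.insee.fr/geo/> .
```

Unlike owl:imports, rdfs:isDefinedBy has no domain restriction, so it works for classes and properties alike.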

> On the server side, of course, there need not really be a document  
> for each definition  - the server can handle packaging up a single  
> definition from the larger file.

Have you found any cases where it was worthwhile to implement
something like that? I'm interested to see any available details.

Meanwhile, just sticking an RDF/OWL file on a web server and
using # is completely straightforward for ontologies of
up to a few dozen terms.

> My 2c,
> Regards,
> Alan

Dan Connolly, W3C http://www.w3.org/People/Connolly/
D3C2 887B 0F92 6005 C541  0875 0F91 96DE 6E52 C29E
Received on Friday, 4 August 2006 22:20:37 UTC