Re: [Patterns] Materialize Inferences (was Re: Triple materialization at publisher level)

From: Patrick Logan <patrickdlogan@gmail.com>
Date: Fri, 9 Apr 2010 09:59:55 -0700
Message-ID: <q2we9447a401004090959l9326ea95z265c21adf30658a8@mail.gmail.com>
To: public-lod <public-lod@w3.org>

As I understand it, the most concise approach would be to use the
pattern "Equivalence Links". Then to make those more maintainable,
possibly use "Link Base".

But the "Materialize Inferences" pattern suggests there are forces on
the data provider to perform those inferences over equivalence links at
the source, and to make (materialize) the resulting links explicit?

What are those forces that would lean a developer one way or the
other? They seem to be based on the capabilities of the data

Do you (always?) provide two options: "Give me the concise equivalence
links" and "Give me all the materialized inferences"?


On Fri, Apr 9, 2010 at 9:07 AM, Vasiliy Faronov <vfaronov@gmail.com> wrote:
> I think Dan has spotted a very good rule of thumb for subclass
> materialization with his notion of "mid-level" classes.
> Here's another rule of thumb I can think of: materialize inferences that
> map your data to better known, and more widely deployed, vocabularies.
> Example. A consulting company could develop a custom ontology for
> describing businesses. Let's say it has a class ex:BusinessEntity which
> has owl:equivalentClass gr:BusinessEntity. It's likely that some LD
> clients will be familiar with the GoodRelations vocabulary but unable or
> unwilling to do reasoning over custom ontologies. In this case,
> explicitly spelling out that every ex:BusinessEntity is also a
> gr:BusinessEntity may be helpful.
> --
> Vasiliy Faronov
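
The materialization Vasiliy describes can be sketched in a few lines. This is only an illustrative sketch, not any particular reasoner: plain tuples stand in for RDF triples, the `ex:`/`gr:` names come from his example, and only one inference step is applied (no full closure).

```python
# Sketch: materializing rdf:type triples over an owl:equivalentClass link.
# If C owl:equivalentClass D and x rdf:type C, add x rdf:type D explicitly,
# so clients that know gr: but won't reason over ex: still see the type.

EQUIVALENT_CLASS = "owl:equivalentClass"
RDF_TYPE = "rdf:type"

def materialize_types(triples):
    """Return the input triples plus inferred rdf:type triples."""
    # owl:equivalentClass is symmetric, so record both directions.
    equiv = {}
    for s, p, o in triples:
        if p == EQUIVALENT_CLASS:
            equiv.setdefault(s, set()).add(o)
            equiv.setdefault(o, set()).add(s)
    inferred = set(triples)
    for s, p, o in triples:
        if p == RDF_TYPE:
            for d in equiv.get(o, ()):
                inferred.add((s, RDF_TYPE, d))
    return inferred

data = {
    ("ex:BusinessEntity", EQUIVALENT_CLASS, "gr:BusinessEntity"),
    ("ex:acme", RDF_TYPE, "ex:BusinessEntity"),  # ex:acme is illustrative
}
materialized = materialize_types(data)
# ("ex:acme", "rdf:type", "gr:BusinessEntity") is now explicit in the output.
```

Publishing the concise form would mean serving only `data`; publishing the materialized form would mean serving `materialized`, which is the trade-off the two options above describe.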
Received on Saturday, 10 April 2010 07:28:18 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 7 January 2015 15:16:05 UTC