Re: [Patterns] Materialize Inferences (was Re: Triple materialization at publisher level)

As I understand it, the most concise approach would be to use the
"Equivalence Links" pattern; then, to make those links more
maintainable, possibly use "Link Base" as well.

But the "Materialize Inferences" pattern indicates that there are
forces on the data provider to perform those inferences over
equivalence links at the source, and to make (materialize) the
resulting links explicit?

What are those forces that would lean a developer one way or the
other? They seem to be based on the capabilities of the data
_consumer_.

Do you (always?) provide two options: "Give me the concise equivalence
links" and "Give me all the materialized inferences"?

Thanks


On Fri, Apr 9, 2010 at 9:07 AM, Vasiliy Faronov <vfaronov@gmail.com> wrote:
> I think Dan has spotted a very good rule of thumb for subclass
> materialization with his notion of "mid-level" classes.
>
> Here's another rule of thumb I can think of: materialize inferences that
> map your data to better known, and more widely deployed, vocabularies.
>
> Example. A consulting company could develop a custom ontology for
> describing businesses. Let's say it has a class ex:BusinessEntity which
> has owl:equivalentClass gr:BusinessEntity. It's likely that some LD
> clients will be familiar with the GoodRelations vocabulary but unable or
> unwilling to do reasoning over custom ontologies. In this case,
> explicitly spelling out that every ex:BusinessEntity is also a
> gr:BusinessEntity may be helpful.
>
> --
> Vasiliy Faronov
>
>
>
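To make the trade-off concrete, here is a minimal sketch of what the data
provider would do under "Materialize Inferences" for Vasiliy's example. It
uses plain tuples as triples rather than a real RDF library, and the
instance ex:acme is a hypothetical addition for illustration; the point is
just that the gr:BusinessEntity typing becomes explicit in the published
data, so consumers need no OWL reasoning.

```python
# Sketch of materializing rdf:type triples across owl:equivalentClass
# links at the source, using plain string tuples as triples.

EQUIV = "owl:equivalentClass"
TYPE = "rdf:type"

def materialize_equivalent_classes(triples):
    """Return the input triples plus rdf:type triples inferred
    through owl:equivalentClass links."""
    # owl:equivalentClass is symmetric, so record both directions.
    equiv = {}
    for s, p, o in triples:
        if p == EQUIV:
            equiv.setdefault(s, set()).add(o)
            equiv.setdefault(o, set()).add(s)
    inferred = set()
    for s, p, o in triples:
        if p == TYPE:
            for cls in equiv.get(o, ()):
                inferred.add((s, TYPE, cls))
    return set(triples) | inferred

# Hypothetical source data: a custom class mapped to GoodRelations,
# and one instance typed only with the custom class.
source = {
    ("ex:BusinessEntity", EQUIV, "gr:BusinessEntity"),
    ("ex:acme", TYPE, "ex:BusinessEntity"),
}

materialized = materialize_equivalent_classes(source)
# The triple ("ex:acme", rdf:type, "gr:BusinessEntity") is now explicit.
```

A consumer that only knows GoodRelations can then find ex:acme with a
plain lookup for gr:BusinessEntity instances, at the cost of the provider
publishing the extra (redundant, but harmless) triples.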

Received on Saturday, 10 April 2010 07:28:18 UTC