
Re: Handling multiple rdfs:ranges

From: Peter F. Patel-Schneider <pfpschneider@gmail.com>
Date: Tue, 23 Feb 2016 14:00:42 -0800
To: Simon Spero <sesuncedu@gmail.com>
Cc: semantic-web@w3.org, Reto Gmür <reto@wymiwyg.com>
Message-ID: <56CCD68A.5090909@gmail.com>
Yes, this is one way to think of schema.org ranges.  I have asked whether
something like this was the case, but didn't get any response.

Other meanings for schema.org ranges might be more likely.  For example, one
might consider schema.org ranges to be constraining, i.e., that a property
s:p1 with definition
  s:p1 s:rangeIncludes s:cr .
(and no other s:rangeIncludes triples) constrains valid inputs to have a type
triple to s:cr or one of its subclasses for every node that is a value of s:p1.
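The constraining reading can be sketched in a few lines (a hypothetical model
only: triples are plain tuples, the names s:p1 and s:cr come from the example
above, and nothing here is schema.org-specific machinery):

```python
# Sketch of the "constraining" reading of s:rangeIncludes: every value of
# s:p1 must carry an rdf:type triple to s:cr or one of its subclasses.
# Triples are modelled as (subject, predicate, object) tuples.

def subclasses_of(cls, triples):
    """Return cls plus everything declared rdfs:subClassOf it, transitively."""
    result = {cls}
    changed = True
    while changed:
        changed = False
        for s, p, o in triples:
            if p == "rdfs:subClassOf" and o in result and s not in result:
                result.add(s)
                changed = True
    return result

def range_ok(prop, rng, triples):
    """Check that every value of prop is typed to rng or a subclass of rng."""
    allowed = subclasses_of(rng, triples)
    for s, p, o in triples:
        if p == prop:
            types = {c for s2, p2, c in triples
                     if s2 == o and p2 == "rdf:type"}
            if not (types & allowed):
                return False
    return True

data = {
    ("s:cr2", "rdfs:subClassOf", "s:cr"),
    ("ex:a", "s:p1", "ex:b"),
    ("ex:b", "rdf:type", "s:cr2"),
}
print(range_ok("s:p1", "s:cr", data))  # True: ex:b is typed to a subclass
```

Under this reading, dropping the rdf:type triple for ex:b would make the input
invalid rather than licensing an inference that ex:b has type s:cr.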

Which is right?  At one time I was hoping that I could get a public answer,
but it was not to be.


It is definitely not the case that one can just think of schema.org "triples"
as OWL axioms and facts.  The treatment of domains and ranges in schema.org
means that this simple transformation doesn't work.  Even computing some sort
of closure doesn't work, because of strings as things.

peter


On 02/23/2016 01:22 PM, Simon Spero wrote:
> The interpretation of rangeIncludes etc. becomes easier if one hand-waves in
> a simple temporal context.
> 
> A canonical reference oracle (IMing danbri) accepted the assertion that the
> set of rangeIncludes axioms could be considered closed for a given version of
> schema.org.
> 
> The included ranges form an anonymous unionOf; the effective range is the
> conjunction of this anonymous union with all other range assertions applicable
> to the property, whether asserted directly or inherited.
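The effective-range computation described here can be sketched as set
operations (an illustrative model only; the class names are made up, and the
union-then-conjunction structure is the only part taken from the text):

```python
# Each range assertion (the property's own rangeIncludes union, plus any
# inherited ranges) contributes a union of allowed classes.  The effective
# range is the conjunction of those unions: a value is admissible only if
# its types satisfy every union.

def effective_range_check(value_types, range_assertions):
    """range_assertions: list of sets, each the union from one assertion."""
    return all(bool(value_types & union) for union in range_assertions)

# Hypothetical property with an inherited range {Thing} and its own
# rangeIncludes union {Person, Organization}.
assertions = [{"Thing"}, {"Person", "Organization"}]
print(effective_range_check({"Person", "Thing"}, assertions))  # True
print(effective_range_check({"Thing"}, assertions))            # False
```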
> 
> Inferences from assertions in a document using schema.org semantics should be
> made with respect to the version of the schema that existed at the time the
> assertions were made.  This behavior roughly corresponds to the behavior of
> the various sdo sponsors' validators.
> 
> This assumption allowed for relatively simple mapping to OWL (literal types
> were just converted to classes, with magic boxing/unboxing).
> 
> Generating named classes for the anonymous unions and computing the class
> hierarchy revealed a good bit of hidden structure, and also uncovered
> anomalies caused by errors.
> 
> One interesting idiom that initially made no sense until it was explained is
> the use of ranges that are (Text or URL), where URL is a subclass of Text.
> This generally indicates an identifier of some kind, where the URL is in
> principle pointing to a named individual.
> 
> What makes this interesting is that there is no ready way in OWL to restrict
> the range of an object property to be a named individual, since that
> distinction is purely syntactic. It's easy enough to sort of handle this
> poorly (checking for unacceptable anonymous individuals in input, generating
> different individual assertions, and discarding inferred anons in post).  It's
> difficult to handle this cleanly without bringing up a whole raft of UNA
> issues (and CWA issues if cardinality constraints are around).
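The (Text or URL) idiom, and the "sort of handle this poorly" input check, can
be illustrated with a small classifier (purely hypothetical: the idiom is
described above, but the http(s)-scheme test is my assumption about what
distinguishes a URL value from plain text):

```python
# A value in a (Text or URL) range is treated as pointing at a named
# individual when it is an absolute http(s) URL; anything else stays a
# plain Text literal.  Standard library only.
from urllib.parse import urlparse

def classify(value):
    parsed = urlparse(value)
    if parsed.scheme in ("http", "https") and parsed.netloc:
        return "named-individual"  # URL: identifier for a named individual
    return "text"                  # plain Text literal

print(classify("https://example.org/alice"))  # named-individual
print(classify("Alice"))                      # text
```

This is exactly the "syntactic" distinction the paragraph above says OWL
cannot express as a range restriction: it inspects the lexical form of the
value, not anything in the model.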
> 
> Simon
> 
> On Feb 23, 2016 2:53 PM, "Peter F. Patel-Schneider" <pfpschneider@gmail.com
> <mailto:pfpschneider@gmail.com>> wrote:
> 
>     On 02/23/2016 09:12 AM, Reto Gmür wrote:
>     >
>     [...]
>     >> Without any official formal semantics for schema.org or other guidance
>     >> from the schema.org people we are reduced to considering the meaning of
>     >> English phrases on the schema.org website.
>     >
>     > Could it be triples all the way down? Doesn't the justification chain
>     > typically end at some definitions in natural language?
> 
>     Well, maybe.  There is some stuff that has been machine-validated.  (Which
>     then makes the basis some computer code, I guess.)
> 
>     One big reason for formal semantics is to ground on something that is quite
>     precise.  Grounding on simple model theories is useful, I think, because
>     there is very little wiggle room left in the definitions and constructions.
>     There is, as you say, still a natural language component that has to be
>     considered, even if that language is the one mathematicians use to
>     communicate with each other.
> 
>     >> Worse, the phrases used there are generally quite informal.
>     >
>     > This makes it difficult indeed.
>     >
>     > Reto
> 
>     peter
> 
> 
Received on Tuesday, 23 February 2016 22:01:13 UTC

This archive was generated by hypermail 2.4.0 : Tuesday, 5 July 2022 08:45:44 UTC