Re: Core or Lite?

If it is of interest: for DBpedia, the core vocabulary coverage is very low.
It would be premature to say close to 0% without checking all the
different cases, but it is definitely below 10-20%.

The reason is that we base our tests on properties, in the OWL way.
So, instead of saying "class X1 must have property p with range Z1",
we say "property p must have domain X1" and "property p must have range Z1".

The reason is that the extraction from Wikipedia is performed by
independent extractors whose data is aggregated at the end.
So if a property p is supposed to have domain X1 but gets X2, and
range Z1 but gets Z2, a class shape (from the current SHACL spec)
will not be able to catch either of these errors.
We also use class disjointness axioms to validate the types.
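To illustrate the difference, here is a minimal sketch in plain Python. The triples and the `ex:` names are made up for illustration, not actual DBpedia terms. A property-centric check visits every triple that uses p, regardless of what class its subject was actually typed with, so it flags subjects and objects that a shape anchored on class X1 would never reach:

```python
# Toy triple set; all ex:/rdf: names here are invented for illustration.
triples = {
    ("ex:Alice", "rdf:type", "ex:Person"),
    ("ex:Alice", "ex:birthPlace", "ex:Leipzig"),
    ("ex:Leipzig", "rdf:type", "ex:Place"),
    ("ex:Bob", "rdf:type", "ex:Organisation"),  # wrong domain for birthPlace
    ("ex:Bob", "ex:birthPlace", "ex:Nowhere"),  # object is untyped
}

def types_of(node):
    """All rdf:type values asserted for a node."""
    return {o for s, p, o in triples if s == node and p == "rdf:type"}

def check_property(prop, domain, rng):
    """Property-centric check: flag every use of `prop` whose subject is
    not typed `domain` or whose object is not typed `rng`."""
    violations = []
    for s, p, o in triples:
        if p != prop:
            continue
        if domain not in types_of(s):
            violations.append(("domain", s))
        if rng not in types_of(o):
            violations.append(("range", o))
    return sorted(violations)

# A shape anchored on ex:Person never even visits ex:Bob, because ex:Bob
# is typed ex:Organisation; the property-centric test still catches it.
print(check_property("ex:birthPlace", "ex:Person", "ex:Place"))
# -> [('domain', 'ex:Bob'), ('range', 'ex:Nowhere')]
```

A class shape on ex:Person would only validate nodes typed ex:Person, so the mistyped ex:Bob would pass through unchecked.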

Both I and (IIRC) Jerven Bolleman suggested including property-based shapes,
but there was no support from other WG members.
I didn't push it further since I thought SPARQL could cover that gap.
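For completeness, the class-disjointness validation mentioned above can be sketched the same way. This is again plain Python over invented `ex:` names (not real DBpedia axioms or data): any resource typed with both members of a declared disjoint pair is flagged.

```python
# Hypothetical disjointness axiom and instance types; all names invented.
disjoint_pairs = {("ex:Person", "ex:Organisation")}

types = {
    "ex:Alice": {"ex:Person"},
    "ex:ACME": {"ex:Organisation"},
    "ex:Bob": {"ex:Person", "ex:Organisation"},  # typed as both: invalid
}

def disjointness_violations():
    """Return resources typed with both members of some disjoint pair."""
    return sorted(
        node
        for node, ts in types.items()
        for a, b in disjoint_pairs
        if a in ts and b in ts
    )

print(disjointness_violations())  # -> ['ex:Bob']
```

The same check is straightforward to phrase as a SPARQL query over the data (two rdf:type triples on the same subject plus a disjointness axiom), which is the kind of gap SPARQL was expected to cover.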

Best,
Dimitris

On Tue, Mar 24, 2015 at 8:12 PM, Arnaud Le Hors <lehors@us.ibm.com> wrote:

> Well, you wrote: "The pre-built constructs may cover 80% of their use
> cases, but 20% still remains." but ok.
> Thanks for the clarification.
> --
> Arnaud  Le Hors - Senior Technical Staff Member, Open Web Technologies -
> IBM Software Group
>
>
>
>
> From:        Irene Polikoff <irene@topquadrant.com>
> To:        Arnaud Le Hors/Cupertino/IBM@IBMUS, Richard Cyganiak
> <richard@cyganiak.de>
> Cc:        "public-data-shapes-wg@w3.org" <public-data-shapes-wg@w3.org>
> Date:        03/24/2015 10:56 AM
> Subject:        Re: Core or Lite?
> ------------------------------
>
>
>
> Arnaud,
>
> I didn’t say that 80% of all use cases can be addressed declaratively. I
> said that in our experience:
>
>    - Pretty much everyone needs some custom-built templates
>    - Some can use 80% pre-built declarative constructs and 20%
>    custom-built templates
>    - For other users, the percentage of “custom” is much higher and can
>    be over 50%
>    - There is no common 80/20 split across all users – which of the
>    declarative prebuilt templates/statements each uses varies across
>    users, and how much and what exactly they need to custom build also
>    varies
>
> Hope this clarifies.
>
> Irene
>
> From: Arnaud Le Hors <lehors@us.ibm.com>
> Date: Tuesday, March 24, 2015 at 1:42 PM
> To: Richard Cyganiak <richard@cyganiak.de>
> Cc: "public-data-shapes-wg@w3.org" <public-data-shapes-wg@w3.org>
> Subject: Re: Core or Lite?
> Resent-From: <public-data-shapes-wg@w3.org>
> Resent-Date: Tue, 24 Mar 2015 17:48:02 +0000
>
> I totally agree with everything you said here, Richard, and actually don't
> know of anyone disagreeing. If anyone does, I'd like to hear it.
>
> I think the contention is over the place given to the extension mechanism
> and specifically whether it is the foundation on which everything is built
> and depends. By your own words, and in line with Irene's input that 80% of
> use cases can be addressed declaratively, the "expressive fallback" should
> just be that - a fallback - and not the main foundation of the spec. I
> think this is what's getting some people to push back on Holger's proposal.
> --
> Arnaud  Le Hors - Senior Technical Staff Member, Open Web Technologies -
> IBM Software Group
>
>
> Richard Cyganiak <richard@cyganiak.de> wrote on
> 03/24/2015 10:19:13 AM:
>
> > From: Richard Cyganiak <richard@cyganiak.de>
> > To: Arnaud Le Hors/Cupertino/IBM@IBMUS
> > Cc: "public-data-shapes-wg@w3.org" <public-data-shapes-wg@w3.org>
> > Date: 03/24/2015 10:21 AM
> > Subject: Re: Core or Lite?
> >
> > The workshop report says that “SPARQL plays a prominent role in how
> > people tackle the validation problem today” and that “constraints
> > checking can be performed using SPARQL quite effectively”.
> >
> > However, it also says that “SPARQL queries cannot easily be
> > inspected and understood, either by human beings or by machines, to
> > uncover the constraints that are to be respected”, and therefore
> > “SPARQL does not constitute a complete solution”. From this arises
> > the need for a declarative approach to constraints, and this cannot
> > be delivered by SPARQL alone.
> >
> > Hence the consensus opinion that you quote: A declarative high-level
> > vocabulary for the most common cases, with an ability to break out
> > into something like SPARQL for the inevitable more complex cases
> > that can’t be handled by the declarative solution.
> >
> > But it is clear from the workshop report that a declarative solution
> > alone would not satisfy many of the workshop participants, and
> > there’s nothing in it to suggest that complex constraints on the
> > level of SPARQL are uncommon or not typical.
> >
> > We need both; the declarative vocabulary and the expressive
> > fallback. Proposals that only address one (like Resource Shapes, and
> > most of the descriptions of ShEx, and Peter’s original CONSTRAINTS
> > proposal) are insufficient to address the charter.
> >
> > Best,
> > Richard
> >
> >
> >
> > > On 24 Mar 2015, at 15:51, Arnaud Le Hors <lehors@us.ibm.com> wrote:
> > >
> > > I think we're touching on the very point of division in the group:
> > > whether writing complex queries using SPARQL is the normal/common
> > > case or not. From my point of view the workshop clearly didn't
> > > support that point of view, and this is why the charter is written
> > > the way it is.
> > >
> > > Participants agreed that we should have a declarative mechanism a
> > > la OSLC Resource Shapes and, in recognition of the fact that there
> > > is only so much one can do that way, an extension mechanism should
> > > also be available to address complex cases which can't be handled
> > > declaratively. Here is how the report reads
> > > (http://www.w3.org/2012/12/rdf-val/report):
> > >
> > > There was consensus on the need for
> > > 1. Declarative definition of the structure of a graph for
> > > validation and description.
> > > 2. Extensible to address specialized use cases.
> > > 3. A mechanism to associate descriptions with data.
> > >
> > > Note that this doesn't mean that the extension mechanism is any
> > > less normative than the declarative one, but it makes a difference
> > > as to whether the extension mechanism is the centerpiece (or, as
> > > Richard put it, "the most basic construct") or not.
> > > --
> > > Arnaud  Le Hors - Senior Technical Staff Member, Open Web
> > Technologies - IBM Software Group
> > >
> > >
> > > Arthur Ryman <arthur.ryman@gmail.com> wrote on 03/24/2015 05:50:14 AM:
> > >
> > > > From: Arthur Ryman <arthur.ryman@gmail.com>
> > > > To: Holger Knublauch <holger@topquadrant.com>
> > > > Cc: "public-data-shapes-wg@w3.org" <public-data-shapes-wg@w3.org>
> > > > Date: 03/24/2015 05:51 AM
> > > > Subject: Re: Core or Lite?
> > > >
> > > > Holger,
> > > >
> > > > All aspects of SHACL that are described by the spec are normal in the
> > > > sense that they define compliant behavior. You seem to be implying
> > > > that only Part 1 is normal. I don't understand what is gained by this
> > > > use of the term "normal".
> > > >
> > > > I'd also like to clarify that I think we only need one RDF namespace.
> > > > Part 1 defines some of the terms and Part 2 defines the rest of the
> > > > terms.
> > > >
> > > > If we are going to use the term "normal", let's agree on the meaning.
> > > > One meaning is to say that the largest user group is the "normal"
> > > > one. If that is the case then we have clear feedback that the
> > > > majority of users will want a high-level vocabulary for expressing
> > > > common constraints. A smaller, more advanced, set of users will
> > > > write constraints in SPARQL, JS, ShEx, etc.
> > > >
> > > > -- Arthur
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Tue, Mar 24, 2015 at 1:50 AM, Holger Knublauch
> > > > <holger@topquadrant.com> wrote:
> > > > > On 3/24/2015 15:17, Arnaud Le Hors wrote:
> > > > >
> > > > > Holger,
> > > > > What would constitute the "extension mechanism" in your view then?
> > > > >
> > > > >
> > > > > The macro facility could be regarded as an extension mechanism
> > > > > for the core vocabulary. Another extension mechanism is the
> > > > > ability to use other languages such as shx:javaScript. But
> > > > > writing complex queries (e.g. using SPARQL) is not an extension
> > > > > mechanism. Likewise, using pre-defined macros from a 3rd-party
> > > > > template library is not an extension. So the headline of that
> > > > > Part 2 should reflect this differently. That was all I wanted
> > > > > to point out.
> > > > >
> > > > >
> > > > > I have to point out that Arthur's suggestion happens to be
> > > > > very much in line with what the charter calls for:
> > > > >
> > > > > An RDF vocabulary, such as Resource Shapes 2.0, for expressing
> > > > > these shapes in RDF triples, so they can be stored, queried,
> > > > > analyzed, and manipulated with normal RDF tools, with some
> > > > > extensibility mechanism for complex use cases.
> > > > >
> > > > > I don't think it helps to ignore that and try to force people
> > > > > into considering what was meant to be an "extensibility
> > > > > mechanism for complex use cases" the "completely normal use of
> > > > > SHACL".
> > > > >
> > > > >
> > > > > I stick to my statement that it is a completely normal use of
> > > > > SHACL to include SPARQL queries. It is also completely normal
> > > > > for OWL DL users to rely on features outside of OWL Lite. In
> > > > > the draft, SPARQL is part of the official spec. For a large
> > > > > class of users, what you call the "extensibility mechanism"
> > > > > will even be the main feature of SHACL. This includes people
> > > > > who currently use OWL and just want to use SPARQL for the bits
> > > > > that OWL cannot express. This is how TQ customers have operated
> > > > > for many years, and it is also the least disruptive path to
> > > > > adoption if we want SHACL to succeed with current semantic web
> > > > > people.
> > > > >
> > > > > What we have right now in the WG is that some people believe
> > > > > they don't really need SPARQL support, and that the core
> > > > > features are sufficient for most use cases. That's good for
> > > > > them, although it is not backed by much empirical evidence. At
> > > > > this stage we have no idea which features will be most widely
> > > > > used. Claiming that feature 1 is more important than feature 2
> > > > > (and calling feature 2 just an "extension") is premature and
> > > > > makes it more difficult for the supporters of feature 2 to get
> > > > > heard.
> > > > >
> > > > > The wording in the Charter was, in retrospect, unfortunate, but
> > > > > it was difficult to clarify all these nuances in a single short
> > > > > sub-sentence. Back then I was very clear that I would object to
> > > > > any attempts to marginalize the SPARQL support, and I will
> > > > > continue to do so. I hope the group respects the point of view
> > > > > of the SPARQL camp in the same way that we all respect the
> > > > > point of view of those who don't really need SPARQL support.
> > > > > My draft supports both viewpoints.
> > > > >
> > > > > Regards,
> > > > > Holger
> > > > >
> > > >
> >
>



-- 
Dimitris Kontokostas
Department of Computer Science, University of Leipzig
Research Group: http://aksw.org
Homepage: http://aksw.org/DimitrisKontokostas

Received on Tuesday, 24 March 2015 21:08:19 UTC