Re: Update and opportunities with SHACL

Hi all,

I'd like to clarify that Holger's statements are his own and don't 
necessarily reflect the WG's opinion. Our WG has seen a lot of 
controversy, with several constituencies holding very different 
expectations about what users will want and what the solution should 
look like.
Before others jump in and we end up with another endless argument, I'd 
rather try to set the record straight.

Specifically,
> At 
> the beginning of the SHACL WG we were collecting use cases and quickly 
> found that if we want to stay entirely within a high-level language, 
> then this high-level language would have to be equivalent to SPARQL to 
> achieve the required expressivity.

I expect the ShEx people would disagree with that claim. Indeed, they 
took a different approach, developing a semantics that is not defined 
by SPARQL but can be compiled into SPARQL. And while the WG agreed to 
use SPARQL as much as possible to define SHACL's semantics, there is 
no agreement on making SHACL entirely dependent on SPARQL.

> With SHACL, the committee just publishes a Core starter kit plus a 
> web-based extension mechanism. We do not know yet what people will do 
> with these building blocks in the next few years. Anyone can publish 
> their own lego bricks (shapes, templates, functions) for others to 
> reuse by pointing at their URIs.

Again, this does not reflect a resolution of the WG. For what it's 
worth, I would say that this is rather the opposite of the initially 
stated goal of having a solution that addresses 80% of the use cases 
out of the box, with an extension mechanism for the remaining 20%.

The feedback we're getting from the Hydra people tells me that we need 
to publish our current draft more broadly, and quickly, to get wider 
input on whether the direction we have taken is likely to hit the mark.

In the meantime, please, let's be careful not to overuse the word "we", 
especially when addressing an external audience that cannot be expected 
to be aware of the nuances behind one's statements.

Thanks.
--
Arnaud Le Hors - Senior Technical Staff Member, Open Web Technologies - 
IBM Software Group


Holger Knublauch <holger@topquadrant.com> wrote on 08/11/2015 10:52:38 PM:

> From: Holger Knublauch <holger@topquadrant.com>
> To: "'Hydra'" <public-hydra@w3.org>, public-data-shapes-wg@w3.org
> Date: 08/11/2015 10:54 PM
> Subject: Re: Update and opportunities with SHACL
> 
> On 8/12/2015 6:39, Markus Lanthaler wrote:
> 
> >> My hope with SHACL is that we try to build
> >> a truly extensible language, not just wait for the W3C to prescribe
> >> whatever gets hard-coded into the language. Any programming language
> >> supports creating new functions and publishing them as libraries for
> >> re-use.
> > Right. But this is a different beast. You don't use your 
> > "programming language" (SHACL) to create those "libraries" (new 
> > constraints) but a completely different language (SPARQL).
> 
> It is possible to use existing SHACL shapes to define new shapes, by 
> combining them with AND, OR, NOT, sh:valueShape, via subclassing 
> relationships, or even via the sh:hasShape callback. These new shapes 
> get a new URI and can be published as a library.
> 
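> For example, a new shape composed from two published ones might look 
> roughly like this (a sketch only; ex:PersonShape and ex:EmployeeShape 
> are hypothetical, and the exact AND/OR properties may differ in the 
> current draft):
> 
> ex:ContractorShape
>      a sh:Shape ;
>      sh:constraint [
>          sh:or ( ex:PersonShape ex:EmployeeShape )
>      ] .
> 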
> But I do understand your point of layering SHACL on top of another 
> language or two, and a similar issue was raised by Jose in the past. At 
> the beginning of the SHACL WG we were collecting use cases and quickly 
> found that if we want to stay entirely within a high-level language, 
> then this high-level language would have to be equivalent to SPARQL to 
> achieve the required expressivity. So basically we'd need an object 
> model consisting of nodes for all kinds of expressions (=, >, !=, 
> variables etc) plus joining of values similar to Basic Graph Patterns, 
> and other features. For those familiar with SPIN, this was exactly the 
> approach taken by the SPIN RDF Vocabulary (http://spinrdf.org/sp.html).
> 
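> To give a feel for what such an object model implies, even a single 
> filter such as ?age > 18 becomes a small tree of RDF nodes (roughly, 
> following the SPIN serialization; see the sp vocabulary for details):
> 
> [ a sp:Filter ;
>   sp:expression [
>       a sp:gt ;
>       sp:arg1 [ sp:varName "age" ] ;
>       sp:arg2 18
>   ]
> ] .
> 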
> But this would lead to a massively bigger core language, with all the 
> consequences that entails. Furthermore, if this language is so close 
> to SPARQL, why not use SPARQL directly and let systems that wish to 
> operate on objects parse the strings? Finally, this model could not 
> express things that JavaScript can; to close that gap we'd need to 
> extend this high-level vocabulary with typical imperative constructs 
> such as if and while. This is just not realistic.
> 
> >
> >
> >> I anticipate that most deployed server-side SHACL systems will
> >> have full support for the SPARQL extension point.
> > This is the point I'm not so sure about. If that were indeed the 
> > case, why not simplify it to something like Tom mentioned in his 
> > email:
> >
> > On Tuesday, August 11, 2015 1:36 AM, Tom Johnson wrote:
> >> why not jettison the idea of a language and simply use SPARQL and 
> >> focus the Shapes WG output on a lightweight API for
> >> `f(Graph, SPARQL Expression) -> Violations`?
> 
> This is indeed a subset of SHACL - basically what you get if you only 
> use sh:constraint and sh:sparql, e.g.
> 
> ex:MyShape
>      a sh:Shape ;
>      sh:constraint [
>          sh:sparql "..." ;
>      ] .
> 
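> For instance, a filled-in version of the above (the query body and 
> ex:status are purely illustrative, and ?this is assumed to be 
> pre-bound to each focus node by the engine) could be:
> 
> ex:MyShape
>      a sh:Shape ;
>      sh:constraint [
>          sh:sparql """
>              SELECT ?this
>              WHERE {
>                  FILTER NOT EXISTS { ?this ex:status ?status }
>              }
>              """ ;
>      ] .
> 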
> But this doesn't fulfill SHACL's design goal of serving as a 
> high-level language to communicate the structure of data, e.g. to 
> populate input forms. What we created was a compromise between those 
> two worlds.
> 
> >> This means that it
> >> will become quite safe to rely on SPARQL-based extensions, if your 
> >> data is stored in an RDF database.
> > These are a lot of "ifs" if you ask me.
> 
> If someone is willing to help with the JavaScript execution language 
> (see my original post) then all popular templates including the Core 
> vocabulary will likely have a JavaScript executable too. This removes 
> most of the ifs above, because people can then run them client-side 
> against normal JSON objects.
> 
> >
> >
> >> I would like to be in a similar situation
> >> for other setups, esp for client-side SHACL systems. For this to work
> >> though, we need a proper JavaScript mapping from day one.
> > ... which makes interoperable implementations even harder and way 
> > more complex. Now they need to include not only a full-fledged 
> > SPARQL engine but also a JavaScript engine. The statement in your 
> > other mail unfortunately leaves the impression that you don't find 
> > this to be a problem:
> >
> > On Tuesday, August 11, 2015 2:04 AM, Holger Knublauch wrote:
> >> To be clear though: it will be perfectly fine to declare macros 
> >> that do not have a SPARQL body, because they can only be 
> >> implemented in, say, JavaScript. It only means that SPARQL-based 
> >> engines will not be able to make sense of them.
> 
> I personally want SHACL to be a language that I can use to get work 
> done. If our design goal is to have 100% compatibility between all 
> implementations, then SHACL will likely be a very limited language, 
> or (as I said above) a reinvention of SPARQL. In many environments 
> (especially enterprise software) people will be perfectly happy to 
> use SHACL with their own extensions - uploading a model to the public 
> web is not relevant there.
> 
> > In that same email you made an interesting statement that I don't 
> > fully understand:
> >
> >> We are trying our best to cover the most relevant use cases out 
> >> of the box, with the Core vocabulary. But it would be a wasted 
> >> opportunity to not rely on the spirit of the web here.
> > What exactly do you mean by "the spirit of the web" in this context?
> 
> Languages like OWL were designed by committee and include exactly the 
> features that certain people found relevant at a given point in time. 
> There is no extension mechanism in OWL. Nobody is able to publish a new 
> keyword such as owl:IrreflexiveProperty themselves. If someone wants to 
> express that the values of skos:prefLabel must all have different 
> language tags, they cannot do that. See [1].
> 
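> Incidentally, that particular check is straightforward to publish as 
> a SPARQL-based SHACL constraint (a sketch; prefix declarations are 
> omitted and ?this is assumed to be pre-bound to the focus node):
> 
> ex:UniquePrefLabelLanguageShape
>      a sh:Shape ;
>      sh:constraint [
>          sh:sparql """
>              SELECT ?this
>              WHERE {
>                  ?this skos:prefLabel ?label1 .
>                  ?this skos:prefLabel ?label2 .
>                  FILTER (?label1 != ?label2 &&
>                          lang(?label1) = lang(?label2))
>              }
>              """ ;
>      ] .
> 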
> With SHACL, the committee just publishes a Core starter kit plus a 
> web-based extension mechanism. We do not know yet what people will do 
> with these building blocks in the next few years. Anyone can publish 
> their own lego bricks (shapes, templates, functions) for others to 
> reuse by pointing at their URIs. So in addition to publishing 
> ontologies and data, people can publish domain-specific language 
> libraries. These libraries include instructions on how they should be 
> executed - in SPARQL, JavaScript or whatever. Complete SHACL engines 
> would not hard-code anything - just the generic code to take those 
> macro definitions and execute them. Even the Core language 
> (sh:minCount etc) is soft-coded in SPARQL, and could easily have 
> other implementations for various JavaScript set-ups. And if Google 
> invents a successor scripting language to replace JavaScript then the 
> SHACL spec would not need to be changed - it would be just another 
> execution language plugin.
> 
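> To make "soft-coded" concrete, here is a rough sketch of how 
> something like sh:minCount could be published as a template (the 
> template vocabulary shown is illustrative and still in flux; 
> ?predicate, ?minCount and ?this are assumed to be pre-bound by the 
> engine):
> 
> ex:MinCountTemplate
>      a sh:ConstraintTemplate ;
>      sh:sparql """
>          SELECT ?this
>          WHERE {
>              OPTIONAL { ?this ?predicate ?value }
>          }
>          GROUP BY ?this
>          HAVING (COUNT(?value) < ?minCount)
>          """ .
> 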
> HTH
> Holger
> 
> [1] 
> http://composing-the-semantic-web.blogspot.com.au/2010/04/where-owl-fails.html
