- From: Arnaud Le Hors <lehors@us.ibm.com>
- Date: Mon, 12 Dec 2016 21:41:35 +0100
- To: Dean Allemang <dallemang@workingontologist.com>
- Cc: "Eric Prud'hommeaux" <eric@w3.org>, public-rdf-shapes@w3.org
- Message-Id: <OF7CDBDBB8.423A904D-ONC1258087.00716263-C1258087.0071A7BD@notes.na.collabserv.c>
Hi Dean,

So what do you use for this work? Did you implement SHACL Full?

Thanks.
--
Arnaud Le Hors - Senior Technical Staff Member, Open Web & Blockchain Technologies - IBM Cloud


From: Dean Allemang <dallemang@workingontologist.com>
To: "Eric Prud'hommeaux" <eric@w3.org>
Cc: public-rdf-shapes@w3.org
Date: 12/12/2016 09:05 PM
Subject: Re: Why not adopt ShEx? (was Re: Enough already)
Sent by: deanallemang@gmail.com


    There are three validation operations I can imagine wanting to perform
    on an engine that supports SPARQL:

This is a lovely idea, but it isn't realistic from the beginning. We are in
the position of having a vendor work with us, with their current technology
stack. We can't expect them (and we don't have time to wait; once a PoC is
approved, it is usually due yesterday) to adapt their stack to something
new. Of course, once we have a Recommendation in place, it is a lot easier
for us to request compliance with it.

I am interested in a smooth slope to adoption. A SPARQL-based constraint
system is easy to adopt in the current environment. As adoption of the
standard progresses, adherence to acceptance criteria will be a good idea
(and we would be interested in cooperating in developing those).

    3 Extend the SPARQL engine to support SHACL Full's extensibility
    mechanism.

I find this criterion to be particularly interesting. As a standards
organization ourselves, I don't think we are in a position to make use of
this (are we going to publish our own extensions to SHACL? I don't think
so). But from an implementation point of view, this is really awesome. I
don't know that we would need to have candidate PoC vendors pass this one.

Dean


On Mon, Dec 12, 2016 at 2:48 PM, Eric Prud'hommeaux <eric@w3.org> wrote:

* Dean Allemang <dallemang@workingontologist.com> [2016-12-12 10:33-0500]
> > Sorry, but I see zero advantages of ShEx over SPIN/SPARQL.
> >
> > Why would I want to lock my software into a new non-standard syntax
> > with close to no adoption, when I can simply use the query engine to
> > validate constraints?
>
> I couldn't agree more. In FIBO, we have been looking for a constraint
> language to help us make definitions that go beyond the capabilities of
> OWL. I presented these at the inaugural meeting of the SHAPES group a
> couple years ago. It is easy to specify them in SPARQL, and we have done
> so (and I did the same in SHACL, now that there is a write-up of how it
> works).
>
> When we move from our conceptual ontology to something operational for a
> Proof of Concept, some vendor is always involved. That vendor (a
> different one for each PoC) always has an RDF store somewhere in their
> stack. They can always consume OWL (though often through a rule-engine
> interpretation via OWL 2 RL). For rules/constraints that go beyond OWL,
> we have to work out some way to give them the rules. SWRL? Some of them
> can manage that. RIF? Everyone knows what it is, but few can handle it
> out of the box. Other rule systems have varying degrees of uptake.
>
> But one thing all the triple stores can manage is SPARQL. "How about if
> I give you the constraints in SPARQL?" The answer is always, "Oh, sure,
> that works." Because they are all triple stores, and they already do it.

There are three validation operations I can imagine wanting to perform on
an engine that supports SPARQL:

1 Validate ShEx or SHACL Core over the SPARQL protocol.
2 Validate ShEx or SHACL Core over a graph API.
3 Extend the SPARQL engine to support SHACL Full's extensibility mechanism.
For 1 and 2, I think ShEx and SHACL are about equal. Peter's implementation
of SHACL Core used a mixture of graph API and SPARQL but could certainly
have been implemented just in terms of the ubiquitous triplesMatching API.
The ShEx demo compiles ShEx 1 to SPARQL queries to run over the SPARQL
protocol, but of course that didn't support features like told bnodes
(identifying a bnode by label).

I think SPARQL becomes relevant when you want to build a SHACL Full (or
SPIN) engine. You would have to implement a full SPARQL engine *and then*
build the node/shape iterators, templating system, and recursion control
that are required for SHACL Full.

> This doesn't mean that we have to do this in SPARQL, but it does mean
> that if we have that option, we shortcut a lot of work to get to our
> Proofs of Concept.
>
> In the end, I'm just re-iterating what Martynas has said much more
> succinctly, but in the context of a whole industry effort (FIBO) and a
> selection of vendors who want to work with us.
>
> Dean

--
-ericP

office: +1.617.599.3509
mobile: +33.6.80.80.35.59

(eric@w3.org)
Feel free to forward this message to any list for any purpose other than
email address distribution.

There are subtle nuances encoded in font variation and clever layout which
can only be seen by printing this message on high-clay paper.
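To make Dean's proposal concrete (handing a vendor the constraints as plain
SPARQL, which is also roughly what option 1 above amounts to once a SHACL
Core shape has been compiled down), here is a minimal sketch of such a
constraint as a violation query. The ex: names and the specific rule are
invented for illustration and are not actual FIBO terms:

    # Hypothetical closed-world constraint that OWL alone cannot check:
    # every ex:Loan must have at least one ex:hasBorrower.
    # Each row returned is a violation; an empty result means the data conforms.
    PREFIX ex: <http://example.org/ns#>

    SELECT ?this ?message
    WHERE {
      ?this a ex:Loan .
      FILTER NOT EXISTS { ?this ex:hasBorrower ?borrower }
      BIND ("Loan is missing a borrower" AS ?message)
    }

A SHACL Core property shape with sh:minCount 1 expresses the same check
declaratively, and a SHACL-SPARQL (SHACL Full) constraint would embed a
query much like this one with the focus node pre-bound to $this, which is
where the node/shape iteration and pre-binding support Eric describes
comes in.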
Received on Monday, 12 December 2016 20:42:04 UTC