Re: shapes-ISSUE-130 (rdf dataset assumption): SHACL should not assume that the data graph is in an RDF dataset [SHACL Spec]

The spec draft currently says

The definition of some constraints requires or is simplified through access to
the shapes graph during query execution. SHACL validation engines MAY prebind
the variable $shapesGraph to provide access to the shapes graph.

This indicates that some constraints require access to the shapes graph during
query execution.

The resolution of ISSUE-47 indicates that access to the shapes graph during
query execution is an optional feature.  The SHACL spec therefore needs to
ensure that every constraint that requires access to the shapes graph is
optional.  The spec should go further and state explicitly both that access
to the shapes graph is optional and that all constraints that need such
access are optional.
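
For illustration, a SPARQL-based constraint that depends on this feature
might look like the following sketch (the ex: namespace and the shape's
forbiddenProperty predicate are hypothetical; $this and $shapesGraph are
assumed to be pre-bound by the engine as the draft describes).  An engine
that does not implement the optional $shapesGraph pre-binding cannot
evaluate such a query, which is why constraints of this form must
themselves be optional:

PREFIX ex: <http://example.org/ns#>

# Hypothetical constraint body: flag any value of $this that uses a
# property the shape declares forbidden.  The GRAPH clause reads from
# the shapes graph, so it only works if $shapesGraph is pre-bound.
SELECT $this ?p
WHERE {
  GRAPH $shapesGraph {
    ?shape ex:forbiddenProperty ?p .
  }
  $this ?p ?value .
}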


On 03/21/2016 04:12 AM, Dimitris Kontokostas wrote:
> Found it,
> the resolution does not say this but iirc the discussion (which is not 100%
> scribed) was talking about bnodes and how they can be identified with a remote
> call vs in-memory.
> ARQ and Sesame do something clever with bnodes which is not the case for all
> sparql engines but I am not trying to re-open the old issue, only trying to
> close this one using that resolution
> I propose we close this issue as: SHACL does not assume that the data graph is
> an RDF dataset as addressed by the current editor's draft
> This of course allows people to use datasets but SHACL doesn't take any
> special care in this case
> On Mon, Mar 21, 2016 at 12:59 AM, Holger Knublauch wrote:
>     On 18/03/2016 18:38, Dimitris Kontokostas wrote:
>>     On Fri, Mar 18, 2016 at 9:41 AM, Peter F. Patel-Schneider wrote:
>>         If it is always possible to construct the dataset, then I don't see
>>         a problem
>>         either.  However, is this always possible?  For example, a user who
>>         is just
>>         trying to validate a graph may not have permissions to create or
>>         modify a dataset.
>>     IIRC there was a resolution on supporting only in-memory validation (not
>>     my favorite, and I cannot find it), i.e. full SHACL may not run on remote
>>     datasets such as SPARQL endpoints.
>>     With this in mind an implementation could just copy the shapes & data
>>     graph in memory and perform the validation there
>     The resolution that we made a while ago was to not require support for the
>     SPARQL endpoint protocol. Note that this is different from the question of
>     in-memory vs database. It means that implementations can still work
>     against databases, e.g. via an API such as ARQ or Sesame (for which all
>     major databases provide drivers for), while the SPARQL endpoint protocol
>     is too limiting for what SHACL needs to do.
>     Holger
> -- 
> Dimitris Kontokostas
> Department of Computer Science, University of Leipzig & DBpedia Association
> Research Group: AKSW/KILT

Received on Monday, 21 March 2016 17:56:46 UTC