Re: ISSUE-139: uniform descriptions and implementations of constraint components

I think that allowing people to write constraints that are entirely superfluous (that is, constraints that would always produce a violation) is not good design.

This doesn't improve the usability of the language - quite the contrary, because it doesn't tell a shapes designer which constraints make no practical sense, thereby encouraging poor or mistaken design choices rather than preventing them. Given that overall competence in RDF is quite low, I see this as creating a lot of potential for misunderstanding and frustration. 

Further, the ability to screen out such unnecessary, nonsensical constraints at design time is important for performance.
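
For concreteness, here is a minimal sketch of the kind of constraint under discussion. The names and the ex: namespace are hypothetical, and the path syntax is current SHACL, which may differ from the draft being debated in this thread:

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

# A property shape pairing sh:datatype with an inverse path.
# The value nodes here are subjects of ex:knows triples, and
# subjects can never be literals in standard RDF, so this
# constraint reports a violation for every value node - it is
# exactly the kind of "always failing" construct that, on this
# view, should be rejected at design time.
ex:InverseDatatypeShape
    a sh:PropertyShape ;
    sh:path [ sh:inversePath ex:knows ] ;
    sh:datatype xsd:string .
```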

Sent from my iPhone

> On Jun 6, 2016, at 7:41 AM, Peter F. Patel-Schneider <pfpschneider@gmail.com> wrote:
> 
> I don't think that this argument speaks to my proposal.  My proposal is not
> that using sh:datatype in an inverse property constraint should be ignored. It
> is that using sh:datatype in an inverse property constraint should produce
> constraint violations instead of being syntactically illegal, just like using
> sh:minLength does on a property value that is a blank node.
> 
> One could argue, I suppose, that sh:minLength should produce a failure when
> used on a blank node and so should sh:datatype in an inverse property
> constraint.  But then sh:datatype should produce failures instead of
> constraint violations on IRIs when used in other contexts and sh:class should
> produce failures instead of constraint violations on literals and several
> other situations should also produce failures instead of constraint violations.
> 
> peter
> 
> 
>> On 06/06/2016 03:55 AM, Dimitris Kontokostas wrote:
>> 
>> 
>> On Mon, Jun 6, 2016 at 9:31 AM, Dimitris Kontokostas
>> <kontokostas@informatik.uni-leipzig.de
>> <mailto:kontokostas@informatik.uni-leipzig.de>> wrote:
>> 
>> 
>> 
>>    On Sun, Jun 5, 2016 at 11:36 PM, Peter F. Patel-Schneider
>>    <pfpschneider@gmail.com <mailto:pfpschneider@gmail.com>> wrote:
>> 
>>        Yes, each constraint component should not need more than one
>>        implementation,
>>        whether it is in the core or otherwise.  Otherwise there are just that
>>        many
>>        more ways of introducing an error.
>> 
>>        Yes, in the current setup each constraint component should be usable
>>        in node
>>        constraints, in property constraints, and in inverse property constraints.
>>        Otherwise there is an extra cognitive load on users to figure out when a
>>        constraint component can be used.  The idea is to not have errors
>>        result from
>>        these extra uses, though.  Just as sh:minLength does not cause an
>>        error when a
>>        value node is a blank node neither should sh:datatype cause an error
>>        when used
>>        in an inverse property constraint.  Of course, an sh:datatype in an
>>        inverse
>>        property constraint will always be false on a data graph that is not an
>>        extended RDF graph.
>> 
>> 
>>    I would argue that all these cases should throw an error; otherwise it
>>    would again require extra cognitive load to remember when a
>>    constraint is actually applied or silently ignored.
>> 
>> 
>> Trying to back this up a bit, on a recent paper I presented last week in ESWC
>> we had a related issue. 
>> http://svn.aksw.org/papers/2016/ESWC_Jurion/public.pdf 
>> 
>> If you look at the first paragraph of page 13, experts said that getting
>> violations back when one runs a validation is very good, but getting
>> nothing back (a successful validation) is not as reassuring as one would expect.
>> The reason is that you cannot be 100% sure whether you got a success because no
>> errors were found or because you failed to define a constraint correctly.
>> 
>> So, if we allow constraints in places where they are simply ignored, we leave room
>> for such errors, and imho that would be the wrong decision.
>> 
>> 
>>    Another case is optimization: if we require "no more than one"
>>    implementation, then we may end up with very inefficiently defined constraints.
>>    E.g. for a particular context (and a particular value of the constraint) I
>>    can probably create a very efficient SPARQL query that is many times
>>    faster than the general one; with your approach we lose that advantage.
>>    When we test small / in-memory graphs the delay might not be so noticeable,
>>    but on big SPARQL endpoints it may result in very long delays or even
>>    failure to run the query.
>> 
>> 
>> 
>>        peter
> 

Received on Monday, 6 June 2016 12:20:48 UTC