
Re: Language or technology

From: Jose Emilio Labra Gayo <jelabra@gmail.com>
Date: Tue, 27 Jan 2015 13:34:20 +0100
Message-ID: <CAJadXXLGueLvmHf-g_+G=YNf6Pkigwcj1CjV-sAsTDy1=R_T5g@mail.gmail.com>
To: Holger Knublauch <holger@topquadrant.com>
Cc: RDF Data Shapes Working Group <public-data-shapes-wg@w3.org>
On Tue, Jan 27, 2015 at 7:58 AM, Holger Knublauch <holger@topquadrant.com>
wrote:

>
> On 1/27/15, 4:41 PM, Jose Emilio Labra Gayo wrote:
>
>  On Tue, Jan 27, 2015 at 7:26 AM, Holger Knublauch <holger@topquadrant.com
> > wrote:
>
>> You mean we should create something like OWL Structural Specification
>>
>>     http://www.w3.org/TR/owl2-syntax/
>>
>> i.e. some abstract data model only,
>
>
>  No, I mean we should create some structural syntax like the one for OWL,
> accompanied by a well-defined semantics such as:
>
>  http://www.w3.org/TR/2012/REC-owl2-direct-semantics-20121211/
>
>
> Such a thing already exists: it's called SPARQL. There is no need to
> reinvent what it means to count triples or to compute a + b. And the
> mapping of something like maxCardinality is already specified in LDOM
> itself.
>
>
As far as I know, the semantics of LDOM is currently given in natural-language
terms. Where are the mappings from those constructs to SPARQL?
How do you handle recursive shapes in SPARQL?

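To make the recursion problem concrete, consider a hypothetical shape (the vocabulary below is illustrative, not taken from any draft) whose values must themselves conform to the same shape:

```turtle
@prefix ex: <http://example.org/> .

# Hypothetical recursive shape: every value of ex:knows on a node
# matching ex:PersonShape must itself match ex:PersonShape.
ex:PersonShape
    ex:property [
        ex:predicate  ex:knows ;
        ex:valueShape ex:PersonShape   # self-reference: recursion
    ] .
```

A single SPARQL query can check one level of this constraint, and property paths can traverse `ex:knows` chains of arbitrary length, but re-applying the whole shape at every step requires a fixpoint computation that SPARQL 1.1 does not provide.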
>   and leave all the details to individual groups outside of the WG?
>
>
>  No, the individual groups should have to validate their implementations
> against the test cases and the semantics defined by the WG.
>
>
> But where is anything that end users can use here, if you end up with a
> number of vendor-specific syntaxes that are not standardized?
>

As I said, we can have the well-defined RDF syntax, which I think all of us
agree on.

I would also propose a more human-friendly syntax like ShEx, but even if
people don't like it, we could keep it in a separate document.

Any other vendor-specific syntaxes that appear should compete to be the
best in a free market...it would be interesting to see.


> We would be back to square one and end up with nothing useful at all.
>
>> Users would not even get a way to exchange their constraints in a concrete
>> syntax?
>>
>
>  We could define some concrete syntax...which as I said, could be RDF or
> something more human-friendly.
>
> I propose LDOM for that role and skip your other abstract documents.
>

I have no problem if you call the language LDOM...but whatever you call it,
I think it needs a well-defined semantics that can be understood without
delegating everything to a full-stack technology, which could be much more
problematic.

>> What use would such an "abstract" standard have?
>
>
>  As I said, it would not just be the abstract standard...we could also
> have some reference implementation.
>
>
>> And how does it solve the issue of trying to cast some technology into
>> others?
>
>
>  Because there is a well-defined semantics for the agreed terms that we
> have found, leaving the controversial terms unspecified.
>
>
> LDOM will be 100% well-defined - every term has a SPARQL query behind it.
> There is nothing controversial.
>

There are several points on which differences are appearing:

- How to handle recursive definitions. In SPARQL you cannot define them, so
you need something extra.

- How to select which nodes you are validating. I propose to leave it
unspecified or to have some extra definition of it. I am definitely against
selecting nodes for validation only by rdf:type, or attaching all
constraints to a class, when they don't need to be there.

- There could be other differences once we start looking at the
details...for that, we would need a formal definition of LDOM, which we don't
have right now.


>> You basically just create another layer of indirection that all languages
>> have to map into, while LDOM directly maps into SPARQL which is already
>> well-established and supported by all triple stores.
>
>
>  We could maintain SPARQL compatibility also. We could define a mapping
> to SPARQL as a recommendation if you prefer.
>
>
> Looking at the requirements catalogue (string operations, language tags,
> aggregations etc) it is clear that the language would actually have to be
> SPARQL itself. Why invent another language all over again?
>

As far as I know, the requirements catalogue is still being developed. Some
of those things could be handled, while others could not.

Maybe we would not need all of SPARQL's functionality, but only a subset of
it. For example, string comparisons and arithmetic expressions could be
handled by the expressions that appear in SPARQL FILTER clauses, which in
fact draw on a subset of the XQuery/XPath functions and operators. But I
suppose this could be part of another thread.

>> If it just wraps SPARQL, how would it be different from LDOM?
>
>
>  LDOM, as far as I know, does not yet have a well-defined semantics. Its
> semantics appears to be natural language accompanied by *your*
> implementation.
>
>
> The detailed LDOM spec will have this written up so that anyone can
> implement it, of course.
>

But you would need other, independent implementations for it to become a
recommendation. I really think it is much more practical to separate an
implementation from a spec. That is something that has been done for most
standards and W3C recommendations, and I think it is the way to proceed
and move forward.

> We cannot reason about LDOM, nor compare whether some constraints expressed
> in LDOM are equivalent to other constraints...for example, how could you
> assert that one shape defined in LDOM is equivalent to another?
>
>
> Which user story and requirement needs such static analysis?
>

Almost all of the user stories need to know what a shape means and how one
can differentiate one shape from another. I went quickly to the wiki, and
the first story I came across was:

http://www.w3.org/2014/data-shapes/wiki/User_Stories#S12:_App_Interoperability

How could you guarantee app interoperability if you don't have a well-defined
semantics for the shapes?

-- 
Best regards, Labra
Received on Tuesday, 27 January 2015 12:35:12 UTC
