Re: Organizing the requirements

Hello Anamitra,

Yes, absolutely. The group will produce a high-level vocabulary that is 
user-friendly for people who don't even know SPARQL. This vocabulary 
will be suitable for representing structural metadata about a given RDF 
graph. Tools can then (for example) hard-code against this vocabulary 
(e.g. cardinality restrictions) to drive the input widgets of a UI. 
Having to parse SPARQL query strings for that purpose would be a 
nightmare.
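
To illustrate, such structural metadata could look roughly like the 
following Turtle sketch (the shape and property names ex:PersonShape, 
ex:property, ex:predicate, ex:minCount and ex:maxCount are invented 
here purely for illustration, not a concrete proposal):

    ex:PersonShape
        ex:property [
            ex:predicate  foaf:name ;
            ex:minCount   1 ;   # a UI could render this as a required field
            ex:maxCount   1     # ... and limit it to a single input widget
        ] .

A form generator only needs to read these triples; it never has to 
parse or analyze a SPARQL string.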

The role of SPARQL (as I see it) is twofold:

1) SPARQL can provide the executable semantics of those structural 
vocabularies. For example, whenever an ex:CardinalityConstraint is 
encountered, a SPARQL query can tell the engine which condition needs to 
be validated, and what to report if a violation is found. In SPIN this 
is achieved via Templates [1]. See my prototype for a representation of 
the OSLC vocabulary in SPIN [2]. The main benefit here is that it is 
possible to define arbitrary high-level vocabularies while using a 
unified execution engine based on mainstream SPARQL technology.
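
As a rough, simplified sketch of how such a template might be written 
(the template URI and the argument names ?predicate and ?maxCount are 
made up here; real SPIN templates declare their arguments via 
spl:Argument):

    ex:MaxCountConstraint
        a spin:ConstructTemplate ;
        spin:body [
            a sp:Construct ;
            sp:text """
                # ?predicate and ?maxCount are pre-bound template arguments;
                # ?this iterates over the instances being validated
                CONSTRUCT {
                    _:v a spin:ConstraintViolation ;
                        spin:violationRoot ?this ;
                        rdfs:label "Too many values for property" .
                }
                WHERE {
                    {
                        SELECT ?this (COUNT(?value) AS ?count)
                        WHERE { ?this ?predicate ?value }
                        GROUP BY ?this
                    }
                    FILTER (?count > ?maxCount)
                }
            """
        ] .

High-level statements that instantiate such a template stay declarative 
and easy for tools to read, while the SPARQL body gives them a precise, 
executable meaning on any standards-compliant engine.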

2) SPARQL can serve as an "escape" mechanism to represent constraints 
that are not covered by a fixed vocabulary. SPIN has the property 
spin:constraint [3] for that purpose. This ability to represent 
arbitrary constraints is, in our experience, absolutely crucial for 
real-world validation purposes.
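
For example (a minimal sketch with invented class and property names), 
a condition that no fixed vocabulary term is likely to cover can be 
attached directly to a class; in SPIN, an ASK constraint that evaluates 
to true for an instance bound to ?this signals a violation:

    ex:Person
        spin:constraint [
            a sp:Ask ;
            sp:text """
                ASK WHERE {
                    ?this ex:birthDate ?birth ;
                          ex:deathDate ?death .
                    FILTER (?death < ?birth)
                }
            """
        ] .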

HTH
Holger

[1] http://spinrdf.org/spinsquare.html#templates
[2] http://lists.w3.org/Archives/Public/public-rdf-shapes/2014Jul/0237.html
[3] http://spinrdf.org/spinsquare.html#constraints

On 10/21/14, 11:48 PM, Anamitra Bhattacharyya wrote:
>
> In my honest opinion, we need to look at both aspects: validation and 
> metadata for describing the RDF document. SPARQL may be good for 
> expressing validation aspects - but I am not sure how usable it is for 
> describing RDF documents. For an application that is consuming RDF 
> documents - say for the purpose of displaying them in a User 
> Interface - it would be difficult to analyze SPARQL constraints to 
> infer the shape of the document.
>
> Anamitra
>
>
> From: Holger Knublauch <holger@topquadrant.com>
> To: public-data-shapes-wg@w3.org,
> Date: 10/21/2014 03:23 AM
> Subject: Re: Organizing the requirements
>
> ------------------------------------------------------------------------
>
>
>
> On 10/21/2014 16:38, Peter F. Patel-Schneider wrote:
> > I thought that there was this agreement to start from a
> > technology-neutral beginning.  Trying to determine the role of SPARQL
> > before doing use case and requirements analysis does not seem to fit
> > into this agreement.
> >
> > This would be true even if there were universal agreement that SPARQL
> > had the right expressive power.
>
> I fully agree, but did not say that we should decide on any technology
> before doing use cases. I only stated that this decision can hopefully
> be made early in the process - once the use cases are collected and
> analyzed. Without any grounding, future decisions become very hard to
> make. For example, the group could decide to first develop a completely
> new language, but this would have flow-on effects on the design of the
> higher-level language for average end users and would overall delay
> the deliverables. If there were agreement that SPARQL's expressivity
> is a good match for the catalog of requirements, then we could work on
> the delta that makes SPARQL as usable as possible for our scenarios.
>
> Holger
>
>
> >
> > peter
> >
> >
> > On 10/15/2014 06:18 PM, Holger Knublauch wrote:
> >
> > [I have removed the bulk of Holger's message to concentrate on this
> > one particular point.]
> >>
> >> Pragmatically speaking, I believe we should aim at concluding on a key
> >> question early on: the role of SPARQL versus any alternatives.
> >> Judging from
> >> the discussions in the old mailing list, I believe many people agree
> >> that
> >> SPARQL is the most suitable existing language in terms of
> >> expressivity. That's
> >> because SPARQL is a general RDF pattern-matching language and covers
> >> the most
> >> common operations with its arithmetic and string manipulation
> >> functions. I
> >> don't really see alternatives.
> >>
>
>
>

Received on Tuesday, 21 October 2014 22:25:24 UTC