Re: LIME proposal for the OntoLex W3C Community Group

Dear Armando,

please read my comments below, following a quote from an email of yours.

> Some of them are in fact SPARQL deducible, but it seems the one we took as
> an example (lime:languageCoverage <http://art.uniroma2.it/ontologies/lime#languageCoverage>)
> is exactly one of those not so trivial to write (maybe I'm not an expert
> with CONSTRUCTs, but I would say not possible at all).
>
> In the LIME module, we used RDF API and plain Java post processing to
> compute them, so I was not recalling which ones were simple SPARQL
> constructs and which ones needed more processing.
>

In a certain sense, you were right when saying that lime:languageCoverage
can be expressed as a SPARQL 1.1 query. In fact, the following query should
compute the average number of rdfs:labels in English (regardless of country
variant) per owl:Class:

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>

SELECT (AVG(?labelCount) AS ?averageCount)
WHERE {
   {SELECT ?class (COUNT(?l) AS ?labelCount)
    WHERE {
       ?class a owl:Class ;
              rdfs:label ?l .
       FILTER(langMatches(lang(?l), "en"))
    }
    GROUP BY ?class
   }
}

(Note that if no class has an English label, the above query returns no
result at all, rather than an average equal to zero.)
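For completeness, a variant along these lines (an untested sketch, not part of
the LIME module) makes the label pattern OPTIONAL, so that classes without
English labels contribute a count of zero to the average:

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>

SELECT (AVG(?labelCount) AS ?averageCount)
WHERE {
   {SELECT ?class (COUNT(?l) AS ?labelCount)
    WHERE {
       ?class a owl:Class .
       OPTIONAL {
          ?class rdfs:label ?l .
          FILTER(langMatches(lang(?l), "en"))
       }
    }
    GROUP BY ?class
   }
}

Note that this also changes the semantics slightly: unlabelled classes now
drag the average down, which may actually be what one wants for a coverage
metric (though the query still returns nothing if there are no classes at all).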

However, if you change the relevant type (e.g. skos:Concept instead of
owl:Class) or the linguistic enrichment vocabulary (e.g. reified
skosxl:Labels instead of plain rdfs:labels), then the query definitely
needs to be rewritten.
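To illustrate the point, a hypothetical rewrite for skos:Concepts labelled
via reified skosxl:Labels might look like the following (a sketch only; I am
assuming skosxl:prefLabel as the labelling property, and skosxl:literalForm
to reach the actual literal):

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX skosxl: <http://www.w3.org/2008/05/skos-xl#>

SELECT (AVG(?labelCount) AS ?averageCount)
WHERE {
   {SELECT ?concept (COUNT(?lit) AS ?labelCount)
    WHERE {
       ?concept a skos:Concept ;
                skosxl:prefLabel ?xlabel .
       ?xlabel skosxl:literalForm ?lit .
       FILTER(langMatches(lang(?lit), "en"))
    }
    GROUP BY ?concept
   }
}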

Actually, the point of my answer was that we can define the overall
semantics of lime:languageCoverage via a "template" query, which is fixed
up to the graph patterns that match the relevant data.

It would be interesting to discuss whether these graph patterns (depending
on the resource type and the linguistic model) should be conveyed somehow
in a machine readable form.
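To make the idea concrete, such a template might look like the following
(purely illustrative; the %...% placeholders are hypothetical notation for
the pluggable graph patterns, not part of SPARQL):

SELECT (AVG(?labelCount) AS ?averageCount)
WHERE {
   {SELECT ?res (COUNT(?l) AS ?labelCount)
    WHERE {
       %RESOURCE_PATTERN%   # e.g. ?res a owl:Class .
       %LABEL_PATTERN%      # e.g. ?res rdfs:label ?l .
       FILTER(langMatches(lang(?l), "en"))
    }
    GROUP BY ?res
   }
}

Conveying the two patterns in machine-readable form would then amount to
publishing, alongside the metadata, the bindings for these placeholders.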

2014-03-08 20:44 GMT+01:00 Armando Stellato <stellato@info.uniroma2.it>:

> Dear John,
>
>
>
> well I'm a bit puzzled, in that this is surely worth discussing, but it's
> a completely orthogonal topic again. The fact that Philipp mentioned the
> possibility to define their semantics through SPARQL does not change
> anything about the nature of these properties so, if you found them useless
> because of their redundancy with the data, they were useless/redundant even
> before.
>
> Maybe we should synthesize a few aspects and discuss them in a page of the
> wiki. What do you think? The impression is that in these emails we are
> opening new topics instead of closing the open ones, so it may be worth
> having separate threads. Please let us know; if you feel we are almost
> close to the end, we may even go on with emails (maybe with specific threads).
>
>
>
> Btw, to reply to your specific question:
>
>
>
> The point of metadata is not to optimize commonly run SPARQL queries, for
> two primary reasons, firstly it bulks up the model and instances of the
> model with triples for these 'pre-compiled' queries and secondly it is very
> hard to predict what queries an end-user will want to run. It seems that
> the kind of metadata we are proposing to model is nearly entirely
> pre-compiled queries, and are of questionable practical application. That
> is, I ask a simple question: *if we can achieve resource interoperability
> for OntoLex already with SPARQL why the heck do we need metadata anyway??*
>
>
>
> Personally, as an engineer, I'm biased towards considering "redundancy the
> evil", and keeping information to its minimum (so I would tend to agree with
> your point). But the Engineering 101 manual tells you that you may sometimes
> give up orthodoxy on the above principle, if doing so greatly improves
> performance, scalability etc...
>
> Furthermore, instead of trivially giving up, you should designate how,
> when and where the redundancy points are defined (whatever system you are
> speaking about).
>
>
>
> Now, narrowing down to our case, we have a clear point: the VoID file,
> which is a surrogate of a dataset, contains its metadata, and is always
> updated following updates to the dataset's content: no danger of dangling
> out-of-date redundant information then.
>
> We have also a clear scenario: packs of spiders roaming around the web and
> getting plenty of useful information from tons of different datasets
> without stressing their SPARQL endpoints; mediators examining metadata from
> multiple resources and taking decisions very quickly etc...
>
>
>
> But, I'm a just poor guy :) so, out of my personal view, let me mention
> some notable predecessors:
>
>
>
> Already mentioned by Manuel in his email of today, we have VOAF:
> http://lov.okfn.org/vocab/voaf/v2.3/index.html
>
> ..but VOAF is not a standard...
>
>
>
> ...talking about standards, ladies and gentlemen, here is VoID itself and
> its many SPARQL deducible properties!
>
> https://code.google.com/p/void-impl/wiki/SPARQLQueriesForStatistics
>
>
>
> ..and to happily close my defense, well, in any case Manuel just confirmed
> in his email that I should have thought one second more about the SPARQL
> deducibility of LIME's properties :-)
>
> Some of them are in fact SPARQL deducible, but it seems the one we took as
> an example (lime:languageCoverage <http://art.uniroma2.it/ontologies/lime#languageCoverage>)
> is exactly one of those not so trivial to write (maybe I'm not an expert
> with CONSTRUCTs, but I would say not possible at all).
>
> In the LIME module, we used RDF API and plain Java post processing to
> compute them, so I was not recalling which ones were simple SPARQL
> constructs and which ones needed more processing.
>
>
>
> Cheers,
>
>
>
> Armando
>



-- 
Manuel Fiorelli
PhD student in Computer and Automation Engineering
Dept. of Computer Science, Systems and Production
University of Rome "Tor Vergata"
Via del Politecnico 1
00133 Roma, Italy

tel: +39-06-7259-7334
skype: fiorelli.m

Received on Monday, 10 March 2014 10:23:43 UTC