Re: LIME proposal for the OntoLex W3C Community Group

Hi Armando,

So if I understand your proposal correctly, you want to introduce the
Lexicalization as an object that describes the link between one lexicon
and one ontology? In principle, this should be fine, as long as a majority
of the group considers it useful and a suitable reference implementation
can be provided.

On a practical note, I would stress that wherever possible we should
prefer the use of integers to decimals, for two reasons:

   - Integers are exact
   - Applications generally generate the integer values first (e.g., select
   count(*) where { blah blah })

That is, we should prefer "total ontology entities with lexicalization" to
"average lexicalizations per ontology entity" or "percentage of ontology
entities with lexicalization".
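
For instance, a minimal SPARQL sketch of such an integer metric (the
ontolex namespace and the ontolex:denotes property are assumptions here,
as the model is not final):

    PREFIX ontolex: <http://www.w3.org/ns/lemon/ontolex#>

    # total ontology entities with at least one lexicalization,
    # returned as an exact integer
    SELECT (COUNT(DISTINCT ?entity) AS ?lexicalizedEntities)
    WHERE { ?entry ontolex:denotes ?entity }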

Regards,
John


On Wed, Mar 26, 2014 at 1:02 AM, Armando Stellato
<stellato@info.uniroma2.it> wrote:

> Dear all,
>
>
>
> a few more comments about the metadata properties, concerning the “primary
> properties vs. derived properties” issue. This is quite long, but it is
> better to recap the whole situation across the various email exchanges and
> conference calls, and then go ahead.
>
>
>
> One point raised was: if we accept that we may provide the number of
> lexical entries in a Lexicon (as I replied affirmatively to John's
> proposal, noting that it was also in the original LIME, in the section
> about linguistic resources), and we already have the number of entries from
> VoID (which is actually not always guaranteed), why should we provide the
> coverage, given that it is derived information?
>
>
>
> A short recap of the situation, and some news.
>
>
>
> RECAP
>
> In the last conference call, we analyzed various scenarios and
> motivations. Based on my experience and the scenarios I had in mind, I
> observed that in any case the ratio
> lexical-entries / number-of-resources-in-the-ontology was a primary piece
> of information for agents, and a more objective one than the numerator alone.
>
> …though, if we are comparing different lexicalizations for the same
> ontology, the denominator is always the same...
>
> …though, an agent may be interested in knowing how good the linguistic
> coverage is, in order to compare it with *other* kinds of info; in this
> case, the denominator helps in normalizing the amount of lexical
> information available with respect to the given ontology.
>
> In any case, I agreed that, IFF the two data values are redundant, we may
> decide to drop one of them.
>
> One other aspect in favor of keeping the ratio over the numerator was
> that, again, LIME was not conceived for OntoLex alone (we would really like
> to get a unified metadata vocabulary covering ALSO OntoLex, but not only
> that), and in the hypothesis of a simple SKOS-enriched vocabulary, I would
> see more interest in keeping the ratio than in knowing how many
> skos:xxxLabels there are… (a sketch of such a ratio follows)
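>
> For instance, a minimal sketch of such a ratio over a plain SKOS
> vocabulary (the exact metric is of course still open; the query uses only
> standard SKOS):
>
>     PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
>
>     # fraction of concepts bearing at least one English skos:prefLabel
>     SELECT (COUNT(DISTINCT ?labelled) / COUNT(DISTINCT ?c) AS ?ratio)
>     WHERE {
>       ?c a skos:Concept .
>       OPTIONAL {
>         ?c skos:prefLabel ?label .
>         FILTER(lang(?label) = "en")
>         BIND(?c AS ?labelled)
>       }
>     }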
>
> …but, again, in the rush of the phone call (and with the redundancy issue
> having been raised just the day before), I didn’t have time to think it
> through in detail.
>
>
>
> MORE RECAP
>
> Yesterday, Manuel and I took some time to revisit the original LIME, and
> the updated structure that we sent after Philipp's remark that we should
> cover all scenarios, including separate Lexicons and Ontologies.
>
> We recall here what has been said about the scenarios. Given an ontology O
> and a Lexicon L, there are three scenarios:
>
>
>
> 1)      O and L are part of the same dataset. Simply put, the publisher
> of the ontology decided to use OntoLex to model the linguistic data there.
>
> 2)      We have dataset O, and a dataset L pointing to it. Probably the
> most common case: someone wrote a lexicon L specifically for ontology O.
>
> 3)      L is a dataset in its own right (e.g. WordNet), developed
> independently of O. Someone then lexicalized O with elements from L. It is
> almost certain that they will not use all of the lexical entries in L to
> lexicalize the resources in O (this is very important!).
>
>
>
> I think it would be worthwhile, already at this point, to introduce the
> concept of Lexicalization. It does not have to be classified in OWL in the
> data yet; for clarity, it is easier to recognize its existence and, more
> generally, to restate the cases as: 1) Ontology, Lexicon and Lexicalization
> in the same dataset; 2) Ontology separate from Lexicon & Lexicalization;
> 3) all of them separate.
>
>
>
> Now, the three LIME properties that seemed to fit OntoLex were the following:
>
>
>
> lime:languageCoverage  for each language, the percentage of RDF
> resources, per type (classes, individuals, properties, SKOS concepts),
> described by at least one lexicalization in that language.
>
> lime:lexicalResourceCoverage   for each specified lexical resource, the
> percentage of RDF resources, per type (classes, individuals, properties,
> SKOS concepts), described by at least one ontolex:LexicalConcept in that
> lexical resource.
>
> lime:avgNumOfEntries   the average number of lexical entries per concept.
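>
> To fix intuitions, here is a sketch of the computation behind
> lime:languageCoverage, restricted to skos:Concept (the ontolex namespace
> and the properties ontolex:denotes, ontolex:canonicalForm and
> ontolex:writtenRep are assumptions, as the model is not final):
>
>     PREFIX skos:    <http://www.w3.org/2004/02/skos/core#>
>     PREFIX ontolex: <http://www.w3.org/ns/lemon/ontolex#>
>
>     # per language: the number of concepts with at least one
>     # lexicalization in that language; dividing by the total number
>     # of concepts yields the coverage percentage
>     SELECT ?lang (COUNT(DISTINCT ?c) AS ?coveredConcepts)
>     WHERE {
>       ?c a skos:Concept .
>       ?entry ontolex:denotes ?c ;
>              ontolex:canonicalForm/ontolex:writtenRep ?rep .
>       BIND(lang(?rep) AS ?lang)
>     }
>     GROUP BY ?lang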
>
>
>
> NEWS
>
> Ok, simply put: there is no redundancy.
>
>
>
> For avgNumOfEntries (and even more evidently for the xxxCoverage
> properties!) we are interested in the number of attachments (that is, of
> lexicalizations), and not in the number of lexical entries.
>
>
>
> There are several very good reasons for that (a sketch follows the list):
>
>
>
> 1)      Consider Scenario 3: the number of lexical entries in the lexicon
> is useless for our counts if not all of them are involved in the
> lexicalization (which will almost never be the case).
>
> a.       As a consequence, the number of lexical entries in the lexicon
> may still be considered a useful piece of metadata per se (so we do not
> have to make a choice, and we can keep both), but, again, it is local to
> the lexicon and is not relevant to the onto-lexical metadata.
>
> 2)      Even with 100% participation of lexical entries in a
> lexicalization, a lexical entry could participate in lexicalizing two
> concepts (polysemy), and we would really prefer to state that two concepts
> benefited from that lexical content.
>
> 3)      In the specific case of the xxxCoverage properties, the real
> target is the number of concepts being lexicalized, so it is in no way
> related to the number of lexical entries. If we had 100 skos:Concepts and
> 1000 lexical entries, and only one concept were covered by those 1000
> lexical entries (which happen to be synonyms), then the coverage for the
> class skos:Concept would sadly be 1%.
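>
> In other words, the count that matters ranges over concepts, not entries.
> A sketch (ontolex:denotes is the same assumed property as above):
>
>     PREFIX skos:    <http://www.w3.org/2004/02/skos/core#>
>     PREFIX ontolex: <http://www.w3.org/ns/lemon/ontolex#>
>
>     # for the 100-concepts/1000-synonyms case, this yields
>     # ?coveredConcepts = 1 while ?entries = 1000
>     SELECT (COUNT(DISTINCT ?c) AS ?coveredConcepts)
>            (COUNT(DISTINCT ?entry) AS ?entries)
>     WHERE {
>       ?c a skos:Concept .
>       ?entry ontolex:denotes ?c .
>     }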
>
>
>
> See, in this sense, the distinction between “lexical metadata” and
> “onto-lexical metadata” that we already made in the LIME paper [1] (and
> which actually dates back to the precursor of LIME, the Linguistic
> Watermark [2, 3]).
>
>
>
> Now, coming back to the Lexicalization, we really feel it is a decisive
> element to take into consideration. We are not suggesting addressing it in
> the core OntoLex vocabulary. After all, with the exception of owl:Ontology,
> most datasets are not categorized in their own data; and the concept of
> Dataset is introduced in VoID, which targets metadata.
>
>
>
> For this reason, I would suggest including the notions of Lexicon and
> Lexicalization in the metadata, as subclasses of void:Dataset. The property
> void:subset should then help to address all three scenarios we foresaw, as
> the sketch below illustrates.
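>
> For illustration, here is the kind of discovery query an agent could then
> run over the metadata (the lime:Lexicalization class and the linking
> properties are purely hypothetical names, and :L and :O stand for an
> example lexicon and ontology):
>
>     PREFIX void: <http://rdfs.org/ns/void#>
>     PREFIX lime: <http://art.uniroma2.it/ontologies/lime#>
>     PREFIX :     <http://example.org/>
>
>     # find the lexicalization dataset connecting lexicon :L to
>     # ontology :O, whichever of the three scenarios applies
>     SELECT ?lexicalization
>     WHERE {
>       ?lexicalization a lime:Lexicalization ;   # hypothetical class
>                       lime:lexicon  :L ;        # hypothetical link
>                       lime:ontology :O .        # hypothetical link
>       OPTIONAL { ?container void:subset ?lexicalization }
>     }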
>
>
>
> Ok, I will stop the mail here (it is already quite long :D ) and wait for
> your feedback before sending a concrete proposal to the list.
>
>
>
> Cheers,
>
>
>
> Armando and Manuel
>
>
>
> [1] http://aclweb.org/anthology/W/W13/W13-5504.pdf
>
> [2]
> http://art.uniroma2.it/publications/docs/2008_OntoLex08_Enriching%20Ontologies%20with%20Linguistic%20Content%20an%20Evaluation%20Framework.pdf
>
> [3] http://iospress.metapress.com/content/x043167268663268/
>
>
> *From:* Philipp Cimiano [mailto:cimiano@cit-ec.uni-bielefeld.de]
> *Sent:* Thursday, March 13, 2014 11:18 AM
> *To:* Armando Stellato; 'John P. McCrae'
> *Cc:* 'Manuel Fiorelli'; public-ontolex@w3.org
> *Subject:* Re: LIME proposal for the OntoLex W3C Community Group
>
>
>
> Dear all,
>
>  ok, so we clarified that, per se, it is fine to include materialized
> results of pre-defined SPARQL queries as new vocabulary elements.
>
> So we are a step further, guys ;-)
>
> Whether or not we want to include properties related to linguistic
> resource coverage is then the real point of discussion, I think. So let's
> focus on this point.
>
> Other than that: maybe it is not so important whether the values can be
> computed using SPARQL or whether we need some procedural component to
> compute them (as in the LIME Java API mentioned by Armando).
>
> My point was rather: let's define exactly what we mean by these
> properties by giving them an exact semantics. It is fine if this semantics
> is made explicit. But the point is: if not all creators
> of lexica use the properties in the same way, then they become sort of
> useless; see our recent discussion of the "confidence" property to indicate
> confidence in a translation: it is quite useless if people adopt a
> completely different interpretation of this value.
>
> So rather than really having SPARQL CONSTRUCT statements for most metadata
> properties, let's give each property a precise semantics, so that anyone
> could compute its values consistently with that semantics.
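>
> For instance, the semantics of something like lime:avgNumOfEntries could be
> pinned down by a query such as the following (just a sketch; the
> ontolex:denotes property linking entries to concepts is an assumption, as
> the model is not final):
>
>     PREFIX ontolex: <http://www.w3.org/ns/lemon/ontolex#>
>
>     # the average number of lexical entries attached to each
>     # lexicalized concept
>     SELECT (AVG(?n) AS ?avgNumOfEntries)
>     WHERE {
>       {
>         SELECT ?c (COUNT(DISTINCT ?entry) AS ?n)
>         WHERE { ?entry ontolex:denotes ?c }
>         GROUP BY ?c
>       }
>     }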
>
> Does this make sense?
>
> Talk to you all tomorrow.
>
> Philipp.
>
> On 08.03.14 20:44, Armando Stellato wrote:
>
> Dear John,
>
>
>
> well, I’m a bit puzzled, in that this is surely worth discussing, but it’s
> a completely orthogonal topic again. The fact that Philipp mentioned the
> possibility of defining their semantics through SPARQL does not change
> anything about the nature of these properties; so, if you found them
> useless because of their redundancy with the data, they were
> useless/redundant even before.
>
> Maybe we should synthesize a few aspects and discuss them in a page of the
> wiki. What do you think? My impression is that in the emails we are opening
> new topics instead of closing the open ones, so it may be worth having
> separate threads. Please let us know: if you feel we are almost at the end,
> we may even carry on over email (maybe with specific threads).
>
>
>
> Btw, to reply to your specific question:
>
>
>
> The point of metadata is not to optimize commonly run SPARQL queries, for
> two primary reasons: firstly, it bulks up the model and instances of the
> model with triples for these 'pre-compiled' queries; secondly, it is very
> hard to predict which queries an end user will want to run. It seems that
> the kind of metadata we are proposing to model consists almost entirely of
> pre-compiled queries, of questionable practical application. That is, I ask
> a simple question: *if we can already achieve resource interoperability
> for OntoLex with SPARQL, why the heck do we need metadata anyway?*
>
>
>
> Personally, as an engineer, I’m biased towards considering redundancy “the
> evil” and keeping information to a minimum (so I would tend to agree with
> your point). But any Engineering 101 manual tells you that you may
> sometimes give up orthodoxy on the above principle, if doing so greatly
> improves performance, scalability, etc…
>
> Furthermore, instead of giving up trivially, you should designate how,
> when and where the redundancy points are defined (whatever system you are
> talking about).
>
>
>
> Now, narrowing down to our case, we have a clear point: the VoID file,
> which is a surrogate of a dataset, contains its metadata and is always
> updated following updates to the dataset's content: no danger of dangling,
> out-of-date redundant information, then.
>
> We also have a clear scenario: packs of spiders roaming around the web,
> getting plenty of useful information from tons of different datasets
> without stressing their SPARQL endpoints; mediators examining metadata from
> multiple resources and making decisions very quickly, etc…
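>
> To make the scenario concrete: a crawler would read just a couple of
> triples from the VoID file instead of running an aggregate query over the
> whole dataset (the shape of the lime:languageCoverage value and the
> example namespace below are hypothetical, for illustration only):
>
>     PREFIX lime: <http://art.uniroma2.it/ontologies/lime#>
>     PREFIX ex:   <http://example.org/>
>
>     # two cheap triple lookups against the metadata, no aggregation
>     SELECT ?lang ?coverage
>     WHERE {
>       ex:myDataset lime:languageCoverage ?lc .   # hypothetical shape
>       ?lc lime:language ?lang ;
>           lime:percentage ?coverage .
>     }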
>
>
>
> But I’m just a poor guy :) so, beyond my personal view, let me mention
> some notable predecessors:
>
>
>
> Already mentioned by Manuel in his email of today, we have VOAF:
> http://lov.okfn.org/vocab/voaf/v2.3/index.html
>
> …but VOAF is not a standard…
>
>
>
> …talking about standards, ladies and gentlemen, here is VoID itself and
> its many SPARQL-deducible properties!
>
> https://code.google.com/p/void-impl/wiki/SPARQLQueriesForStatistics
>
>
>
> …and, to happily close my defense: in any case, Manuel just confirmed in
> his email that I should have thought a second longer about the SPARQL
> deducibility of LIME’s properties :-)
>
> Some of them are in fact SPARQL-deducible, but it seems the one we took as
> an example, lime:languageCoverage
> (http://art.uniroma2.it/ontologies/lime#languageCoverage), is exactly one
> of those that are not so trivial to write (maybe I’m not an expert with
> CONSTRUCTs, but I would say it is not possible at all).
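>
> For comparison, a simple count is easy to materialize with a CONSTRUCT (a
> sketch, with a hypothetical lime:lexicalEntries property and an example
> dataset URI), whereas languageCoverage would need per-language, per-type
> aggregation and structured values attached to the dataset description,
> which is where a single query becomes awkward:
>
>     PREFIX lime:    <http://art.uniroma2.it/ontologies/lime#>
>     PREFIX ontolex: <http://www.w3.org/ns/lemon/ontolex#>
>     PREFIX ex:      <http://example.org/>
>
>     # materialize the total number of lexical entries as metadata
>     CONSTRUCT { ex:myDataset lime:lexicalEntries ?n }
>     WHERE {
>       {
>         SELECT (COUNT(DISTINCT ?entry) AS ?n)
>         WHERE { ?entry a ontolex:LexicalEntry }
>       }
>     }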
>
> In the LIME module, we used an RDF API and plain Java post-processing to
> compute them, so I did not recall which ones were simple SPARQL CONSTRUCTs
> and which ones needed more processing.
>
>
>
> Cheers,
>
>
>
> Armando
>
>
> --
>
>
>
> Prof. Dr. Philipp Cimiano
>
>
>
> Phone: +49 521 106 12249
>
> Fax: +49 521 106 12412
>
> Mail: cimiano@cit-ec.uni-bielefeld.de
>
>
>
> Forschungsbau Intelligente Systeme (FBIIS)
>
> Raum 2.307
>
> Universität Bielefeld
>
> Inspiration 1
>
> 33619 Bielefeld
>
>

Received on Wednesday, 26 March 2014 13:14:50 UTC