- From: Aldo Gangemi <aldo.gangemi@cnr.it>
- Date: Fri, 26 Apr 2013 13:11:32 +0200
- To: Philipp Cimiano <cimiano@cit-ec.uni-bielefeld.de>
- Cc: Aldo Gangemi <aldo.gangemi@cnr.it>, public-ontolex@w3.org
- Message-Id: <8C159A2C-C6C8-43CC-9653-5D9A1EB8D057@cnr.it>
Ok Philipp. You have interpreted me well, and your point is clear. I used "Reference" instead of "Referent" to avoid that strongly realist commitment (and then I discovered that Bedeutung is translated as "Reference" in English …); anyway, I am not particularly eager to fight for names. More important for me are the three relations: expressing, conceptualizing, and denoting, which help clarify ontolex issues.

Aldo

On Apr 26, 2013, at 12:25:19 PM, Philipp Cimiano <cimiano@cit-ec.uni-bielefeld.de> wrote:

> Aldo,
>
> see below.
>
> On 26.04.13 12:03, Aldo Gangemi wrote:
>> Dear Philipp, I won't participate today, so I owe you a short reply.
>>
>> On Apr 26, 2013, at 10:33:03 AM, Philipp Cimiano <cimiano@cit-ec.uni-bielefeld.de> wrote:
>>
>>> Dear all,
>>>
>>> Apologies for my silence so far; the last two weeks have been extremely busy for me. I have been following all the discussions and I am happy that they have taken place. Thanks to John for being so active on the Bielefeld side.
>>>
>>> Concerning this email of Armando's, but also others, I really think that the distinction between intensional and extensional entities, or between knowledge models of type A and type B, is not really helpful from the ontolex model point of view. Aldo might help me to see where I am wrong.
>>>
>>> All the entities we have in these models are prima facie intensional ones; this holds for WordNet synsets, skos concepts, but also owl classes. They all might have a gloss etc. that describes this intension (Sinn in Frege's terms) more closely. Further, these intensional entities might be more or less axiomatized (as in OWL) or not (as in WordNet/SKOS). But in any case they are symbols that represent an intension; I actually agree with Armando here.
>>
>> I agree too :) but this does not prevent one from focusing on the prevalent extensional interpretation of OWL entities. It's not only the amount of axioms, but their model-theoretic interpretability that is lacking in type-A models.
>> In addition, there are use cases that prove we need that distinction in OntoLex; see below.
>
> True, but this is an indirect commitment to an extension. When I say A rdfs:subClassOf B, I of course semantically claim that the extension of A in any model (i.e. no matter what it turns out to be) is contained in the extension of B in the same model (no matter what it turns out to be). So I never really commit to an exact extension, but only create constraints between intensions, i.e. I play a constraint satisfaction game with declaratively given constraints.
>
> Of course, you are right that OWL ontologies enforce this constraint-based view in terms of extensions, while WordNet, KOS, etc. don't. In this sense we agree, I think. And if this is what you mean, then I am fine. What I am not sure about is whether it follows from this that we should treat a synset differently from an owl concept in the ontolex model, as you seem to propose.
>>
>>> Note that the extensional interpretation of an OWL concept is not inherent in the symbol that represents the intension. The extension is assigned by a certain model in the process of interpreting the symbol in the context of other symbols, i.e. in some possible world that fulfills all the constraints introduced by the logical theory.
>>
>> Ok, but from the moment you introduce something (a subclass, an individual, etc.) in an OWL ontology, you are ipso facto committing to an extensional interpretation. The context can be more or less rich of course, but the model-theoretic interpretation is not changed by that.
>>
>>> I think the good news is that ontolex does not have to care about this extensional interpretation of symbols. What we ultimately care about is their intensional component.
>>
>> That's a clear claim: we can accept it or not based on requirements :) See below for some use cases in which we should care about extensional interpretation.
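The constraint-based reading discussed above can be sketched in Turtle (a minimal sketch; the ex: names are purely illustrative):

```turtle
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# This axiom does not fix the extension of ex:Cat or ex:Feline.
# It only constrains every model M satisfying the ontology, so that
# the extension of ex:Cat in M is contained in that of ex:Feline in M.
ex:Cat rdfs:subClassOf ex:Feline .
```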
>>
>> Besides use cases, there is a thought experiment that I think supports my position: if we only care about intension, why do we need to introduce a relation between senses and ontology entities? Imagine we can easily port any lexical resource or NLP output to RDF: what makes e.g. WordNet OWL or an NLP result different from an OWL ontology? All of them just contain intensional entities ...
>>
>>> This brings me to one issue: maybe we should thus avoid talking about "reference" at all in our model, as we never really model the "Bedeutung" in the extensional sense of a word. The Bedeutung is something that a symbol acquires in a particular situation or model, and this should be outside of the ontolex model.
>>
>> As I anticipated, I think we need that. Here are just a few use cases that we should consider:
>>
>> 1) Ontology learning. I want to establish a model of a domain from a text. I use NER and some sense tagging algorithm or analysis to gather individuals and the classes they belong to. Then I derive an ontology out of it, and I want to make it more explicit by introducing subclasses, disjointness, etc. This will require extensional reasoning on the results of the NLP analysis.
>>
>> 2) Lexicon porting to ontology. I want to derive an ontology for a domain by promoting WordNet synsets to OWL classes (e.g. Armando's OntoLing was an early nice example of that). I also want to reuse hyponymy relations as subclass relations, but evaluated against evidence from text or data. This will require reasoning on the extensional interpretations that will get associated with the WordNet-based classes.
>>
>> 3) Automatic typing for Linked Data. Cf. the Tìpalo use case (see my previous email).
>>
>> 4) QA on Linked Data. I want to ask natural language questions of RDF data by detecting the relevant relations in a question, mapping them to available vocabularies, and automatically creating a SPARQL query to distributed endpoints.
>> There are some approaches to this task that require verifying the mappings against actual tuples, i.e. against the actual extensional interpretation of properties.
>
> Sure, sure, I completely agree on this. I was just arguing against the name "reference" here. I agree with the basic idea and machinery, but "reference" has the Fregean association of representing a real thing in the real world, which OWL concepts/classes do not do (see above). But given that the philosophers (Aldo) and semioticists among us (Aldo as well) are fine with it, and John is fine as well because backward compatibility with lemon is preserved, I will clearly not continue arguing for changing the name. It might have the wrong connotation for some people, but if we are all aware of this and want to go ahead anyway, I am more than fine.
>
> Is my point clear now?
>
>> Aldo
>>
>>> More this afternoon on the telco,
>>>
>>> Philipp.
>>>
>>> On 24.04.13 03:38, Armando Stellato wrote:
>>>> Hi Aldo,
>>>>
>>>> Fine. Actually, since the naming of concepts was still to be assessed, and since in some cases we could have been reusing specific classes from existing vocabularies, I used that informal labeling in the upper part of the boxes to clarify their role, and an explicit reference to the proposed class in the lower one.
>>>> Thus "target conceptual model" was actually intended to capture elements of possibly different models (and in fact the least subsuming class is owl:Thing), so I confirm your hypothesis.
>>>> I must admit I only partially grasp the reason why we should treat type-A and type-B models differently. My perspective, wrt, for instance, the triangle of Meaning, is that in any case what we formally write are still symbols (progressively richer in their description), which are then translated into references in our minds, which refer to referents in the world.
>>>> And in this sense a synset, for instance, is still a symbol which, thanks to the set of synonyms in it, the gloss, etc., gives better access to a reference in our minds than a single word does. In terms of Sinn and Bedeutung, an owl:Class has intensional properties just as a skos:Concept has, plus it may restrict (through a set of formal constraints) its extension, the interpretations of which, however, are still infinite. In this sense Words, skos:Concepts, and owl:Classes are all "expressions", and referents are totally out of our representation game. Thus, any meaning/reference distinction is not really clear to me. In much the same way, how would you consider an owl:Individual wrt a skos:Concept (well, actually a concept is an individual in owl terms..)? Are they not both purely intensional objects?
>>>> However, I may easily be wrong about that, and will not delve further into the discussion, so one practical question:
>>>> Suppose I have a domain concept scheme (e.g. Agrovoc) and a "conceptualized" lexical resource such as WordNet. Beyond any possible linking to meaning/reference etc., would you see it as possible to have some form of "tagging" of the domain concept scheme with WordNet's synsets, where it is clear (in ontolex) that the synsets are not (only) mere skos:Concepts (thus to be mapped through an ordinary mapping relation, e.g. from skos) and are instead lexical objects (instances of LexicalConcept in particular) which can be used to enrich the domain concepts?
>>>>
>>>> Cheers,
>>>> Armando
>>>>
>>>> From: Aldo Gangemi
>>>> Sent: 24/04/2013 00.28
>>>> To: Armando Stellato
>>>> Cc: Aldo Gangemi; 'John McCrae'; 'Philipp Cimiano'; 'public-ontolex'
>>>> Subject: Re: WordNet modelling in Lemon and SKOS
>>>>
>>>> Hi Armando, John, all,
>>>>
>>>> On Apr 23, 2013, at 11:19:48 PM, "Armando Stellato" <stellato@info.uniroma2.it> wrote:
>>>>
>>>>> Dear John,
>>>>>
>>>>> After seeing your updated scheme, I think we are almost there.
>>>>> I had a short call with Aldo to check the one thing I was a bit uncertain about in his email (the double subclassing he proposed for WordNet's WordSense/Synset under the ontolex:LexicalSense umbrella).
>>>>> I'm summarizing a few points here, and I ask Aldo to confirm that I'm properly reporting what we discussed (obviously I'm cutting most of the conversation and reporting only the main questions and where we ended up).
>>>>
>>>> thanks for the summary :)
>>>>
>>>>> Armando: Why are both wn:WordSense and wn:Synset subclasses of LexicalSense?
>>>>> Aldo: they are both a form of Meaning. They can be totally disjoint classes, as you said in your email, while still being under the same superclass.
>>>>> Armando: Ok, let's go back to the linking to semiotics.owl… ok for both wn:WordSense and wn:Synset under semio:Meaning… they are both a form of meaning (thus both rdfs:subClassOf semio:Meaning) and I agree… but then the engineer in me says: <ok, this is a proper "tagging", but how can these be used operationally?> I mean, ok for the general Meaning class in semiotics.owl, but LexicalSense cannot be an umbrella for both too… our ontolex model should be general enough to cover different resources, and specific enough to cover in detail their most important aspects. I would like WordNet to be opaquely handled by agents as an instance of a Lexical Resource modeled in OntoLex. I'm thinking about some of the use cases, where smart agents covering given tasks (such as Ontology Mapping) may benefit from the implicit perspective on WordNet given through OntoLex glasses (a monolingual resource, with a conceptual structure, etc…), and can adapt this sort of "ontolex fingerprint" of the resource into their general mapping strategies (this is also where the metadata part of the language will come into play). "Plugging in" another resource should work as well, insofar as its content can be seen through a proper mapping inside the OntoLex vocabulary.
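The practical question of tagging a domain concept scheme with synsets might look roughly like this in Turtle (a sketch only: the ex: identifiers and the linking property are hypothetical, and the ontolex: namespace is a placeholder, since the thread leaves both open):

```turtle
@prefix ex:      <http://example.org/> .
@prefix skos:    <http://www.w3.org/2004/02/skos/core#> .
@prefix ontolex: <http://example.org/ontolex#> .  # placeholder namespace

# A domain concept from a scheme like Agrovoc (identifier illustrative).
ex:cattle a skos:Concept ;
    skos:prefLabel "cattle"@en ;
    ex:taggedWith ex:synset-cattle-noun-1 .  # hypothetical linking property

# The synset is not a mere skos:Concept to be mapped with ordinary
# mapping relations; it is a lexical object, an ontolex:LexicalConcept.
ex:synset-cattle-noun-1 a ontolex:LexicalConcept .
```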
>>>>> So I suggest making explicit in our model the existence of "Senses of LexicalEntries", let's call them LexicalSense or just Sense (specifically, a superclass of WordSenses in WordNet), and LexicalConcepts (specifically, a superclass of synsets in WordNet). Then I agreed that both Sense and LexicalConcept are tagged (subClassOf) as (different types of) Meanings, for the purpose of properly representing them under the Triad in semiotics.owl.
>>>>> Aldo agrees on having these two distinct elements in OntoLex too, and on binding them under the common umbrella of semio:Meaning.
>>>>
>>>> Confirmed. I have no issue whatsoever with creating intermediate classes, provided we all agree on the intuition about expressions, (intensional) meanings, and (extensional) references.
>>>>
>>>> Concerning the diagram, I'm ok with the links and names.
>>>>
>>>> My only observation is about "TargetConceptualModel" (not really discussed with Armando): if that is a class of conceptual models (as the name suggests), why should it be a subclass of Reference? I'd rather call it OntologyEntity (as Lemon does, and as LRI, the multilingual ontolex model made in the NeOn project in 2008, does), and put a link between OntologyEntity and the ontology that defines it.
>>>> However, maybe you want to talk about arbitrary conceptual models and their elements. For this I think we need some more clarification, because there are two types of conceptual models:
>>>>
>>>> A) purely intensional conceptual models, like SKOS models, classification schemes, thesauri, synsets, lexical frames, etc.
>>>> B) formally interpreted conceptual models, like ontologies, ER schemas, UML class diagrams (under ER-like semantics), etc.
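The A/B distinction can be illustrated with two triples (a sketch; the ex: names are illustrative, while skos: and rdfs: are the standard vocabularies):

```turtle
@prefix ex:   <http://example.org/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Type A (purely intensional): skos:broader records a conceptual
# relation without any model-theoretic commitment about extensions.
ex:catConcept skos:broader ex:felineConcept .

# Type B (formally interpreted): rdfs:subClassOf forces, in every
# model, the extension of ex:Cat to be contained in that of ex:Feline.
ex:Cat rdfs:subClassOf ex:Feline .
```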
>>>>
>>>> For type-A conceptual models, I am still reluctant to accept their elements as references, since no clear extensional intuition is granted, except under a sort of "stipulation" by which I accept the risks of interpreting them extensionally (old SKOS did that by having skos:Concept as rdfs:subClassOf both owl:Thing and rdfs:Class). I think no default extensional choice like that should be made.
>>>>
>>>> For type-B conceptual models, we can safely adopt the extensional interpretation.
>>>>
>>>> Now, since this community group works under the semantic web and linked data umbrella, I do not see the necessity of forcing our model to deal with debatable choices wrt type-A conceptual models, which can instead be interpreted in the context of the Meaning class (that's why I put skos:Concept as a subclass of semio:Meaning).
>>>>
>>>> I won't be able (for the last time, hopefully) to attend Friday's telco, but will be active in the email discussion.
>>>> Ciao
>>>> Aldo
>>>>
>>>>> I'm attaching (and reporting here below) an updated version of the model I sent in my last email, with the mapping to Semiotics.owl which followed the discussion with Aldo. As you may see, it is pretty similar to the last one you sent (modulo naming choices and the double linking to semio:Meaning).
>>>>> Regarding the chosen names, just a couple of comments:
>>>>>
>>>>> 1) I suggested, as an OntoLex superclass for Synset, the name Lexical Concept (ref. Miller's paper, where he defines synsets as a form of "Lexical Concepts"). This captures the idea of a given set of LexicalEntries hinting at a (neither explicit nor formally defined) concept. Note (not in the figure) that this LexicalConcept may be a subclass of skos:Concept.
>>>>> An alternative could be "LexicalizedConcept", though the former surely sounds better :-)
>>>>>
>>>>> 2) Conversely, for the other class reifying the sense relationship, I'm not sure about the appropriateness of the name LexicalSense, as in this name "Lexical" seems to be an adjective modifying "Sense". But, IMHO, it is not: LexicalSense is more specifically the sense of a given Lexical Entry. Thus the proper name should be LexicalEntrySense (in fact, in WordNet, limiting lexical entries to words, we have the class WordSense). However, LexicalEntrySense is rather long and hard to parse. Another choice could be SenseOfLexicalEntry (rather ugly), or simply (my preference) Sense. Btw, this is just my small note on that and it can absolutely be left as is… but I really cannot grasp the meaning of such an expression. Simply put, the step from the expression "LexicalSense" to its intended meaning of "Sense of a Lexical Entry" is not intuitive to me.
>>>>>
>>>>> 3) I chose the ontolex:sense property to go from LexicalEntry to LexicalConcept. To me it is intuitive, as (grounding in WordNet, for instance) the sense of a Word lies in its link to a Synset (or, in general, to a unit of meaning). And then we can reify this relation into a Sense class, as there can be many important things to say about it. However, I understand that, following ontology modelling conventions, one could expect the ontolex:sense property to link to instances of a Sense class… so I am open to opinions (and proposals) for renaming this property. Even those from John's last model could be reasonable.
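Point 3 above can be sketched as follows (again a sketch: the ontolex: namespace is a placeholder and the reification property names are hypothetical, as the thread leaves them open):

```turtle
@prefix ex:      <http://example.org/> .
@prefix ontolex: <http://example.org/ontolex#> .  # placeholder namespace

# ontolex:sense links a lexical entry to a unit of meaning.
ex:cat_entry a ontolex:LexicalEntry ;
    ontolex:sense ex:synset-cat-noun-1 .

ex:synset-cat-noun-1 a ontolex:LexicalConcept .

# The relation can be reified into a Sense individual, which can then
# carry further information about this entry/meaning pairing.
ex:cat_sense_1 a ontolex:LexicalSense ;
    ex:isSenseOf ex:cat_entry ;          # hypothetical property
    ex:hasMeaning ex:synset-cat-noun-1 . # hypothetical property
```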
>>>>> Cheers,
>>>>> Armando
>>>>>
>>>>> <image005.png>
>>>>>
>>>>> From: johnmccrae@gmail.com [mailto:johnmccrae@gmail.com] On Behalf Of John McCrae
>>>>> Sent: Friday, 19 April 2013 10:44
>>>>> To: Armando Stellato
>>>>> Cc: Aldo Gangemi; Philipp Cimiano; public-ontolex
>>>>> Subject: Re: WordNet modelling in Lemon and SKOS
>>>>>
>>>>> Hi,
>>>>>
>>>>> While Aldo's model is very elegant, it is not possible to have the lexical sense as a subclass of skos:Concept for a simple reason: the lexical sense is defined for only a single lexeme, while a skos:Concept can be used for multiple lexemes.
>>>>>
>>>>> For this key reason we need to have a "lexical sense" object that sits between the lexical entry and its meaning. If you are uncomfortable with this object, then you can view it as a simple reification (although I would contend it is a very real object). In fact, this is nothing more than the traditional lexicographic "word sense"; see http://en.wikipedia.org/wiki/Word_sense.
>>>>>
>>>>> I renamed the "lexical sense" object of Aldo's model to "concept" or, following WordNet, a "synset".
>>>>
>>>> [the original message is not included]
>>>
>>> --
>>> Prof. Dr. Philipp Cimiano
>>> Semantic Computing Group
>>> Excellence Cluster - Cognitive Interaction Technology (CITEC)
>>> University of Bielefeld
>>>
>>> Phone: +49 521 106 12249
>>> Fax: +49 521 106 12412
>>> Mail: cimiano@cit-ec.uni-bielefeld.de
>>>
>>> Room H-127
>>> Morgenbreede 39
>>> 33615 Bielefeld
Received on Friday, 26 April 2013 11:11:58 UTC