- From: Dave Raggett <dsr@w3.org>
- Date: Tue, 19 Jan 2021 11:39:15 +0000
- To: Christian Chiarcos <christian.chiarcos@web.de>
- Cc: public-cogai <public-cogai@w3.org>
- Message-Id: <1D3E60BD-B74D-4143-A0E7-7B345BFBEA1C@w3.org>
Hi Christian,

Some comments on the links you kindly provided a few days back.

>> CCG (http://groups.inf.ed.ac.uk/ccg/)

This focuses on David Harel’s dynamic logic. However, humans don’t actually use logic when reasoning, and instead think in terms of mental models of examples, along with the use of metaphors and analogies. See the work of Philip Johnson-Laird, e.g. “How We Reason”, Oxford University Press, 2012, https://doi.org/10.1093/acprof:oso/9780199551330.001.0001

>> DRT / SDRT (https://plato.stanford.edu/entries/discourse-representation-theory/)

This is likewise logic based. It includes examples such as “No farmer beats his donkey”, noting that “no farmer” is not a referential expression. This is essentially a claim that, in general, farmers don’t beat donkeys; in other words, if you imagine a hypothetical male farmer who has a donkey, you would be surprised if he beats it. Such claims are only true in a given context and not true generally. This can be represented in chunks as a graph with symbols that stand for hypothetical instances of some class, along with quantifiers such as: at least one, some, most, all and none. These can be interpreted by rulesets and graph algorithms that model different kinds of human reasoning. A rough sketch of what this might look like is given at the end of these comments, just before the section on UCCA.

My demo on smart homes includes an example of default reasoning relating to the lighting and heating of a room, taking into account the preferences of whoever is in that room, see: https://www.w3.org/Data/demos/chunks/home/

I look forward to other demos that implement the kind of reasoning described by Johnson-Laird. These will be easier to implement once we have a working implementation of natural language for end-to-end communication of meaning. Such demos could build upon a limited subset of language, along with manually developed declarative and procedural knowledge. In other words, we don’t need to solve all of language to build useful demos.

> Complexity of symbolic parsing. Notoriously slow when it comes to larger dictionaries

Can you please expand on that, as it isn’t obvious to me? Perhaps it is something to do with the kind of parsers they’ve used? For human processing, you can measure the time someone takes to read an utterance (e.g. with eye tracking), and see how that time changes with different kinds of utterance. I don’t have any pointers to such work to hand, but expect that it would show effects from the level of embedding and the complexity of references within the utterance. Made-up words can be used to explore the reasoning involved in dealing with previously unknown words.

> Coverage of symbolic parsing. The best HPSG grammars for English cover maybe 85% of the input tokens

Perhaps the grammars are too prescriptive? In any case, coverage only needs to be sufficient for the kinds of dialogue of interest for human-agent and agent-agent communication. When adding a new rule to a shift-reduce parser, you could use a test suite in a regression test to check that the new rule doesn’t unexpectedly interfere with parsing other utterances. Cognitive parsers should also be able to make some sense of incomplete or ungrammatical utterances. This also relates to the potential for learning new grammar.
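As a very rough illustration, here is one way the donkey claim might be sketched in the chunks notation, using symbols that stand for hypothetical instances. The chunk and property names (quantifier, negated, context) are made up for this example rather than being fixed vocabulary, and the text after each # is just an annotation for the reader:

    # “No farmer beats his donkey” as a default claim over hypothetical instances
    farmer f1 {quantifier all}             # a hypothetical farmer standing in for farmers in general
    donkey d1 {owner f1; quantifier some}  # a hypothetical donkey owned by that farmer
    beats b1 {
      subject f1
      object d1
      negated true     # the claim is that the beating relation does not hold
      context c1       # the claim holds in a given context rather than universally
    }

Rulesets and graph algorithms could then treat the quantifiers and the context as licensing default conclusions that more specific knowledge is free to override.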
> UCCA

See the slides at: https://github.com/UniversalConceptualCognitiveAnnotation/tutorial/blob/master/01-Birds_Eye_View.pdf

This has the examples “John took a shower” and “John took a book”, where the verb “took” has different meanings. To "take a shower" is an idiomatic expression that is learned from practice. It may seem odd the first time you hear it, but given the context, you can figure out the intended meaning. We can handle this by delegating the meaning to the reasoner rather than expecting it to be handled fully by the syntax-semantics mapping process. In other words, we can use a relationship for “take” that can have direct meanings, as in to take an object, as well as idiomatic meanings in certain contexts. This relates to the frequent use of metaphor in natural language as described by George Lakoff.

> AMR (https://amr.isi.edu/)

Slides at: https://github.com/nschneid/amr-tutorial/blob/master/slides/AMR-TUTORIAL-FULL.pdf

AMR (Abstract Meaning Representation) is aimed at large-scale human annotation in order to build a giant semantics bank. It uses graphs in a simple tree-like form similar to RDF triples. It has concepts such as “d/dog”, meaning that d is an instance of the class “dog”, along with arguments such as ARG0 and ARG1. The graphs are serialised in a Lisp-like syntax, e.g.

    (e / eat-01
       :ARG0 (d / dog)
       :ARG1 (b / bone))

The named instance can be referred to as needed, e.g. “d” as a named instance of “dog”. The head of the tree acts as the focus of an utterance. You can also use inverse relations, e.g. X ARG0-of Y = Y ARG0 X. There are a bunch of other features, including support for literals (numbers and strings). AMR can thus be seen as a syntax for knowledge graphs. I didn’t see much on reasoning over AMR graphs, nor on scalability. AMR doesn’t include native support for statistics, which is needed for modelling human memory. By comparison, the chunks format models human memory, and includes support for contexts (named graphs), literals, and rules.

Best regards,

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things
Received on Tuesday, 19 January 2021 11:39:22 UTC