Re: Simulated natural language dialogue

Hi Paola,

Natural language will be key to human-machine collaboration with cognitive agents. It will also make it possible to teach cognitive agents the everyday skills that we humans take for granted. Programming cognitive agents by hand is difficult and won't scale. This demo will feature both natural language generation and natural language understanding, along with reasoning about plans, and follows on from a much simpler demo for the Towers of Hanoi.

 https://www.w3.org/Data/demos/chunks/nlp/toh/

Future work will combine natural language and machine learning.

Whilst there has been plenty of work on statistical natural language processing, there has been comparatively little on cognitive approaches, see e.g.

 http://act-r.psy.cmu.edu/category/language-processing/parsing/

My work is thus a blend of research and engineering with a view to building commercially useful cognitive agents on a roadmap to strong AI.

Conventional approaches to natural language processing focus on statistical processing of text with little attention to meaning. Natural language is highly ambiguous, although we are rarely aware of that, as we effortlessly select the most appropriate meaning. Training a statistical parser on a very large corpus makes it more likely that the parser will pick the most appropriate parse tree, but then what do you do? Without the meaning you cannot reason about what the text conveys.
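To make the ranking step concrete, here is a toy Python sketch of how a statistical parser chooses between the two classic parses of an ambiguous sentence: each candidate tree is scored by the product of the probabilities of the grammar rules it uses, and the highest-scoring tree wins. The trees and rule probabilities below are invented for illustration; this is not the demo's code.

```python
from math import prod

# Toy example: "I saw the man with a telescope" has two classic parses,
# depending on whether the prepositional phrase attaches to the verb or
# to the noun. A statistical parser scores each candidate tree by the
# product of the probabilities of the grammar rules used, then picks the
# argmax. The probabilities here are invented for illustration.
candidate_parses = {
    # PP attaches to the verb: the seeing was done with a telescope
    "(S (NP I) (VP (V saw) (NP the man) (PP with a telescope)))":
        [0.9, 0.3, 0.5, 0.7],
    # PP attaches to the noun: the man has a telescope
    "(S (NP I) (VP (V saw) (NP (NP the man) (PP with a telescope))))":
        [0.9, 0.7, 0.2, 0.7],
}

def best_parse(parses):
    """Return the parse tree whose rule probabilities multiply to the highest score."""
    return max(parses, key=lambda tree: prod(parses[tree]))

best = best_parse(candidate_parses)
print(best)
```

Even with the most probable tree in hand, the parser has only a syntactic structure, which is the point of the paragraph above: without a representation of meaning there is nothing further to reason with.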

For further reading, see Christopher Manning’s slides on statistical natural language parsing [1] and the documentation for the Natural Language Toolkit (NLTK), see [2] and [3]. 

Meaning has been approached in terms of first-order predicate calculus, but natural language doesn't lend itself to formal semantics and logical deduction. We instead need to mimic how people reason about the meaning of natural language, in other words, to follow a cognitive approach. This is best explored in a context that is well understood, such as the dialogue used to order dinner at a restaurant.

The demo is underway, but I still have a lot of work to do before it is ready. I am currently focusing on reasoning about plans and will then work on natural language generation before working on natural language understanding. As you saw, I have already worked on how to use chunks to represent the syntactic structure of typical utterances.
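To give a feel for the plan representation, here is a minimal Python sketch of a dinner plan as a sequence of linked stages, in the spirit of the chunk graphs described below (greetings, find table, read menu, place order, …), where each stage links to the following stage and each utterance links back to its stage. The data structures and function names are my own hedged approximation, not the demo's actual code.

```python
# Sketch of a dinner plan as linked stages, loosely mirroring how chunks
# link each stage to the next. Names are illustrative only.
stages = {
    "greetings":   {"next": "find-table"},
    "find-table":  {"next": "read-menu"},
    "read-menu":   {"next": "place-order"},
    "place-order": {"next": None},  # the plan continues beyond this in the demo
}

def walk_plan(start):
    """Yield stage names by following each stage's 'next' link."""
    stage = start
    while stage is not None:
        yield stage
        stage = stages[stage]["next"]

print(list(walk_plan("greetings")))

# Each utterance links to the previous utterance, to its stage in the
# plan, and to a speech act (greeting, question, answer, ...):
utterance = {
    "speaker": "waiter",
    "text": "Good evening!",
    "stage": "greetings",
    "previous": None,
    "speech-act": "greeting",
}
```

Walking the plan in order gives the dialogue its overall shape, while the per-utterance links supply the dialogue history that understanding and generation both draw on.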

I am looking for ideas for a future demo that focuses on reasoning about time as a means to explore the use of different tenses, e.g. the past continuous tense for something that was ongoing before and after a specific time in the past, see [4]. This will need a scenario with well understood semantics and typical language usage. Any suggestions would be warmly received!

Best regards,
Dave

[1] https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1106/handouts/SLoSP-2008-1-1up.pdf
[2] https://www.nltk.org/
[3] http://www.pitt.edu/~naraehan/ling1330/nltk_book.html (also as PDF: http://www.datascienceassn.org/sites/default/files/Natural%20Language%20Processing%20with%20Python.pdf)
[4] https://learnenglish.britishcouncil.org/english-grammar-reference/verbs

> On 18 Jul 2020, at 03:31, Paola Di Maio <paola.dimaio@gmail.com> wrote:
> 
> Thank you Dave
> Neat stuff!
> 
> A good example of how a simple human interaction needs a lot of thinking and planning to be reproduced.
> I'll be interested in the implementation; is there going to be a demo?
> 
> My approach is a bit different, in the sense that I would never attempt to reproduce a human level conversation
> (which you do well in your example) and I would expect that a conversational agent would be implemented in a highly digitized environment
> where there is no need to tell the customer that table x is not available and that dish y is not available
> because in a digital environment this information would be updated in the system
> 
> I'll be more narrow to get (probably) the same result (the table and the food ordered) with less thinking
> for example, I'd go more like:
> 
> waiter - welcome, what can I do for you?  //maybe provide a list of options, like order now, reserve for later, or after-order service: follow up on an earlier order, such as enquiring about lost and found items or a credit card charge, etc.//
> customer - order dinner/meal, please
> waiter- here or takeaway?
> customer - here
> waiter - please choose your table from those available (from a table plan /map)
> //I would assume the customer figures out that there is no available table near the window if it is not on the available seats plan, which is updated every time a customer arrives/leaves//
> waiter-  here is the menu
> //I would assume if an item is not available/off, it would not be on the menu!! which is digitally updated every minute//
> etc etc
> 
> I would also want a button that says "call me the human please", always flashing
> 
> So the bottom line of my comment here is that we develop automated agents thinking of a non-automated deployment environment.
> I think that's a bit of a general flaw.
> 
> PDM
> 
> 
> 
> On Fri, Jul 17, 2020 at 9:20 PM Dave Raggett <dsr@w3.org> wrote:
> Natural language will be key to future human-machine collaboration, as well as for teaching everyday skills to cognitive agents. There are many potential market opportunities, and many challenges to overcome.
> 
> I previously developed a simple demo for natural language parsing based around the Towers of Hanoi game. This demo uses very simple language, and allows you to type or speak the command to move discs between pegs. The demo uses a shift-reduce parser with the parse tree represented in chunks.
> 
>  https://www.w3.org/Data/demos/chunks/nlp/toh/
> 
> I am now working on a more ambitious demo featuring a dialogue between a waiter and a customer dining at a restaurant. The idea is to have a single web page emulate the waiter and customer as separate cognitive agents, and for each agent to apply natural language generation and understanding as they take turns to speak and listen to each other. The text they speak will be shown with chat bubbles in a manner familiar from smartphone chat services. The demo scenario was chosen because its language usage, semantics and pragmatics are well understood and limited in scope.
> 
> The aim is to support word by word incremental concurrent processing of syntax and semantics without backtracking. This selects the most appropriate meaning given the preceding words, the dialogue history and other knowledge through the application of rules and graph algorithms, including spreading activation. This process works in reverse for natural language generation.
> 
> My starting point has been to define a dinner plan as a sequence of stages (greetings, find table, read menu, place order, …), where each stage links to the following stage. I’ve represented the utterances as a sequence of chunks, where each utterance links to the previous utterance, and to the associated stage in the plan. This has involved a commitment to a small set of speech acts, e.g. greeting, farewell, assertion, question, and answer, along with positive and negative acknowledgements that are associated with additional information.
> 
> Along the way, I am evolving a means to represent the parse trees for utterances as linked chunks, and will next work on the semantics and pragmatics for polite discourse. I also want to explore how to use the statistics in natural language understanding (competence) for natural language generation (performance). You can follow my progress on the following page:
> 
>  https://github.com/w3c/cogai/blob/master/demos/nlp/dinner/README.md
> 
> Note: you will need to click at the bottom of the section on knowledge representation to view the chunk representation of the utterances, including the parse trees.
> 
> If anyone would like to help with this work, including offering guidance, please get in touch!
> 
> Many thanks,
> 
> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
> W3C Data Activity Lead & W3C champion for the Web of things 
> 

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things 

Received on Sunday, 19 July 2020 10:21:55 UTC