
Simulated natural language dialogue

From: Dave Raggett <dsr@w3.org>
Date: Fri, 17 Jul 2020 14:20:29 +0100
Message-Id: <F8D798EA-395D-4672-A4F1-875B0A7D10A7@w3.org>
To: public-cogai@w3.org

Natural language will be key to future human-machine collaboration, and to teaching cognitive agents everyday skills. There are many potential market opportunities, and many challenges to overcome.

I previously developed a simple demo of natural language parsing based around the Towers of Hanoi game. The demo uses very simple language, and lets you type or speak commands to move discs between pegs. It uses a shift-reduce parser, with the parse tree represented as chunks.

	https://www.w3.org/Data/demos/chunks/nlp/toh/

I am now working on a more ambitious demo featuring a dialogue between a waiter and a customer dining at a restaurant. The idea is for a single web page to emulate the waiter and customer as separate cognitive agents, with each agent applying natural language generation and understanding as they take turns speaking and listening to each other. The text they speak will be shown in chat bubbles, in a manner familiar from smartphone chat services. This scenario was chosen because its language usage, semantics and pragmatics are well understood and limited in scope.

The aim is to support word-by-word incremental, concurrent processing of syntax and semantics without backtracking. The parser selects the most appropriate meaning given the preceding words, the dialogue history and other knowledge, through the application of rules and graph algorithms, including spreading activation. The process works in reverse for natural language generation.
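As a rough illustration of how spreading activation can pick the most appropriate word sense, here is a small sketch: context words inject activation into a graph, activation spreads along weighted links for a few cycles, and the most active sense wins. The graph, weights and decay factor are invented for this example.

```javascript
// Illustrative semantic network: node -> { neighbour: link weight }
const graph = {
  "order#request":  { waiter: 0.9, menu: 0.8 },
  "order#sequence": { list: 0.7, number: 0.6 },
  waiter: { "order#request": 0.9 },
  menu:   { "order#request": 0.8 },
  list:   { "order#sequence": 0.7 },
  number: { "order#sequence": 0.6 }
};

// Spread activation outward from the seed nodes for a few cycles,
// attenuating by a decay factor at each hop.
function spread(seeds, cycles = 2, decay = 0.5) {
  const act = { ...seeds };
  for (let i = 0; i < cycles; i++) {
    const next = { ...act };
    for (const [node, a] of Object.entries(act)) {
      for (const [nbr, w] of Object.entries(graph[node] ?? {})) {
        next[nbr] = (next[nbr] ?? 0) + a * w * decay;
      }
    }
    Object.assign(act, next);
  }
  return act;
}

// Disambiguate "order" when the preceding words mentioned a menu:
const act = spread({ menu: 1.0 });
const sense = ["order#request", "order#sequence"]
  .sort((a, b) => (act[b] ?? 0) - (act[a] ?? 0))[0];
```

Under these weights the menu context boosts the "place an order" sense over the "ordered sequence" sense, without any backtracking over earlier words.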

My starting point has been to define a dinner plan as a sequence of stages (greetings, find table, read menu, place order, …), where each stage links to the following stage. I’ve represented the utterances as a sequence of chunks, where each utterance links to the previous utterance, and to the associated stage in the plan. This has involved a commitment to a small set of speech acts, e.g. greeting, farewell, assertion, question, and answer, along with positive and negative acknowledgements that are associated with additional information.
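The data layout described above might look roughly as follows. This is an illustrative sketch in plain JavaScript rather than the demo's actual chunk notation; the stage names, speech-act labels and property names are assumptions.

```javascript
// Dinner plan as a sequence of stage chunks, each linking to the next
const stages = {
  s1: { type: "stage", name: "greetings",  next: "s2" },
  s2: { type: "stage", name: "find-table", next: "s3" },
  s3: { type: "stage", name: "read-menu",  next: "s4" },
  s4: { type: "stage", name: "place-order" }  // later stages elided
};

// Utterance chunks: each links to the previous utterance and to the
// stage of the plan it belongs to, and is tagged with a speech act.
const utterances = {
  u1: { type: "utterance", speaker: "waiter",
        act: "greeting", stage: "s1", text: "Good evening!" },
  u2: { type: "utterance", speaker: "customer",
        act: "greeting", stage: "s1", previous: "u1",
        text: "Good evening. A table for two, please." },
  u3: { type: "utterance", speaker: "waiter",
        act: "acknowledge-positive", stage: "s2", previous: "u2",
        text: "Of course, right this way." }
};

// Recover the dialogue history by following the previous links back
// from the latest utterance, then reversing into spoken order.
function history(id) {
  const out = [];
  for (let u = utterances[id]; u; u = utterances[u.previous]) out.push(u);
  return out.reverse();
}
```

Following the `previous` links gives each agent the dialogue history, while the `stage` links tie every utterance back to the shared plan.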

Along the way, I am evolving a means to represent the parse trees for utterances as linked chunks, and will next work on the semantics and pragmatics of polite discourse. I also want to explore how to reuse the statistics gathered in natural language understanding (competence) for natural language generation (performance). You can follow my progress on the following page:

	https://github.com/w3c/cogai/blob/master/demos/nlp/dinner/README.md

Note: you will need to click at the bottom of the section on knowledge representation to view the chunk representation of the utterances, including the parse trees.

If anyone would like to help with this work, including offering guidance, please get in touch!

Many thanks,

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of Things

Received on Friday, 17 July 2020 13:20:33 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 July 2020 13:20:34 UTC