
Re: Simulated natural language dialogue

From: Paola Di Maio <paola.dimaio@gmail.com>
Date: Sat, 18 Jul 2020 10:31:09 +0800
Message-ID: <CAMXe=Sq3eNtVJ1r8Bk8sj9P1ZOGpOcmZwJeOboOCUksn58ZOuQ@mail.gmail.com>
To: Dave Raggett <dsr@w3.org>
Cc: public-cogai@w3.org
Thank you Dave
Neat stuff!

A good example of how even a simple human interaction needs a lot of thinking and
planning to be reproduced.
I'll be interested in the implementation; is there going to be a demo?

My approach is a bit different, in the sense that I would never attempt to
reproduce a human-level conversation
(which you do well in your example). I would expect a
conversational agent to be deployed in a highly digitized environment,
where there is no need to tell the customer that table x is not
available and that dish y is not available,
because in a digital environment this information would be updated in the
interface automatically.

I'd aim more narrowly, to get (probably) the same result (the table and the
food ordered) with less thinking.
For example, I'd go more like this:

waiter - welcome, what can I do for you? //maybe provide a list of
options, such as: order now, reserve for later, or follow up on an
earlier order (e.g. enquire about lost-and-found items or a
credit card charge)//
customer - order dinner/meal, please
waiter - here or takeaway?
customer - here
waiter - please choose your table from those available //from a table plan;
I would assume the customer figures out that there is no available table
near the window if it is not on the seating plan, which is updated
every time a customer arrives or leaves//
waiter - here is the menu
//I would assume that if an item is unavailable/off it would not be on the
menu, which is digitally updated every minute//
etc. etc.
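The narrowed flow above could be sketched as a tiny menu-driven state machine that reads live availability instead of apologising for unavailable tables or dishes. This is purely illustrative, not part of either demo; all names (t1, soup, ...) are made up:

```python
# Illustrative sketch of the narrowed dialogue flow: the waiter only ever
# offers what is currently available, so unavailability never needs to be
# asserted. Table and dish names are hypothetical.

AVAILABLE_TABLES = {"t1", "t3"}   # updated every time a customer arrives/leaves
MENU = {"soup", "pasta"}          # unavailable items are simply absent

def waiter_turn(stage):
    """Return (utterance, next_stage) for the waiter at the given stage."""
    if stage == "greet":
        return ("welcome, what can I do for you? "
                "(order now / reserve for later / follow up)", "mode")
    if stage == "mode":
        return ("here or takeaway?", "seat")
    if stage == "seat":
        # only tables that exist right now are offered; no need to explain
        # that the table near the window is taken
        return ("please choose your table from: "
                + ", ".join(sorted(AVAILABLE_TABLES)), "menu")
    if stage == "menu":
        return ("here is the menu: " + ", ".join(sorted(MENU)), "order")
    # the always-available escape hatch to a person
    return ("connecting you to a human...", "human")
```

Walking the stages greet, mode, seat, menu reproduces the exchange above without the waiter ever having to state what is *not* available.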

I would also want a button that says "call me the human, please" to always be available.

So the bottom line of my comment here is that we develop automated agents
while thinking of a non-automated deployment environment.
I think that's a bit of a general flaw.


On Fri, Jul 17, 2020 at 9:20 PM Dave Raggett <dsr@w3.org> wrote:

> Natural language will be key to future human-machine collaboration, as
> well as to being able to teach everyday skills to cognitive agents. There
> are many potential market opportunities, and many challenges to overcome.
> I previously developed a simple demo for natural language parsing based
> around the Towers of Hanoi game. This demo uses very simple language, and
> allows you to type or speak the command to move discs between pegs. The
> demo uses a shift-reduce parser with the parse tree represented in chunks.
> https://www.w3.org/Data/demos/chunks/nlp/toh/
> I am now working on a more ambitious demo featuring a dialogue between a
> waiter and a customer dining at a restaurant. The idea is to have a single
> web page emulate the waiter and customer as separate cognitive agents, and
> for each agent to apply natural language generation and understanding as
> they each take turns to speak and listen to each other. The text they speak
> will be shown with chat bubbles in a manner familiar from smart phone chat
> services. The demo scenario was chosen as the language usage, the semantics
> and pragmatics are well understood and limited in scope.
> The aim is to support word by word incremental concurrent processing of
> syntax and semantics without backtracking. This selects the most
> appropriate meaning given the preceding words, the dialogue history and
> other knowledge through the application of rules and graph algorithms,
> including spreading activation. This process works in reverse for natural
> language generation.
> My starting point has been to define a dinner plan as a sequence of stages
> (greetings, find table, read menu, place order, …), where each stage links
> to the following stage. I’ve represented the utterances as a sequence of
> chunks, where each utterance links to the previous utterance, and to the
> associated stage in the plan. This has involved a commitment to a small set
> of speech acts, e.g. greeting, farewell, assertion, question, and answer,
> along with positive and negative acknowledgements that are associated with
> additional information.
> Along the way, I am evolving a means to represent the parse trees for
> utterances as linked chunks, and will next work on the semantics and
> pragmatics for polite discourse.  I also want to explore how to use the
> statistics in natural language understanding (competence) for natural
> language generation (performance). You can follow my progress on the
> following page:
> https://github.com/w3c/cogai/blob/master/demos/nlp/dinner/README.md
> Note: you will need to click the bottom of the section on knowledge
> representation to view the chunk representation of the utterances including
> the parse trees.
> If anyone would like to help with this work, including offering guidance,
> please get in touch!
> Many thanks,
> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
> W3C Data Activity Lead & W3C champion for the Web of things
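As a side note on the representation Dave sketches (utterances linked to the previous utterance and to a stage in the dinner plan): here is a minimal, purely illustrative rendering in plain Python. The real demo uses his chunks notation; the stage names and speech-act labels below are taken from his description, everything else is a guess:

```python
# Illustrative only: each utterance links back to its predecessor and to a
# stage in the dinner plan, and each stage links to the following stage.
# Stage names follow the email; the dict layout is hypothetical.

stages = ["greetings", "find_table", "read_menu", "place_order"]
plan = {s: {"stage": s, "next": nxt}
        for s, nxt in zip(stages, stages[1:] + [None])}

utterances = []

def utter(speaker, text, stage, act):
    """Record an utterance linked to its predecessor and its plan stage."""
    u = {"speaker": speaker, "text": text, "act": act,
         "stage": stage, "previous": utterances[-1] if utterances else None}
    utterances.append(u)
    return u

utter("customer", "good evening", "greetings", "greeting")
utter("waiter", "good evening, a table for two?", "greetings", "question")
utter("customer", "yes please", "greetings", "positive_ack")
```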
Received on Saturday, 18 July 2020 02:32:01 UTC
