Re: Linguistics for the Age of AI

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Saturday, October 23rd, 2021 at 12:07 PM, Paola Di Maio <paola.dimaio@gmail.com> wrote:

> Thank you for sharing
> It would be great to highlight the relevance and impact on AI KR
> 


This is my first time reading the book. I am midway through it, have skimmed the rest, and read the whole epilogue.


When the book is used as a lecture textbook, the authors recommend some prior knowledge of linguistics.

In the beginning, the two authors survey what they call 'mainstream AI', hence 'mainstream NLP', that is, machine learning, which they call the knowledge-lean approach (they never use the term 'sub-symbolic', possibly because it might be perceived as negative). They say current mainstream AI is wrong-headed: it is too narrow in the scope of the tasks it tackles, solving problems piecewise does not and will not yield progress toward AGI, and its goals are not lofty enough; still, ML is useful.

They recommend and use a hybrid approach that integrates knowledge computed by ML algorithms into their semantic algorithms, seeding micro-theories that may or may not be invalidated later in the pipeline or by further input from the end user. They rely on the Stanford CoreNLP library as the first pass in their system. That is both a success and a failure: a success because they did not have to develop the equivalent algorithms themselves and their system works; a failure because they did not foresee the engineering work required to interoperate with the rest of their system, which is semantics-based (but they hope it is still worth it, because it also means more people take part in the progress of their system).
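
To make the shape of that hybrid pipeline concrete, here is a minimal sketch of my own (not code from the book): Stanford CoreNLP, run as its standard HTTP server, does the knowledge-lean first pass, and a hypothetical seed_microtheory function stands in for their semantic stage.

    # Sketch of the hybrid pipeline as I understand it; seed_microtheory is a
    # hypothetical stand-in for the book's semantic analysis stage.
    import json
    import requests

    CORENLP_URL = "http://localhost:9000"  # a locally running Stanford CoreNLP server

    def first_pass(text):
        """Knowledge-lean first pass: tokens, POS tags, lemmas, dependencies."""
        props = {"annotators": "tokenize,ssplit,pos,lemma,depparse",
                 "outputFormat": "json"}
        resp = requests.post(CORENLP_URL,
                             params={"properties": json.dumps(props)},
                             data=text.encode("utf-8"))
        return resp.json()

    def seed_microtheory(sentence):
        """Hypothetical semantic stage: turn the parse into a tentative
        micro-theory (candidate meaning) that later stages, or the end
        user, may still invalidate."""
        return {"predicates": [(tok["lemma"], tok["pos"]) for tok in sentence["tokens"]],
                "status": "tentative"}

    analysis = first_pass("The patient took the pill.")
    microtheories = [seed_microtheory(s) for s in analysis["sentences"]]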

Downstream, they rely on both a lexicon (even though they note that WordNet is not without infelicities) and an ontological knowledge base. Knowledge engineers are in charge of teaching the agent new words and new concepts. They keep stressing that knowledge engineers and knowledge workers must engage in lifelong cooperation with the system.

That is my favorite part: they put together a lexicon of only 30,000 words, upon which they bootstrapped at least two applications given as examples: 1) a robot patient that helps train medical personnel, and 2) a self-driving car.


To ease the agent's self-learning process, knowledge engineers can submit typed matrices with holes. They look like typed feature structures (not phrased that way in the book) that can be subsumed or unified at runtime (also not phrased that way in the book); see the sketch below.
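
Here is a minimal sketch of my own (again, not how the book phrases or implements it) that reads those 'typed matrices with holes' as feature structures: dictionaries whose missing keys are the holes, filled in by unification.

    # My own illustration of feature-structure unification; none of these
    # names or values come from the book.
    def unify(a, b):
        """Unify two feature structures represented as nested dicts.
        Returns the merged structure, or None if the structures conflict."""
        result = dict(a)
        for key, b_value in b.items():
            if key not in result:
                result[key] = b_value          # a "hole" in `a` gets filled
            elif isinstance(result[key], dict) and isinstance(b_value, dict):
                sub = unify(result[key], b_value)
                if sub is None:
                    return None                # nested conflict
                result[key] = sub
            elif result[key] != b_value:
                return None                    # atomic values clash
        return result

    # A partial entry supplied by a knowledge engineer ...
    template = {"type": "ingest-event", "agent": {"type": "animal"}}
    # ... unified at runtime with information extracted from an utterance.
    observed = {"agent": {"type": "animal", "name": "patient-1"}, "theme": "pill"}
    print(unify(template, observed))
    # {'type': 'ingest-event', 'agent': {'type': 'animal', 'name': 'patient-1'}, 'theme': 'pill'}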

The epilogue is a good summary. Over the past 30 years, the mainstream approaches, especially ML, have been wrong-headed. If, instead of aiming for short-term yields, the community committed to a lifelong conversation with a semantic system in which ML is a helper, that would yield better results *toward* AI-complete systems and AGI. Their system is already practical and has seen successful use (like any other...). They also mention micro-theories (blackboard-like) and handling the combinatorial explosion by considering only the best-scoring senses of any given utterance. Also, they underscore that, to make progress, the system should be practical, and toward that goal it should be goal- and action-oriented; I call that interactive. TIL: there are micro-theories about mind-reading the user. They also mention the word "chunks", in double quotes.
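
On that combinatorial-explosion point, here is a small sketch of my own (nothing like this appears in the book) of how keeping only the best-scoring senses prunes the search over readings of an utterance.

    # My own illustration, not the book's algorithm: beam-style pruning over
    # per-word sense candidates instead of scoring every full combination.
    def best_readings(per_word_senses, pair_score, beam_width=3):
        """per_word_senses: one list of (sense, lexical_score) pairs per word.
        pair_score: assumed compatibility score between two adjacent senses.
        Returns up to beam_width highest-scoring sense sequences."""
        beam = [([], 0.0)]  # (partial reading, accumulated score)
        for senses in per_word_senses:
            candidates = []
            for reading, total in beam:
                for sense, lexical in senses:
                    bonus = pair_score(reading[-1], sense) if reading else 0.0
                    candidates.append((reading + [sense], total + lexical + bonus))
            candidates.sort(key=lambda item: item[1], reverse=True)
            beam = candidates[:beam_width]  # prune: keep only the best readings
        return beam

    # Toy usage: two words, two candidate senses each; the financial senses
    # and the river/sediment senses are assumed to be mutually compatible.
    senses = [[("bank-finance", 0.6), ("bank-river", 0.4)],
              [("deposit-money", 0.7), ("deposit-sediment", 0.3)]]
    compatible = {("bank-finance", "deposit-money"), ("bank-river", "deposit-sediment")}
    pair_score = lambda a, b: 0.5 if (a, b) in compatible else 0.0
    print(best_readings(senses, pair_score))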

They do not mention MultiNet; they mention FrameNet, and there is clearly some overlap with Hermann Helbig's MultiNet work. They also argue that the W3C Semantic Web is wrong-headed and of no practical value, referencing a previous publication on that very topic.

As a coder, I read a lot of good ideas but few or no actual implementation details; that is probably not what you were asking for, though.

The biggest impact on AI KR is that they urge people to reconsider the investment in ML (sub-symbolic, knowledge-lean) approaches and redirect that money, energy, and effort toward systems like the one they are building, which is knowledge-based first. They also repeat that, even though they already have results, a lot remains to be achieved.


> On Sat, Oct 23, 2021 at 2:05 PM Amirouche BOUBEKKI <amirouche@hyper.dev> wrote:
> 

> > I just started reading the book.
> > 

> > There are already several ideas that I took for granted (fwiw) in my previous work, like:
> > 

> > A) AI agents must be interactive; it reads as action-oriented in the book. Quote:
> > 

> > > Our model of NLU does not require that agents exhaustively interpret every input to an externally imposed standard of perfection. Even people don’t do that. Instead, agents operating in human-agent teams need to understand inputs to the degree required to determine which goals, plans, and actions they should pursue as a result of NLU
> > 

> > B) The importance of explainable AI, quote:
> > 

> > > The importance of explainable AI cannot be overstated: society at large is unlikely to cede important decision-making in domains like health care or finance to machines that cannot explain their advice.
> > 

> > I highly recommend reading the introduction, entitled 'Setting the Stage':
> > 

> >   https://direct.mit.edu/books/book/chapter-pdf/1891673/9780262363136_f000100.pdf
> > 

> > The whole book is open-access pdf available at:
> > 

> >   https://direct.mit.edu/books/book/5042/Linguistics-for-the-Age-of-AI

Received on Saturday, 23 October 2021 12:21:55 UTC