- From: Dave Raggett <dsr@w3.org>
- Date: Wed, 11 Oct 2023 14:13:06 +0100
- To: public-cogai <public-cogai@w3.org>
- Message-Id: <AA039B17-C4DA-43BA-B2EE-CF89F7250D1B@w3.org>
I gave an invited lecture yesterday to the ART-AI group at the University of Bath, UK. See: UKRI CDT in Accountable, Responsible and Transparent AI. Website: https://cdt-art-ai.ac.uk

Title: The role of symbolic knowledge at the dawn of AGI

Abstract:

Large language models and generative AI have shown amazing capabilities. We tend to see them as much more intelligent than they actually are. It is time to embrace the many research challenges ahead before we can truly realise AGI. Work in the cognitive sciences can help us to better mimic human cognition, and to understand how to address generative AI failures such as factual errors, logical errors, inconsistencies, limited reasoning, toxicity, and fluent hallucinations. How can we architect systems that continuously learn from limited data as we do, combining observations and direct experience with autonomous, algorithmic and reflective cognition?

If machine learning is so effective for neural networks, where does that leave symbolic AI? My conjecture is that symbolic AI has a strong future as the basis for semantic interoperability between systems, along with knowledge graphs as an evolutionary replacement for today's relational databases. We do, however, need to recognise that human interactions and our understanding of the world are replete with uncertainty, imprecision, incompleteness and inconsistency. Logicians have largely turned a blind eye to the challenges of imperfect knowledge, despite a long tradition of work on argumentation stretching all the way back to Ancient Greece. This tradition underpins courtroom proceedings, ethical guidelines, political discussion and everyday arguments. I will introduce the plausible knowledge notation as a way to address plausible inference of properties and relationships, fuzzy scalars and quantifiers, and analogical reasoning. Work on symbolic AI can help guide research on neural networks, and vice versa: neural networks can assist human researchers, speeding the development of new insights.

The slides are available at: http://www.w3.org/2023/10/10-Raggett-AI.pdf

Dave Raggett <dsr@w3.org>
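[Editor's note: as a rough illustration of the kind of plausible inference the abstract mentions, here is a minimal Python sketch in which statements carry strengths rather than boolean truth, and properties are inherited along kind-of links with a weakened combined strength. This is not the plausible knowledge notation itself; the relations, strengths and combination rule below are hypothetical, chosen only to show the shape of the idea.]

# Minimal sketch of plausible property inference over a tiny symbolic
# knowledge base. NOT the plausible knowledge notation (PKN) itself;
# names, strengths and the combination rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    subject: str
    relation: str       # e.g. "kind-of", "has"
    obj: str
    strength: float     # plausibility in [0, 1], not a probability

# Toy knowledge base (all entries are made up for illustration)
KB = [
    Statement("dog",    "kind-of", "mammal",     0.98),
    Statement("mammal", "kind-of", "animal",     0.99),
    Statement("mammal", "has",     "fur",        0.85),  # most, not all, mammals
    Statement("animal", "has",     "metabolism", 0.99),
]

def plausible_has(subject: str, prop: str, strength: float = 1.0) -> float:
    """Return the strength with which `subject` plausibly has `prop`,
    chaining up kind-of links and multiplying strengths as a simple
    (hypothetical) combination rule."""
    best = 0.0
    for s in KB:
        if s.subject == subject and s.relation == "has" and s.obj == prop:
            best = max(best, strength * s.strength)
        if s.subject == subject and s.relation == "kind-of":
            best = max(best, plausible_has(s.obj, prop, strength * s.strength))
    return best

if __name__ == "__main__":
    # "Does a dog plausibly have fur?" -> inherited via dog kind-of mammal
    print(f"dog has fur:        {plausible_has('dog', 'fur'):.2f}")
    print(f"dog has metabolism: {plausible_has('dog', 'metabolism'):.2f}")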
Received on Wednesday, 11 October 2023 13:13:20 UTC