- From: Dave Raggett <dsr@w3.org>
- Date: Thu, 26 Oct 2023 11:38:13 +0100
- To: ProjectParadigm-ICT-Program <metadataportals@yahoo.com>
- Cc: Patrick Logan <patrickdlogan@gmail.com>, "paoladimaio10@googlemail.com" <paoladimaio10@googlemail.com>, W3C AIKR CG <public-aikr@w3.org>
- Message-Id: <7D7073B0-781C-4340-99DD-B4A196A8400F@w3.org>
Thanks Milton.

One consideration for the subjective experience of artificial agents is their ability to reason about their past, present and future, i.e. an awareness of time that forms the basis for planning, understanding cause and effect, inferring the intents of other agents, and learning from an agent’s experience.

In humans, episodic memories are consolidated in the neocortex after initial modelling in the hippocampus. Memories of past events can be retrieved using a combination of cues for what, where and when. For an accessible account, see “How does the brain make memories?” at https://www.eurekalert.org/news-releases/945017. Our brain includes so-called boundary neurons that decide when to start a record for a new episode, analogous to creating a new folder. We also record links between these “folders” to represent temporal relationships.

An open question is how to design artificial neural networks for managing episodic memories, and how to integrate them with artificial neural networks for language models and encyclopaedic memories. From an AIKR perspective, the notion of episodes as “folders” relates to named graphs as a data type, something under active discussion in the W3C RDF-star Working Group, which is currently defining RDF 1.2 (a sketch of this idea follows the quoted message below). Quite how this would be represented in neural networks is still unclear. What we can say, though, is that there is potential for designing datasets that test an agent’s ability to form and reason with episodic memories. Better yet would be the means to curate such datasets from existing resources and apply them to self-guided machine learning, as has been done for large language models.

> On 25 Oct 2023, at 17:44, ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:
>
> Dear all,
>
> I concur with Dave Raggett's take on the subject: science and engineering do not deal with the soft issues of linguistic interpretation typically found in religion, philosophy and psychology.
>
> The following link provides an interesting article on how language shapes our formation of abstract concepts:
>
> Exploring the brain basis of concepts by using a new type of neural network
> https://medicalxpress.com/news/2023-10-exploring-brain-basis-concepts-neural.html
>
> The funny thing is that these findings are nothing new; Buddhist philosophers have pointed this out in many forms. It is only now that all of these areas of investigation are converging.
>
> Linguistic ambiguity and cognitive bias are not new subjects, but they are only now becoming important in the context of creating AGI.
>
> I propose sticking to the path described by Dave, while being mindful of what we come across in the process, as long as it contributes to enhancing our formal knowledge representation modeling.
>
> Milton Ponson

Dave Raggett <dsr@w3.org>
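To make the episodes-as-named-graphs idea concrete, here is a minimal sketch in Python using rdflib. It treats each episode as a named graph and records temporal links as ordinary triples between the graph names in the default graph. The episode IRIs, the ex:before and ex:startedAt predicates, and the example.org namespace are illustrative assumptions, not anything specified by the RDF-star Working Group.

```python
# Minimal sketch: episodes as named graphs in an RDF dataset.
# All IRIs and predicates below are hypothetical, for illustration only.
from rdflib import Dataset, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/")  # assumed example namespace

ds = Dataset()

# Each episode is a named graph: the graph name acts like the "folder"
# that a boundary event opens for a new episode.
episode1 = ds.graph(EX.episode1)
episode1.add((EX.alice, EX.entered, EX.kitchen))
episode1.add((EX.alice, EX.pickedUp, EX.mug))

episode2 = ds.graph(EX.episode2)
episode2.add((EX.alice, EX.entered, EX.livingRoom))

# Links between the "folders": temporal relationships are triples in the
# default graph, with the episode graph names as subjects and objects.
ds.add((EX.episode1, EX.before, EX.episode2))
ds.add((EX.episode1, EX.startedAt,
        Literal("2023-10-26T09:00:00", datatype=XSD.dateTime)))

# A "what" cue retrieves the episode(s) in which it occurred.
results = ds.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?episode WHERE {
        GRAPH ?episode { ex:alice ex:pickedUp ex:mug }
    }
""")
for row in results:
    print(row.episode)  # http://example.org/episode1
```

On this reading, retrieval by what/where/when cues becomes a SPARQL query over the named graphs, and boundary detection corresponds to deciding when to mint a new graph name.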
Received on Thursday, 26 October 2023 10:38:30 UTC