AIKRCG & StratML

Dave, my interest is in helping people achieve their objectives.  

I joined the Artificial Intelligence Knowledge Representation Community Group (AIKRCG) to explore potential relationships with the StratML standard (ISO 17469-1), whose vision is: A worldwide web of intentions, stakeholders, and results.

It seems to me that the vision of the AIKRCG <http://stratml.us/carmel/iso/AIKRCGwStyle.xml#_63f01ac6-83e9-11e8-9a9a-23c8e53a5ccc> – that knowledge is exchanged and reused to enable learning and participation – is closely related to the purposes of StratML.

If you’d like to share an outline of your proposal, I’d love to include it in the StratML collection.
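
For context, here is a rough illustration of how the outline of such a proposal can be captured in StratML-style XML. This is only an illustrative sketch, not a normative ISO 17469-1 fragment; the element names and nesting approximate the standard's general shape, and the plan name, goal, and objective wording are placeholders drawn from this thread:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: element names and nesting approximate StratML Part 1; not the normative schema. -->
<StrategicPlan>
  <Name>Practical Computational Models of Cognition</Name>
  <Vision>
    <Description>Conversational agents that help people achieve their objectives.</Description>
  </Vision>
  <Goal>
    <Name>A synthesis of symbolic and sub-symbolic processing</Name>
    <Objective>
      <Name>Teach everyday knowledge and skills through lessons and play in virtual environments</Name>
    </Objective>
  </Goal>
</StrategicPlan>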

The notion of conversational agents that help people achieve their objectives is particularly appealing.

Owen

From: Dave Raggett <dsr@w3.org> 
Sent: Sunday, August 26, 2018 6:38 AM
To: Owen Ambur <Owen.Ambur@verizon.net>
Cc: public-aikr@w3.org
Subject: Re: Human Brain Project

Owen, is this just a theoretical interest of yours, or do you have an interest in a practical computational account that builds upon cognitive science, sociology and psychology? 

There has been a lot of really valuable work in different fields, but very little crossover between the research communities. Cognitive scientists, for instance, have done great work on memory, skill acquisition, and the cognitive constraints of consciousness as a serial processor with very limited working memory, despite essentially unlimited long-term memory and massive parallel processing in the sensory and motor systems. See John R. Anderson’s ACT-R.

Others have worked on understanding emotions and their relationship to cognition, and still others on attention. Relatively little work has been done on episodic memory, which is surprising given the importance of a sense of time to consciousness. Sociology has a lot to say about our motivations and our functioning as members of social groups. Work on understanding humour and one’s sense of worth is important to understanding what drives us and how we interact with others.

Unfortunately, AI has been largely hijacked by deep learning, despite its obvious shortcomings, as noted by Gary Marcus and Geoffrey Hinton. A new wave of AI research is needed, one that builds upon previous experience, supports continuous learning, and focuses on the dynamic role of explanations in our conscious awareness of what’s around us and what we’re doing. All of this points to a synthesis of sub-symbolic and symbolic processing.

Work on natural language has essentially ignored meaning and understanding in favour of tricks and weak surrogates. It is now timely to work on how to teach everyday knowledge and skills through a combination of lessons and play in virtual environments. I have some very practical ideas for how to bring this all together as a modest research program, but so far I have seen very little interest; it seems that most people are actually more interested in talking about consciousness than in working on practical computational models that could be used for conversational agents, self-driving cars, and so forth. Progress depends upon the questions you ask. Good questions suggest avenues for investigation, which in turn lead to further questions as progress is made.

Best regards,

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett

W3C Data Activity Lead & W3C champion for the Web of Things

Received on Monday, 27 August 2018 02:29:05 UTC