Re: definitions, problem spaces, methods

I had the good fortune over the weekend to meet a PhD student from the Donders Institute for Brain, Cognition and Behaviour at Radboud University in Nijmegen, The Netherlands, with a BSc in mathematics and physics and an MSc in physics, who is now doing research in artificial cognitive systems (https://www.ru.nl/donders/research/theme-4-neural-computation-neurotechnology/research-groups-theme-4/artificial-cognitive-systems/)

We had an hour-and-a-half exchange of ideas, and one of the main items discussed was how compression plays a key role in intelligence, as already visible in data compression and in information compression in machine learning.
The other key takeaway from this discussion is that our primary sensory input channel in cognition is the sense of sight, and that we should treat language, symbols and semiotics, and for that matter any visual input, as visual percepts.
Humans as intelligent beings have been able to come up with language symbol sets (letters, characters, glyphs), formal symbols, pictograms, signs and other visuals to assign, in various forms, meaning or identification to objects and concepts processed visually.
This would suggest that KR for AI should focus on explainable visual input.
It would also expand on purely symbolic explainability, because visuals can be defined as objects which lend themselves to formal conceptualization in category theory, model theory, representation theory and visualization theory.

Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development 

    On Monday, November 7, 2022 at 11:16:04 AM AST, Paola Di Maio <paoladimaio10@gmail.com> wrote:  
 
Thanks Dave, and yes Adeel, not just explainability but reliability, replicability, etc.
The only solution I know of is metadata/classes (tagging the datasets with symbolic KR categories).
BTW, I think I may need to correct my statement about the lack of KR in the case of stable diffusion:
if the machine uses a prompt (say feet, mushroom, girl, boy) and outputs something that reflects the prompt, then even stable diffusion must use some metadata or some way of matching the prompt, which is a natural language expression, to the images it fetches.
I'd say we could study these examples and figure something out.
I cannot play with these systems now, but if someone does, please share the results.
I have updated the table that needs filling out, and also added a column where people can link to experiments (sets of prompts/results). We can use the wiki to create a page for each experiment for any of the systems mentioned, but we can think of other systems as well; they can obviously be added to the table.



On Mon, Nov 7, 2022 at 9:05 PM Adeel <aahmad1811@gmail.com> wrote:

Hello,
But none of those models have explainability. So we cannot explain precisely how they reach those conclusions and decisions, because they are essentially working in a black box?
Thanks,
Adeel
On Mon, 7 Nov 2022 at 12:58, Dave Raggett <dsr@w3.org> wrote:

GPT3, BLOOM as examples of large language models
DALL-E, Stable Diffusion as examples of text to image
AlphaFold for predicting 3D protein structures
These all embed knowledge obtained from deep learning against large corpora. The models combine the networks and their trained connection parameters, e.g. BLOOM has 176 billion parameters and DALL-E 2 has around 3.5 billion. This approach discovers its own (distributed) knowledge representation and scales much better than hand-authored KR. However, like hand-authored KR, it is still brittle when it comes to generalising beyond its training data, something that humans are inherently better at.

Deep learning suffers from a lack of transparency, and there has been quite a bit of work trying to improve on that, e.g. showing which parts of an image were most important when it came to recognising an object. One big potential advantage is in handling imprecise, context-dependent knowledge, where hand-authored KR (e.g. RDF) has a hard time.

There is a lot of current effort on graph embeddings as a synthesis of neural networks and symbolic graphs. However, these are still far from being able to model human reasoning with chains of plausible inferences and metacognition (reasoning about reasoning).
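For readers unfamiliar with distributed representations, here is a minimal sketch of the idea: each concept is a dense vector rather than a discrete symbol, and "knowledge" shows up as geometric similarity. The three-dimensional vectors below are hand-picked for illustration, not trained; real models learn vectors with hundreds of dimensions.

```python
import math

# Toy distributed representation: concepts as dense vectors (hand-picked here,
# not learned). Similar concepts get similar vectors.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, near 0 for unrelated ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(word):
    """Find the most similar other concept by cosine similarity."""
    others = [(cosine(embeddings[word], vec), w)
              for w, vec in embeddings.items() if w != word]
    return max(others)[1]

print(nearest("king"))  # "queen" — related concepts sit close in vector space
```

Note the contrast with symbolic KR: nothing here names the relationship between "king" and "queen"; it is implicit in the geometry, which is precisely why such representations are hard to explain.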

On 7 Nov 2022, at 10:59, Paola Di Maio <paola.dimaio@gmail.com> wrote:
Dave, perhaps you could post a few examples of non-symbolic KR so that we can get our heads around such a thing.
Please note that my postulate shared on this list
https://lists.w3.org/Archives/Public/public-aikr/2019Aug/0045.html
states that:

"To support AI explainability, learnability, verifiability and reproducibility, it is postulated that for each MLA (machine learning algorithm) there should correspond a natural language expression or other type of symbolic knowledge representation"

https://figshare.com/articles/poster/A_New_Postulate_for_Knowledge_Representation_in_AI/9730268/2

It was also slightly reworded in different presentations.

On Mon, Nov 7, 2022 at 5:45 PM Dave Raggett <dsr@w3.org> wrote:

The statement “We can only pursue artificial intelligence via symbolic means” is false, since artificial neural networks eschew symbols, and have been at the forefront of recent advances in AI.  I therefore prefer the Wikipedia definition of KR which is less restrictive:

“Knowledge representation and reasoning (KRR, KR&R, KR) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks”

See: https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning

On 7 Nov 2022, at 03:03, Mike Bergman <mike@mkbergman.com> wrote:
 
Hi All,

It is always useful to have a shared understanding within a community of what defines its interests and why they are shared as a community. I applaud putting these questions out there. Like all W3C community groups, we have both committed students and occasional grazers. One can generally gauge the usefulness of a topic in a given group by the range of respondents it draws. Persistence seems to be more a function of specific interlocutors not letting go than of usefulness.
 

After researching what became a book to consider the matter, I came to the opinion that AI is a subset of KR [1]. The conclusion of that investigation was:
 

"However, when considered, mainly using prescission, it becomes clear that KR
 can exist without artificial intelligence, but AI requires knowledge representation.
 We can only pursue artificial intelligence via symbolic means, and KR is the transla -
 tion of information into a symbolic form to instruct a computer. Even if the com-
 puter learns on its own, we represent that information in symbolic KR form. This
 changed premise for the role of KR now enables us to think, perhaps, in broader
 terms, such as including the ideas of instinct and kinesthetics in the concept. This
 kind of re-consideration alters the speculative grammar we have for both KR and AI,
 helpful as we move the fields forward." (p 357)
 
 

That also caused me to pen a general commentary on one aspect of the KR challenge, how to consider classes (types) versus individuals (tokens) [2]. I would also argue these are now practically informed topics, among many, that augment or question older bibles like Brachman and Levesque.
 

Best, Mike
[1] https://www.mkbergman.com/pubs/akrp/chapter-17.pdf
[2] https://www.mkbergman.com/2286/knowledge-representation-is-a-tricky-business/
 -- 
__________________________________________

Michael K. Bergman
319.621.5225
http://mkbergman.com
http://www.linkedin.com/in/mkbergman
__________________________________________  

Dave Raggett <dsr@w3.org>





Dave Raggett <dsr@w3.org>

Received on Monday, 7 November 2022 21:07:36 UTC