Systems 1 & 2

Donald Norman has cautioned that the greatest peril is that of "experiencing when one should be reflecting ... where entertainment takes precedence over thought." https://ambur.net/smart.pdf
Following that train of thought, some neuroscientists have suggested we primarily use our powers of logic/reasoning to justify our behaviour after the fact.  https://en.wikipedia.org/wiki/Rationalization_(psychology)
On the other hand, David Eagleman does allow:

Conscious parts of the brain train other parts of the neural machinery, establishing the goals and allocating the resources… Consciousness is the long-term planner, the CEO of the company, while most of the day-to-day operations are run by all those parts of her brain to which she has no access… This is what consciousness does: it sets the goals, and the rest of the system learns how to meet them.   https://ambur.net/ConsciousCommunities.pdf

We should do our best to apply AI to help maintain a reasonable balance between System 1- and System 2-based thinking and acting.
Toward that end, at least those who are developing AI applications should be expected, if not required, to document their plans in an open, standard, machine-readable format for the benefit of stakeholders who may be affected by them.  Armed with such information and aided by AI agents acting on our behalf, we'll be better equipped to become aware of, and begin to understand, what others may be aiming to do "for" and "to" us.
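
For illustration, here is a minimal sketch of what such a machine-readable plan might look like, loosely in the spirit of StratML; the element names are simplified stand-ins, not the normative schema:

    # Minimal sketch of a machine-readable plan document, loosely in the
    # spirit of StratML. Element names are simplified illustrations, not
    # the normative StratML schema.
    import xml.etree.ElementTree as ET

    plan = ET.Element("StrategicPlan")
    ET.SubElement(plan, "Name").text = "Example AI Application Plan"
    ET.SubElement(plan, "MissionStatement").text = (
        "Help users maintain a balance between System 1 and System 2 thinking.")

    goal = ET.SubElement(plan, "Goal")
    ET.SubElement(goal, "Description").text = (
        "Prompt reflection before consequential decisions.")
    ET.SubElement(goal, "Stakeholder").text = "Users affected by the application"

    # Serialised, the plan can be read by other agents, human or machine.
    print(ET.tostring(plan, encoding="unicode"))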
Owen Ambur
https://www.linkedin.com/in/owenambur/
 

    On Monday, November 7, 2022 at 08:31:07 AM EST, Dave Raggett <dsr@w3.org> wrote:  
 
 Humans also lack explainability for System 1, but not for System 2.
The importance of explainability depends on the nature of the application, so the corollary is that, given an application, you can assess the importance of explainability, the kind of explanations needed, and the most appropriate technology.  Unfortunately, today’s technologies generally have poor explainability, e.g. proofs from a logic engine, or decisions made by hand-coded logic in computer programs. This is where research can help to show the kinds of explanations that would be most effective for a specific user.  Humans tend to adapt their explanations to match their understanding of the person they are talking to.
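
As a toy illustration of that corollary (the categories and the mapping below are illustrative assumptions only, not an established scheme):

    # Toy sketch: choosing the kind of explanation based on the
    # application's stakes and the audience. The categories and the
    # mapping are illustrative assumptions, not an established scheme.
    def choose_explanation(stakes: str, audience: str) -> str:
        if stakes == "high":                 # e.g. medical or legal decisions
            return ("counterfactual ('what would change the outcome')"
                    if audience == "end-user" else "full decision trace")
        if stakes == "medium":               # e.g. loan pre-screening
            return "feature-attribution summary"
        return "no explanation required"     # e.g. media recommendations

    print(choose_explanation("high", "end-user"))
    print(choose_explanation("medium", "regulator"))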


On 7 Nov 2022, at 13:04, Adeel <aahmad1811@gmail.com> wrote:
Hello,
But none of those models have explainability. So they cannot explain precisely how they are reaching those conclusions and decisions, because they are essentially working as a black box?
Thanks,
Adeel
On Mon, 7 Nov 2022 at 12:58, Dave Raggett <dsr@w3.org> wrote:

GPT-3 and BLOOM as examples of large language models
DALL-E and Stable Diffusion as examples of text-to-image generation
AlphaFold for predicting 3D protein structures
These all embed knowledge obtained from deep learning against large corpora. The models combine the networks and their trained connection parameters, e.g. BLOOM has 176 billion parameters and DALL-E 2 has around 3.5 billion. This approach discovers its own (distributed) knowledge representation and scales much better than hand-authored KR. However, like hand-authored KR, it is still brittle when it comes to generalising beyond its training data, something that humans are inherently better at.

Deep learning suffers from a lack of transparency, and there has been quite a bit of work trying to improve on that, e.g. showing which parts of an image were most important when it came to recognising an object. One big potential advantage is in handling imprecise, context-dependent knowledge, where hand-authored KR (e.g. RDF) has a hard time.

There is a lot of current effort on graph embeddings as a synthesis of neural networks and symbolic graphs. However, these are still far from being able to model human reasoning with chains of plausible inferences and metacognition (reasoning about reasoning).
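
To make the point about image saliency concrete, here is a minimal occlusion-sensitivity sketch; the model argument is a stand-in for any classifier that returns a single class score:

    # Minimal occlusion-sensitivity sketch: slide a grey patch over the
    # image and record how much the class score drops at each position.
    # `model` is a stand-in for any classifier returning a class score.
    import numpy as np

    def occlusion_saliency(image, model, patch=8):
        h, w = image.shape[:2]
        base = model(image)                          # score on the intact image
        heat = np.zeros((h // patch, w // patch))
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                occluded = image.copy()
                occluded[i:i + patch, j:j + patch] = 0.5   # grey out one patch
                heat[i // patch, j // patch] = base - model(occluded)
        return heat    # high values mark regions the model relied on most

    # Stand-in "model": scores the mean brightness of the top-left quadrant.
    img = np.random.rand(32, 32)
    print(occlusion_saliency(img, lambda x: float(x[:16, :16].mean())))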

On 7 Nov 2022, at 10:59, Paola Di Maio <paola.dimaio@gmail.com> wrote:
Dave, perhaps you could post a few examples of non-symbolic KR so that we can get our heads around such a thing.
Please note that my postulate shared on this list
https://lists.w3.org/Archives/Public/public-aikr/2019Aug/0045.html
states that:

To support AI explainability, learnability, verifiability and
reproducibility, it is postulated that for each MLA (machine learning
algorithm) there should correspond a natural language expression or
other type of symbolic knowledge representation.

https://figshare.com/articles/poster/A_New_Postulate_for_Knowledge_Representation_in_AI/9730268/2

It was also slightly reworded in different presentations.
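
One way to read the postulate operationally (purely as an illustration; the postulate itself does not prescribe a technique) is to pair the machine learning algorithm with a symbolic surrogate trained to mimic it, then export the surrogate as readable rules. A scikit-learn sketch:

    # Sketch: derive a symbolic representation (readable decision rules)
    # corresponding to a black-box learner, per the postulate above.
    # The surrogate is an approximation of the black box, not the box itself.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    black_box = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

    # Fit a small, readable tree to the black box's own predictions.
    surrogate = DecisionTreeClassifier(max_depth=3).fit(
        data.data, black_box.predict(data.data))
    print(export_text(surrogate, feature_names=list(data.feature_names)))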

On Mon, Nov 7, 2022 at 5:45 PM Dave Raggett <dsr@w3.org> wrote:

The statement “We can only pursue artificial intelligence via symbolic means” is false, since artificial neural networks eschew symbols and have been at the forefront of recent advances in AI. I therefore prefer the Wikipedia definition of KR, which is less restrictive:

“Knowledge representation and reasoning (KRR, KR&R, KR) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks”

See: https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning
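
To make the symbolic/sub-symbolic contrast concrete: the same fact can be held as an explicit, inspectable triple, or implicitly in learned vectors (the numbers below are made up for illustration):

    # The same knowledge, two representations.
    import numpy as np

    # Symbolic: an explicit, inspectable RDF-style triple.
    fact = ("Paris", "isCapitalOf", "France")

    # Sub-symbolic: learned embeddings (values made up for illustration);
    # the relationship lives in vector geometry, not in any symbol.
    vec = {"Paris": np.array([0.21, -0.73, 0.05]),
           "France": np.array([0.19, -0.70, 0.11])}

    a, b = vec["Paris"], vec["France"]
    print(fact)
    print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))  # cosine similarity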

On 7 Nov 2022, at 03:03, Mike Bergman <mike@mkbergman.com> wrote:
 
Hi All,

It is always useful to have a shared understanding within a community of what defines its interests and why they are shared as a community. I applaud putting these questions out there. Like all W3C community groups, we have both committed students and occasional grazers. One can generally gauge the usefulness of a given topic in a given group by the range of respondents it draws. Persistence seems to be more a function of specific interlocutors not letting go than of usefulness.
 

After researching what became a book on the matter, I came to the opinion that AI is a subset of KR [1]. The conclusion of that investigation was:
 

"However, when considered, mainly using prescission, it becomes clear that KR
 can exist without artificial intelligence, but AI requires knowledge representation.
 We can only pursue artificial intelligence via symbolic means, and KR is the transla -
 tion of information into a symbolic form to instruct a computer. Even if the com-
 puter learns on its own, we represent that information in symbolic KR form. This
 changed premise for the role of KR now enables us to think, perhaps, in broader
 terms, such as including the ideas of instinct and kinesthetics in the concept. This
 kind of re-consideration alters the speculative grammar we have for both KR and AI,
 helpful as we move the fields forward." (p 357)
 
 

That also caused me to pen a general commentary on one aspect of the KR challenge: how to consider classes (types) versus individuals (tokens) [2]. I would also argue these are now practically informed topics, among many, that augment or question older bibles like Brachman and Levesque.
 

Best, Mike
 [1] https://www.mkbergman.com/pubs/akrp/chapter-17.pdf
[2] https://www.mkbergman.com/2286/knowledge-representation-is-a-tricky-business/
 -- 
__________________________________________

Michael K. Bergman
319.621.5225
http://mkbergman.com
http://www.linkedin.com/in/mkbergman
__________________________________________  

Dave Raggett <dsr@w3.org>

Received on Monday, 7 November 2022 17:43:29 UTC