internal representation of the hidden layer [was: AI KR, explainability, state of the art]

Thank you, Dave R, for being part of this CG, and thanks to the others for
participating in the conversation. It's a place to share the pains and
cures of the modern world.

Actually, there are over 70 members, so I would say we have some interest,
but participation is low. I can only invite members to pitch in and share
what they are working on, which is mostly what I do.

I am not going to argue with you or anyone else about how KR *IS* the AI
(in classical AI), as we have had this conversation before :-)
I respect other people's views/opinions if they differ.

There is so much going on, so we should be thankful for every opportunity
to keep track of developments.

The representation of the hidden layer, for example, borders on the KR
occult, yet it is fundamental to the explainability of ML. The reference
below is 24 years old:

Liou, Cheng-Yuan, Hwann-Tzong Chen, and Jau-Chi Huang. "Separation of
internal representations of the hidden layer." *Proceedings of the
international computer symposium, workshop on artificial intelligence*.
2000.

https://www.csie.ntu.edu.tw/~cyliou/red/publications/ICS2000.pdf

Each hidden layer can be thought of as *a level of abstraction, where the
network learns to identify increasingly complex patterns or features in the
input data*. For example, in a convolutional neural network (CNN) designed
for image recognition, the first hidden layer might learn to recognize
edges and simple textures.

https://www.csie.ntu.edu.tw/~cyliou/red/NN/Classinfo/SIR.pdf
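
To make that idea concrete, here is a minimal sketch (my own illustration,
not taken from the Liou et al. paper) of how one can capture a hidden
layer's internal representation in a small CNN using PyTorch forward hooks.
The model, layer names, and input shapes are assumptions for illustration
only.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # early layer: tends to learn edge/texture-like filters
        self.conv1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        # later layer: combines earlier features into more complex patterns
        self.conv2 = nn.Conv2d(8, 16, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(16 * 7 * 7, 10)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))
        x = self.pool(torch.relu(self.conv2(x)))
        return self.fc(x.flatten(1))

model = TinyCNN()
activations = {}

def save_activation(name):
    # record the layer's output (its "internal representation") at forward time
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.conv1.register_forward_hook(save_activation("conv1"))
model.conv2.register_forward_hook(save_activation("conv2"))

x = torch.randn(1, 1, 28, 28)   # a dummy 28x28 grayscale image
model(x)

for name, act in activations.items():
    print(name, tuple(act.shape))  # e.g. conv1 (1, 8, 28, 28), conv2 (1, 16, 14, 14)

Once the activations are captured like this, they can be visualized or
clustered, which is one common starting point for studying what a hidden
layer has actually learned.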

On Tue, Jun 11, 2024 at 11:23 AM Dave Raggett <dsr@w3.org> wrote:

> First my thanks to Paola for this CG. I’m hoping we can attract more
> people with direct experience. Getting the CG noticed more widely is quite
> a challenge! Any suggestions?
>
> It has been proposed that without knowledge representation, there cannot
> be AI explainability.
>
>
> That sounds somewhat circular as it presumes a shared understanding of
> what “AI explainability” is.  Humans can explain themselves in ways that
> are satisfactory to other humans.  We’re now seeing a similar effort to
> enable LLMs to explain themselves, despite having inscrutable internal
> representations as is also true for the human brain.
>
> I would therefore suggest that for explainability, knowledge
> representation is more about the models used in the explanations rather
> than in the internals of an AI system. Given that, we can discuss what
> kinds of explanations are effective to a given audience, and what concepts
> are needed for this.
>
> Explanations further relate to how to make an effective argument that
> convinces people to change their minds.  This also relates to the history
> of work on rhetoric, as well as to advertising and marketing!
>
> Best regards,
>
> Dave Raggett <dsr@w3.org>
>

Received on Wednesday, 12 June 2024 05:38:00 UTC