- From: Milton Ponson <rwiciamsd@gmail.com>
- Date: Wed, 12 Jun 2024 15:08:35 -0400
- To: Paola Di Maio <paoladimaio10@gmail.com>
- Cc: Dave Raggett <dsr@w3.org>, W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CA+L6P4wE84JH6xd3J3785Fq9mYHOBaQF3B29cbn-s219q=L6UQ@mail.gmail.com>
I have to agree with Dave about knowledge representation having to be about the models used for explainability. If AI is to be explainable, the models, and hence the KR, have to be visible; and if the internals of the AI have hidden layers, we should be able to model how the inputs give rise to the outputs.

There are currently many misconceptions about knowledge and knowledge representation. If we look at how the BICA approach (cognitive architectures modeled after brain structure) tries to model both the biological structures and the processes in the brain, and use these for A(G)I modeling, we see the complexity of the issues at hand. When we throw in consciousness, the problems grow almost exponentially in trying to model how the structures give rise to processes and how processes give rise to changes in structures.

Mathematically speaking, we must assume some basic starting points. Whatever formal systems we devise, they must be finite, constructible, use formal logic, and have schemes for representation that factor in the two options of using the axiom of choice or not, and, if we are modeling e.g. physics, factor in statistics and quantum effects.

The real problem is that mathematics per se does not require falsifiability. We either have proofs or conjectures, whereas empirical science requires falsifiability. In creating KNOWLEDGE representation for explainable AI, this forces us to create a universe of models that can deal with all of these issues. Such universes can currently only be constructed for mathematics in well-defined settings, and consequently also for computer science. When we try to come up with KR for specific domains of discourse, we again must adhere to well-defined settings.

The concept of artificial general intelligence presupposes a navigation system, if you will, for all the universes. IMHO that is currently out of reach; we only have to look at theoretical physics and cosmology to appreciate why.

Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+2977459312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean

On Wed, Jun 12, 2024 at 1:38 AM Paola Di Maio <paoladimaio10@gmail.com> wrote:

> Thank you Dave R for being part of this CG, and thanks to the others for participating in the conversation. It's a place to share the pains and cures of the modern world.
>
> Actually there are over 70 members, so I would say we have some interest, but participation is low. I can only invite members to pitch in and share what they are working on, which is mostly what I do.
>
> I am not going to argue with you or anyone else about how KR *IS* the AI (in classical AI), as we have had this conversation before :-)
> I respect other people's views/opinions if different.
>
> There is so much going on, so we should be thankful for every opportunity to keep track of developments.
>
> The representation of the hidden layer, for example, borders on the KR occult, yet it is fundamental to the explainability of ML, and it is 24 years old:
>
> Liou, Cheng-Yuan, Hwann-Tzong Chen, and Jau-Chi Huang. "Separation of internal representations of the hidden layer." *Proceedings of the International Computer Symposium, Workshop on Artificial Intelligence*. 2000.
>
> https://www.csie.ntu.edu.tw/~cyliou/red/publications/ICS2000.pdf
>
> Each hidden layer can be thought of as *a level of abstraction, where the network learns to identify increasingly complex patterns or features in the input data*.
> For example, in a convolutional neural network (CNN) designed for image recognition, the first hidden layer might learn to recognize edges and simple textures.
>
> https://www.csie.ntu.edu.tw/~cyliou/red/NN/Classinfo/SIR.pdf
>
> On Tue, Jun 11, 2024 at 11:23 AM Dave Raggett <dsr@w3.org> wrote:
>
>> First my thanks to Paola for this CG. I'm hoping we can attract more people with direct experience. Getting the CG noticed more widely is quite a challenge! Any suggestions?
>>
>> It has been proposed that without knowledge representation, there cannot be AI explainability.
>>
>> That sounds somewhat circular, as it presumes a shared understanding of what "AI explainability" is. Humans can explain themselves in ways that are satisfactory to other humans. We're now seeing a similar effort to enable LLMs to explain themselves, despite having inscrutable internal representations, as is also true for the human brain.
>>
>> I would therefore suggest that for explainability, knowledge representation is more about the models used in the explanations than about the internals of an AI system. Given that, we can discuss what kinds of explanations are effective for a given audience, and what concepts are needed for this.
>>
>> Explanations further relate to how to make an effective argument that convinces people to change their minds. This also relates to the history of work on rhetoric, as well as to advertising and marketing!
>>
>> Best regards,
>>
>> Dave Raggett <dsr@w3.org>
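As a concrete illustration of the point about hidden layers as levels of abstraction, below is a minimal sketch, assuming PyTorch; the tiny network, layer names, and dummy input are hypothetical and are not taken from the cited paper. It only shows how the internal representations of hidden layers can be captured with forward hooks, which is the kind of access any explainability analysis over those layers presupposes.

```python
# A minimal sketch, assuming PyTorch is installed. The tiny CNN below is a
# hypothetical, untrained stand-in for a real image classifier; the point is
# only to show how hidden-layer representations can be captured for inspection.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # first hidden layer (lower-level features)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # second hidden layer (higher-level features)
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

captured = {}

def save_activation(name):
    # Forward hook: store the layer's output tensor under the given name.
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Attach hooks to the two convolutional layers of the Sequential model.
model[0].register_forward_hook(save_activation("conv1"))
model[3].register_forward_hook(save_activation("conv2"))

# A dummy 28x28 grayscale image stands in for real input data.
x = torch.randn(1, 1, 28, 28)
model(x)

# These captured tensors are the hidden-layer representations that an
# explainability analysis would go on to separate, cluster, or visualise.
for name, act in captured.items():
    print(name, tuple(act.shape), "mean activation:", round(act.mean().item(), 3))
```

The captured activations are only the raw material: a method such as the separation of internal representations discussed in Liou et al. (2000), or any other analysis, would then operate on tensors like these.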
Received on Wednesday, 12 June 2024 19:08:51 UTC