- From: Dave Raggett <dsr@w3.org>
- Date: Tue, 11 Jun 2024 10:23:33 +0100
- To: paoladimaio10@googlemail.com
- Cc: W3C AIKR CG <public-aikr@w3.org>
- Message-Id: <8E278823-4D08-4A09-A612-13B027978084@w3.org>
First my thanks to Paola for this CG. I’m hoping we can attract more people with direct experience. Getting the CG noticed more widely is quite a challenge! Any suggestions?

> It has been proposed that without knowledge representation, there cannot be AI explainability

That sounds somewhat circular, as it presumes a shared understanding of what “AI explainability” is. Humans can explain themselves in ways that are satisfactory to other humans. We’re now seeing a similar effort to enable LLMs to explain themselves, despite their having inscrutable internal representations, as is also true of the human brain.

I would therefore suggest that for explainability, knowledge representation is more about the models used in the explanations than about the internals of an AI system. Given that, we can discuss what kinds of explanations are effective for a given audience, and what concepts are needed for this.

Explanations further relate to how to make an effective argument that convinces people to change their minds. This also relates to the history of work on rhetoric, as well as to advertising and marketing!

Best regards,

Dave Raggett <dsr@w3.org>
Received on Tuesday, 11 June 2024 09:23:45 UTC