- From: carl mattocks <carlmattocks@gmail.com>
- Date: Thu, 7 Nov 2024 13:08:24 -0500
- To: W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CAHtonumymTxB==bwWt8fpw7xYpQVn2LOCJOy+G8R6s=NajGVPQ@mail.gmail.com>
Greetings All - It has been a while. Given the interest in AI, I am proposing that we set up a series of online meetings to expand on the AI Strategist work that focused on leveraging StratML (see attached). The topics include:

1. AI Observability Mechanisms (monitor behavior, data, and performance)
2. KR Models used in the explanations (to a given audience, and what concepts are needed for this)
3. KR ID needed for Knowledge Content (UID, URI)
4. Roles of Humans in the Loop (as a creator, and an audience type)
5. Agents having Authority awarded by a Human in the Loop
6. Catalogs of AI capabilities (see Data Catalog (DCAT) Vocabulary <https://www.w3.org/TR/vocab-dcat-3/>)
7. AIKR Using / Used in DPROD (the specification provides unambiguous and sharable semantics) https://ekgf.github.io/dprod/

Logistics management: timeslots for meetings will be determined by participants. Please let me know if you are interested.

Thanks

Carl Mattocks
CarlMattocks@WellnessIntelligence.Institute

It was a pleasure to clarify

On Tue, Jun 11, 2024 at 5:24 AM Dave Raggett <dsr@w3.org> wrote:

> First my thanks to Paola for this CG. I’m hoping we can attract more people with direct experience. Getting the CG noticed more widely is quite a challenge! Any suggestions?
>
> It has been proposed that without knowledge representation, there cannot be AI explainability.
>
> That sounds somewhat circular, as it presumes a shared understanding of what “AI explainability” is. Humans can explain themselves in ways that are satisfactory to other humans. We’re now seeing a similar effort to enable LLMs to explain themselves, despite their having inscrutable internal representations, as is also true for the human brain.
>
> I would therefore suggest that for explainability, knowledge representation is more about the models used in the explanations rather than the internals of an AI system. Given that, we can discuss what kinds of explanations are effective for a given audience, and what concepts are needed for this.
>
> Explanations further relate to how to make an effective argument that convinces people to change their minds. This also relates to the history of work on rhetoric, as well as to advertising and marketing!
>
> Best regards,
>
> Dave Raggett <dsr@w3.org>
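To make topic 6 concrete, here is a minimal sketch of what a catalog entry for an AI capability might look like using DCAT terms, serialized as JSON-LD. The capability name, description, keywords, and URN identifier are invented for illustration; only the `dcat:`/`dct:` vocabulary terms come from the DCAT specification. A real catalog would likely model this with an RDF toolkit rather than hand-built JSON.

```python
import json

# Hypothetical DCAT-style catalog entry for an AI capability, as JSON-LD.
# The identifier and descriptive values below are illustrative assumptions;
# dcat: and dct: terms are drawn from the DCAT vocabulary and Dublin Core.
catalog_entry = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@id": "urn:example:ai-capability:summarization",  # hypothetical URN
    "@type": "dcat:Dataset",
    "dct:title": "Text summarization capability",
    "dct:description": "Produces abstractive summaries of input documents.",
    "dcat:keyword": ["summarization", "NLP", "AIKR"],
}

# Serialize for exchange; JSON-LD keeps the semantics machine-readable.
print(json.dumps(catalog_entry, indent=2))
```

An entry like this could sit inside a `dcat:Catalog` listing many capabilities, which is the kind of shared, unambiguous description DPROD also aims at.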
Attachments
- application/pdf attachment: Report_for_AI_KR_Strategistshtml.pdf
- application/pdf attachment: AI_KR_Strategistsstratnavapps.pdf
- application/pdf attachment: stratnavapp.com_StratML_Part2xml.pdf
Received on Thursday, 7 November 2024 18:09:15 UTC