Re: AI KR Strategist, explainability, state of the art

Needless to say, Carl, I'd be more than interested in participating in any such meetings that may be scheduled. I'd also be interested to learn whether we can productively use Chris' StratNavApp to facilitate a collaborative effort.
BTW, Ivan Metzger of GSA has been charged with finally helping U.S. federal agencies comply with section 10 of the GPRA Modernization Act and he has expressed the intent to use the StratML standard.  So while the past three administrations have ignored that provision of law, it will be interesting to see if Ivan's nascent effort will survive the transition to the next administration.
To the degree that previous administrations have bothered to report performance at all, they have tended to do so in ways that make themselves "look" good, as opposed to reporting actual performance: good, bad, and indifferent. Moreover, each has thrown out what its predecessors did and wasted the taxpayers' money reinventing government-unique, non-interoperable data stovepipes based upon software platforms rather than the applicable data standard.
I will be doing whatever I can to see that the incoming administration doesn't repeat that mistake.
It would be good if the W3C could join in that quest.
Owen Ambur
https://www.linkedin.com/in/owenambur/
 

    On Thursday, November 7, 2024 at 01:09:28 PM EST, carl mattocks <carlmattocks@gmail.com> wrote:   

 Greetings All - It has been a while.
Given the interest in AI, I am proposing that we set up a series of online meetings to expand on the AI Strategist work that focused on leveraging StratML (see attached).
The topics include:
   - AI Observability Mechanisms (monitoring behavior, data, and performance)
   - KR Models used in explanations (what is effective for a given audience, and what concepts are needed for this)
   - KR IDs needed for Knowledge Content (UID, URI) logistics management
   - Roles of Humans in the Loop (as a creator, and as an audience type)
   - Agents having authority awarded by a Human in the Loop
   - Catalogs of AI capabilities (see the Data Catalog (DCAT) Vocabulary)
   - AIKR using / used in DPROD (a specification providing unambiguous and shareable semantics): https://ekgf.github.io/dprod/
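As a purely illustrative sketch of the "catalogs of AI capabilities" topic (not part of the proposal itself), a DCAT-style catalog entry for an AI capability might look like the following. The property names loosely mirror real DCAT/DCTERMS terms (dcat:Catalog, dcat:DataService, dcat:service, dct:title), but every identifier and URI below is hypothetical:

```python
# Hypothetical sketch of a DCAT-style catalog entry for an AI capability.
# Property names loosely mirror DCAT terms; all URIs and service names
# below are made up for illustration only.
import json

capability = {
    "@type": "dcat:DataService",
    "@id": "https://example.org/ai/summarizer",      # hypothetical URI
    "dct:title": "Text summarization service",
    "dct:description": "Summarizes documents for a given audience.",
}

catalog = {
    "@type": "dcat:Catalog",
    "@id": "https://example.org/ai-capabilities",    # hypothetical URI
    "dcat:service": [capability],                    # services the catalog lists
}

# Serialize as JSON-LD-flavored JSON for inspection.
print(json.dumps(catalog, indent=2))
```

In an actual deployment one would use proper JSON-LD with an @context, or RDF serializations, so that the semantics are machine-interpretable rather than merely suggestive.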

Timeslots for meetings will be determined by participants. Please let me know if you are interested.
Thanks 
Carl Mattocks
CarlMattocks@WellnessIntelligence.Institute


On Tue, Jun 11, 2024 at 5:24 AM Dave Raggett <dsr@w3.org> wrote:

First my thanks to Paola for this CG. I’m hoping we can attract more people with direct experience. Getting the CG noticed more widely is quite a challenge! Any suggestions?


It has been proposed that without knowledge representation, there cannot be AI explainability.

That sounds somewhat circular, as it presumes a shared understanding of what "AI explainability" is. Humans can explain themselves in ways that are satisfactory to other humans. We're now seeing a similar effort to enable LLMs to explain themselves, despite their having inscrutable internal representations, as is also true of the human brain.
I would therefore suggest that, for explainability, knowledge representation is more about the models used in the explanations than about the internals of an AI system. Given that, we can discuss what kinds of explanations are effective for a given audience, and what concepts are needed for this.
Explanations further relate to making an effective argument that convinces people to change their minds. This also relates to the history of work on rhetoric, as well as to advertising and marketing!
Best regards,
Dave Raggett <dsr@w3.org>



  

Received on Thursday, 7 November 2024 19:47:59 UTC