Re: AI KR Strategist, explainability, state of the art

Thank you, Peter, for seeing my point and for bringing attention to Dave's
initial email that addressed explainability (I had overlooked the thread in
Carl's response).

Okay, so picking up from Dave's email -

DR
That sounds somewhat circular as it presumes a shared understanding of what
“AI explainability” is.

PDM
Yes, in a KR sense, explainability is explicit knowledge representation of
the AI (of its definitions, workings, models, processes, whatever makes up
the AI).
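
A minimal sketch of what such an explicit representation could look like
(Python assumed here; every field name is illustrative, not a proposed
standard):

    # Hypothetical sketch: an explicit, machine-readable record of what
    # "makes up" an AI system (definitions, workings, models, processes).
    from dataclasses import dataclass, field

    @dataclass
    class AISystemKR:
        name: str
        definitions: dict      # glossary term -> meaning
        models: list           # model components in use
        processes: list        # training / inference processes
        known_limitations: list = field(default_factory=list)

    summarizer = AISystemKR(
        name="news-summarizer",
        definitions={"summary": "a shortened restatement of a source text"},
        models=["transformer encoder-decoder"],
        processes=["supervised fine-tuning", "beam-search decoding"],
        known_limitations=["may omit numerically important details"],
    )
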
DR
 Humans can explain themselves in ways that are satisfactory to other
humans.  We’re now seeing a similar effort to enable LLMs to explain
themselves, despite having inscrutable internal representations as is also
true for the human brain.


PDM
Well, explainability can be a challenge for human communication too
(consider how much discussion people need just to understand each other).
But in terms of AI, there is a great deal of literature on explainability;
I recently reviewed it, and it is dense and not at all conclusive.
However, somewhere in there I see room for a web standard for AI
explainability that uses adequate representation.
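
As one hedged sketch of the kind of record such a standard might define
(all field names here are hypothetical, chosen only to make the idea
concrete):

    # Hypothetical "explanation record"; no field below comes from an
    # existing standard.
    import json

    explanation = {
        "subject": "loan-application-1234:rejected",  # what is explained
        "audience": "end user",                       # or regulator, developer, ...
        "explanationModel": "counterfactual",         # the model used in the explanation
        "statement": "Approval required a debt-to-income ratio below 0.35.",
        "evidence": ["https://example.org/audit/1234"],
    }
    print(json.dumps(explanation, indent=2))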

To be discussed?
DR
I would therefore suggest that for explainability, knowledge representation
is more about the models used in the explanations rather than in the
internals of an AI system. Given that, we can discuss what kinds of
explanations are effective to a given audience, and what concepts are
needed for this.

[PDM] discuss? draft a web standard :-)

Have a great weekend!!!

On Sat, Nov 9, 2024 at 1:38 AM Peter Rivett <
pete.rivett@federatedknowledge.com> wrote:

> I agree that Dave's initial email did focus on explainability, but I share
> Paola's concern about subsequent focus since Carl's email says the purpose
> of the series of calls is "to expand on the AI Strategist work that
> focused on leveraging StratML" - the documents attached seemed to be all
> about the job description of AI KR Strategist Role and included the text
> "explain" only as follows, with relation to glossaries:
>
> Goal Statement: Employ definitions from one or more glossaries when
> explaining AIKR object audit data, veracity facts and (human, social and
> technology) risk mitigation factors So that (business) people more readily
> understand the value that the glossaries bring.
>
> To speak for myself, I may be interested (though with little time
> available) in technical KR techniques and representations that facilitate
> explainability; especially in bridging the gap between academic research
> and practical enterprise application.
> But I'm not at all interested in the role description side of things (even
> role objectives to "ensure explainability"). Or anything strategy-related
> (organization level as opposed to agent strategy). I'm not saying it's
> unimportant, just not my interest.
>
> Regards
> Pete
>
> Pete Rivett (pete.rivett@federatedknowledge.com)
> Federated Knowledge, LLC (LEI 98450013F6D4AFE18E67)
> tel: +1-701-566-9534
> Schedule a meeting at https://calendly.com/rivettp
>
>
> ------------------------------
> *From:* Paola Di Maio <paoladimaio10@gmail.com>
> *Sent:* Friday, November 8, 2024 4:53 PM
> *To:* carl mattocks <carlmattocks@gmail.com>
> *Cc:* W3C AIKR CG <public-aikr@w3.org>
> *Subject:* Re: AI KR Strategist, explainability, state of the art
>
> Thank you Carl
> Given the vastness and complexity of the subject matter,
> I am suggesting that
> perhaps you could write a couple of lines summarizing what the issues
> under discussion are and how the proposed approach addresses them,
> at your convenience.
>  Cheers
> P
>
> On Sat, Nov 9, 2024 at 12:42 AM carl mattocks <carlmattocks@gmail.com>
> wrote:
>
> Paola
>
> Please note in the email chain there are statements about 'explainability',
> which continues to be an issue... thus the focus of the proposed effort.
>
> Carl
>
> On Fri, Nov 8, 2024, 5:41 PM Paola Di Maio <paola.dimaio@gmail.com> wrote:
>
> Carl, good to hear from you and thanks
> for picking up where you left off.
>
> Btw the attachments you sent never made it into the W3C Group reports;
> maybe at some point you'd like to publish them with some notes explaining
> how they addressed the challenges discussed? The documents you sent do not
> seem to explain how the proposed work fits the AI KR mission (which
> problem they solve).
>
> As previously discussed, StratML can be a useful mechanism to represent
> knowledge at the syntactic level. A markup language by itself does not
> address or resolve the key challenges faced by AI today that KR (thinking
> semantics here) as a whole could tackle, irrespective of the
> implementation language of choice.
>
> In the work you propose, there is strong coupling between AI KR and
> StratML as a syntax (your construct binds the two). This approach may be
> better suited to a StratML CG (is there one, by the way?) than to an AI KR
> CG, whose focus is AI KR rather than a modeling language by itself.
>
> If the line you are interested in exploring is StratML only, it would be
> useful if you (or other proponents of this line of work) could summarise
> how it addresses the broader AI KR challenges.
> For example: knowledge misrepresentation, miscategorization, wrong
> recommendations, or making AI more transparent, more reliable, more
> accountable, etc.
>
> Perhaps show how these can be addressed with use cases or other proofs of
> concept.
>
> So basically, I encourage discussions to be focused on AI KR, and whatever
> line of work members propose, please make it clear which problem each
> construct intends to resolve in relation to the overall mission.
>
> Thank  you!
>
> Paola Di Maio, PhD
>
>
> On Thu, Nov 7, 2024 at 6:09 PM carl mattocks <carlmattocks@gmail.com>
> wrote:
>
> Greetings All - It has been a while.
>
> Given the interest in AI , I am proposing that we set up a series of
> online meetings to expand on the AI Strategist work that focused on
> leveraging StratML. (see attached).
>
> The topics include:
>
>    1. AI Observability Mechanisms (monitor behavior, data, and
>    performance)
>    2. KR Models used in the explanations (to a given audience, and what
>    concepts are needed for this)
>    3. KR IDs needed for Knowledge Content (UID, URI) logistics management
>    4. Roles of Humans in the Loop (as a creator, and an audience type)
>    5. Agents having Authority awarded by a Human in the Loop
>    6. Catalogs of AI capabilities (see the Data Catalog (DCAT) Vocabulary
>    <https://www.w3.org/TR/vocab-dcat-3/>; a sketch follows this list)
>    7. AIKR using / used in DPROD (a specification that provides unambiguous
>    and sharable semantics): https://ekgf.github.io/dprod/
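>
> As a minimal sketch of item 6, a catalog entry for one AI capability
> could be written with DCAT (Python with rdflib assumed; the URI and
> titles below are invented for illustration):
>
>     from rdflib import Graph, Literal, Namespace
>     from rdflib.namespace import DCAT, DCTERMS, RDF
>
>     g = Graph()
>     g.bind("dcat", DCAT)
>     g.bind("dct", DCTERMS)
>     ex = Namespace("https://example.org/ai-capabilities/")
>
>     svc = ex["summarizer"]                    # hypothetical capability
>     g.add((svc, RDF.type, DCAT.DataService))  # a DCAT service class
>     g.add((svc, DCTERMS.title, Literal("Text summarization capability")))
>     g.add((svc, DCTERMS.description,
>            Literal("Summarizes documents for a given audience.")))
>     print(g.serialize(format="turtle"))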
>
>
> Timeslots for meetings  will be determined by participants.  Please let me
> know if you are interested.
>
> Thanks
>
> Carl Mattocks
>
> CarlMattocks@WellnessIntelligence.Institute
> It was a pleasure to clarify
>
>
> On Tue, Jun 11, 2024 at 5:24 AM Dave Raggett <dsr@w3.org> wrote:
>
> First my thanks to Paola for this CG. I’m hoping we can attract more
> people with direct experience. Getting the CG noticed more widely is quite
> a challenge! Any suggestions?
>
> It has been proposed that without knowledge representation, there cannot
> be AI explainability.
>
>
> That sounds somewhat circular as it presumes a shared understanding of
> what “AI explainability” is.  Humans can explain themselves in ways that
> are satisfactory to other humans.  We’re now seeing a similar effort to
> enable LLMs to explain themselves, despite having inscrutable internal
> representations as is also true for the human brain.
>
> I would therefore suggest that for explainability, knowledge
> representation is more about the models used in the explanations rather
> than in the internals of an AI system. Given that, we can discuss what
> kinds of explanations are effective to a given audience, and what concepts
> are needed for this.
>
> Explanations further relate to how to make an effective argument that
> convinces people to change their minds.  This also relates to the history
> of work on rhetoric, as well as to advertising and marketing!
>
> Best regards,
>
> Dave Raggett <dsr@w3.org>
>
>
>
>

Received on Saturday, 9 November 2024 05:18:51 UTC