Re: AI KR Strategist, explainability, state of the art

Paola,

Please note that in the email chain there are statements about
'explainability', which continues to be an issue; hence the focus of the
proposed effort.

Carl

On Fri, Nov 8, 2024, 5:41 PM Paola Di Maio <paola.dimaio@gmail.com> wrote:

> Carl, good to hear from you and thanks
> for picking up where you left off.
>
> By the way, the attachment you sent never made it into the W3C Group
> reports; maybe at some point you would like to publish it, with some notes
> explaining how it addresses the challenges discussed? The documents you
> sent do not seem to explain how the proposed work fits into the AI KR
> mission (i.e., which problem they solve).
>
> As previously discussed, StratML can be a useful mechanism to represent
> knowledge at the syntactic level. A markup language by itself does not
> address or resolve the key challenges faced by AI today that KR (thinking
> semantics here) as a whole could tackle, irrespective of the
> implementation language of choice.
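>
> To make the distinction concrete, here is a minimal sketch (Python, with
> simplified StratML-style element names that are illustrative only, not
> the normative schema) of representation at the syntactic level: the
> markup states a goal, but the tags by themselves carry no
> machine-interpretable semantics.
>
>     import xml.etree.ElementTree as ET
>
>     # Illustrative only: simplified, StratML-style element names,
>     # not the normative StratML schema.
>     plan = ET.Element("StrategicPlan")
>     goal = ET.SubElement(plan, "Goal")
>     ET.SubElement(goal, "Name").text = "Improve AI transparency"
>     ET.SubElement(goal, "Description").text = (
>         "Make model behaviour explainable to a given audience.")
>     print(ET.tostring(plan, encoding="unicode"))
>
> A KR layer would have to supply the semantics: what the named concepts
> mean, how they relate to one another, and what can be inferred from them.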
>
> In the work you propose, there is strong coupling between AI KR and
> StratML as a syntax (your construct binds the two). This approach may be
> better suited to a StratML CG (is there one, by the way?) than to the AI
> KR CG. The focus here is AI KR, rather than a modeling language by
> itself.
>
> If the line you are interested in exploring is StratML only, it would be
> useful if you (or other proponents of this line of work) could summarise
> how it addresses the broader AI KR challenges.
> For example: knowledge misrepresentation, miscategorization, wrong
> recommendations, or making AI more transparent, more reliable, more
> accountable, etc.
>
> Perhaps show how these can be addressed with use cases or other proofs
> of concept.
>
> So, basically, I encourage discussions to be focused on AI KR; and
> whatever line of work members propose, please make it clear which problem
> each construct intends to resolve in relation to the overall mission.
>
> Thank you!
>
> Paola Di Maio, PhD
>
>
> On Thu, Nov 7, 2024 at 6:09 PM carl mattocks <carlmattocks@gmail.com>
> wrote:
>
>> Greetings All - It has been a while.
>>
>> Given the interest in AI, I am proposing that we set up a series of
>> online meetings to expand on the AI Strategist work that focused on
>> leveraging StratML (see attached).
>>
>> The topics include:
>>
>>    1. AI Observability Mechanisms (monitor behavior, data, and
>>    performance)
>>    2. KR Models used in the explanations (to a given audience, and what
>>    concepts are needed for this)
>>    3. KR IDs needed for Knowledge Content (UID, URI) logistics management
>>    4. Roles of Humans in the Loop (as a creator, and an audience type)
>>    5. Agents having Authority awarded by a Human in the Loop
>>    6. Catalogs of AI capabilities (see the Data Catalog (DCAT) Vocabulary
>>    <https://www.w3.org/TR/vocab-dcat-3/>; a rough sketch follows this list)
>>    7. AIKR using / used in DPROD (the specification provides unambiguous
>>    and sharable semantics): https://ekgf.github.io/dprod/
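>>
>> For topic 6, a minimal sketch of what a catalog entry for an AI
>> capability might look like, assuming Python with rdflib; the catalog and
>> service URIs are invented for illustration, not taken from any spec.
>>
>>     from rdflib import Graph, Literal, Namespace
>>     from rdflib.namespace import DCAT, DCTERMS, RDF
>>
>>     # Hypothetical URIs, for illustration only.
>>     EX = Namespace("http://example.org/ai-capabilities/")
>>
>>     g = Graph()
>>     catalog = EX["catalog"]
>>     service = EX["summarisation-service"]
>>     g.add((catalog, RDF.type, DCAT.Catalog))
>>     g.add((service, RDF.type, DCAT.DataService))
>>     g.add((service, DCTERMS.title,
>>            Literal("Text summarisation capability")))
>>     g.add((catalog, DCAT.service, service))
>>     print(g.serialize(format="turtle"))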
>>
>>
>> Timeslots for meetings will be determined by participants. Please let
>> me know if you are interested.
>>
>> Thanks
>>
>> Carl Mattocks
>>
>> CarlMattocks@WellnessIntelligence.Institute
>> It was a pleasure to clarify
>>
>>
>> On Tue, Jun 11, 2024 at 5:24 AM Dave Raggett <dsr@w3.org> wrote:
>>
>>> First, my thanks to Paola for this CG. I’m hoping we can attract more
>>> people with direct experience. Getting the CG noticed more widely is
>>> quite a challenge! Any suggestions?
>>>
>>> It has been proposed that without knowledge representation, there
>>> cannot be AI explainability.
>>>
>>>
>>> That sounds somewhat circular, as it presumes a shared understanding of
>>> what “AI explainability” is. Humans can explain themselves in ways that
>>> are satisfactory to other humans. We’re now seeing a similar effort to
>>> enable LLMs to explain themselves, despite their having inscrutable
>>> internal representations, as is also true of the human brain.
>>>
>>> I would therefore suggest that, for explainability, knowledge
>>> representation is more about the models used in the explanations than
>>> about the internals of an AI system. Given that, we can discuss what
>>> kinds of explanations are effective for a given audience, and what
>>> concepts are needed for this.
>>>
>>> Explanations further relate to how to make an effective argument that
>>> convinces people to change their minds. This also relates to the history
>>> of work on rhetoric, as well as to advertising and marketing!
>>>
>>> Best regards,
>>>
>>> Dave Raggett <dsr@w3.org>
>>>

Received on Saturday, 9 November 2024 00:42:28 UTC