Re: AI KR Strategist, explainability, state of the art

Paola et al.,

Quoting Dave Raggett's hope that we can 'attract more people with direct
experience' to the meetings ...

Acknowledging (for reference) that the 12:00 pm New York meeting time is
5:00 pm London / 6:00 pm Madrid / 9:00 am Los Angeles / 1:00 am Taipei -
what alternate time slot would be acceptable?
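
For anyone weighing alternate slots, here is a minimal sketch for
sanity-checking the conversions (Python 3.9+ with zoneinfo; the 13 November
date below is only illustrative, since DST transitions change the offsets):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Convert a proposed New York slot to the other time zones mentioned
    # above; the concrete date matters because of DST.
    slot = datetime(2024, 11, 13, 12, 0,
                    tzinfo=ZoneInfo("America/New_York"))
    for tz in ("Europe/London", "Europe/Madrid",
               "America/Los_Angeles", "Asia/Taipei"):
        print(tz, slot.astimezone(ZoneInfo(tz)).strftime("%a %H:%M"))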


Carl
It was a pleasure to clarify


On Sat, Nov 9, 2024 at 8:54 PM Paola Di Maio <paoladimaio10@gmail.com>
wrote:

> I have recently (well, a year or two ago) updated the wiki a bit,
> but I guess updates should be made more regularly.
> Whatever members wish to discuss/contribute should be aligned.
> Something like
> Intelligence (reasoning, cognition)
> ML/AI to achieve intelligence
> KR (techniques, methods) as explicit/accountable/verifiable AI
> mechanisms for explainability (the explicit representation of AI
> processes/outcomes)
>
> I simply would like to encourage alignment, so that each contribution can
> find its place
>
> Rendering the resources into machine-readable format is useful, but the
> core challenges in AI (such as explainability), and how KR contributes to
> solving them, are more complex and remain high priority.
>
> Let's keep that requirement for alignment in mind in meeting agendas
> (repeat: AI challenges (aka explainability) / KR solutions)
>
> I may be able to contribute a pre-recorded talk on my current work on
> explainability, if of interest, as an agenda item for the meetings (in
> in-person meetings, time is generally short).
>
> It would be great if CG members could also start putting together some
> ideas so that when Carl gets around to scheduling meetings, people can
> bring up their points either live or via short notes (which I hope Carl
> can include in the meeting minutes).
>
> So Carl, could you perhaps start an agenda on the wiki that people can
> add items for discussion to? Would that be a good idea?
> (My items for discussion are the points above)
>
> cheers
>
> P
>
>
>
> On Sun, Nov 10, 2024 at 12:35 AM Peter Rivett <
> pete.rivett@federatedknowledge.com> wrote:
>
>>
>> I wouldn't say that the list of "proposed outcomes" we have had since
>> 2018 on our homepage https://www.w3.org/groups/cg/aikr/ (mirrored in the
>> StratML version) was ever what I'd call a plan. If what you're saying is
>> that the homepage should be updated to reflect what we're actually doing,
>> then that makes sense - if only to attract others who might be interested
>> in our actual work.
>> I think what Paola has summarized as challenges in this thread already
>> provides a reasonable start.
>>
>> Carl started asking for expressions of interest. Again, I'm expressing
>> interest in *KR to support explainable AI.* I'm not interested in Role
>> Descriptions, Strategy Formulations or structured Plans - and not, for
>> now, as part of this Group, in the other 6 of the 7 items that Carl
>> listed.
>>
>> Cheers
>> Pete
>>
>> PS I guess the homepage should also link to the AI KR Strategist Role
>> Description already produced (though not sure how that fits).
>>
>>
>> Pete Rivett (pete.rivett@federatedknowledge.com)
>> Federated Knowledge, LLC (LEI 98450013F6D4AFE18E67)
>> tel: +1-701-566-9534
>> Schedule a meeting at https://calendly.com/rivettp
>>
>> ------------------------------
>> *From:* Owen Ambur <owen.ambur@verizon.net>
>> *Sent:* Saturday, November 9, 2024 8:23 AM
>> *To:* Paola Di Maio <paoladimaio10@gmail.com>; carl mattocks <
>> carlmattocks@gmail.com>
>> *Cc:* W3C AIKR CG <public-aikr@w3.org>
>> *Subject:* Re: AI KR Strategist, explainability, state of the art
>>
>> From my perspective, this exchange might be more productive if it focused
>> directly on the elements of the plan, if any, that we aim to craft and
>> pursue together.
>>
>> At this point, I am unable to decipher those elements from this exchange
>> of E-mail messages.  It reminds me of an assertion relating to the
>> Capability Maturity Model (CMM):
>>
>> E-mail is a stage of immaturity through which we must pass.
>>
>>
>> Plans previously considered by the AIKR CG are available in StratML
>> format at https://stratml.us/drybridge/index.htm#AIKRCG
>>
>> Perhaps we might at least revisit and update this one:
>> https://stratml.us/docs/AIKRCG.xml
>>
>> It would be nice to report any progress that may have been made on any of
>> the objectives it sets forth for us.
>>
>> It is available for comments at
>> https://stratml.us/carmel/iso/part2/AIKRCGforComment.xml and for editing
>> in StratML Part 2, Performance Plan/Report, format at
>> https://stratml.us/drybridge/index.htm#AIKRCG
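>>
>> For anyone who wants to check progress programmatically, here is a
>> minimal sketch (Python, standard library only) that fetches that StratML
>> file and lists its objectives. The Objective and Name element names are
>> my assumption about the Part 2 schema, so adjust as needed:
>>
>>     import urllib.request
>>     import xml.etree.ElementTree as ET
>>
>>     # Fetch the AIKR CG plan in StratML format (URL from this thread).
>>     url = "https://stratml.us/docs/AIKRCG.xml"
>>     with urllib.request.urlopen(url) as resp:
>>         root = ET.parse(resp).getroot()
>>
>>     # StratML documents are namespaced; recover the namespace from the
>>     # root tag rather than hard-coding it.
>>     ns = root.tag[1:].split("}")[0] if root.tag.startswith("{") else ""
>>     q = (lambda t: "{%s}%s" % (ns, t)) if ns else (lambda t: t)
>>
>>     # Print each Objective's Name so progress can be reported against it.
>>     for obj in root.iter(q("Objective")):
>>         name = obj.find(q("Name"))
>>         if name is not None and name.text:
>>             print(name.text.strip())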
>>
>> Owen Ambur
>> https://www.linkedin.com/in/owenambur/
>>
>>
>> On Saturday, November 9, 2024 at 07:34:28 AM EST, carl mattocks <
>> carlmattocks@gmail.com> wrote:
>>
>>
>> Paola
>>
>> To be explicit ... I am not proposing to focus exclusively on Explainable
>> Artificial Intelligence
>> <https://www.darpa.mil/program/explainable-artificial-intelligence> (a
>> suite of machine learning techniques that produce more explainable
>> models).
>> I do expect to have discussions about the models used in explanations of
>> the KR used in AI.
>>
>> cheers
>>
>> Carl
>>
>>
>> It was a pleasure to clarify
>>
>>
>> On Sat, Nov 9, 2024 at 2:10 AM Paola Di Maio <paoladimaio10@gmail.com>
>> wrote:
>>
>> Carl,
>> following my earlier email response, let me make explicit (...)
>> a fundamental point that perhaps came across as implied (...)
>>
>> Misrepresentation, miscategorization, and issues of correctness,
>> transparency, accountability, reliability, and verifiability -
>> all sorts of AI flaws and errors, i.e. AI challenges -
>> can be addressed at least in part with KR
>> and mitigated through explainability.
>> However, the field of XAI, based on a review of the state of the art,
>> has become paradoxically inextricable and unexplainable in its own right.
>>
>> Proposed approaches must directly tackle the challenges, and possibly be
>> supported with some evidence/proof of their effectiveness
>> (usefulness notwithstanding).
>>
>> P
>>
>>
>> On Sat, Nov 9, 2024 at 12:42 AM carl mattocks <carlmattocks@gmail.com>
>> wrote:
>>
>> Paola
>>
>> Please note that in the email chain there are statements about
>> 'explainability', which continues to be an issue ... thus the focus of
>> the proposed effort.
>>
>> Carl
>>
>> On Fri, Nov 8, 2024, 5:41 PM Paola Di Maio <paola.dimaio@gmail.com>
>> wrote:
>>
>> Carl, good to hear from you, and thanks
>> for picking up where you left off.
>>
>> Btw, the attachments you sent never made it into the W3C Group reports;
>> maybe at some point you'd like to publish them with some notes explaining
>> how they addressed the challenges discussed? The documents you sent do
>> not seem to explain how the proposed work fits the AI KR mission (i.e.
>> which problem they solve).
>>
>> As previously discussed, StratML can be a useful mechanism to represent
>> knowledge at the syntactic level. A markup language by itself does not
>> address or resolve the key challenges faced by AI today that KR (thinking
>> semantics here) as a whole could tackle, irrespective of any
>> implementation language of choice.
>>
>> In the work you propose there is strong coupling between AI KR and
>> StratML as a syntax (your construct binds the two). This approach may be
>> more suitable for a StratML CG (is there one, by the way?) than for an
>> AI KR CG. The focus here is AI KR, rather than a modeling language by
>> itself.
>>
>> If the line you are interested in exploring is StratML only, it could be
>> useful if you (or other proponents of this line of work) could summarise
>> how it addresses the broader AI KR challenges.
>> For example: knowledge misrepresentation, miscategorization, wrong
>> recommendations, or making AI more transparent, more reliable, more
>> accountable, etc.
>>
>> Perhaps show how these can be addressed with use cases or other proofs
>> of concept.
>>
>> So basically, I encourage discussions to be focused on AI KR  and
>> whatever line of work members propose, please make it clear which problem
>> each construct intends to resolve in relation to the overall mission.
>>
>> Thank you!
>>
>> Paola Di Maio, PhD
>>
>>
>> On Thu, Nov 7, 2024 at 6:09 PM carl mattocks <carlmattocks@gmail.com>
>> wrote:
>>
>> Greetings All - It has been a while.
>>
>> Given the interest in AI, I am proposing that we set up a series of
>> online meetings to expand on the AI Strategist work that focused on
>> leveraging StratML (see attached).
>>
>> The topics include:
>>
>>    1. AI Observability Mechanisms (monitor behavior, data, and
>>    performance)
>>    2. KR Models used in the explanations (to a given audience, and what
>>    concepts are needed for this)
>>    3. KR ID needed for Knowledge Content (UID, URI) Logistics management
>>    4. Roles of Humans in the Loop (as a creator, and an audience type)
>>    5. Agents having Authority awarded by a Human in the Loop
>>    6. Catalogs of AI capabilities (see Data Catalog (DCAT) Vocabulary
>>    <https://www.w3.org/TR/vocab-dcat-3/>; a minimal sketch follows this
>>    list)
>>    7. AIKR Using / Used in DPROD (specification provides unambiguous and
>>    sharable semantics) https://ekgf.github.io/dprod/
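>>
>> As a concrete starting point for item 6, here is a minimal sketch of a
>> DCAT catalog entry for an AI capability, using Python and rdflib. The
>> https://example.org/aikr/ namespace is hypothetical, and modelling a
>> capability as a dcat:DataService is my assumption, not settled CG
>> practice:
>>
>>     from rdflib import Graph, Literal, Namespace
>>     from rdflib.namespace import DCAT, DCTERMS, RDF
>>
>>     EX = Namespace("https://example.org/aikr/")  # hypothetical namespace
>>
>>     g = Graph()
>>     g.bind("dcat", DCAT)
>>     g.bind("dcterms", DCTERMS)
>>
>>     # A catalog of AI capabilities; each capability is modelled here
>>     # as a dcat:DataService - one possible choice, not a CG decision.
>>     catalog = EX["capability-catalog"]
>>     capability = EX["explanation-service"]
>>
>>     g.add((catalog, RDF.type, DCAT.Catalog))
>>     g.add((capability, RDF.type, DCAT.DataService))
>>     g.add((capability, DCTERMS.title, Literal("Explanation generation")))
>>     g.add((capability, DCTERMS.description,
>>            Literal("Audience-specific explanations of AI outputs")))
>>     g.add((catalog, DCAT.service, capability))
>>
>>     print(g.serialize(format="turtle"))
>>
>> A similar pattern might serve item 7, with DPROD's data-product classes
>> in place of dcat:DataService.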
>>
>>
>> Timeslots for meetings will be determined by participants. Please let me
>> know if you are interested.
>>
>> Thanks
>>
>> Carl Mattocks
>>
>> CarlMattocks@WellnessIntelligence.Institute
>> It was a pleasure to clarify
>>
>>
>> On Tue, Jun 11, 2024 at 5:24 AM Dave Raggett <dsr@w3.org> wrote:
>>
>> First my thanks to Paola for this CG. I’m hoping we can attract more
>> people with direct experience. Getting the CG noticed more widely is quite
>> a challenge! Any suggestions?
>>
>> It has been proposed that without knowledge representation, there cannot
>> be AI explainability.
>>
>>
>> That sounds somewhat circular as it presumes a shared understanding of
>> what “AI explainability” is.  Humans can explain themselves in ways that
>> are satisfactory to other humans.  We’re now seeing a similar effort to
>> enable LLMs to explain themselves, despite having inscrutable internal
>> representations, as is also true for the human brain.
>>
>> I would therefore suggest that for explainability, knowledge
>> representation is more about the models used in the explanations than
>> about the internals of an AI system. Given that, we can discuss what
>> kinds of explanations are effective for a given audience, and what
>> concepts are needed for this.
>>
>> Explanations further relate to how to make an effective argument that
>> convinces people to change their minds. This also relates to the history
>> of work on rhetoric, as well as to advertising and marketing!
>>
>> Best regards,
>>
>> Dave Raggett <dsr@w3.org>
>>
>>
>>
>>

Received on Sunday, 10 November 2024 17:21:06 UTC