- From: Paola Di Maio <paoladimaio10@gmail.com>
- Date: Thu, 4 Jul 2024 08:43:44 +0200
- To: Owen Ambur <owen.ambur@verizon.net>
- Cc: Dave Raggett <dsr@w3.org>, W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CAMXe=So=_fRPq5=q1qS2xJHT7ogp6qiW2ey0yLuT7_o=_HUvyQ@mail.gmail.com>
Congrats, Owen, for publishing something on the web that machines can find and use. Is it because the machine simply looks for the machine-readable info?

On Thu, Jul 4, 2024 at 2:12 AM Owen Ambur <owen.ambur@verizon.net> wrote:

> When first I asked, ChatGPT disclaimed having any developers, much less a
> plan. However, upon prompting, it disclosed the plan outlined in StratML
> format at https://stratml.us/docs/CGPT.xml
>
> Likewise, Claude.ai was a bit skittish about divulging its objectives but
> also disgorged some upon prompting, at https://stratml.us/docs/CLD.xml
>
> From my perspective, a good explanation would report, in an open, standard,
> machine-readable format, reliable metrics by which human beings can readily
> comprehend how well the avowed objectives are being served.
>
> I'll look forward to learning what other alternative there might be.
>
> Owen Ambur
> https://www.linkedin.com/in/owenambur/
>
>
> On Tuesday, June 11, 2024 at 05:24:00 AM EDT, Dave Raggett <dsr@w3.org>
> wrote:
>
> First, my thanks to Paola for this CG. I'm hoping we can attract more
> people with direct experience. Getting the CG noticed more widely is quite
> a challenge! Any suggestions?
>
> > It has been proposed that without knowledge representation there cannot
> > be AI explainability.
>
> That sounds somewhat circular, as it presumes a shared understanding of
> what "AI explainability" is. Humans can explain themselves in ways that
> are satisfactory to other humans. We're now seeing a similar effort to
> enable LLMs to explain themselves, despite having inscrutable internal
> representations, as is also true for the human brain.
>
> I would therefore suggest that for explainability, knowledge
> representation is more about the models used in the explanations than
> about the internals of an AI system. Given that, we can discuss what
> kinds of explanations are effective for a given audience, and what
> concepts are needed for this.
>
> Explanations further relate to how to make an effective argument that
> convinces people to change their minds. This also relates to the history
> of work on rhetoric, as well as to advertising and marketing!
>
> Best regards,
>
> Dave Raggett <dsr@w3.org>
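[Editor's illustration, not part of the original thread: the exchange above turns on a plan being published in "open, standard, machine-readable format" so that a machine can find and use it. A minimal sketch of what "use" might mean follows. The XML fragment is hypothetical, not the actual contents of https://stratml.us/docs/CGPT.xml, and the element names (PerformancePlanOrReport, Goal, Objective, Name) are assumptions loosely based on the StratML vocabulary.]

```python
# Sketch: a machine extracting objectives from a StratML-style XML plan.
# The sample document and its element names are hypothetical stand-ins,
# not the real schema or contents of the documents cited in the thread.
import xml.etree.ElementTree as ET

SAMPLE_PLAN = """\
<PerformancePlanOrReport>
  <Goal>
    <Name>Explainability</Name>
    <Objective>
      <Name>Report reliable metrics in an open, machine-readable format</Name>
    </Objective>
  </Goal>
</PerformancePlanOrReport>"""

def objective_names(xml_text: str) -> list[str]:
    """Return the Name of every Objective element in the plan."""
    root = ET.fromstring(xml_text)
    return [obj.findtext("Name") for obj in root.iter("Objective")]

print(objective_names(SAMPLE_PLAN))
```

In practice a crawler would fetch the published XML over HTTP and apply the same kind of traversal; the point is simply that explicit structure, not prose, is what lets the machine "look for the machine-readable info."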
Received on Thursday, 4 July 2024 06:49:22 UTC