- From: carl mattocks <carlmattocks@gmail.com>
- Date: Mon, 28 Nov 2022 15:23:14 -0500
- To: paoladimaio10@googlemail.com
- Cc: W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CAHtonum8u-viXZPge8R1b-Xy3BZS2dOVpXcx5x7-ZDY0=wzsdA@mail.gmail.com>
For potential additional topics (and background reading), here is a link to an "overview of the last five years of literature about explainability and Explainable AI, framed from the human perspective and focused on human-in-the-loop approaches and techniques employing human knowledge to achieve their goals":

https://www.mdpi.com/2306-5729/7/7/93/pdf

One conclusion is that those working "in the field of Explainable AI and Explainability should focus their efforts on developing heuristics and methods to (1) properly evaluate and compare model explainability, i.e., able to consider a variety of aspects both related with models and humans (e.g., faithfulness and interpretability), (2) design generalisable methods able to deal with a wide variety of contexts and models, and (3) explore the intrinsic complexity associated with humans’ and models’ contexts."

Enjoy,

Carl Mattocks

It was a pleasure to clarify

On Mon, Nov 28, 2022 at 11:29 AM Paola Di Maio <paola.dimaio@gmail.com> wrote:

> Great, you find the reference of interest.
>
> BTW - somewhere in the long text they suggest that human-in-the-loop
> is better defined as humans at the center of AI, but they are just
> milking the last drop of the argument; whatever way they want to
> define it:
>
> - humans are by definition at the center of AI, since they developed it
>
> On Tue, Nov 29, 2022 at 12:21 AM carl mattocks <carlmattocks@gmail.com>
> wrote:
>
>> This is a useful reference for our Human-in-the-Loop discussion ... I
>> will add the following as initial topics:
>>
>> - Instruction tuning: the practice of fine-tuning models with human
>> feedback
>> - A scenario: consists of a task and a domain (covering what genre
>> the text is, who wrote it, and when it was written)
>> - 7 metrics: accuracy, calibration, robustness, fairness, bias,
>> toxicity, and efficiency
>>
>> Carl Mattocks
>>
>> It was a pleasure to clarify
>>
>> On Sun, Nov 27, 2022 at 9:30 AM Paola Di Maio <paola.dimaio@gmail.com>
>> wrote:
>>
>>> I am glad to see Stanford conceding that humans must remain at the
>>> center of AI. There is A LOT to dig into that is relevant to this CG -
>>> what are the implications for us here?
>>>
>>> https://hai.stanford.edu/news/language-models-are-changing-ai-we-need-understand-them
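To make the scenario and metrics topics above concrete for discussion, here is a minimal sketch, assuming a HELM-style evaluation setup; the class and field names are my own illustrative assumptions, not the benchmark's actual API:

    # Minimal sketch of a HELM-style "scenario" (task + domain) and the
    # seven metrics listed above. Names are illustrative, not official.
    from dataclasses import dataclass

    @dataclass
    class Domain:
        genre: str    # what genre the text is (e.g., "news")
        author: str   # who wrote it (e.g., "journalists")
        era: str      # when it was written (e.g., "2022")

    @dataclass
    class Scenario:
        task: str     # e.g., "question answering"
        domain: Domain

    # The 7 metrics named above, as dimensions an evaluation harness scores.
    METRICS = ("accuracy", "calibration", "robustness", "fairness",
               "bias", "toxicity", "efficiency")

    example = Scenario(task="question answering",
                       domain=Domain(genre="news", author="journalists",
                                     era="2022"))
    print(example.task, example.domain.genre, METRICS)

The point of structuring it this way is that one model can be scored on every (scenario, metric) pair, which is what lets the benchmark compare models across contexts rather than on accuracy alone.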
Received on Monday, 28 November 2022 20:24:19 UTC