Re: NIST accepting comments on the draft AI explainability

Greetings all, apologies for the lack of activity on the lists. I'll soon
post a summary of
AI KR related contributions and the bottom-line conclusions of the
exploratory work done in the last couple of years.

We can then decide whether there is enough material for a report, and
the future of the AI KR CG.

In this email I summarize the feedback for the NIST draft on
explainability (due 15 October), as it is related to KR:
https://www.nextgov.com/emerging-tech/2020/08/nist-releases-core-principles-judge-explainable-ai/167833/

with a draft voice narration (14 minutes), which I think I'll record
again as this one is a bit ranty:

"https://www.loom.com/share/0f2cd37e8f854cd9b788def429b7e283"

FEEDBACK NOTES FOR NIST ON EXPLAINABILITY
Draft NISTIR 8312

from PAOLA DI MAIO.

13 October 2020

PREAMBLES
a) Before explainability can be addressed in the context of AI, AI
should be better understood/defined. The reality is that we may not
yet have AI after all.
b) In addition to the distinction between narrow and general AI, the
distinction between closed-system and open-system AI is also
necessary. This particularly applies to the Knowledge Limits point in
the draft.

GENERAL COMMENTS ON THE PRINCIPLES IN THE DRAFT

1. EXPLANATION: there is a type mismatch among the principles.
For example, "explanation" is a noun while "meaningful" is an
adjective; it would be advisable to have some consistency in the
naming conventions.
2. MEANINGFUL: Explanation is described as a principle that mandates
an explanation for AI, and Meaningful is described as a principle
requiring that the explanation be meaningful, but the draft does not
give criteria/parameters for meaningfulness. This does not seem up to
standard. It looks to me that meaningful is a qualifier for
explanation (1).
3. EXPLANATION ACCURACY: same as above, this does not seem a
principle, more like a qualifier for principle 1. It looks to me that
2 and 3 are qualifiers for 1; however, they should be better defined.
4. Knowledge Limits: this is new (i.e. unheard of). Is there a
reference for such a notion? Where does it come from? Who may have
come up with such an idea?
Intelligence can be said to overcome knowledge limits: given limited
knowledge, an intelligent process relies on logical inference
(deduction, abduction) to reach a conclusion, as the sketch below
illustrates. Reasoning with limited knowledge is a defining
characteristic of intelligent systems. Furthermore, in open systems
knowledge is not limited; by contrast, it is continually updated with
new knowledge. To consider limited knowledge for intelligent
systems/AI is a contradiction in terms. A knowledge limit applies to
closed database systems, not to AI.
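
To make this concrete, here is a minimal sketch in Python (my own
illustration, not from the draft; the bird facts and rules are
invented) of how inference overcomes a knowledge limit: the system
deduces a fact it was never given.

    # Minimal forward-chaining sketch: start from limited, explicitly
    # stored knowledge and deduce facts the system was never told.
    # The facts and rules are hypothetical, for illustration only.

    facts = {"bird(tweety)"}  # the only fact provided

    # Rules: if all premises hold, add the conclusion.
    rules = [
        ({"bird(tweety)"}, "has_wings(tweety)"),
        ({"has_wings(tweety)"}, "can_fly(tweety)"),
    ]

    changed = True
    while changed:  # apply rules until nothing new can be derived
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)
    # {'bird(tweety)', 'has_wings(tweety)', 'can_fly(tweety)'}
    # "can_fly(tweety)" was never stored: the knowledge limit was
    # overcome by deduction.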

OTHER
- In addition to meaningful and accurate, explanations should
also be timely, accessible, updatable, etc.

- symbolic KR is central to subsymbolic explainability and should be
mentioned in this document (see the sketch after this list)

- there should be a standard for systems explainability
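
On the symbolic KR point: one common post-hoc technique is to fit a
symbolic surrogate (a decision tree, whose branches read as rules) to
the predictions of a subsymbolic model. Below is a minimal sketch in
Python, assuming scikit-learn is available; it is my own example, not
something proposed in the draft, and the dataset and model sizes are
arbitrary.

    # Sketch: explain a subsymbolic model (a small neural net) by
    # fitting a symbolic surrogate (a decision tree) to its outputs.
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)

    # The opaque, subsymbolic model.
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X, y)

    # The surrogate is trained on the net's predictions, not the true
    # labels, so its rules describe the net's behaviour.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, net.predict(X))

    # Human-readable, rule-like explanation of the net's decisions.
    print(export_text(surrogate,
                      feature_names=load_iris().feature_names))

Because the tree approximates the network rather than the data, its
printed rules are a symbolic explanation of the subsymbolic model,
which is the sense in which KR is central to explainability.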


-------------------------------------

On Thu, Aug 20, 2020 at 7:44 PM Paola Di Maio <paola.dimaio@gmail.com> wrote:
>
> I could not understand the word "limits" in the context of this list, but it seems to be used in this report
> https://www.nextgov.com/emerging-tech/2020/08/nist-releases-core-principles-judge-explainable-ai/167833/
>
> if anyone wants to provide input to the report, let's gather it here first?
>
> PDM

Received on Tuesday, 13 October 2020 05:40:57 UTC