Re: Toward a web standard for XAI?

Hi Adam & All,

You may be interested in the way explanations are generated by the
platform at the site below.

First comes a headline.  Clicking on it reveals the first layer of detail.
Clicking on that... well, you see the idea.

This is done in a subject-independent way, by analyzing the underlying call
graph.
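
Here is a much-simplified sketch of the idea in Python (not the
platform's actual code; every name in it is illustrative):

    from dataclasses import dataclass, field

    @dataclass
    class CallNode:
        headline: str  # one-line summary of this call
        children: list["CallNode"] = field(default_factory=list)

    def explain(node: CallNode, depth: int = 0) -> list[str]:
        """Return the headline, plus one more layer per unit of depth."""
        lines = [node.headline]
        if depth > 0:
            for child in node.children:
                lines += ["  " + line for line in explain(child, depth - 1)]
        return lines

    # Each click maps to explain(root, depth + 1): the headline first,
    # then progressively deeper layers of the call graph.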

                                                   Cheers, -- Adrian

Adrian Walker
Executable English LLC
San Jose, CA, USA
860 830 2085
https://www.executable-english.com


On Fri, Nov 16, 2018 at 9:47 PM Adam Sobieski <adamsobieski@hotmail.com>
wrote:

> Paola Di Maio,
>
>
>
> When considering explanations of artificial intelligence systems’
> behaviors or outputs, or arguments that those behaviors or outputs are
> correct or the best possible, we can consider diagrammatic, recursive,
> component-based approaches to the design and representation of models
> and systems (e.g. Lobe). Such approaches involve simple components,
> interconnections between components, and composite components composed
> of interconnected subcomponents. Components can also have settings;
> that is, components can be configurable.
>
>
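> For concreteness, a minimal sketch of this recursive, component-based
> representation; the class and field names are my own assumptions, not
> any particular tool's API:
>
>     from dataclasses import dataclass, field
>
>     @dataclass
>     class Component:
>         """A simple or composite component with configurable settings."""
>         name: str
>         settings: dict = field(default_factory=dict)   # configuration
>         subcomponents: list["Component"] = field(default_factory=list)
>         # (source, target) pairs naming interconnected subcomponents
>         connections: list[tuple[str, str]] = field(default_factory=list)
>
>         @property
>         def is_composite(self) -> bool:
>             return bool(self.subcomponents)
>
>     encoder = Component("encoder", settings={"layers": 6})
>     decoder = Component("decoder", settings={"layers": 6})
>     model = Component("transformer",
>                       subcomponents=[encoder, decoder],
>                       connections=[("encoder", "decoder")])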
>
> As we consider recursive representations, a question arises: which
> level of abstraction should one use when generating an explanation,
> when composite components can be double-clicked to reveal yet more
> interconnected components? Which composite components should one
> utilize in an explanation or argument, which should be zoomed in on,
> and to what level of detail? We can generalize with respect to
> generating explanations and arguments from recursive models of: (1)
> mathematical proofs, (2) computer programs, and (3) component-based
> systems. A number of topics apply to all three cases: explanation
> planning, context modeling, task modeling, user modeling, cognitive
> load modeling, attention modeling, relevance modeling, and adaptive
> explanation.
>
>
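> As a thought experiment, explanation planning over such a recursive
> model might look like the sketch below, which reuses the Component
> sketch above. The "detail budget" is a crude, assumed stand-in for
> cognitive load modeling, and the relevance function is left to the
> caller:
>
>     import heapq
>
>     def plan_explanation(root, relevance, budget=5):
>         """Choose which composite components to zoom in on.
>         relevance: callable scoring a component for the current user,
>         task and context. Returns names of components to expand."""
>         expanded = set()
>         # max-heap by relevance; id() breaks ties without comparing objects
>         frontier = [(-relevance(root), id(root), root)]
>         while frontier and len(expanded) < budget:
>             _, _, comp = heapq.heappop(frontier)
>             expanded.add(comp.name)
>             for sub in comp.subcomponents:
>                 if sub.is_composite:
>                     heapq.heappush(frontier, (-relevance(sub), id(sub), sub))
>         return expanded
>
> Swapping in different relevance functions and budgets is one way to
> realize user, task and context modeling as concrete parameters.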
>
> Another topic important to XAI is that some components are trained on
> data: the behavior of some components, simple or composite, depends
> upon training data, training procedures, or experiences in
> environments. Brainstorming, we can consider that components or
> systems can produce data, e.g. event logs, while training or forming
> experiences in environments, such that the produced data can be of use
> in generating explanations and arguments for artificial intelligence
> systems’ behaviors or outputs. Pertinent topics include contextual
> summarization and narrative.
>
>
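> Continuing the brainstorm in code, a hypothetical event log that a
> component could emit during training; the EventLog class and the event
> fields are invented for illustration:
>
>     import json, time
>
>     class EventLog:
>         """Records training events for later use in explanations."""
>         def __init__(self):
>             self.events = []
>
>         def record(self, kind, **detail):
>             self.events.append({"t": time.time(), "kind": kind, **detail})
>
>         def dump(self):
>             """Raw material for contextual summarization and narrative."""
>             return json.dumps(self.events, indent=2)
>
>     log = EventLog()
>     log.record("dataset", name="corpus-v2", examples=120000)
>     log.record("epoch", n=1, loss=0.83)
>     log.record("epoch", n=2, loss=0.41)
>     # An explanation generator could now ground a claim such as "this
>     # component's behavior reflects corpus-v2" in recorded events.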
>
> XAI topics are interesting; I’m enjoying the discussion. I hope that
> these theoretical topics can be of some use in developing new
> standards.
>
>
>
>
>
> Best regards,
>
> Adam
>
>
>
>
>
> Schiller, Marvin, and Christoph Benzmüller. "Presenting proofs with
> adapted granularity." In Annual Conference on Artificial Intelligence, pp.
> 289-297. Springer, Berlin, Heidelberg, 2009.
>
>
>
> Cheong, Yun-Gyung, and Robert Michael Young. "A Framework for Summarizing
> Game Experiences as Narratives." In AIIDE, pp. 106-108. 2006.
>
>
>
> *From: *Paola Di Maio <paola.dimaio@gmail.com>
> *Sent: *Monday, November 12, 2018 9:59 PM
> *Cc: *public-aikr@w3.org; semantic-web at W3C <semantic-web@w3c.org>
> *Subject: *Re: Toward a web standard for XAI?
>
>
>
> Dear Adam
>
> thanks, and sorry for taking so long to reply.
>
> It indeed triggered some thinking.
>
> In the process of doing so, I realised that whatever we come up with has
> to match the web stack, and then realised that we do not have a stack
> for the distributed web yet, as such.
>
> Is this what you are thinking, Adam Sobieski? Please share more.
>
> It sounds like a step in the right direction.
>
> PDM
>
>
>
>
>
>
>
> Artificial intelligence and machine learning systems could produce
> explanations and/or argumentation [1].
>
>
>
> Deep learning models can be assembled by interconnecting components
> [2][3]. Sets of interconnected components can become interconnectable
> composite components. XAI [4] approaches should work for deep learning
> models assembled by interconnecting components. We can envision
> explanations and arguments, or generators for such, forming as deep
> learning models are assembled from components.
>
>
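> One way the envisioned pairing might look, sketched with plain
> functions rather than any real deep learning framework; every name
> here is an assumption:
>
>     def compose(a, b):
>         """Connect component a into b; their explainers compose too."""
>         def forward(x):
>             return b["forward"](a["forward"](x))
>         def explain():
>             return a["explain"]() + ", feeding into " + b["explain"]()
>         return {"forward": forward, "explain": explain}
>
>     scale = {"forward": lambda x: 2.0 * x,
>              "explain": lambda: "a fixed x2 scaling"}
>     relu = {"forward": lambda x: max(0.0, x),
>             "explain": lambda: "a ReLU nonlinearity"}
>
>     block = compose(scale, relu)
>     print(block["forward"](-3.0))  # 0.0
>     print(block["explain"]())      # a fixed x2 scaling, feeding into ...
>
> The point is only that an explanation generator can be built up in
> lockstep with the model as components are interconnected.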
>
> What do you think about XAI and deep learning models assembled by
> interconnecting components?
>
>
>
>
>
> Best regards,
>
> Adam Sobieski
>
> http://www.phoster.com/contents/
>
>
>
> [1] https://www.w3.org/community/argumentation/
>
> [2] https://www.lobe.ai/
>
> [3] https://www.youtube.com/watch?v=IN69suHxS8w
>
> [4] https://www.darpa.mil/program/explainable-artificial-intelligence
>
>
>
> *From: *Paola Di Maio <paola.dimaio@gmail.com>
> *Sent: *Wednesday, October 31, 2018 9:31 AM
> *To: *public-aikr@w3.org; semantic-web at W3C <semantic-web@w3c.org>
> *Subject: *Toward a web standard for XAI?
>
>
>
>
> Just wondering
>
>
> https://www.w3.org/community/aikr/2018/10/31/towards-a-web-standard-for-explainable-ai/
>
>
>
>
>

Received on Saturday, 17 November 2018 14:42:04 UTC