Re: Toward a web standard for XAI?

 by means of a working example

there is a known/accepted gap between the territory and the map
the assumption that the map corresponds to the territory can result
in fatal error

1. I'd like to see how category theory resolves this particular dichotomy

2. if all the intelligent systems humanity depends on rely on such
common wrong assumptions,
that's where critical failures occur

3. that's where the work needs to be done, imho




On Fri, Nov 23, 2018 at 11:07 AM Paola Di Maio <paola.dimaio@gmail.com> wrote:
>
> I suppose that category theory, with its advantages and limitations,
> could well correspond to the limitations of human and machine intelligence,
> as long as we are aware of its limitations, and possible distortions in
> reasoning (which could lead to distortions in decision making) are
> addressed by complementary approaches
> >
> > I too was skeptical initially, and yes there may be some flaws in category theory, but the convergence of string theory, quantum theory, computability issues, software engineering, and generalized frameworks for discussing all forms of logic and underlying calculi leads to one common ground: the use of category theory.
>
> >>> any pointers? if category theory, with its flaws in knowledge representation (the gap between the reality and the abstraction?), is all we have, then I am not surprised everyone is using it
>
>
> > Even Cognitive sciences and Biologically Inspired Cognitive Architectures draw heavily from category theory.
> >
> that's where the limitations of human and machine cognition come from?
>
> > The current state of category theory has not yet considered unifying many fields, but string theorists and quantum physicists are actually the ones asking the pertinent questions, the answers of which point to a common ground.
> >
> I am familiar, to some extent, with the common ground <g>
> I'd love to see a unifying category of everything; I am sure it will make us laugh
>
>
> > On Wednesday, November 21, 2018 8:05 PM, Paola Di Maio <paola.dimaio@gmail.com> wrote:
> >
> >
> > Milton
> > I have been thinking
> > There have been (long) discussions on category theory before.
> > Of course it has merits and useful applications, but how to leverage
> > its power without falling prey to its known fallacies?
> > I don't know if there is any recent, good-enough reference work to
> > advance the science.
> > But maybe this would be also a good opportunity to work on that, since
> > we are throwing the entire universe of discourse into
> > the cauldron
> > How to overcome the obvious flaws of category theory? (I think the
> > argument is well made in Lakoff's Women, Fire, and Dangerous
> > Things)
> > this blog post summarises some of the points
> > https://jeremykun.com/2013/04/16/categories-whats-the-point/
> >
> > In fact, this should be my next life mission.
> >
> > Dr Paola Di Maio
> > Artificial Intelligence Knowledge Representation
> > Special Issue, Systems MDPI
> > *CfP accepting manuscripts
> > A bit about me
> >
> >
> >
> > On Mon, Nov 19, 2018 at 1:36 AM ProjectParadigm-ICT-Program
> > <metadataportals@yahoo.com> wrote:
> > >
> > > When considering explanations of artificial intelligence systems’ behaviors or outputs or when considering arguments that artificial intelligence systems’ behaviors or outputs are correct or the best possible, we can consider diagrammatic, recursive, component-based approaches to the design and representation of models and systems (e.g. Lobe). For such approaches, we can consider simple components, interconnections between components, and composite components which are comprised of interconnected subcomponents. For such approaches, we can also consider that components can have settings, that components can be configurable.
> > >
> > >
> > > This recursiveness in what are very obviously category representations can be formalized by higher-dimensional categories in category theory.
> > >
> > > As we consider recursive representations, a question is which level of abstraction should one use when generating an explanation – when composite components can be double-clicked upon to reveal yet more interconnected components? Which composite components should one utilize in an explanation or argument and which should be zoomed in upon and to which level of detail? We can generalize with respect to generating explanations and arguments from recursive models of: (1) mathematical proofs, (2) computer programs, and (3) component-based systems. We can consider a number of topics for all three cases: explanation planning, context modeling, task modeling, user modeling, cognitive load modeling, attention modeling, relevance modeling and adaptive explanation.
> > >
> > >
> > > This can be done using a category graph based programming language where recursiveness is embedded in the syntax structure and where at the bottom of the parsing tree calls to context specific programming languages are made to recursively determined context specific components.
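> > > As a toy illustration of that recursiveness (a hypothetical sketch; the names Component and Composite are my own, not taken from any existing tool): components can be modelled as morphisms, sequential interconnection as composition, and composite components as recursively nested pipelines:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Union

@dataclass
class Component:
    """A simple component: a named function from input to output (a morphism)."""
    name: str
    fn: Callable[[float], float]

    def run(self, x: float) -> float:
        return self.fn(x)

@dataclass
class Composite:
    """A composite component: an ordered pipeline of subcomponents,
    which may themselves be composites (recursive nesting)."""
    name: str
    parts: List[Union["Component", "Composite"]] = field(default_factory=list)

    def run(self, x: float) -> float:
        # Sequential interconnection = function composition.
        for p in self.parts:
            x = p.run(x)
        return x

scale = Component("scale", lambda x: 2 * x)
shift = Component("shift", lambda x: x + 1)
inner = Composite("affine", [scale, shift])    # composite of simple components
outer = Composite("pipeline", [inner, scale])  # composite of composites
print(outer.run(3))  # (2*3 + 1) * 2 = 14
```

> > > Composition here is associative and the empty pipeline acts as an identity, which is what lets one read such component diagrams categorically.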
> > >
> > > Another topic important to XAI is that some components are trained on data, that the behavior of some components, simple or composite, is dependent upon training data, training procedures or experiences in environments. Brainstorming, we can consider that components or systems can produce data, e.g. event logs, when training or forming experiences in environments, such that the produced data can be of use to generating explanations and arguments for artificial intelligence systems’ behaviors or outputs. Pertinent topics include contextual summarization and narrative.
> > >
> > >
> > > Context can be made explicit by assigning categories.
> > >
> > >
> > > XAI topics are interesting; I’m enjoying the discussion. I hope that these theoretical topics can be of some use to developing new standards.
> > >
> > >
> > >
> > > Milton Ponson
> > > GSM: +297 747 8280
> > > PO Box 1154, Oranjestad
> > > Aruba, Dutch Caribbean
> > > Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development
> > >
> > >
> > > On Saturday, November 17, 2018 1:41 AM, Adam Sobieski <adamsobieski@hotmail.com> wrote:
> > >
> > >
> > > Paola Di Maio,
> > >
> > > When considering explanations of artificial intelligence systems’ behaviors or outputs or when considering arguments that artificial intelligence systems’ behaviors or outputs are correct or the best possible, we can consider diagrammatic, recursive, component-based approaches to the design and representation of models and systems (e.g. Lobe). For such approaches, we can consider simple components, interconnections between components, and composite components which are comprised of interconnected subcomponents. For such approaches, we can also consider that components can have settings, that components can be configurable.
> > >
> > > As we consider recursive representations, a question is which level of abstraction should one use when generating an explanation – when composite components can be double-clicked upon to reveal yet more interconnected components? Which composite components should one utilize in an explanation or argument and which should be zoomed in upon and to which level of detail? We can generalize with respect to generating explanations and arguments from recursive models of: (1) mathematical proofs, (2) computer programs, and (3) component-based systems. We can consider a number of topics for all three cases: explanation planning, context modeling, task modeling, user modeling, cognitive load modeling, attention modeling, relevance modeling and adaptive explanation.
> > >
> > > Another topic important to XAI is that some components are trained on data, that the behavior of some components, simple or composite, is dependent upon training data, training procedures or experiences in environments. Brainstorming, we can consider that components or systems can produce data, e.g. event logs, when training or forming experiences in environments, such that the produced data can be of use to generating explanations and arguments for artificial intelligence systems’ behaviors or outputs. Pertinent topics include contextual summarization and narrative.
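> > > A minimal sketch of that idea (the function and field names are my own, hypothetical choices): a trainable component appends structured entries to an event log, and a summarizer turns the log into a short narrative usable in an explanation:

```python
def train_step(log, epoch, loss):
    # Record one training event; a real system might also capture
    # gradients, data batches, or environment interactions.
    log.append({"epoch": epoch, "loss": loss})

def summarize(log):
    # Contextual summarization: report the overall trend,
    # not every individual event.
    first, last = log[0], log[-1]
    trend = "improved" if last["loss"] < first["loss"] else "did not improve"
    return (f"Over {len(log)} epochs the loss went from "
            f"{first['loss']:.2f} to {last['loss']:.2f}; the model {trend}.")

log = []
for epoch, loss in enumerate([0.9, 0.5, 0.2]):
    train_step(log, epoch, loss)
print(summarize(log))  # Over 3 epochs the loss went from 0.90 to 0.20; the model improved.
```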
> > >
> > > XAI topics are interesting; I’m enjoying the discussion. I hope that these theoretical topics can be of some use to developing new standards.
> > >
> > >
> > > Best regards,
> > > Adam
> > >
> > >
> > > Schiller, Marvin, and Christoph Benzmüller. "Presenting proofs with adapted granularity." In Annual Conference on Artificial Intelligence, pp. 289-297. Springer, Berlin, Heidelberg, 2009.
> > >
> > > Cheong, Yun-Gyung, and Robert Michael Young. "A Framework for Summarizing Game Experiences as Narratives." In AIIDE, pp. 106-108. 2006.
> > >
> > > From: Paola Di Maio
> > > Sent: Monday, November 12, 2018 9:59 PM
> > > Cc: public-aikr@w3.org; semantic-web at W3C
> > > Subject: Re: Toward a web standard for XAI?
> > >
> > > Dear Adam
> > > thanks, and sorry for taking time to reply.
> > > It indeed triggered some thinking.
> > > In the process of doing so, I realised that whatever we come up with
> > > has to match the web stack, and then realised that we do not have a
> > > stack for the distributed web yet, as such.
> > > Is this what you are thinking, Adam Sobieski? Please share more;
> > > it sounds like the right direction.
> > > PDM
> > >
> > >
> > >
> > >
> > > Artificial intelligence and machine learning systems could produce explanation and/or argumentation [1].
> > >
> > > Deep learning models can be assembled by interconnecting components [2][3]. Sets of interconnected components can become interconnectable composite components. XAI [4] approaches should work for deep learning models assembled by interconnecting components. We can envision explanations and arguments, or generators for such, forming as deep learning models are assembled from components.
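> > > One way to picture such explanation generators (a hypothetical sketch; the class and method names are my own): each component carries an explanation generator, and a composite's explanation is assembled recursively from its parts, cut off at a chosen zoom depth:

```python
class Part:
    """A component in an assembled model; composites hold subparts."""
    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts or []

    def explain(self, depth):
        # At depth 0 (or for a simple component) stop zooming and
        # describe the component opaquely.
        if depth == 0 or not self.parts:
            return self.name
        inner = ", ".join(p.explain(depth - 1) for p in self.parts)
        return f"{self.name}({inner})"

conv = Part("conv")
relu = Part("relu")
block = Part("block", [conv, relu])
model = Part("model", [block, Part("classifier")])
print(model.explain(1))  # model(block, classifier)
print(model.explain(2))  # model(block(conv, relu), classifier)
```

> > > Choosing the depth argument is exactly the "which level of abstraction" question: the same model yields a coarse or a fine explanation.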
> > >
> > > What do you think about XAI and deep learning models assembled by interconnecting components?
> > >
> > >
> > > Best regards,
> > > Adam Sobieski
> > > http://www.phoster.com/contents/
> > >
> > > [1] https://www.w3.org/community/argumentation/
> > > [2] https://www.lobe.ai/
> > > [3] https://www.youtube.com/watch?v=IN69suHxS8w
> > > [4] https://www.darpa.mil/program/explainable-artificial-intelligence
> > >
> > > From: Paola Di Maio
> > > Sent: Wednesday, October 31, 2018 9:31 AM
> > > To: public-aikr@w3.org; semantic-web at W3C
> > > Subject: Toward a web standard for XAI?
> > >
> > >
> > > Just wondering
> > > https://www.w3.org/community/aikr/2018/10/31/towards-a-web-standard-for-explainable-ai/
> > >
> > >
> > >
> > >
> > >
> >
> >

Received on Friday, 23 November 2018 03:54:12 UTC