Re: Explanation, Mechanistic Reasoning, and Abstraction: Hypertext and Hypermodels

FYI - I am exploring causal models as part of plausible reasoning with imperfect knowledge, and as part of the broader area of common sense knowledge and reasoning; see:

 https://github.com/w3c/cogai/blob/master/demos/nlp/commonsense.md
 https://github.com/w3c/cogai/blob/master/demos/reasoning/README.md

Human reasoning operates from imperfect knowledge and what's plausible given past experience. It relies on a patchwork of informal knowledge rather than mathematically sound principles. It needs to be good enough to understand situations, to make informed guesses about what will happen next, and to decide what actions are needed to realise desired outcomes whilst minimising undesirable ones. It also needs to support revising existing knowledge as new knowledge is learned.
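
As a crude illustration of the flavour of reasoning involved, below is a minimal sketch in Python. The confidence values and the min-based combination rule are illustrative assumptions on my part, not the notation used in the demos above:

    from dataclasses import dataclass

    @dataclass
    class Belief:
        claim: str
        confidence: float  # 0.0 .. 1.0, a rough degree of plausibility

    # Imperfect knowledge: generalisations drawn from past experience.
    beliefs = {
        "birds fly": Belief("birds fly", 0.9),
        "tweety is a bird": Belief("tweety is a bird", 1.0),
    }

    def infer(premise: str, rule: str, conclusion: str) -> Belief:
        # Plausible inference: the conclusion inherits the weakest
        # confidence in the chain -- an illustrative choice, not a
        # mathematically sound calculus.
        c = min(beliefs[premise].confidence, beliefs[rule].confidence)
        return Belief(conclusion, c)

    def revise(claim: str, confidence: float) -> None:
        # Revise existing knowledge as new knowledge is learned.
        beliefs[claim] = Belief(claim, confidence)

    print(infer("tweety is a bird", "birds fly", "tweety flies"))
    revise("birds fly", 0.3)   # e.g. on learning that penguins don't fly
    print(infer("tweety is a bird", "birds fly", "tweety flies"))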

The longer term aim is to develop the means to model how children acquire a good grasp of common sense by the time they are six or seven years old.

This is also a matter of moving beyond logical deduction and ontological entailment to embrace a much broader range of reasoning, something deemed essential if we are to move from the narrowly focused AI applications of today to the more general, human-like AI systems hoped for in the future, with acknowledgement to DARPA’s Machine Common Sense program.

> On 27 Nov 2021, at 12:00, Adam Sobieski <adamsobieski@hotmail.com> wrote:
> 
> Semantic Web Interest Group,
>  
> While recently exploring causal models and machine learning (see also: [1][2]), I had some thoughts about graph-based knowledge representations. These thoughts pertain to explanation, mechanistic reasoning, and abstraction. These thoughts also pertain to organizing and navigating spaces of related (causal) models.
>  
> One can consider the following increasingly detailed set of explanatory sentences.
>  
> The robot caused the elevator to arrive.
> The robot pressed the button which caused the elevator to arrive.
> The robot used its arm to press the button which caused the elevator to arrive.
> The robot pressed the button which closed a circuit which sent electricity to a control system while simultaneously causing the button to light up. The elevator control system, having received the electric signal from the button press, dispatched an elevator to the floor that the robot was on.
> The robot used its arm, hand, and finger to press the button which closed a circuit which sent electricity to a control system while simultaneously causing the button to light up. The elevator control system, having received the electric signal from the button press, dispatched an elevator to the floor that the robot was on.
>  
> When considering these sentences as part of a larger space (graph) of explanatory sentences, ranging from a simple explanation through increasingly detailed ones, one can observe that each sentence maps to a graph-based knowledge representation of its semantics. Each sentence can thus also be described as mapping to a (causal) model.
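>  
> As a sketch of what such a mapping could look like (the class name, event labels, and single “causes” relation are illustrative assumptions, not a proposed representation), the second and fourth sentences above might map to a coarse and a detailed causal graph:
>  
>     from dataclasses import dataclass, field
> 
>     @dataclass
>     class CausalModel:
>         # Nodes are events; edges are "causes" links between them.
>         events: set = field(default_factory=set)
>         causes: list = field(default_factory=list)
> 
>         def add_cause(self, cause: str, effect: str) -> None:
>             self.events.update({cause, effect})
>             self.causes.append((cause, effect))
> 
>     # "The robot pressed the button which caused the elevator to arrive."
>     coarse = CausalModel()
>     coarse.add_cause("robot presses button", "elevator arrives")
> 
>     # The fourth sentence, as a more detailed model of the same situation.
>     detailed = CausalModel()
>     detailed.add_cause("robot presses button", "circuit closes")
>     detailed.add_cause("circuit closes", "button lights up")
>     detailed.add_cause("circuit closes", "control system receives signal")
>     detailed.add_cause("control system receives signal",
>                        "control system dispatches elevator")
>     detailed.add_cause("control system dispatches elevator",
>                        "elevator arrives")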
>  
> One could use hypertext to provide a “hyper-explanation” such that users could click on components of explanatory sentences, their phrases or lexemes, to navigate to explanations having greater detail with respect to the clicked-on content. One could also use context menus to navigate. Each of a sentence’s phrases and lexemes could be expanded in a number of ways.
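>  
> As a minimal sketch of the data behind such a hyper-explanation (the node identifiers and field names are hypothetical; a real design would need proper anchors into the text, e.g. character offsets):
>  
>     # Each explanation is a node; selected phrases link to other
>     # explanations with greater detail for the clicked-on content.
>     explanations = {
>         "e1": {
>             "text": "The robot caused the elevator to arrive.",
>             "links": {"caused": "e2"},  # phrase -> more detailed node
>         },
>         "e2": {
>             "text": "The robot pressed the button which caused "
>                     "the elevator to arrive.",
>             "links": {"pressed the button": "e3"},
>         },
>         # "e3" and further nodes: still more detailed explanations ...
>     }
> 
>     def follow(node_id: str, phrase: str) -> str:
>         # Navigate from a clicked phrase to the more detailed explanation.
>         return explanations[explanations[node_id]["links"][phrase]]["text"]
> 
>     print(follow("e1", "caused"))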
>  
> Similarly, as these sentences each map to (graph-based) diagrams, one could use “hypermodels” or “hyperdiagrams” to provide users with the capability of clicking on visual diagram components to navigate through increasingly detailed models or diagrams. Context menus could likewise be used to navigate these spaces. Graph nodes and edges (and perhaps subgraphs) could each be expanded in a number of ways.
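>  
> In the same illustrative style, this time representing a diagram simply as a list of cause/effect pairs, expanding a clicked node could mean splicing in a more detailed subgraph (the refinement contents are again hypothetical):
>  
>     # A diagram as a list of (cause, effect) edges.
>     coarse = [("robot presses button", "elevator arrives")]
> 
>     # Hypothetical map from a clickable node to the subgraph
>     # that refines it.
>     refinements = {
>         "robot presses button": [
>             ("robot extends arm", "finger contacts button"),
>             ("finger contacts button", "button is depressed"),
>             ("button is depressed", "elevator arrives"),
>         ],
>     }
> 
>     def expand_node(edges, node):
>         # Replace edges leaving the clicked node with the detailed
>         # subgraph; edges elsewhere are kept unchanged.
>         kept = [e for e in edges if e[0] != node]
>         return kept + refinements.get(node, [])
> 
>     print(expand_node(coarse, "robot presses button"))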
>  
> Users could navigate through spaces of (graphs of) explanatory sentences and/or spaces of (graphs of) diagrammatic (causal) models by interacting with hypertext representations and/or by interacting with “hypermodel” or “hyperdiagram” representations.
>  
> Thank you. I hope that these ideas are of some interest. Any thoughts on these topics?
>  
>  
> Best regards,
> Adam Sobieski
> http://www.phoster.com
>  
> [1] https://www.microsoft.com/en-us/research/video/panel-challenges-and-opportunities-of-causality/
> [2] https://crossminds.ai/video/yoshua-bengio-towards-causal-representation-learning-603e9c53706789c68965058c/
Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of Things

Received on Saturday, 27 November 2021 16:51:14 UTC