Mapping Reliability Engineering (RE) in relation to PEL

KR is vast; the current scope of work concerns natural language models
and conceptual diagrams of things that matter to AI. AI risk/reliability
is a matter of concern that was first raised on this list in 2025.

*The rationale*
There is a need to capture, measure, and improve the reliability of AI
systems. How, then, do we define reliability?

A bubble (ellipse?) was added to help define the AIKR metamodel:
https://www.w3.org/community/aikr/wiki/File:AI_KR_VOCABS_NOV_2025.jpg

I am now sharing a draft concept map for RE (I am working on more refined
versions; it could benefit from curation):
https://www.w3.org/community/aikr/wiki/Reliability_Engineering

This version of the RE concept model is shared following the intro post
to PEL (People Evidence Lab) by Stephen Watt, in cc.

PDM

Received on Thursday, 14 May 2026 08:17:31 UTC