- From: Paola Di Maio <paoladimaio10@gmail.com>
- Date: Fri, 15 May 2026 00:43:09 +0800
- To: yuqiang <yuqiang@humanjudgment.org>
- Cc: Milton Ponson <rwiciamsd@gmail.com>, W3C AIKR CG <public-aikr@w3.org>, Stephen Watt <stevewatt13@peoplesevidencelab.com>
- Message-ID: <CAMXe=SrQWZ24_cVn4qWrvXFyVpC3nTTxsNn3e9QWpWZeYZFfkg@mail.gmail.com>
Milton and Yuqiang,

Thank you. Please do populate the CG resources wiki, or send me a file with your name and contribution to upload to the AI KR GitHub repo list of existing resources; public vocabularies could be useful.

Milton, you mention the definitions in the EU AI Act. Are they publicly accessible, and do they carry an open license? Could you send a URL? If not, we could complement existing efforts with a vocabulary under an MIT license.

Please contribute existing concepts/categories, either CC BY or ones you own the rights to. Having each supported by one or more use cases would be awesome. Then we can continue to elaborate some examples.

[image: image.png]

On Thu, May 14, 2026 at 11:42 PM yuqiang <yuqiang@humanjudgment.org> wrote:

> Dear Milton and all,
>
> Thank you for raising this point. I also share the concern that using
> "resilience engineering" directly in the AI context may create ambiguity,
> especially if the term already has specific meanings in regulatory,
> institutional, or safety-engineering contexts.
>
> A practical distinction that may help the AIKR/PEL discussion is the
> difference between naming a concept and assessing a claim. Whether the
> term is "reliability", "resilience", "risk", or something else, it may be
> useful to specify what is being assessed, what observations or evidence
> are available, and what kind of counterexample would show that the current
> evidence is insufficient.
>
> This could provide a lightweight bridge between terminology, knowledge
> representation, and later formalization: define the concept, define the
> claim being assessed, define the relevant evidence, and define the
> condition under which the evidence would not be enough.
>
> Best regards,
> Yuqiang
>
> ------------------------------------------------------------------
> From: Milton Ponson <rwiciamsd@gmail.com>
> Date: Thursday, 14 May 2026, 22:49
> To: paoladimaio10 <paoladimaio10@googlemail.com>
> Cc: W3C AIKR CG <public-aikr@w3.org>; Stephen Watt <stevewatt13@peoplesevidencelab.com>
> Subject: Re: mapping resilience engineering RE, in relation to PEL
>
> Resilience engineering is defined in the EU AI Act and in similar
> legislation or recommendations from international bodies.
> Unfortunately, the best theoretical frameworks are not necessarily the
> ones used, because the EU and most other international organizations and
> national governments have chosen to implement legislation that is
> politically sufficient rather than empirically adequate.
> And the very term "resilient" has so many operational definitions that
> comparing national legislation across multiple countries is often
> analogous to comparing apples and oranges.
>
> It saddens me to say that mathematicians, scientists, and engineers are
> often the last ones consulted in drafting legislation on highly technical
> issues, in particular those related to Internet services, software, and AI
> development; worse, their recommendations are set aside, ignored, or
> watered down until they are effectively no longer useful, yet they will be
> the FIRST ONES blamed when the proverbial "shit hits the fan".
> And when mathematicians, computer scientists, and software engineers at AI
> companies do sound the alarm publicly, the messengers get killed and the
> message is promptly downplayed, repudiated, set aside, or dismissed as a
> minority opinion.
>
> Resilience engineering has different meanings across the multiple academic
> fields where AI is used, and coming up with a generalized definition will
> be very hard.
> It would be more useful to find a term that suits knowledge representation
> for AI, and then see how this translates into the mathematical framing
> used in academic fields of application.
>
> On Thu, May 14, 2026, 04:17 Paola Di Maio <paola.dimaio@gmail.com> wrote:
>
> KR is vast; the current scope of work is about natural language models and
> conceptual diagrams of things that matter to AI. AI risk/reliability is a
> matter of concern that was first raised on this list in 2025.
>
> *The rationale*
> There is a need to capture, measure, and improve the reliability of AI
> systems. How do we define reliability, then?
>
> A bubble (ellipse) was added to help define the AIKR metamodel:
> https://www.w3.org/community/aikr/wiki/File:AI_KR_VOCABS_NOV_2025.jpg
>
> I am now sharing a draft concept map for RE (I am working on more refined
> versions); it could benefit from being curated:
> https://www.w3.org/community/aikr/wiki/Reliability_Engineering
>
> This version of the RE concept model is shared following the intro post to
> PEL (People Evidence Lab) by Stephen Watt, in cc.
>
> PDM
>
>
> Milton Ponson
> Rainbow Warriors Core Foundation
> CIAMSD Institute-ICT4D Program
> +2977459312
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
>
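Yuqiang's four-part structure above (concept, claim, evidence, and the condition under which the evidence would not be enough) can be sketched as a minimal data structure. This is only an illustration; the class and field names (`AssessedClaim`, `concept`, `claim`, `evidence`, `defeater`) are assumptions of this sketch, not part of any agreed AIKR vocabulary, and the example values are invented:

```python
from dataclasses import dataclass, field

# A minimal sketch of the proposed four-part assessment structure.
# All names and example values are illustrative assumptions, not an
# agreed AIKR vocabulary.
@dataclass
class AssessedClaim:
    concept: str                # the term in use, e.g. "reliability"
    claim: str                  # the specific claim being assessed
    evidence: list = field(default_factory=list)  # available observations
    defeater: str = ""          # counterexample that would show the
                                # current evidence is insufficient

# Hypothetical example of one filled-in entry.
claim = AssessedClaim(
    concept="reliability",
    claim="System X answers in-scope queries consistently",
    evidence=["repeated eval runs on a fixed in-scope query set"],
    defeater="a reproducible in-scope query class with inconsistent answers",
)
```

Whatever term the group settles on, an entry like this keeps the terminology question ("what do we call it?") separate from the assessment question ("what would show the evidence is insufficient?").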
Attachments
- image/png attachment: image.png
Received on Thursday, 14 May 2026 16:43:52 UTC