This is Paola from the AIKR Community Group — Artificial Intelligence Knowledge Representation — at the W3C. It's June, and finally I have something to show.

About these slides, very quickly. They are meant to provide an introduction, because a few people are new to this work and may not yet have quite grasped what is being done and why. And I myself, as the goalposts shifted and the ideas behind this work kept shifting, wasn't sure what I was doing for quite a while. So this is simply to say: this is what we are doing. There are also a few people who would like to contribute, and I thought we should make it easier for them. And there have been a couple of questions that I hope these slides will help to answer.

Requesting expressions of interest — this is very important. People have been asking whether we have funding, and why not, and I haven't been able to be very clear about that. What can start now is opening up the proposal, and I'm hoping people will come on board who are capable of articulating budgets and handling all the bureaucracy involved — maybe institutional partners, or backers of all sorts. It is also a way of opening up feedback: with a few slides, people can finally see what has been done, give feedback on it, and contribute.

People who have been in the CG — the Community Group — know most of what has been going on. You just search "W3C AIKR CG" and you get to the landing page and the wiki. Mostly it has been discussions, but there have been quite a lot of discussions, and quite a lot of updates. The main task was to try to understand what is going on in AI, because this has not been easy. Around the time we started, symbolic AI and knowledge representation were having the final nails driven into their coffin by the Turing Award scholars.
At that time a few of us started getting nervous, asking: where is this going? Soon after that came AI explainability, AI risk, and it all became mayhem — what is going to happen, and how are we going to fix it? A lot of people jumped on the AI bandwagon, still not understanding AI, which is important. I don't claim that I understand AI either, but at least from that time onwards it is documented on the mailing list that I have been paying attention, at least to whatever I could understand.

Out of the need to understand machine learning, KR got involved. When we couldn't figure out what was happening inside machine learning — the black box — we needed to call upon knowledge representation to figure it out. At the time, outside the niche of the KR people, this wasn't widely appreciated; within it, we were able to communicate what KR was. So out of the need to understand machine learning, KR was needed. But I don't think anybody knew exactly how it was going to be used to control AI risk. We still don't know, but we are working on it, and this is that progress. It took us a few years.

So what is KR? We know that; there are excellent educational materials, so I don't have to tell you. You might say: I'm going to school; I've been studying KR for years; I'm a researcher; I'm a professor in KR. There are a few such people — not many. As you can see from the educational resources we are creating on the wiki, there are a lot of courses in knowledge representation; some of them are excellent and accessible, and there are books. So one can say: okay, I'm going to study KR. But that doesn't mean that people who know KR — the professors, or even the experts — know how to use it to address the current AI concerns.
Nobody can do anything in absolute terms, because AI is evolving organically. Nobody has control over it: the algorithms are out there, the computational power is there. And a lot of wonderful things are happening — a lot of useful applications.

We know that we need KR for explainability and for safety in general; we are bridging into machine learning. Typically, KR has been used to program machines: we wanted an intelligent function, and we needed KR to tell the computer how to execute it. Now the opposite is happening: the computer is executing an intelligent function, and we need KR to figure out how it is doing it. I think this is very exciting to see, and this is what makes it worth it, really.

In order to communicate what KR is — and it is such a huge domain — we need this vertical, because there is so much of it. We also need to explain the role of KR in explainability, which becomes a bit circular: we have to open the black box, and we have to use KR to create AI that is reliable, safe, ethical, unbiased, and all of that.

The other very exciting bit is using KR in the creation of agents. This is non-trivial, because KR is a huge subject. You can point people to KR educational resources — there are books and exercises and lectures they can follow — but I don't know anybody who can guarantee a safe, ethical AI system, whether they have studied KR or not. Still, we need to be able to summarize what KR is, or which concepts, tools, and methods in KR can be leveraged to support AI safety, because even experts in KR may not be familiar with the entire domain or with the state of the art. Typically, the experts I have spoken with were experts fifty years ago, and things have changed since: AI has changed, computer science has changed.
Using KR to make the black box transparent may require advancing the state of the art in many fields — KR, but also logic. This is another exciting area. At the least, we have to start by mapping the knowledge domain for KR, so ultimately we are building an ontology for knowledge representation.

The scenario — the important thing to keep in mind — is that there is so much here I cannot describe it all. AI is driving and influencing human evolution: cognition, information, belief, decisions, politics, economics. AI is not just off by itself; it is something fundamental in contemporary human evolution and in social technologies. So KR can help to mitigate existential risks.

To be honest, I have a whole talk on this: systemic aberration. What is happening now is that things are being done which are not being described properly, and things are being said which are not being thought through. There is a lot of confusion through mislabeling, and we can only use knowledge representation to help us disambiguate and get a grip on what is going on. If you are interested, I can share the link if you cannot click it; it is about an hour-long talk.

If the standards being developed to ensure AI safety — and I have sifted through a few AI standards on the hub — are missing these essential AI safety concepts, then the risk of critical failures in AI safety increases. I hope this is fairly logical and clear; it took a lot of effort. There is a lot of money and noise in AI safety; nonetheless, the safety standards are missing essential safety concepts. Therefore I think we need to leverage KR to get out of this loop.

So I am mapping the KR domain, simply because when people ask me what KR is, I want to point to something that can be cognitively processed by a human — and by a machine, obviously. But at the moment, KR is all over the place.
So I have divided it into subdomains. I am saying that the knowledge representation domain is made up of: upper (foundational) ontologies; knowledge representation languages and formalisms; and domain ontologies — oops, spelling mistake here; I can never remember what ODDS stands for. Domain? Ontology domain? Basically, these are domain ontologies, and there is a whole field there that I am looking at, when I can remember what it is. Then there are AI safety standards — and here is where I found some critical issues, which are my worry, and I think they should be your worry too. And we have a lot happening in knowledge representation learning and ontology-driven agents — very exciting stuff. We cannot conceptualize this massive field without a bit of sweat. This is what we are doing now, and I very much appreciate your attention and the concentration you are applying to follow.

Now, this is the point where there was a bit of a question. Peter Rivett said: you are saying that you are going to do something that can make the system reliable — and he wasn't quite sure this was something to be taken seriously, or feasible. So I thought: okay, we cannot have any AI ethical safety standards unless we understand the notion of reliability engineering. So I am intersecting these two here, and I am happy to look at it later.

Today we are starting from this. We are modeling. These are subdomains — each of these is a subdomain of knowledge representation, which is huge — and today we are focusing on just this first one, because I think we should take this step by step. So, first question: are the subdomains adequate? This is a nice little evaluation task for anyone who has a spare neuron. Upper ontologies, knowledge representation languages, domain ontologies, knowledge representation learning, and all of these: is there anything in these subdomains that shouldn't be there, that doesn't belong? Or is there anything fundamentally missing?
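To make the proposed decomposition easier to evaluate, here is a minimal sketch of it as a data structure. The subdomain names follow the slides; the glosses and everything else are illustrative assumptions, not part of the vocabulary itself:

```python
# Hypothetical sketch of the proposed KR domain map, so a reviewer can
# answer the two evaluation questions against an explicit artifact:
# does anything here not belong, and is anything fundamentally missing?
KR_SUBDOMAINS = {
    "upper_foundational_ontologies": "top-level categories shared across domains",
    "kr_languages_and_formalisms": "languages and formalisms for expressing knowledge",
    "domain_ontologies": "ontologies for specific subject areas",
    "kr_learning": "knowledge representation learning",
    "ontology_driven_agents": "agents whose behaviour is driven by ontologies",
    "ai_safety_standards": "standards intended to ensure AI safety",
}

def review(subdomains: dict) -> None:
    """Print the current decomposition for evaluation."""
    for name, gloss in subdomains.items():
        print(f"- {name}: {gloss}")

review(KR_SUBDOMAINS)
```

A reviewer's "adequacy check" is then just a diff against this dictionary: propose an entry to delete, or an entry to add.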
Please let me know. So what are we doing? What I have done so far is take all the upper, top-level ontologies I could put my hands on. I identified the sources of knowledge — textbooks, papers, and repositories — and abstracted terms; in particular, I extracted the classes. So now it is a matter of evaluating. Then we are going to develop a definition for each class or category, streamline the duplicates, clean it up, boil it down a little, and make a list. When I say "list" it may sound like I'm doing nothing, but I'm making a bag of words, which is a fundamental thing.

You can click here — and I'm going to send the link separately — for the upper ontology vocab. Typically I would put this in an open spreadsheet, open for anyone, but there have been people who have taken advantage of that openness. We don't know who they are; there is no way to track who has accessed which resource, possibly using it without credit. Ultimately it doesn't matter, because these are going to be published and licensed for everyone to use — they are going to be open access — but in the meantime some people could just use this work to do their own thing without acknowledgment. This has, unfortunately, happened.

So here is the upper ontology vocab. If you would like to look at it or provide feedback in any way, and you cannot see it, please request access. I will send the vocab to the people who are already working on this; two or three people have already provided feedback, and they will receive it, probably in an email, with a request not to share it outside the Community Group at this stage. Please check the current list of terms and annotate it.
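The extraction-and-boiling-down step described above — collect classes from several upper-ontology sources, merge the duplicates, and produce the term list — can be sketched roughly as follows. The source names and terms here are made up for illustration; the real inputs are the classes pulled from the collected textbooks, papers, and repositories:

```python
from collections import defaultdict

# Illustrative class lists extracted from different upper-ontology sources
# (hypothetical names and terms, standing in for the real extractions).
sources = {
    "source_A": ["Entity", "Process", "Object", "Quality"],
    "source_B": ["entity", "Event", "Object", "Role"],
    "source_C": ["Process", "Quality", "Agent"],
}

def build_vocab(sources):
    """Normalize terms, merge duplicates, and track which sources use each term."""
    vocab = defaultdict(set)
    for src, terms in sources.items():
        for term in terms:
            vocab[term.strip().lower()].add(src)  # case-insensitive merge
    return dict(vocab)

vocab = build_vocab(sources)
# The deduplicated term list -- the "bag of words" -- sorted for the spreadsheet:
for term in sorted(vocab):
    print(term, "<-", ", ".join(sorted(vocab[term])))
```

Keeping the set of source ontologies next to each merged term is a cheap way to support the next step — writing a definition per term — because it shows where each candidate class came from.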
In the Excel sheet you may be able to create a column with your name, and in that column you can write something for each term — "add", "this doesn't belong", whatever. So feel free to annotate each term, or to create an annotation that applies to the entire vocabulary, or even just to email. Are there any terms of concern? There are about 200 terms now. Is there anything to be added? Anything that should be deleted? How can this upper ontology vocabulary be used? Is it useful? Is it complete? Make some use cases.

Eventually this is going to be published, and it will constitute the first step in the publication of a series of vocabularies which, overall, will represent the knowledge representation domain — and probably the basis for an ontology of knowledge representation. So if anyone is interested in working on the list, please get in touch. Thank you so much for listening, and I'll see you in the group. Wow, this is being transcribed already — look at the machine, the machine is doing it. Bye.