- From: carl mattocks <carlmattocks@gmail.com>
- Date: Tue, 26 May 2020 10:34:20 -0400
- To: W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CAHtonu=DHG9T1bsc+UXKfgdrwqrA_Kc53fHMZHdzxN-q9Ssn9Q@mail.gmail.com>
A KRID-focused goal is: create a Core Ontology that clarifies the role, use, and properties of KRID within the context of 'goals' mapped in StratML.

Towards that goal, please peruse:
Core Software Ontology
Core Ontology of Software Components
Core Ontology of Services, which references the Ontology of Goals
http://km.aifb.kit.edu/sites/cos/
An Ontology to aid the Goal-oriented Requirements Elicitation and Specification for Self-Adaptive Systems
https://www.researchgate.net/publication/221270123_GOORE_Goal-Oriented_and_Ontology_Driven_Requirements_Elicitation_Method

(A minimal, illustrative sketch of how a KRID entry with a KR TYPE property might be modeled follows the quoted thread below.)

Carl
It was a pleasure to clarify

On Mon, May 18, 2020 at 7:52 PM Paola Di Maio <paoladimaio10@gmail.com> wrote:

> Thanks Carl for clarifying
>
> What about setting the goal of clarifying / sketching out KRID so that we can have a discussion?
> I plan to put my hands on the plan in the stratnavapp soon.
> P
>
> On Thu, May 14, 2020 at 10:09 PM carl mattocks <carlmattocks@gmail.com> wrote:
>
>> Towards adopting StratML for the AIKRCG 'strategy' ...
>> Given we are AIKR ... we understand that Kairos signifies a proper or opportune time for action, and our usage of StratML to EXPLAIN makes us interested in Knowledge-directed Artificial Intelligence Reasoning Over Schemas (KAIROS), DARPA-SN-19-19.
>>
>> Our discussions have focused on:
>> StratML is our schema start point for reasoning, as in, the performance of AIKR inferences <https://www.merriam-webster.com/dictionary/inferences> is scoped / weighed by the declared strategy.
>> AIKR reasoning uses KRID identifiers and data (aka metadata) properties, such as KR TYPE.
>> KR Types include Declarative and Imperative (aka procedural).
>>
>> Carl
>> Chair AIKRCG
>> It was a pleasure to clarify
>>
>> On Thu, May 14, 2020 at 7:37 AM Paola Di Maio <paola.dimaio@gmail.com> wrote:
>>
>>> It is under Owen and Chris's leadership that we are making some progress towards adopting StratML for the AIKRCG 'strategy'.
>>>
>>> In sum, what we are doing and planning to do as a group is going to be documented in the plan. Although we are still working things out, we do have moments of brilliance and outbursts of productivity, and we can put them down in this StratML plan on the StratNav app so that there will be a record of them. That should be useful. I apologize again for being very tired, but 9 pm is very late for me, especially when I have had a full day including other calls.
>>>
>>> A few notes below.
>>>
>>>> The plan is being developed here
>>>> <https://www.stratnavapp.com/StratML/Part1/413d648b-bd36-418d-af74-e15b0cd8281d/Styled>.
>>>> If anyone is inspired to chip in, please ask Chris on this list for editing access.
>>>
>>>> With reference to our Frameworks goal
>>>> <https://www.stratnavapp.com/StratML/Part1/413d648b-bd36-418d-af74-e15b0cd8281d/Styled#Goal_f1a62bb5-9910-4052-946a-344c0e22272f>,
>>>> I will endeavor to render in StratML Part 2 format any frameworks that may be discovered and available on the Web. Please apprise me of any of which you are aware.
>>>
>>> To clarify - Jorge asked whether we are using any framework of reference for our work, which loosely attempts to study explainability for machine learning. That particular goal for our CG may need to be refined a little. I don't think a framework as such (strategies, methods) exists, but there is interesting work being done; I don't think it amounts to a framework yet, rather a compilation of possible techniques, the effectiveness of which may need to be evaluated in the field. So, to answer the question, methods to address explainability of ML exist, but:
>>> a) I don't think they are frameworks/strategies - could this be our goal, to gather what is in the field and make a framework?
>>> b) evaluation criteria for the effectiveness of these methods may not yet have been studied - again, could this be our work? I am doing some research in this direction, but it is not yet conclusive.
>>> I volunteered to take up this task and shall soon update the plan with some links, but I am putting together a presentation - anyone want to contribute?
>>>
>>> The caveat is that statistical probability and non-parametric methods in ML are unpredictable by definition:
>>> https://machinelearningmastery.com/uncertainty-in-machine-learning/
>>> http://mlg.eng.cam.ac.uk/zoubin/talks/mit12csail.pdf
>>> (this is not my field at all; does anyone care to expand?)
>>> So I am not sure how to address this unpredictability other than with the question:
>>>
>>> Can we use known symbolic KR to explain ML?
>>>
>>>> In the meantime, this Google site-specific query
>>>> <https://www.google.com/search?ie=UTF-8&oe=UTF-8&q=AI+framework&btnG=Google+Search&domains=stratml.us&sitesearch=stratml.us>
>>>> of the StratML collection turns up about 29 hits on the terms "AI framework". Here's <https://www.modzy.com/platform-and-marketplace/> the top paid ad-placed hit (not yet in the StratML collection but soon to be).
>>>
>>> Thanks - how do we query for an ML explainability framework (a bit more precise semantically in relation to what we are doing here)?
>>>
>>> KRID - Carl is putting forward a category/concept/type whereby KR is identified, so KRID = some value to describe KR identity.
>>> Carl started by suggesting that the top-level distinction for this concept would be declarative/procedural.
>>> I do not yet have an opinion about this, but would request Carl to start sketching out the taxonomy for KRID as he envisions it, so that we can have a discussion about it.
>>> One consideration is: to what extent is declarative/procedural knowledge relevant to support ML? Or is KRID intended for AI in general (not ML)? Carl, perhaps you should create this as a goal for yourself. Also, could you clarify the relation of KRID to KAIROS?
>>>
>>> Thanks!
>>>
>>> PDM
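As a point of reference for the KRID discussion in the thread above, here is a minimal, non-authoritative sketch (in Python) of what a KRID entry carrying a KR TYPE property might look like. Only the declarative/imperative distinction, the idea of KRID as an identifier with metadata properties, and the scoping of reasoning by a StratML goal come from the thread; the class and field names (KRType, KRIDRecord, stratml_goal_id, properties) and the sample identifier are assumptions made up for illustration, not anything defined by StratML, KAIROS, or the AIKR CG.

    # Purely illustrative sketch: one possible shape for a KRID record with a
    # KR TYPE property and a link to the StratML goal that scopes its use.
    # All names here are hypothetical; they are not defined by StratML,
    # KAIROS, or the AIKR CG.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Dict


    class KRType(Enum):
        """Top-level KR TYPE distinction suggested in the thread."""
        DECLARATIVE = "declarative"
        IMPERATIVE = "imperative"   # aka procedural


    @dataclass
    class KRIDRecord:
        """A knowledge-representation identifier plus its metadata properties."""
        krid: str                    # the identifier value itself
        kr_type: KRType              # e.g. declarative vs imperative
        stratml_goal_id: str         # StratML goal that scopes the reasoning
        properties: Dict[str, str] = field(default_factory=dict)  # further metadata


    if __name__ == "__main__":
        # Hypothetical example: an ontology registered as declarative KR,
        # scoped by the CG's Frameworks goal from the StratML plan above.
        record = KRIDRecord(
            krid="aikr:kr/0001",
            kr_type=KRType.DECLARATIVE,
            stratml_goal_id="Goal_f1a62bb5-9910-4052-946a-344c0e22272f",
            properties={"format": "OWL", "label": "Core Ontology of Services"},
        )
        print(record.krid, record.kr_type.value, record.stratml_goal_id)

Whether KR TYPE is best modeled as a flat two-value enumeration or as the root of a deeper taxonomy is exactly the question the thread asks Carl to sketch out.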
Received on Tuesday, 26 May 2020 14:35:10 UTC