- From: Paola Di Maio <paoladimaio10@gmail.com>
- Date: Mon, 27 Jul 2020 07:31:31 +0800
- To: carl mattocks <carlmattocks@gmail.com>
- Cc: W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CAMXe=SrzMbV2Mg+45AgY407Kz-B14ObmeQkkyVtHEUXQbnbkWA@mail.gmail.com>
Carl,

We are working on the Mission and Goals, and I explained, with slides and narration, why requirements should, in my view, not be in the Goals, and why the Mission is phrased wrongly, imho. Please consider this input at the next meeting.

The overall requirement for KR is that it should meet the criteria for representational adequacy (which itself can be complex, and I am not sure can be stated as a set of top-level goals for a strategy plan) and should enable correct system function as intended (which includes identification of bias and debiasing). We can try to figure out the requirements when the mission and goals etc. are sorted/agreed upon. Adequacy supports the overall correct system function. Maybe the goal is correct system function, and its associated objective is representational adequacy?

Please consider my input/suggestions for the next revision of the draft plan. I do have a couple of considerations about requirements for KR, but would like to achieve a tidier strategic plan first.

On Sun, Jul 26, 2020 at 9:55 PM carl mattocks <carlmattocks@gmail.com> wrote:

> Paola et al
>
> Please provide a list of criteria that could be used to identify which KR
> types can be used for a specific type of AI ...
> which helps fulfill the Goal: KR Requirements, and would help define
> related Objectives.
>
> thanks
>
> Carl
> It was a pleasure to clarify
>
> On Sat, Jul 25, 2020 at 8:56 PM Paola Di Maio <paoladimaio10@gmail.com> wrote:
>
>> Hi Carl
>>
>> I am trying to help shape the plan with the comments I sent, not being
>> able to easily attend meetings; consider these suggestions as input.
>>
>> I have sent in my slides, as a contribution to shaping the plan, what I
>> think should not be in the plan and why, and what I think should be in
>> the plan and why.
>>
>> I don't think I can do any more than that :-)
>>
>> It could be easier to understand what I say if the team would design an
>> AI system, even just a use-case AI system (I have done a few in my life,
>> and that's how I understood what KR is and how it is done). One thing
>> that is not clearly addressed in the plan, imho, is that everything
>> (including verification and testing) depends on what type of KR is
>> adopted. There is a bunch of stuff thrown into the current draft which
>> may or may not be applicable, depending on what kind of AI is being done.
>>
>> I understand that you may be trying to address the KR Requirements in
>> the plan; is this why there is a requirements entry?
>> I am not sure that's the way it works.
>> The requirements for KR come from the AI system design.
>>
>> KR is not fulfilling its own requirements per se; KR is fulfilling the
>> AI system requirements, which are decided/stated elsewhere, I think.
>>
>> These AI system requirements are fulfilled by a) choosing the
>> appropriate KR method/s, b) ensuring these are implemented correctly,
>> and c) validating and maintaining them throughout the life of the
>> system, which is what I suggest should be in the plan.
>>
>> If however you guys don't see it that way, I'll simply publish my stuff
>> as Plan B :-)
>>
>> On Sat, Jul 25, 2020 at 11:27 PM carl mattocks <carlmattocks@gmail.com> wrote:
>>
>>> If you could help shape it, we could add a GOAL 'Knowledge
>>> Representation Requirements', which might address options:
>>>
>>> https://www.sciencedirect.com/topics/social-sciences/knowledge-representation
>>>
>>> Carl
>>>
>>> It was a pleasure to clarify
>>>
>>> On Sat, Jul 25, 2020 at 10:49 AM Paola Di Maio <paoladimaio10@gmail.com> wrote:
>>>
>>>> Carl
>>>>
>>>> Please consider these as my comments/contribution to the draft you
>>>> have shared so far, and some suggestions for what, in my
>>>> understanding, constitutes an AI KR strategy.
>>>>
>>>> Although I am attending the meetings only occasionally, I appreciate
>>>> the opportunity to provide feedback/input on what is being done.
>>>>
>>>> We can also produce two or more strategies, if one fits the needs of
>>>> some better than another.
>>>>
>>>> cheers
>>>>
>>>> P
>>>>
>>>> On Sat, Jul 25, 2020 at 8:38 PM carl mattocks <carlmattocks@gmail.com> wrote:
>>>>
>>>>> agreed, this is a different plan.
>>>>> The plan these slides speak to has not been worked on.
>>>>> It was a pleasure to clarify
>>>>>
>>>>> On Fri, Jul 24, 2020 at 10:08 PM Paola Di Maio <paola.dimaio@gmail.com> wrote:
>>>>>
>>>>>> Good morning
>>>>>>
>>>>>> A few minutes of my best thinking first thing in the morning
>>>>>> (before the focus fades elsewhere). Please let me know if the
>>>>>> slides and narration links open correctly. I have not yet upgraded
>>>>>> Screencastify, so each snip is 5 mins long. Please let me know if
>>>>>> you have questions.
>>>>>>
>>>>>> Editable slides:
>>>>>>
>>>>>> https://docs.google.com/presentation/d/1ojB2VIuV6R1OBdVSimcP7QN6UH1V32LjkzIv421foas/edit?usp=sharing
>>>>>>
>>>>>> Three short snippets of the slides, with narration in a bit of a
>>>>>> metallic voice:
>>>>>>
>>>>>> https://drive.google.com/file/d/1vjc6yZUPeJZlpQa1koZJhbLxeJvsOp36/view
>>>>>>
>>>>>> https://drive.google.com/file/d/1Nv2wQnSDfX8xOu4Q1UEDZfJdsU0_0X5G/view
>>>>>>
>>>>>> https://drive.google.com/file/d/1O4anIPzc1Z0vJ_vC8VEmnbhcuIONgEmY/view
Received on Sunday, 26 July 2020 23:32:27 UTC