- From: Owen Ambur <owen.ambur@verizon.net>
- Date: Tue, 8 Nov 2022 02:39:09 +0000 (UTC)
- To: W3C AIKR CG <public-aikr@w3.org>, "paoladimaio10@googlemail.com" <paoladimaio10@googlemail.com>
- Cc: Naval Sarda <nsarda@epicomm.net>, "pradeep.jain@ictect.com" <pradeep.jain@ictect.com>, Gayanthika Udeshani <gayaudeshani@gmail.com>, Kranthi Kiran <kranthi@thoughtflow.io>
- Message-ID: <1386119352.549546.1667875149848@mail.yahoo.com>
Thanks for the Qs, Paola. My answer to your first question is yes: at least theoretically speaking, StratML can be used to describe technical systems as specialized types of "organizations". Practically speaking, however, whether it makes sense to do so is another matter. For example, my sense is that at some point the connection between human objectives and declarative programming will be made more explicit, in terms more readily comprehensible to human beings, but doing so is beyond my level of comprehension and capability right now.

It also seems to me that StratML can help make more comprehensible the relationships and distinctions between deontological ethics and consequentialism. Again, doing so is beyond my current level of expertise, but StratML Part 2 enables the documentation (representation) of personal values (ethics) as well as stakeholders and results (consequences in terms of performance indicators). So both intentions and results can be documented and shared more efficiently and effectively with stakeholders, thereby empowering them (through value-added intermediary services) to render their own judgments and act accordingly.

Regarding your second set of questions, yes, "what a piece of AI does and how" it does it could be documented in StratML Part 2 format. The how could be represented in the StratML Value Chain, comprised of Inputs, Input_Processing, Outputs, Output_Processing, and Outcomes. (A rough, illustrative sketch appears at the end of this message.) Again, however, whether it makes sense to do that, practically speaking, I cannot say at this point, nor do I have any examples. On the other hand, as revealed by this Google site-specific query, about 160 of the >5K files in the StratML collection reference AI.

As I have argued in a separate message, it seems to me that human beings working on AI applications that may affect others should be expected, if not required, to document their plans in an open, standard, machine-readable format, like StratML.

In the meantime, Naval, Pradeep, and I are conspiring to develop a more capable query service that takes advantage of the semantics and structure of the StratML schema (also sketched at the end of this message), as outlined in the two plans, one conceptual and the other technical, at https://aboutthem.info/.

There are different ways to "model" reality. I believe StratML is a good way to document intentions and results, and that the intentions and results of AI agents should be made explicit to human beings in ways they can readily comprehend. However, I am not necessarily arguing that StratML is the best way to model all levels of technical complexity at this point.

I hope these responses are helpful. As always, I look forward to learning whether the smart, capable people on this listserv can coalesce around and effectively carry out a plan with at least one goal and set of performance indicators. A half dozen of those that have been proposed in the past are available in StratML format at https://stratml.us/drybridge/index.htm#AIKRCG

Owen Ambur
https://www.linkedin.com/in/owenambur/

On Saturday, November 5, 2022 at 10:00:29 PM EDT, Paola Di Maio <paola.dimaio@gmail.com> wrote:

Dave often says he wants to get down to the technical level, but there is little point in doing so, in my experience, until the less technical issues are framed correctly (scope, goals, etc.). StratML aims to make explicit the scope, goals, etc. of organisations and their strategic plans. Does StratML apply to technical systems such as software and AI? Can we have a StratML scheme that explains what a piece of AI does and how, etc.? Would that be useful?
If it has been done, can we see examples?

In systems development (in particular, software system development), people start hacking code and demos because that is what they are interested in, but if more research and analysis of the problem space had been carried out first, maybe they would have come up with a different set of system goals and corresponding solutions. Technical issues are trivial; there are no real technical problems unsolved, but plenty of unresolved non-technical issues that end up creating technical problems that would not exist had these issues been tackled first.

There are many fun articles that explain the Pareto principle in relation to software. I always start my day with the Pareto principle (read it the way you want it), but in my technical life I try to figure out first the 80 percent of the system/research I am trying to do (analysis and design) and only afterwards worry about the 20 percent that is the coding/implementation/writing up itself. This was gold dust given to me while I was being schooled, and it was one of the most important lessons of my life.

Do not trust code that cannot be explained in plain language, or programmers who refuse to speak to non-programmers about what they do. Software should read like a book, give or take some technicalities.

On this list we are still trying to figure out what KR is, why we are talking about it, what we are trying to achieve, etc. That is for members to figure out (I am working on my own stuff, while doing a lot of reading and sharing what I learn). I understand of course that Dave may be interested in prioritizing what he likes to do the most, which is doing great demos, but from a research perspective I'd like to ask what the demo does, why, what problem it solves, what need it fulfils, and how it addresses the issues being tackled (in our case the burning KR in ML topic) before devoting my full attention and eyeballs to it.

PDM
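P.S. For illustration only, here is a rough sketch of how "what a piece of AI does and how" might be laid out along the Value Chain stages named above. The stage names come from the discussion; the nesting, the plan elements, and the sample text are my own assumptions for the sake of the example, not the actual StratML Part 2 schema.

```python
# Illustrative sketch only: the Value Chain stage names come from the message
# above, but the nesting and sample content are assumptions, not the actual
# StratML Part 2 schema.
import xml.etree.ElementTree as ET

plan = ET.Element("StrategicPlan")
ET.SubElement(plan, "Name").text = "Hypothetical AI Classifier"
ET.SubElement(plan, "Description").text = (
    "Documents what this piece of AI does and how, in plan form."
)

value_chain = ET.SubElement(plan, "ValueChain")
stages = {
    "Inputs": "Labeled records supplied by stakeholders",
    "Input_Processing": "Feature extraction and model training",
    "Outputs": "Risk scores attached to new records",
    "Output_Processing": "Human review of scores above a threshold",
    "Outcomes": "Fewer erroneous decisions affecting stakeholders",
}
for stage, text in stages.items():
    ET.SubElement(value_chain, stage).text = text

print(ET.tostring(plan, encoding="unicode"))
```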
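Along the same lines, a query service that takes advantage of the structure of such documents could filter on particular elements rather than treating plans as plain text. Again, a sketch under assumed file names and element names, not a description of the service Naval, Pradeep, and I are actually building:

```python
# Sketch of a structure-aware query over StratML-style files. The glob
# pattern and element names are placeholders/assumptions.
import glob
import xml.etree.ElementTree as ET

def stages_mentioning(term, pattern="plans/*.xml"):
    """Yield (file, stage tag, text) for Value Chain stages whose text mentions term."""
    for path in glob.glob(pattern):
        root = ET.parse(path).getroot()
        for stage in root.findall(".//ValueChain/*"):
            text = stage.text or ""
            if term.lower() in text.lower():
                yield path, stage.tag, text

for path, tag, text in stages_mentioning("AI"):
    print(f"{path}: <{tag}> {text}")
```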
Received on Tuesday, 8 November 2022 02:39:50 UTC