- From: Owen Ambur <Owen.Ambur@verizon.net>
- Date: Mon, 2 Mar 2020 15:34:01 -0500
- To: public-aikr@w3.org
- Cc: Chris Fox <chris@chriscfox.com>
- Message-ID: <4b6721c6-ecc6-b2dd-d16b-b65741443919@verizon.net>
Paola, I defer to Carl to announce the logistics for our next televideo
conference on Tuesday, March 10. However, I posted a link at
https://www.w3.org/community/aikr/wiki/Main_Page#CALLS. It points to the
shell draft plan that will provide the focal point for our discussion.
Anyone who'd like to participate in collaboratively editing and
commenting on it should address a request for an invitation to Chris
Fox to gain access to his StratNavApp <https://www.stratnavapp.com/>.
From my perspective, this article
<https://www.datainnovation.org/2020/03/initial-lessons-learned-from-piloting-the-eus-ai-ethics-assessment-list/>
-- entitled "Initial Lessons Learned From Piloting the EU’s AI Ethics
Assessment List" -- is a requirements statement for StratML. See these
assertions:
A better alternative to explainability is algorithmic
accountability—the principle that an algorithmic system should
employ a variety of controls to ensure the operator can verify
algorithms work in accordance with its intentions and identify and
rectify harmful outcomes.
Intentions should be documented in an open, standard, machine-readable
format like StratML Part 1, Strategic Plans (ISO 17469-1), and outcomes
should be reported in a format like StratML Part 2, Performance Plans
and Reports.
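As a rough sketch of what such documented intentions might look like, here is a minimal plan fragment using element names loosely modeled on StratML Part 1. The tags below are illustrative approximations, not a verified excerpt of the ISO 17469-1 schema:

```xml
<!-- Illustrative only: element names approximate StratML Part 1 -->
<StrategicPlan>
  <Name>Example AI System Governance Plan</Name>
  <Description>Documents the operator's intentions for an
    algorithmic system in machine-readable form.</Description>
  <Goal>
    <Name>Algorithmic Accountability</Name>
    <Description>Ensure the system's behavior can be verified
      against its stated intentions.</Description>
    <Objective>
      <Name>Harm Identification and Rectification</Name>
      <Description>Detect and correct harmful outcomes within a
        defined review cycle.</Description>
    </Objective>
  </Goal>
</StrategicPlan>
```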
If the goal of transparency is to increase trust by providing
sufficient information, this can better be achieved by presenting
users with a clear description of the data the algorithm uses and a
basic explanation of how it makes decisions.
Such "descriptions" should be rendered as value chains in an open,
standard, machine-readable format, like StratML Part 2. Value chains
comprise Inputs, Input Processing, Outputs, Output Processing, and
Outcomes. Outputs and Outcomes should be reported to stakeholders.
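For example, a single performance indicator could be tagged with its value-chain stage along these lines. Again, the element and attribute names here are illustrative sketches, not quoted from the StratML Part 2 schema:

```xml
<!-- Illustrative only: one of the five value-chain stages
     (Input, Input Processing, Output, Output Processing, Outcome) -->
<PerformanceIndicator ValueChainStage="Outcome">
  <Name>Harmful Decisions Rectified</Name>
  <MeasurementDimension>Percentage of flagged decisions
    corrected</MeasurementDimension>
  <MeasurementInstance>
    <TargetResult>100</TargetResult>
    <ActualResult>97</ActualResult>
  </MeasurementInstance>
</PerformanceIndicator>
```

Reporting Outputs and Outcomes in a structure like this would let stakeholders compare targets against actual results without parsing free-form prose.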
Due to the complexity and speed of AI agents, AI agencies will be needed
to help human beings keep track of them. Such agencies should leverage
the StratML standard and quarantine agents whose behavior and impacts
cannot be monitored, tracked, and controlled by stakeholders. (The same
is essentially true of the credibility of Web content in general, but we
may have a better chance of ignoring it without being adversely affected.)
The article prompted me to convert these two plans to StratML format:
DARPA XAI - https://stratml.us/carmel/iso/XAIwStyle.xml
IBM 360 - https://stratml.us/carmel/iso/IAIEwStyle.xml
Owen
On 3/2/2020 5:58 AM, Paola Di Maio wrote:
> Dear all,
> thanks a lot for contributing interesting work -
>
> have seen several emails relating to the calls, StratML, StratNavApp,
> and an audio file
> (which was not attached/linked in the email mentioning it)
>
> May I request that you great people doing great work enter the essence
> of what is going on,
> so that we can take a look and grasp at a glance what we should do next?
> I can of course pull up all the relevant emails from my inbox, but there
> is a chance I may miss something
> Please keep the wiki updated!!!
> Thanks a lot!!!
>
> PDM
Received on Monday, 2 March 2020 20:34:22 UTC