- From: Owen Ambur <Owen.Ambur@verizon.net>
- Date: Tue, 21 Apr 2020 10:53:54 -0400
- To: public-aikr@w3.org
- Message-ID: <998dcf01-138e-2634-8d3b-5e883e34e733@verizon.net>
FATML.org's about statement is now available in StratML format at
https://stratml.us/drybridge/index.htm#FATML
So too are their Principles for Accountable Algorithms and a Social
Impact Statement for Algorithms. Like corporate social responsibility
plans and reports, social impact statements for algorithms should be
published on the Web in an open, standard, machine-readable format like
StratML Part 2.
Anyone who is socially responsible enough to do that for their algorithm
can get started simply by clicking this link
<http://stratml.us/forms/walt5.pl?url=http://stratml.us/carmel/iso/SIS4A.xml>
and editing the document to include the relevant performance indicators
and stakeholder roles.
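As a rough illustration, such a statement rendered in StratML-style XML might look something like the fragment below. The element names follow the general shape of StratML Part 2, but this is an illustrative sketch, not a schema-valid document; the SIS4A.xml template linked above shows the actual structure.

```xml
<!-- Illustrative sketch only; not schema-valid StratML Part 2.
     See the SIS4A.xml template linked above for the real structure. -->
<StrategicPlan>
  <Name>Social Impact Statement for Algorithm X</Name>
  <Description>How Algorithm X affects the people it touches.</Description>
  <Goal>
    <Name>Accountability</Name>
    <Description>Ensure decisions made by the algorithm can be
      explained and contested.</Description>
    <Objective>
      <Name>Audit Coverage</Name>
      <PerformanceIndicator>
        <Name>Percentage of decisions covered by audit logs</Name>
      </PerformanceIndicator>
      <Stakeholder>
        <Name>Affected Individuals</Name>
      </Stakeholder>
    </Objective>
  </Goal>
</StrategicPlan>
```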
Might a more generic version of the plan be a good deliverable for the
AIKR CG?
See these StratML use cases:
Goal 4: Corporations
<https://stratml.us/carmel/iso/UC4SwStyle.xml#_1f82f648-083e-11e6-a8aa-42bd45c7ae33>
- Publish corporate social responsibility (CSR) plans and reports on
the Web in open, standard, machine-readable format.
Goal 30: Artificial Intelligence
<https://stratml.us/carmel/iso/UC4SwStyle.xml#_6f069874-bb92-11e7-9b76-f79f9342c8d9>
- Document on the Web in StratML format the performance plans of
proposed artificial intelligence agents.
Goal 33: Artificial Ignorance
<https://stratml.us/carmel/iso/UC4SwStyle.xml#_7f412cd0-81a7-11ea-8156-25622d83ea00>
- Help human beings overcome their personal biases that prevent them
from attending to evidence that is applicable to the realization of
their objectives.
Owen
On 4/20/2020 10:26 PM, Paola Di Maio wrote:
> Hello Frank
> Thanks for your reply and your interest
> (At the back of my mind I wonder if you are related to Nicola)
>
> I am working on FAT AI - yes, there is strong AI, weak AI, and FAT AI -
> ha ha
> In particular, I am developing a knowledge object for FAT KR: fair,
> accountable, transparent.
>
> https://docs.google.com/drawings/d/1ARnEiubC7bDkSsJzAKvapYISYGANz5D9oOTEvuxR-lE/edit?usp=sharing
> Please note this is an infographic, not a UML diagram or flowchart.
>
> I am preparing a lecture and writing up notes; I do not have a narrative
> yet, but in sum, we need a way of instilling the notion of adequacy
> into KR. At the moment this is done only notionally, and FAT is one such
> set of possible evaluation criteria for adequacy.
>
> (There are others too, of course.)
> I am interested in feedback on the diagram. Can you make sense of it?
> Can it be clarified/improved?
>
>
> I’ve personally spent years working with data-driven
> schema-less models that help eliminate such biases and open up
> a world of model representations that allow knowledge to form
> freely and adjust dynamically to data changes.
>
> Please do share your stuff; I'd like to include/reference it in this work.
> Cheers
>
> PDM
>
> On Tue, Apr 21, 2020 at 9:08 AM Frank Guerino
> <frank.guerino@if4it.com> wrote:
>
> Hi Paola,
>
> This is very interesting. Thank you for sharing it.
>
>     In addition to researching bias as a pathology resulting from poor
>     knowledge modelling, you may also want to consider the reverse
>     (i.e. poor modelling/models that result from biases). One such
>     bias arises from the notion that model structures must be
>     pre-designed and imprinted in database schemas in order to capture
>     model data, forcing data to be restructured/transformed to fit the
>     model’s design rather than having the model emerge from the
>     ever-changing data itself. We see this with enterprise modelling tools
> (e.g. Architecture Modeling Tools, Cause & Effect Models, CMDBs,
> etc.). I’ve personally spent years working with data-driven
> schema-less models that help eliminate such biases and open up a
> world of model representations that allow knowledge to form freely
> and adjust dynamically to data changes.
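Frank's schema-first vs. schema-less contrast can be sketched in a few lines of code. This is a minimal illustration only; the names and data below are invented for the example, not taken from any actual modelling tool.

```python
# Schema-first: attributes are fixed up front by a pre-designed schema,
# so any fact outside that design is forced out of the model.
FIXED_SCHEMA = {"name", "owner", "cost"}

def insert_fixed(record: dict) -> dict:
    # Anything outside the pre-designed schema is silently dropped.
    return {k: v for k, v in record.items() if k in FIXED_SCHEMA}

# Schema-less: the model is just a set of (entity, attribute, value) facts;
# structure emerges from whatever the data itself contains.
facts: list[tuple[str, str, str]] = []

def insert_schemaless(entity: str, record: dict) -> None:
    for attribute, value in record.items():
        facts.append((entity, attribute, str(value)))

record = {"name": "Billing Service", "owner": "Ops",
          "fairness_review": "pending"}

fixed = insert_fixed(record)        # "fairness_review" is lost
insert_schemaless("svc-1", record)  # all three facts are kept

print(sorted(fixed))                   # ['name', 'owner']
print(sorted(a for _, a, _ in facts))  # ['fairness_review', 'name', 'owner']
```

The schema-first store cannot represent the `fairness_review` attribute at all until its schema is redesigned, while the fact-based store absorbs it immediately, which is the bias Frank describes.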
>
> Another example is “standards” (which are like belly buttons
> because everyone has one). Often, standards establish
>     pre-conceived notions and cause severe narrow-mindedness, yielding
> the opposite of their original intent.
>
>     There are many such biases that cause bad modelling/models, and you
> may want to explore them as well.
>
> My Best,
>
>
> Frank
>
> --
>
>     Frank Guerino, Principal Managing Partner
>
>     The International Foundation for Information Technology (IF4IT)
>     http://www.if4it.com
>     1.908.294.5191 (M)
>
>     Guerino1_Skype (S)
>
>     From: Ontolog Forum <ontolog-forum@googlegroups.com> on behalf of
>     Paola Di Maio <paola.dimaio@gmail.com>
>     Reply-To: Ontolog Forum <ontolog-forum@googlegroups.com>
>     Date: Saturday, April 18, 2020 at 4:18 AM
>     To: Ontolog Forum <ontolog-forum@googlegroups.com>, W3C AIKR CG
>     <public-aikr@w3.org>
>     Subject: [ontolog-forum] Catalog of Biases
>
> This is a very good find for me
>
> https://catalogofbias.org/biases/
>
> and hopefully also for fellows on the lists
>
>     I am researching bias as a pathology resulting from poor knowledge
>     modelling; the remedy is
>
>     knowledge representation.
>
>     It happens to be structured as a taxonomy; what fun!
>
> PDM
>
> --
> All contributions to this forum are covered by an open-source license.
> For information about the wiki, the license, and how to subscribe or
> unsubscribe to the forum, see http://ontologforum.org/info/
> ---
> You received this message because you are subscribed to the Google
> Groups "ontolog-forum" group.
> To unsubscribe from this group and stop receiving emails from it,
> send an email to ontolog-forum+unsubscribe@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/ontolog-forum/CAMXe%3DSo%2B%3D1X3A4VGN6Ecv78MD604vWRU7600oimG3jDr0fsLtw%40mail.gmail.com
>
Received on Tuesday, 21 April 2020 14:54:13 UTC