FATML / StratML templates

Owen,

Thank you for this template. Now let me take a moment to reflect.

Is this StratML templating facility new, or has it always been there? I
remember asking some time ago and I don't recall seeing it before, but I
am overloaded and my memory is highly compressed (to the point of
sounding demented at times?).

When the template is filled out, where is it saved? (I think I asked this
a couple of years ago but don't remember the answer; ah, the bliss of
dementia.)

Is this the usual standard StratML template, with the usual StratML
elements being used to map out some FAT construct? I am trying to figure
out how much of this form is StratML and how much is FATML; I cannot
distinguish them as such, but it is very early morning here.

This is hot stuff. We should think of a way of making the best use of it
and drumming it up. Can you please suggest some ways in which declaring
algorithmic FAT in a markup language would be beneficial, and explain how
StratML supports FATML? Some quantifiable/verifiable benefits?
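
To make the question concrete, here is a minimal sketch of the kind of
document I have in mind. The element names are only my assumption of
generic StratML Part 2 structure, and "Algorithm X" is hypothetical;
note that the markup is plain StratML and only the text content is
FAT-specific:

  <StrategicPlan>
    <Name>Social Impact Statement for Algorithm X</Name>
    <Goal>
      <Name>Accountability</Name>
      <Description>External parties can audit the algorithm's
        decisions.</Description>
      <Objective>
        <Name>Audit trail</Name>
        <PerformanceIndicator>
          <MeasurementDimension>Share of decisions logged for
            audit</MeasurementDimension>
          <UnitOfMeasurement>Percent</UnitOfMeasurement>
          <TargetResult>100</TargetResult>
        </PerformanceIndicator>
      </Objective>
      <Stakeholder>
        <Name>Affected data subjects</Name>
      </Stakeholder>
    </Goal>
  </StrategicPlan>

If that reading is right, it would answer my own question above: the
elements are all StratML, and FATML only supplies the vocabulary for the
goal names and descriptions.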

(I know the obvious benefits of machine-readable markup, but we could
tease out some specific benefits, then demonstrate them with some cases
and examples and document this work as a must-do for everyone.)

I suppose that to evaluate, demonstrate, and leverage the benefits over a
set of examples we need the parser?
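
As a starting point, here is a rough sketch of such a check in Python,
standard library only; it ignores XML namespaces for brevity, and the
element names are my assumption, matching the sketch above rather than
any official StratML tooling:

  # Rough sketch: extract the measurable FAT commitments from a
  # StratML-style plan, to make the "machine-verifiable" benefit concrete.
  # Real StratML files declare an XML namespace; a fuller version would
  # handle that and validate against the schema.
  import xml.etree.ElementTree as ET

  def list_indicators(path):
      """Print each Goal's name with its PerformanceIndicator targets."""
      root = ET.parse(path).getroot()
      for goal in root.iter("Goal"):
          name = goal.findtext("Name", default="(unnamed goal)")
          for pi in goal.iter("PerformanceIndicator"):
              dim = pi.findtext("MeasurementDimension", default="?")
              target = pi.findtext("TargetResult", default="?")
              print(f"{name}: {dim} -> target {target}")

  # e.g. list_indicators("SIS4A.xml")

A report like that, generated straight from a published plan, is itself
a demonstrable benefit: the FAT claims become targets a machine can find
and later compare against actual results.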

We can then write a short release note from this group, glorify it a bit,
and maybe do one or two papers for workshops. This is definitely a
deliverable, and I am sure it is a potentially valuable contribution to
the FAT movement.

PDM

On Thu, Apr 23, 2020 at 3:21 AM Owen Ambur <Owen.Ambur@verizon.net> wrote:

> The template is now available at
> https://stratml.us/drybridge/index.htm#TSISA with a link that opens it
> for editing in an XForm.
>
> Owen
> On 4/21/2020 7:58 PM, Paola Di Maio wrote:
>
> Great stuff, Owen, thanks a lot.
> I am working on integrating FAT into knowledge representation.
> The website has a great list of resources to work with;
> let's work on this too,
> P
>
> On Tue, Apr 21, 2020 at 10:54 PM Owen Ambur <Owen.Ambur@verizon.net>
> wrote:
>
>> FATML.org's about statement is now available in StratML format at
>> https://stratml.us/drybridge/index.htm#FATML
>>
>> So too are their Principles for Accountable Algorithms and a Social
>> Impact Statement for Algorithms.  Like corporate social responsibility
>> plans and reports, social impact statements for algorithms should be
>> published on the Web in an open, standard, machine-readable format like
>> StratML Part 2.
>>
>> Anyone who is socially responsible enough to do that for their algorithm
>> could get started as easily as by clicking on this link
>> <http://stratml.us/forms/walt5.pl?url=http://stratml.us/carmel/iso/SIS4A.xml>
>> and editing the document to include the relevant performance indicators and
>> stakeholder roles.
>>
>> Might a more generic version of the plan be a good deliverable for the
>> AIKR CG?
>>
>> See these StratML use cases:
>>
>> Goal 4: Corporations
>> <https://stratml.us/carmel/iso/UC4SwStyle.xml#_1f82f648-083e-11e6-a8aa-42bd45c7ae33>
>> - Publish corporate social responsibility (CSR) plans and reports on the
>> Web in open, standard, machine-readable format.
>>
>> Goal 30: Artificial Intelligence
>> <https://stratml.us/carmel/iso/UC4SwStyle.xml#_6f069874-bb92-11e7-9b76-f79f9342c8d9>
>> - Document on the Web in StratML format the performance plans of proposed
>> artificial intelligence agents.
>>
>> Goal 33: Artificial Ignorance
>> <https://stratml.us/carmel/iso/UC4SwStyle.xml#_7f412cd0-81a7-11ea-8156-25622d83ea00>
>> - Help human beings overcome their personal biases that prevent them from
>> attending to evidence that is applicable to the realization of their
>> objectives.
>>
>> Owen
>> On 4/20/2020 10:26 PM, Paola Di Maio wrote:
>>
>> Hello Frank
>> Thanks for the reply and for your interest.
>> (At the back of my mind I wonder if you are related to Nicola)
>>
>> I am working on FAT AI - yes, there is strong AI, weak AI, and FAT AI,
>> ha ha.
>> In particular, I am developing a knowledge object for FAT KR: fair,
>> accountable, transparent.
>>
>>
>> https://docs.google.com/drawings/d/1ARnEiubC7bDkSsJzAKvapYISYGANz5D9oOTEvuxR-lE/edit?usp=sharing
>> Please note this is an infographic, not a UML diagram or a flowchart.
>>
>> I am preparing a lecture and writing up notes; I do not have a narrative
>> yet, but in sum, we need a way of instilling the notion of adequacy into
>> KR. At the moment it is done a bit notionally, and FAT is one possible
>> set of evaluation criteria for adequacy.
>>
>> (Also others of course)
>> I am interested in feedback on the diagram: can you make sense of it?
>> Can it be clarified/improved?
>>
>>
>>>  I’ve personally spent years working with data-driven schema-less models
>>> that help eliminate such biases and open up a world of model
>>> representations that allow knowledge to form freely and adjust dynamically
>>> to data changes.
>>
>>
>> Please do share your material; I'd like to include/reference it in this
>> work. Cheers,
>>
>> PDM
>>
>> On Tue, Apr 21, 2020 at 9:08 AM Frank Guerino <frank.guerino@if4it.com>
>> wrote:
>>
>>> Hi Paola,
>>>
>>>
>>>
>>> This is very interesting.  Thank you for sharing it.
>>>
>>>
>>>
>>> In addition to researching bias as a pathology resulting from poor
>>> knowledge modeling, you may want to also consider the reverse (i.e. poor
>>> modelling/models that result from biases).  One such bias arises from the
>>> notion that model structures must be pre-designed and imprinted in database
>>> schemas in order to capture model data, forcing data to be
>>> restructured/transformed to fit the model’s design rather than having the
>>> model result from the ever changing data, itself.  We see this with
>>> enterprise modeling tools (e.g. Architecture Modeling Tools, Cause & Effect
>>> Models, CMDBs, etc.).  I’ve personally spent years working with data-driven
>>> schema-less models that help eliminate such biases and open up a world of
>>> model representations that allow knowledge to form freely and adjust
>>> dynamically to data changes.
>>>
>>>
>>>
>>> Another example is “standards” (which are like belly buttons because
>>> everyone has one).  Often, standards establish pre-conceived notions and
>>> cause severe narrow-mindedness, yielding the opposite of their original
>>> intent.
>>>
>>>
>>>
>>> There are many such biases that cause bad modelling/models and you may
>>> want to explore them as well.
>>>
>>>
>>>
>>> My Best,
>>>
>>>
>>> Frank
>>>
>>> --
>>>
>>> *Frank Guerino, Principal Managing Partner*
>>>
>>>
>>> *The International Foundation for Information Technology (IF4IT) *
>>> *http://www.if4it.com <http://www.if4it.com> 1.908.294.5191 (M)*
>>>
>>> *Guerino1_Skype (S)*
>>>
>>>
>>>
>>>
>>>
>>> *From: *Ontolog Forum <ontolog-forum@googlegroups.com> on behalf of
>>> Paola Di Maio <paola.dimaio@gmail.com>
>>> *Reply-To: *Ontolog Forum <ontolog-forum@googlegroups.com>
>>> *Date: *Saturday, April 18, 2020 at 4:18 AM
>>> *To: *Ontolog Forum <ontolog-forum@googlegroups.com>, W3C AIKR CG <
>>> public-aikr@w3.org>
>>> *Subject: *[ontolog-forum] Catalog of Biases
>>>
>>>
>>>
>>> This is a very good find for me
>>>
>>> https://catalogofbias.org/biases/
>>>
>>>  and hopefully also for fellows on the lists
>>>
>>>
>>>
>>> I am researching bias as a pathology resulting from poor knowledge
>>> modelling; the remedy is knowledge representation.
>>>
>>>
>>>
>>> It happens to be structured as a taxonomy; what fun!
>>>
>>>
>>>
>>> PDM
>>>
>>>
>>>
>>

Received on Wednesday, 22 April 2020 23:25:18 UTC