Re: Bias & History, Near & Far

Yes, of course, Paola, malicious users can declare falsehoods in StratML 
as well as any other format.  However, over time, as actual results are 
documented and communities of results (CoRs) are built, AI can help to 
expand Dunbar's Number <https://en.wikipedia.org/wiki/Dunbar%27s_number> 
in the cyberage.

Since you reference Chris and his StratNavApp, I'm looping him into this 
exchange.  He and I and several others are angling to use his app to 
flesh out a plan for the development of a StratML query service, for 
potential hosting at https://aboutthem.info/
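
Purely by way of illustration (not the service's actual design, which 
is still to be fleshed out), here is a minimal sketch of the kind of 
word-find such a query service might perform over a StratML document. 
It assumes only that StratML is plain XML reachable at a URL; the 
element handling is generic, and the DLSwStyle.xml directory URL cited 
later in this thread is used as a stand-in example.

    import urllib.request
    import xml.etree.ElementTree as ET

    def search_stratml(url, term):
        """Fetch a StratML (XML) document and return (tag, text) pairs
        whose element text contains the search term, case-insensitively."""
        with urllib.request.urlopen(url) as resp:
            tree = ET.parse(resp)
        term = term.lower()
        hits = []
        for elem in tree.iter():
            tag = elem.tag.split("}")[-1]      # drop any XML-namespace prefix
            text = (elem.text or "").strip()
            if text and term in text.lower():
                hits.append((tag, text))
        return hits

    # Hypothetical usage against the directory rendition cited later in
    # this thread:
    for tag, text in search_stratml(
            "https://stratml.us/carmel/iso/DLSwStyle.xml", "bias"):
        print(tag, "->", text[:80])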


Owen


On 1/7/2022 8:22 PM, Paola Di Maio wrote:
> Thank you Owen and Brandt
> of course disclosure is key, and #stratml is useful in this respect, etc.,
> but not necessarily sufficient in my experience
> Bias is not black and white; it's very complex (I have been 
> mapping/tracking very closely)
> Just a few quick points
>
> "AI applications should be able to do a pretty good job of inferring 
> values based upon what people and organizations say and do."
> But they do not do a pretty good job at all; this is the problem, and 
> it takes an awful lot of KR and resources to figure it out
>
>     "In the meantime, services like these can help:
>     https://personalvalu.es/ |
>     https://moralfoundations.org/questionnaires/ |
>     https://www.idrlabs.com/morality/6/test.php |
>     https://principlesyou.com/ | https://jamesclear.com/core-values/"

>
> Useful and fun, thanks, but... do you realise how these services 
> reflect the bias of their designers?
> The bias detection services are in themselves biased; this is one of 
> the problems
>
> "As individuals and organizations begin to publish their performance 
> plans and reports in open, standard, machine-readable format, it will 
> be pretty easy for AI agents to determine biases, priorities, and 
> effectiveness (which is why many people and orgs will resist doing so)."
>
> Not at all easy, definitely NON TRIVIAL. In fact, my main critique of 
> StratML is that it can be used to declare facts which are not true. If 
> you remember, my first suggestion for StratNavApp when Chris first 
> demoed it was that it should have an additional module to follow up 
> and verify the assertions.
>
> Declaring just the contrary of what they say they do can also be true 
> of apps :-)
>
>
> PDM
>
>     Owen
>
>
>     On 1/7/2022 7:10 PM, Paola Di Maio wrote:
>>     Dear Brandt and all
>>     thanks for reply-
>>     yes, bias is inevitable; however, the consequences of some bias
>>     can be more harmful/lethal than others, and in pursuit of
>>     fairness, bias should be mitigated/minimized.
>>     In particular, algorithmic bias can amplify and reinforce harmful
>>     bias; this is the crux of bias in AI/ML issues.
>>     Question: how can KR mitigate harmful bias?
>>
>>
>>
>>
>>     On Sat, Jan 8, 2022 at 8:03 AM <brandt@redd.org> wrote:
>>
>>         Happy New Year to all of you!
>>
>>         I’m not familiar with any learning standard that specifically
>>         engages with the bias issue. But with the subject at such
>>         prominence in the public sphere, that will probably change.
>>
>>         At MatchMaker Education Labs, the startup I’m working on, we
>>         have to address bias issues because we strive to match
>>         competency standards across frameworks – and every framework
>>         has a bias. In our debates, we have concluded that bias is
>>         inevitable. To be sure, extreme bias is to be avoided but
>>         some perspective will always be present. For that reason, we
>>         advocate for acknowledging bias and making it explicit.
>>
>>         When I read an author’s bio, for example, I naturally look
>>         for keywords and background that will indicate that author’s
>>         bias. I expect most of you probably do the same. Perhaps when
>>         writing our bios, we could be more explicit about such things.
>>
>>         Thank you all!
>>
>>         Brandt (Moderate-Right but Classically Liberal with a dash of
>>         Libertarianism 😊)
>>
>>         *From:* Owen Ambur <Owen.Ambur@verizon.net>
>>         *Sent:* Thursday, 6 January, 2022 7:36 PM
>>         *To:* paoladimaio10@googlemail.com
>>         *Cc:* Brandt Redd <brandt@redd.org>; Scott Yates
>>         <scott@certifiedcontentcoalition.org>; Carl Mattocks
>>         <carlmattocks@gmail.com>; Michael Sessa
>>         <michael.sessa@pesc.org>; Larry Fruth <lfruth@a4l.org>
>>         *Subject:* Bias & History, Near & Far
>>
>>         Happy new year, Paola.  We were with family over the holidays
>>         and just returned home this week, whereupon I found that my
>>         E-mail client is still routing to my junk folder messages
>>         like yours from the W3C listservs.
>>
>>         Having scanned the article you cite, I've taken particular
>>         note of the concluding sentence: "History classes must begin
>>         to use strategies that identify and challenge biases found in
>>         textbooks, and develop ethical frameworks based on justice
>>         and equality that students and teachers can use to interpret
>>         and evaluate American history."
>>
>>         As you may know, I'm a bit impatient with entreaties
>>         referencing fuzzy concepts like "strategies" and "frameworks"
>>         (as well as "democracy") that fail to propose model
>>         performance plans, upon which interested stakeholders might
>>         take action.  If you are aware of any actual plan(s) to do as
>>         Romanowski suggests, I may wish to render it(them) in StratML
>>         format.
>>
>>         In the meantime, I'm copying Brandt in the event there may be
>>         any education standards relevant to this issue and I'm
>>         copying Scott since it is unlikely that historical reports
>>         can be credible if contemporaneous records are not.  While
>>         the victors may (or may not) write history
>>         <https://historyofyesterday.com/is-history-written-by-the-victors-here-are-5-examples-of-losers-writing-history-815b4f28e37c>,
>>         they most certainly do not have a monopoly on the truth, the
>>         whole truth, and nothing but the truth.
>>
>>         The about statements for the initiatives the CredWeb CG plans
>>         to evaluate are available in StratML format at
>>         https://stratml.us/drybridge/index.htm#CWCG

>>
>>         I wonder if, for example, Overtone.ai's logic might be
>>         applied to historical texts. They say
>>         <https://stratml.us/carmel/iso/OVRTNwStyle.xml#values_>,
>>         "Ultimately, this is a journey that goes way, way beyond text
>>         based news content. This is about the way in which human
>>         beings communicate – about any topic, at any length, using
>>         any medium, and with anybody."
>>
>>         The education standards identified by the Data Standards
>>         United work group that Brandt chairs are documented in StratML
>>         format at https://stratml.us/drybridge/index.htm#DSU2.  Based
>>         upon a word-find search of the StratML rendition of the
>>         directory <https://stratml.us/carmel/iso/DLSwStyle.xml>, it
>>         appears that none of them addresses the issues of "history"
>>         or "bias" or "knowledge" per se.  To me, that seems to be an
>>         opportunity rather than a problem.
>>
>>         All the best to you.  Looking forward to learning what we might
>>         be able to accomplish together this year.
>>
>>         Owen
>>
>>         On 12/25/2021 6:24 AM, Paola Di Maio wrote:
>>
>>             Hello AI KR CG folks, Ontologers and SW people from all
>>             walks of life
>>
>>             I have been thinking of some meaningful wishes to send
>>             out in relation to AI KR in the context of the Winter
>>             festivities.  The closest relevant topic that comes to mind is
>>
>>             Knowledge Misrepresentation in History
>>
>>             https://www.socialstudies.org/sites/default/files/publications/se/6003/600310.html

>>
>>             and
>>
>>
>>               Bias in Historical Description, Interpretation, and
>>               Explanation
>>
>>             Debates among historians show that they expect
>>             descriptions of past people and events, interpretations
>>             of historical subjects, and genetic explanations of
>>             historical changes to be fair and not misleading
>>
>>             https://www.semanticscholar.org/paper/Bias-in-Historical-Description%2C-Interpretation%2C-and-Mccullagh/5e9ef86edd2c7b955606ba45fdf981feef713b14

>>
>>
>>             When designing intelligent systems, we use knowledge from
>>             various repositories and databases, the quality and
>>             validity of which is not always questioned, especially in
>>             the case of long-term historical perspectives which form
>>             the basis for many widely held views.
>>
>>             Today, as we celebrate the important and sometimes
>>             debated (problematic, even, as described by some!!!)
>>             historical events surrounding the birth of JC, we
>>             should remember misrepresentation in history, and how
>>             misrepresentation is deliberately designed to manipulate
>>             history.
>>
>>             The articles above are mere pointers to the topic, neither
>>             endorsed nor exhaustive, and intended as mere reading
>>             recommendations.
>>
>>             Let bias and misrepresentation not spoil the festivities,
>>             but let's remain aware that history does not always
>>             warrant celebration, and let's remind ourselves what there
>>             really is to celebrate, hoping that everyone gets at
>>             least some.
>>
>>             Happy and meaningful winter holidays!!
>>
>>             In wisdom
>>
>>             Paola DM
>>

Received on Saturday, 8 January 2022 01:48:21 UTC