Re: [i4j Forum] Re: G7 leaders call for ‘guardrails’ on development of AI

Hi Paul - back in 2019 and 2020, Ray Wang and I published the following
with MIT Sloan Management Review re: 5 steps to People-Centered AI:

https://mitsloan.mit.edu/ideas-made-to-matter/5-steps-to-people-centered-artificial-intelligence

*1. Classify what you're trying to accomplish with AI*

Most organizations are pursuing initiatives to do the following:

   - Automate tasks with machines so humans can focus on strategic
   initiatives.
   - Augment — apply intelligence and algorithms to build on people’s
   skill sets.
   - Discover — find patterns that wouldn’t be detected otherwise.
   - Aid in risk mitigation and compliance.

*2. Embrace three guiding principles*

*Transparency.* Whenever possible, make the high-level implementation
details of an AI project available to all involved. This will help people
understand what artificial intelligence is, how it works, and what data
sets are involved.

*Explainability.* Ensure employees and external stakeholders understand how
any AI system arrives at its contextual decisions — specifically, what
method was used to tune the algorithms and how decision-makers will
leverage any conclusions.

*Reversibility.* Organizations must also be able to reverse what deep
learning knows: The ability to unlearn certain knowledge or data helps
protect against unwanted biases in data sets. Reversibility is something
that must be designed into the conception of an AI effort and often will
require cross-functional expertise and support, the experts said.
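The reversibility idea above can be made concrete with a minimal sketch. The simplest form of "unlearning" is to drop the flagged records and rebuild the model from what remains; all names and the toy "model" here are illustrative, not from the article:

```python
# Hypothetical sketch of "reversibility" as exclude-and-retrain:
# when data is found to be biased or bad, remove it and rebuild
# the model from the remaining records. All names are illustrative.

def train_mean_model(records):
    """Toy 'model': the mean of a numeric feature."""
    values = [r["value"] for r in records]
    return sum(values) / len(values)

def unlearn(records, flagged_ids):
    """Rebuild the model without the flagged records."""
    kept = [r for r in records if r["id"] not in flagged_ids]
    return train_mean_model(kept), kept

records = [
    {"id": 1, "value": 10.0},
    {"id": 2, "value": 90.0},  # later found to be biased/unreliable
    {"id": 3, "value": 20.0},
]

model = train_mean_model(records)       # trained on all three records
model, records = unlearn(records, {2})  # retrained without record 2
```

For large deep-learning systems full retraining is expensive, which is why the article stresses that reversibility must be designed in from the start rather than bolted on later.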

*3. Establish data advocates*

When it comes to data, the saying “garbage in, garbage out” holds. Some
companies are installing chief data officers
<https://mitsloan.mit.edu/ideas-made-to-matter/make-room-executive-suite-here-comes-cdo-2-0>
to oversee data practices, but Bray and Wang said that’s not enough.

The pair suggested identifying stakeholders across the entire organization
who understand the quality issues and data risks and who will work from a
people-centered code of ethics. These stakeholders are responsible for
ensuring data sets are appropriate and for catching any errors or flaws in
data sets or AI outputs early.

“It’s got to be a cavalry — it can’t be relegated to just a few people in
the organization,” Bray said. One approach the experts suggested is to
appoint an ombuds function that brings together stakeholders from different
business units as well as outside constituents.

*4. Practice “mindful monitoring”*

Creating a process for testing data sets for bias can help reduce risk.
Bray and Wang suggested identifying three pools of data sets: trusted data
used to train the AI implementation; a queued data pool of potentially
worthwhile data; and problematic or unreliable data. And data should be
regularly assessed — for example, whether previously approved trusted data
is still relevant or unreliable, or if queued data has a newfound role in
improving the existing pool of trusted data for specific actions.
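The three-pool triage above can be sketched in a few lines. The pool names come from the article; the reliability and readiness checks, and the sample datasets, are made up for illustration:

```python
# Hypothetical sketch of the three data pools described above:
# trusted (used for training), queued (candidate data), and
# problematic (unreliable). The checks below are illustrative.

def triage(datasets, is_reliable, is_ready):
    """Sort datasets into the three pools using two checks."""
    pools = {"trusted": [], "queued": [], "problematic": []}
    for ds in datasets:
        if not is_reliable(ds):
            pools["problematic"].append(ds)  # unreliable: quarantine
        elif is_ready(ds):
            pools["trusted"].append(ds)      # vetted: usable for training
        else:
            pools["queued"].append(ds)       # promising, not yet vetted
    return pools

# Illustrative datasets with made-up quality scores and review flags.
datasets = [
    {"name": "sales_2022", "quality": 0.95, "reviewed": True},
    {"name": "web_scrape", "quality": 0.40, "reviewed": False},
    {"name": "survey_q1",  "quality": 0.85, "reviewed": False},
]

pools = triage(
    datasets,
    is_reliable=lambda d: d["quality"] >= 0.6,
    is_ready=lambda d: d["reviewed"],
)
```

Re-running the triage periodically over all three pools captures the regular reassessment the article calls for: trusted data can be demoted when it goes stale, and queued data promoted once vetted.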

*5. Ground your expectations*

Managing expectations of internal and external stakeholders is crucial to
long-term success. To gain consensus and keep focus on a people-oriented AI
agenda, organizations should ask and answer such questions as: What is our
obligation to society? What are the acknowledged unknowns? What are
responsible actions or proactive things we can accomplish with AI
implementations, and what are the proper safeguards?

In the end, it makes sense to approach AI as an experimental learning
activity, with ups, downs, and delays. “There will be periods of learning,
periods of diminished returns, and [times when] the exponential gain
actually benefits the organization,” Bray said. “You need to be grounded
and say, ‘This is how we’ve chosen to position ourselves.’ It will serve as
your North Star as you move towards the final goal.”

On Fri, May 19, 2023 at 2:26 PM Paul Werbos <pwerbos@gmail.com> wrote:

> Thanks, Timothy, for updating our awareness and asking us to think about
> the implications:
>
> On Fri, May 19, 2023 at 9:39 AM Timothy Holborn <timothy.holborn@gmail.com>
> wrote:
>
>> I was alerted to: https://twitter.com/FT/status/1659481447428751360
>>
>> “We reaffirm that AI policies and regulations should be *human centric*
>> and based on democratic values, including protection of human rights
>> and fundamental freedoms and the protection of privacy and personal data. We
>> also reassert that AI policies and regulations should be risk-based and
>> forward-looking to preserve an open and enabling environment for AI
>> development and deployment that maximises the benefits of the technology
>> for people and the planet while mitigating its risks,” the ministers’
>> communique stated.
>>
>> Source is from:
>>
>> https://g7digital-tech-2023.go.jp/topics/pdf/pdf_20230430/ministerial_declaration_dtmm.pdf
>>
>> FWIW: personally, I think of many of these requirements as 'safety
>> protocols', but am open and interested to hear the views of others...
>>
>
> My views: I see an analogy to great pronouncements and even goals on
> climate change a few years ago,
> WITHOUT the kind of groundwork needed to get the great goals implemented.
> Useful implementation is MORE URGENT here,
> because the worst case pathways to extinction run even
> faster with internet/AGI/IOT than with climate. It is far more difficult,
> because the physical details are harder for people to understand. (For
> example, H2S in atmosphere is a lot easier to visualize than QAGI.)
>
> The design requirements are simply not part of this open discussion. I hope
> Jerry's effort can help close this life or death gap.
>
>
>>
>>
>> Cheers,
>>
>> Timothy Holborn
>> www.humancentricai.org
>>

Received on Saturday, 20 May 2023 20:55:05 UTC