Re: [i4j Forum] Re: G7 leaders call for ‘guardrails’ on development of AI

I think consumers are owed a systematic check of whether the machine is
reliable.

Most people don't know whether medication XYZ with additional ingredients
ABC is good or bad for them, or whether it is effective or a placebo. Yet
we have specialists who run those checks, so a seal of approval (or a
market ban) can help consumers have confidence in the product. Why not do
the same for AI?
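
To sketch what such a systematic check could look like in practice (purely
illustrative, not any real certification scheme), here is a minimal Python
gate that grants a "seal of approval" only when every reported metric clears
its threshold:

# Illustrative pre-release approval gate. The metric names and thresholds
# are hypothetical placeholders, not a real regulatory standard.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CheckResult:
    metric: str
    passed: bool
    detail: str


def run_approval_checks(reported: Dict[str, float],
                        required: Dict[str, float]) -> List[CheckResult]:
    """Compare each reported metric against its required minimum."""
    results = []
    for metric, minimum in required.items():
        observed = reported.get(metric)
        passed = observed is not None and observed >= minimum
        results.append(CheckResult(
            metric, passed, f"observed={observed}, required>={minimum}"))
    return results


if __name__ == "__main__":
    reported = {"held_out_accuracy": 0.93, "robustness_score": 0.81}
    required = {"held_out_accuracy": 0.90, "robustness_score": 0.85}
    results = run_approval_checks(reported, required)
    for r in results:
        print("PASS" if r.passed else "FAIL", r.metric, "-", r.detail)
    approved = all(r.passed for r in results)
    print("Seal of approval granted" if approved
          else "Release blocked pending review")

The point is that the checks, like a pharmaceutical trial, are defined and
published before the product ships, not improvised afterwards.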


On Fri, May 19, 2023 at 5:07 PM David Michaelis <michaelisdavid@yahoo.com>
wrote:

> “...things you don’t want to teach the machine...”
> Well, we are already in the next stage: the machine wants to teach you! It
> sometimes produces amazingly fast solutions that are unexplained but
> useful. Do you reject them because you don’t understand how it arrived at
> them?
>
>
>
> On Saturday, May 20, 2023, 7:01 am, David Bray, PhD <
> david.a.bray@gmail.com> wrote:
>
> Even with "black boxes" one can still do transparency on:
>
> * the data collection procedures (how do you collect the data? how do you
> obtain consent?)
> * the data curation procedures (how do you correct for errors or things
> you don't want to teach the machine?)
> * the review of the AI outputs (how do you assess whether what the AI is
> outputting is socially acceptable? correct and accurate, if that is a
> requirement? etc.)
> * the review of the AI impacts on people (how do you review to confirm the
> AI isn't causing unintentional harm?)
> * the review of the AI's biases (all machines will have biases, and even
> correcting for socially unacceptable biases will introduce other biases,
> how do you review and make changes as appropriate?)
>
> These answers could be posted publicly to show what the organization does
> to address each of these important areas.
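
To make that concrete: a minimal sketch of a machine-readable disclosure an
organization could post publicly, one entry per review area above (the field
names and policies are my own illustrations, not David's):

# Illustrative public "AI transparency disclosure"; every value below is a
# hypothetical example of what an organization might publish.
import json

disclosure = {
    "data_collection": {
        "sources": ["user submissions (opt-in)", "licensed datasets"],
        "consent_mechanism": "explicit opt-in at account creation",
    },
    "data_curation": {
        "error_correction": "quarterly audits with documented fixes",
        "exclusions": "content failing the published labeling policy",
    },
    "output_review": {
        "acceptability_process": "sampled human review against a rubric",
        "accuracy_reporting": "published per release",
    },
    "impact_review": {
        "harm_monitoring": "incident channel plus periodic user studies",
    },
    "bias_review": {
        "known_biases": "documented in the model card",
        "revision_cadence": "reviewed every release",
    },
}

print(json.dumps(disclosure, indent=2))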
>
> Hope this helps,
>
> -d.
>
>
> On Fri, May 19, 2023 at 4:55 PM David Michaelis <michaelisdavid@yahoo.com>
> wrote:
>
> Hi David
> Interesting challenges in your principles.
> How can one ask for transparency when the black box itself is not
> transparent?
> At this stage there are too many unknowns in this Golem we have built.
>
>
>
> On Saturday, May 20, 2023, 6:26 am, David Bray, PhD <
> david.a.bray@gmail.com> wrote:
>
> Hi Paul - back in 2019 and 2020, Ray Wang and I published the following
> with MIT Sloan Management Review re: 5 steps to People-Centered AI:
>
>
> https://mitsloan.mit.edu/ideas-made-to-matter/5-steps-to-people-centered-artificial-intelligence
>
> *1. Classify what you're trying to accomplish with AI*
>
> Most organizations are pursuing AI initiatives to do the following:
>
>    - Automate tasks with machines so humans can focus on strategic
>    initiatives.
>    - Augment people’s skill sets by applying intelligence and algorithms.
>    - Discover patterns that wouldn’t be detected otherwise.
>    - Aid in risk mitigation and compliance.
>
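
As one concrete illustration of this classification step, a project
portfolio could be tagged against the four goals above (the project names
are hypothetical):

# Illustrative tagging of AI projects by primary goal.
from enum import Enum


class AIGoal(Enum):
    AUTOMATE = "automate routine tasks"
    AUGMENT = "augment human skill sets"
    DISCOVER = "discover hidden patterns"
    MITIGATE = "aid risk mitigation and compliance"


portfolio = {
    "invoice-routing": AIGoal.AUTOMATE,
    "sales-assistant": AIGoal.AUGMENT,
    "fraud-patterns": AIGoal.DISCOVER,
    "audit-screening": AIGoal.MITIGATE,
}

for project, goal in portfolio.items():
    print(f"{project}: {goal.value}")
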
> *2. Embrace three guiding principles*
>
> *Transparency. *Whenever possible, make the high-level implementation
> details of an AI project available to all involved. This will help people
> understand what artificial intelligence is, how it works, and what data
> sets are involved.
>
> *Explainability. *Ensure employees and external stakeholders understand
> how any AI system arrives at its contextual decisions — specifically, what
> method was used to tune the algorithms and how decision-makers will
> leverage any conclusions.
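
The article does not prescribe a method, but as one common, model-agnostic
way to approach explainability, permutation importance from scikit-learn
reports how strongly a model leans on each input feature (the dataset and
model here are stand-ins):

# Shuffle each feature and measure the drop in held-out score; a large
# drop means the model relies heavily on that feature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Print the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")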
>
> *Reversibility.* Organizations must also be able to reverse what deep
> learning knows: The ability to unlearn certain knowledge or data helps
> protect against unwanted biases in data sets. Reversibility is something
> that must be designed into the conception of an AI effort and often will
> require cross-functional expertise and support, the experts said.
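
The article leaves the unlearning mechanism open; the simplest baseline is
exact retraining with the flagged records excluded, sketched below (the
dataset and flagged indices are hypothetical):

# Naive "exact unlearning": drop flagged records, retrain from scratch.
# Approximate unlearning methods exist, but retraining is the baseline
# against which they are judged.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)

# Indices of records later found to be biased or collected without consent.
flagged = np.array([3, 17, 256])

keep = np.setdiff1d(np.arange(len(X)), flagged)
model = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
print(f"Retrained on {len(keep)} of {len(X)} records; "
      f"{len(flagged)} unlearned.")

This only works if the training pipeline is reproducible end to end, which
is why the article says reversibility must be designed in from conception.
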
> *3. Establish data advocates*
>
> When it comes to data, the saying “garbage in, garbage out” holds true.
> Some companies are appointing chief data officers
> <https://mitsloan.mit.edu/ideas-made-to-matter/make-room-executive-suite-here-comes-cdo-2-0>
> to oversee data practices, but Bray and Wang said that’s not enough.
>
> The pair suggested identifying stakeholders across the entire organization
> who understand the quality issues and data risks and who will work from a
> people-centered code of ethics. These stakeholders are responsible for
> ensuring data sets are appropriate and for catching any errors or flaws in
> data sets or AI outputs early.
>
> “It’s got to be a cavalry — it can’t be relegated to just a few people in
> the organization,” Bray said. One approach the experts suggested is to
> appoint an ombuds function that brings together stakeholders from different
> business units as well as outside constituents.
> *4. Practice “mindful monitoring”*
>
> Creating a process for testing data sets for bias can help reduce risk.
> Bray and Wang suggested identifying three pools of data sets: Trusted data
> used to train the AI implementation; a queued data pool of potentially
> worthwhile data; and problematic or unreliable data. Data should also be
> reassessed regularly — for example, whether previously approved trusted
> data is still relevant or has become unreliable, or whether queued data
> has a newfound role in improving the existing pool of trusted data for
> specific actions.
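
A minimal sketch of that three-pool triage (the pool names follow the
article; the records and review outcomes are my own illustrations):

# Three data pools with the promote/demote moves a periodic review makes.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DataPools:
    trusted: List[str] = field(default_factory=list)      # trains the AI
    queued: List[str] = field(default_factory=list)       # potentially worthwhile
    problematic: List[str] = field(default_factory=list)  # unreliable, excluded

    def promote(self, dataset: str) -> None:
        """A review found queued data worthwhile; it joins the trusted pool."""
        self.queued.remove(dataset)
        self.trusted.append(dataset)

    def demote(self, dataset: str) -> None:
        """A review found trusted data unreliable; it is quarantined."""
        self.trusted.remove(dataset)
        self.problematic.append(dataset)


pools = DataPools(trusted=["sales_2021"], queued=["partner_feed"])
pools.promote("partner_feed")  # periodic review: reliable after all
pools.demote("sales_2021")     # periodic review: drifted, now unreliable
print(pools)
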
> *5. Ground your expectations*
>
> Managing expectations of internal and external stakeholders is crucial to
> long-term success. To gain consensus and keep focus on a people-oriented AI
> agenda, organizations should ask and answer such questions as: What is our
> obligation to society? What are the acknowledged unknowns? What are
> responsible actions or proactive things we can accomplish with AI
> implementations, and what are the proper safeguards?
>
> In the end, it makes sense to approach AI as an experimental learning
> activity, with ups, downs, and delays. “There will be periods of learning,
> periods of diminished returns, and [times when] the exponential gain
> actually benefits the organization,” Bray said. “You need to be grounded
> and say, ‘This is how we’ve chosen to position ourselves.’ It will serve as
> your North Star as you move towards the final goal.”
>
> On Fri, May 19, 2023 at 2:26 PM Paul Werbos <pwerbos@gmail.com> wrote:
>
> Thanks, Timothy, for updating our awareness and asking us to think about
> the implications:
>
> On Fri, May 19, 2023 at 9:39 AM Timothy Holborn <timothy.holborn@gmail.com>
> wrote:
>
> I was alerted to: https://twitter.com/FT/status/1659481447428751360
>
> “We reaffirm that AI policies and regulations should be *human centric*
> and based on democratic values, including protection of human rights and
> fundamental freedoms and the protection of privacy and personal data. We
> also reassert that AI policies and regulations should be risk-based and
> forward-looking to preserve an open and enabling environment for AI
> development and deployment that maximises the benefits of the technology
> for people and the planet while mitigating its risks,” the ministers’
> communiqué stated.
>
> Source is from:
>
> https://g7digital-tech-2023.go.jp/topics/pdf/pdf_20230430/ministerial_declaration_dtmm.pdf
>
> FWIW: personally, I think of many of these requirements as 'safety
> protocols', but I am open to and interested in hearing the views of
> others...
>
>
> My views: I see an analogy to the great pronouncements and even goals on
> climate change a few years ago, made WITHOUT the kind of groundwork needed
> to get the great goals implemented. Useful implementation is MORE URGENT
> here, because the worst-case pathways to extinction run even faster with
> internet/AGI/IOT than with climate. It is also far more difficult, because
> the physical details are harder for people to understand. (For example,
> H2S in the atmosphere is a lot easier to visualize than QAGI.)
>
> The design requirements are simply not part of this open discussion. I
> hope Jerry's effort can help close this life-or-death gap.
>
> Cheers,
>
> Timothy Holborn
> www.humancentricai.org
>
>
>
>

Received on Saturday, 20 May 2023 20:55:05 UTC