Re: [i4j Forum] Re: G7 leaders call for ‘guardrails’ on development of AI

Hi all,
Some thoughts; I hope there's something helpful herein. Note that this is a
draft, more illustrative than anything else: a stream of consciousness,
imo...

Fundamentally, I think better bridges need to be built between the human
centric (AI) works and the People Centered Internet works. Outcomes need to
be interoperable, and the differences can be notated using RDF or similar;
but moreover, I do see 'internet' as being something different to 'human
centric' (web / AI) stuff, from a
https://en.wikipedia.org/wiki/OSI_model perspective and more broadly
otherwise... as I hope I've noted sufficiently below.
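The RDF idea above can be sketched minimally: triples that notate where two bodies of work align exactly and where they only roughly overlap, so the differences stay visible. This is a stdlib-only Python sketch; the HCAI / PCI vocabulary URIs and term names are hypothetical placeholders (only the OWL and SKOS namespaces are real).

```python
# Hypothetical vocabularies for the two bodies of work (placeholder URIs).
HCAI = "https://example.org/humancentricai#"
PCI = "https://example.org/peoplecenteredinternet#"
OWL = "http://www.w3.org/2002/07/owl#"
SKOS = "http://www.w3.org/2004/02/skos/core#"

triples = [
    # exact agreement between the two vocabularies
    (HCAI + "Person", OWL + "equivalentClass", PCI + "Human"),
    # rough overlap only, so the difference remains notated and queryable
    (HCAI + "agency", SKOS + "closeMatch", PCI + "participation"),
]

def to_ntriples(triples):
    """Serialise (subject, predicate, object) tuples as N-Triples lines."""
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in triples)

print(to_ntriples(triples))
```

In practice a proper RDF library (rdflib or similar) would be used; the point is only that 'interoperable but different' can be expressed as data.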

IMO: focus should firstly be on
https://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs - and, with
this in mind, notes per below.

FWIW: list link - https://groups.google.com/g/peace-infrastructure-project

The point of the peace infrastructure project is to help define a practical
focus.
Some Clips from: https://www.youtube.com/@ShotsOfAwe
https://www.youtube.com/watch?v=nH9IPpDrVTs
https://www.youtube.com/watch?v=lHB_G_zWTbc

The Toaster - it's like a parable.
https://www.youtube.com/watch?v=5ODzO7Lz_pw

A few others:
https://www.biblegateway.com/passage/?search=Luke%2010%3A25-37&version=NIV
https://www.biblegateway.com/passage/?search=Matthew%2021%3A28-32&version=NIV

The reality is that there's a competitive 'race' or game, where the
objective is to radically improve the lives of billions of people on
earth.

The only real question is how that will be achieved, and who the
successful participants will be - those whose works deliver the most
capacity for others, supporting one another's ability to meaningfully
participate in the delivery of that goal, our goal...

https://en.wikipedia.org/wiki/Golden_Rule

As I drafted this email I came across:
https://twitter.com/timparrique/status/1659857006616797196

Foundationally, I think there's only really one priority: support for
human rights.

Yet this needs to be qualified; perhaps the only way to meaningfully
achieve this outcome isn't actually via the use of modified human rights
instruments in the first instance - perhaps that can only happen down the
track, if successful. I hope not, but maybe?

I started on:
https://docs.google.com/presentation/d/1q6Cl__FbUddtKtBPxw2KDviO_PEl7Ti-XYZOT1EQM14/edit#slide=id.p

Although there are earlier related works that go back many years... in any
case, I do hope the general concept can be understood from the example
linked above. 'Credentials' wasn't simply about payments or professional
credentials, imo... at least, not in the narrow form now implemented.
Nonetheless, I think various 'human centric' solutions seemingly need to be
made 'optional' - a consequence of my view that not all actors / agents
want 'reality check tech' - but that doesn't mean rule of law should be
illegal, or unavailable.

https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=1WXgSplqAB62oMSdwqli_1G3k37c0y6fZkZJLzc5Www8&font=OldStandard&lang=en&hash_bookmark=true&initial_zoom=4&height=650

https://press.un.org/en/2018/sc13493.doc.htm

It's difficult to find reasonably useful statistics that represent the cost
of corruption in terms of CO2, which seems particularly important at this
time, as we seek to achieve 'Net Zero'...

https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=1r-bo83ImIEjSCmOFFMcT7F79OnCHDOGdkC_g9bOVFZg&font=Default&lang=en&hash_bookmark=true&initial_zoom=4&height=750

It is interesting to consider how terms such as 'identity' have been
engendered with different meanings over time, as the work on better
understanding the nature of consciousness, and how to influence it, has
advanced...


RE: W3C lists,

https://www.w3.org/community/humancentricai/ - there are some posts on:
https://lists.w3.org/Archives/Public/public-humancentricai/ - noting in
particular, atm, that I think figuring out how to create a tool that uses
generative AI to anonymise or, in effect, characterise sensitive use-cases
is something that would be particularly helpful, for various reasons...

The point of W3C works is that decisions are made by consensus of
participating members... As such, I'd prefer the group work towards
resolutions rather than making declarations that may not reflect the views
of the group. Nonetheless, the point or purpose was about interoperability
and various safety protocols... stuff that's needed to support human
rights, morally and otherwise...

Nonetheless, I have some strong views that relate to (my) works on a
distinct implementation...

RE: Trust Factory / Webizen / Web Civics | Human Centric AI

I've been working on 'Human Centric AI', in effect, for about a decade
(notwithstanding a starting point more than a decade earlier overall). The
term 'human centric' itself was distilled in 2015, noted on the Credentials
list in 2016, with more documents from then onwards; but the AI stuff was
generally implicit, as the tooling was to create a personal and private
dynamic agent built upon RDF / SemWeb tooling - foundationally, as a means
to support rule of law and, particularly therein, human rights for the most
vulnerable: use-cases still without good answers / outcomes.

So whilst I'm working on an implementation I call Webizen, the functional
works led to a belief that it was morally appropriate to support various
interoperability requirements firstly via http://www.humancentricai.org/ -
advancing existing community work that had started years earlier on
Facebook (a group) - and then, as a consequence of observations at WSIS2023:

https://soundcloud.com/ubiquitous-au/my-question-human-rights-instruments-for-digital-wallets-valuescredentials

I instigated the process to establish the W3C CG, which may well grow to
support an ISOC SIG or similar, broadly via http://www.humancentricai.xyz/
- and I've started https://github.com/HumanCentricAI-xyz to support that,
but haven't had enough time, and lack the resources, to have been able to
get more done on it yet...

NB Also: the last slide of:
https://drive.google.com/file/d/1fvrYlTYUCYQMjW4pTel6QOIvJ03GtQT7/view?usp=drivesdk

re: 'signing souls', from 2017. Sadly, not enough has happened to better
advance works in that area, notwithstanding the enormous volume of
signatures created since...

I didn't really realise and/or think about the implications of my
historical work on 'knowledge banking' systems with respect to the
information management topologies required for people in developing
nations; yet still, there are augmentations that give rise to the
importance of identifying solutions for new problems and, in turn, means to
address other issues that were not clearly considered earlier, whilst
noting that the context has changed quite a bit. Nonetheless, when focused
works are done in areas relating to human rights support, there are many
profound experiences upon better learning about the areas where there are
such enormous gaps, and such extraordinary costs... It most certainly takes
me some time to regain my composure at times, whilst working the problem,
before being able to better comprehend the nature of the puzzle...




On Sat, 20 May 2023, 7:25 am David Michaelis, <michaelisdavid@yahoo.com>
wrote:

> Not sure if the FDA is the correct answer.
> With open source ChatGpt/AI now anyone can develop good and evil
> programs/products.
> You don’t need billions to make a new application.
> As there is a dark net there will be dark products market out of anyone’s
> control.
>

DarkBERT was trained on the dark web:

https://arxiv.org/pdf/2305.08596.pdf

https://futurism.com/the-byte/ai-trained-dark-web



>
> Sent from Yahoo Mail for iPhone
> <https://mail.onelink.me/107872968?pid=nativeplacement&c=Global_Acquisition_YMktg_315_Internal_EmailSignature&af_sub1=Acquisition&af_sub2=Global_YMktg&af_sub3=&af_sub4=100000604&af_sub5=EmailSignature__Static_>
>
> On Saturday, May 20, 2023, 7:17 am, David Bray, PhD <
> david.a.bray@gmail.com> wrote:
>
> I think consumers are owed a systematic check to see if the machine is
> reliable.
>
> Just like most people don't know if medication XYZ with additional
> ingredients ABC is good or bad for them - or effective or a placebo.
> However we have folks who do the checks so a seal of approval (or a market
> ban) can be done to help consumers have confidence in the product?
>
>


https://www.youtube.com/watch?v=iDkANArgdKI - noting that there are others:
https://www.youtube.com/@GS1USvideos/videos --> Phil is the guru in this
area, afaik...

An old one I made about pharmacies, from 2012:
https://drive.google.com/file/d/1MnkbM0x2-rKY2nmTML21F8KcBX_dmqdQ/view?usp=sharing


An old schema from that time was ArtsXML:
https://drive.google.com/drive/folders/16YViPcGkAdOzWw1Wo9ryHL2e2uUTIPas

Some related work was done later, associated with:
https://github.com/ouisharelabs/food-dashboard or
https://github.com/ouisharelabs/food-dashboard/issues/6 (I had to delete
my GitHub at one stage, due to attacks that were only sorted out much
later... anyhow).

But the use-cases about diagnostics and pharmaceuticals should also be
associated with medical reporting (ie: https://openvaers.com/ ) and various
other use-cases that I'd consider 'human centric' overall - such as
ensuring support for persons prescribed restricted substances, or indeed
checks to ensure vulnerable people are 'of sound mind':
https://drive.google.com/file/d/1_9Y0R_qY4zReWhZQ940piXRn4FA05pUC/view?usp=sharing
for both legal and human rights related purposes...



>
>
> On Fri, May 19, 2023 at 5:07 PM David Michaelis <michaelisdavid@yahoo.com>
> wrote:
>
> “ things you don’t want to teach the machine “…
> Well we are already in the next stage- the machine wants to teach you!. It
> has sometimes amazing fast solutions that are unexplained but useful. Do
> you reject them because you don’t understand how it got there?.
>
>
I find the existing GPT-like systems very good at defining 'global
platform' based solutions, but they're not very good at defining /
supporting the creation of decentralised solutions; indeed, they
effectively seem to interfere with the ability to do so...

There are particular ideologies these systems appear to seek to normalise,
including the use of American English rather than British or Australian
English - which leads to other questions, like: what do they think I'm
talking about if I say I like my thongs... being Australian, and all...
Will it impact my experience online? There are historical examples where
sensing systems promote inappropriate advertising, and I suspect these
sorts of problems will become worse with these sorts of systems /
solutions...

And I suspect alternatives will be welcomed - if allowed!


>
>
>
> On Saturday, May 20, 2023, 7:01 am, David Bray, PhD <
> david.a.bray@gmail.com> wrote:
>
> Even with "black boxes" one can still do transparency on:
>
> * the data collection procedures (how do you collect the data? how do you
> obtain consent?)
> * the data curation procedures (how do you correct for errors or things
> you don't want to teach the machine?)
> * the review of the AI outputs (how do you assess if what the AI is
> outputting is socially acceptable? correct/accurate if that's a
> qualification? etc.)
> * the review of the AI impacts on people (how do you review to confirm the
> AI isn't causing unintentional harm?)
> * the review of the AI's biases (all machines will have biases, and even
> correcting for socially unacceptable biases will introduce other biases,
> how do you review and make changes as appropriate?)
>
> Which could be posted publicly as what does this organization do to answer
> and address these important areas.
>
> Hope this helps,
>
> -d.
>
>
I think 'safety protocols' are needed, one of which could be agent
labelling systems - basically, providing an 'open badge' or 'credential' in
association with the AI data-service (API) or interface, etc... But there
are broader concerns about the nature of the considerations involved: the
implications aren't simply static; rather, there are complex causal graphs,
which are multi-dimensional, and which are likely to negatively impact good
actors far more than bad actors... Expertise in deceptive and misleading
statements / behaviours - acting as a 'game player' rather than being
honest - means honesty is likely far less safe than gaming, or engaging in
mutually understood terms of exploitative conduct: race-to-the-bottom kinda
stuff... How do you honestly make money... feed the kids - or, if that's
made too hard, perhaps even have the means to know them, etc...
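To make the labelling idea concrete, here's a minimal sketch of an agent label for an AI data-service as a tamper-evident JSON document. The field names, claims, and URLs are illustrative assumptions - not any published Open Badges or Verifiable Credentials schema - and a real system would use a digital signature rather than a bare hash.

```python
import hashlib
import json

# Hypothetical label for an AI data-service; all fields are illustrative.
label = {
    "service": "https://example.org/ai-api",
    "issuer": "https://example.org/labelling-authority",
    "claims": {
        "training_data_disclosed": True,
        "human_rights_impact_reviewed": True,
    },
}

def seal(document):
    """Attach a SHA-256 digest of the canonicalised document, so consumers
    can detect alteration (a real system would digitally sign instead)."""
    canonical = json.dumps(document, sort_keys=True).encode()
    return {**document, "digest": hashlib.sha256(canonical).hexdigest()}

def verify(sealed):
    """Recompute the digest over everything except the digest itself."""
    body = {k: v for k, v in sealed.items() if k != "digest"}
    canonical = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest() == sealed["digest"]

sealed = seal(label)
print(verify(sealed))  # True
```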

I saw a meme recently that said: why are you passionate about being a
cleaner? The response was: I'm passionate about being able to afford food...

Certainly a lot of work in this area is unpaid. I still don't understand
how / why people don't better understand that digital-slavery-associated
issues are unsustainable; but again, it's a problem that has a greater
impact on people who are honest vs. those who are not... The worst
situations are where it's institutionally supported, like an organisational
policy - and whilst this may be refuted, the evidence can be illustrated by
procurement policies, processes, procedures, etc... If someone does
something that's incredibly valuable, but it only takes them an hour, and
they don't work for the company - irrespective of how much money the
company (or government) might save - how is the person paid for their
work?? Perhaps it took that person an hour to explain something that took
months to understand to a level where it could be explained in an hour flat
- does that underlying knowledge travel with the derivative? How do these
systems be wary of dragons? Submarine issues, not clearly comprehensible
to the uninitiated, etc...

These 'big systems' are kinda like religions. UDHR Article 18:
https://docs.google.com/presentation/d/1q6Cl__FbUddtKtBPxw2KDviO_PEl7Ti-XYZOT1EQM14/edit#slide=id.g22d4493b77c_0_136
Everyone has the right to freedom of thought, conscience and religion; this
right includes freedom to change his religion or belief, and freedom,
either alone or in community with others and in public or private, to
manifest his religion or belief in teaching, practice, worship and
observance.

The same sort of consideration should also apply to Platforms / AI...

So,

If a child, upon the knowledge of conception, has an AI model built about
them - a future AI 'assistant' that may travel with that child over many
years, forming an ANN (or similar) that is specifically about that natural
person... how do they migrate?

How do people take their 'friends' (addressbook) with them if they leave
Facebook, atm...?

Does
https://web.archive.org/web/20230000000000*/https://datatransferproject.dev/
help them do it? Will platforms support portability standards - perhaps
built around Solid stuff, as that's largely compatible with graph
back-ends...
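The portability point can be sketched: an addressbook export in a JSON-LD shape, using terms from the (real) W3C vCard ontology so a graph back-end could in principle ingest it. The contacts themselves are made up, and the exact shape is an illustrative assumption rather than any platform's actual export format.

```python
import json

# The W3C vCard ontology namespace (real); 'fn' and 'hasEmail' are real terms.
VCARD = "http://www.w3.org/2006/vcard/ns#"

# Made-up contacts standing in for an exported addressbook.
contacts = [
    {"name": "Alice Example", "email": "alice@example.org"},
    {"name": "Bob Example", "email": "bob@example.org"},
]

# A JSON-LD document: @context maps the plain keys onto vocabulary URIs,
# which is what makes the data portable between graph back-ends.
export = {
    "@context": {"name": VCARD + "fn", "email": VCARD + "hasEmail"},
    "@graph": contacts,
}

print(json.dumps(export, indent=2))
```

The design point is that the `@context` carries the semantics, so the receiving system doesn't need to guess what "name" means.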

Perhaps discussing human rights for people who are in 'digital prisons' is
a better path:
https://docs.google.com/document/d/10exQ8MIJnSWo2YSPJp8gUTpAz1ClcL16RgmrsOiI7uQ/edit#heading=h.k16vb4o6c6er

In turn, that also leads to creating distinctions between those who are
lawfully subject to orders, and those who should not otherwise be
interfered with as free persons - not guilty, or in the process of
attending to allegations of wrong-doing and/or compensatory / natural
justice measures (should any such hindrances be deemed / found unlawful,
etc.).

In the Human Centric approach, what I wanted to achieve was that root-cause
analysis can be done - because issues are not the fault of 'the government'
or 'the company'; rather, specific natural persons made decisions that had
effects upon themselves (ie: income) and others - ie:
https://robodebt.royalcommission.gov.au/
https://twitter.com/search?q=%23robodebtrc - which goes back to the concept
of 'do unto others', or the golden rule: what human beings do - matters...


Separately, otherwise: robots with personhood seem like something more
dangerous than nuclear weapons, imo... MAD...

https://en.wikipedia.org/wiki/Utah_Data_Center
https://www.forbes.com/sites/digital-assets/2023/03/07/-new-hampshire-utah-recognize-daos-as-legal-persons/
https://cointelegraph.com/news/openai-needs-a-dao-to-manage-chatgpt

Nonetheless, that's out of scope as far as I'm concerned - unless there's a
human centric AI related use-case that seeks to produce the provenance of
who exactly did what to create it...

https://timeline.knightlab.com/ is simple by comparison to the sort of
tooling I hope to create in future...




>
> On Fri, May 19, 2023 at 4:55 PM David Michaelis <michaelisdavid@yahoo.com>
> wrote:
>
> Hi David
> Interesting challenges in your principles.
> How can one ask for transparency when the black box is not transparent??!
> At this stage there are too many unknowns in this Golem we have built.
>
>
'internet' is not the same as the content layers that live on top of it...
re: https://www.un.org/techenvoy/global-digital-compact

The relationship between W3C works and the 'web of data' (AI) is seemingly
poorly understood.
https://twitter.com/DameWendyDBE/status/1172470883610431489
https://www.youtube.com/@websciencetrust7606

Web3 / SSI has different 'standards efforts', such as
https://trustoverip.org/ afaik... IMO, nonetheless - akin to the point made
about religion earlier - it's important that people can leave and migrate
to alternative ecosystems, and that labelling systems are developed that
can better support the means for consumers to make informed opinions /
consumer protection requirements - noting, not all ecologies are about
'consumers' from the human centric POV, imo...

The other issue was that, whilst there's an urgent need to make cautious
progress, I didn't see a lot of ISOC mentioned in WSIS-related discussions
about the future of internet governance - certainly, various ISOC members
and such were present and highly involved, but I didn't hear ISOC or its
international footprint of chapters mentioned much at all...

Whilst Web3 ecosystems have alternatives for DNS, I think ICANN and/or
whomever needs to figure out how to radically improve accessibility for
domain ownership and assignment of IPv6 subnets - with and/or without VPN
tools being thereafter employed...



>
>
> On Saturday, May 20, 2023, 6:26 am, David Bray, PhD <
> david.a.bray@gmail.com> wrote:
>
> Hi Paul - back in 2019 and 2020, Ray Wang and I published the following
> with MIT Sloan Management Review re: 5 steps to People-Centered AI:
>
>
> https://mitsloan.mit.edu/ideas-made-to-matter/5-steps-to-people-centered-artificial-intelligence
>
> *1. Classify what you're trying to accomplish with AI*
>
> Most organizations are pursuing initiatives to do the following:
>
>    - Automate tasks with machines so humans can focus on strategic
>    initiatives.
>    - Augment — applying intelligence and algorithms to build on people’s
>    skill sets.
>    - Discover — find patterns that wouldn’t be detected otherwise.
>    - Aid in risk mitigation and compliance.
>
> *2. Embrace three guiding principles *
>
> *Transparency. *Whenever possible, make the high-level implementation
> details of an AI project available to all involved. This will help people
> understand what artificial intelligence is, how it works, and what data
> sets are involved.
>
> *Explainability. *Ensure employees and external stakeholders understand
> how any AI system arrives at its contextual decisions —specifically, what
> method was used to tune the algorithms and how decision-makers will
> leverage any conclusions.
>
> *Reversibility.* Organizations must also be able to reverse what deep
> learning knows: The ability to unlearn certain knowledge or data helps
> protect against unwanted biases in data sets. Reversibility is something
> that must be designed into the conception of an AI effort and often will
> require cross-functional expertise and support, the experts said.
> *3. Establish data advocates*
>
> When it comes to data, the saying, “garbage in, garbage out” holds. Some
> companies are installing chief data officers
> <https://mitsloan.mit.edu/ideas-made-to-matter/make-room-executive-suite-here-comes-cdo-2-0>
> to oversee data practices, but Bray and Wang said that’s not enough.
>
> The pair suggested identifying stakeholders across the entire organization
> who understand the quality issues and data risks and who will work from a
> people-centered code of ethics. These stakeholders are responsible for
> ensuring data sets are appropriate and for catching any errors or flaws in
> data sets or AI outputs early.
>
> “It’s got to be a cavalry — it can’t be relegated to just a few people in
> the organization,” Bray said. One approach the experts suggested is to
> appoint an ombuds function that brings together stakeholders from different
> business units as well as outside constituents.
> *4. Practice “mindful monitoring”*
>
> Creating a process for testing data sets for bias can help reduce risk.
> Bray and Wang suggested identifying three pools of data sets: Trusted data
> used to train the AI implementation; a queued data pool of potentially
> worthwhile data; and problematic or unreliable data. And data should be
> regularly assessed — for example, whether previously approved trusted data
> is still relevant or unreliable, or if queued data has a newfound role in
> improving the existing pool of trusted data for specific actions.
> *5. Ground your expectations*
>
> Managing expectations of internal and external stakeholders is crucial to
> long-term success. To gain consensus and keep focus on a people-oriented AI
> agenda, organizations should ask and answer such questions as: What is our
> obligation to society? What are the acknowledged unknowns? What are
> responsible actions or proactive things we can accomplish with AI
> implementations, and what are the proper safeguards?
>
> In the end, it makes sense to approach AI as an experimental learning
> activity, with ups, downs, and delays. “There will periods of learning,
> periods of diminished returns, and [times when] the exponential gain
> actually benefits the organization,” Bray said. “You need to be grounded
> and say, ‘This is how we’ve chosen to position ourselves.’ It will serve as
> your North Star as you move towards the final goal.”
>
DIGITAL “KNOWLEDGE” BANKING - 2012:
https://drive.google.com/file/d/13jjiVON6uodPX3fwoiZbvQ43IjzJnOLc/view?usp=sharing
2013:
https://drive.google.com/file/d/1EOTzwJmgJhuFl7uvKhlUwmUO6FcQXd0_/view?usp=sharing
Since then, names have changed and some aspects have evolved... but not
the foundational spirit of it...

I couldn't see a clear statement defining 'people centered' to mean humans
solely and specifically - broadly otherwise also noting that most 'human
centered' solutions I've seen haven't actually been about the human rights
of the 'consumer', even in government systems... Indeed, I only started
some of the more in-depth research into the history of the SSI works
recently, finding 'life event' stuff much earlier than I'd previously been
aware of:
https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=1Kab5bDqGkCGwkOUlAQ8NNlNUUBBVyzXQaYS9q6jLDZo&font=Default&lang=en&hash_bookmark=true&initial_zoom=3&height=900#event-id-and-egovernment

*Learning - it happens all the time!!! ;) *

But I think the journey to achieve
https://web.archive.org/web/*/https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/
is far more about knowledge than it is about what's merely denoted by the
term 'data' - although its author (Vannevar Bush) is listed on my
artificial minds list:
https://docs.google.com/spreadsheets/d/11P5X3al6DlFULOPU3w9-eeoDEyLqYmgqmsDm0U2wgsU/edit#gid=1227058217


Which is not to suggest that all systems are necessarily the same, or that
there cannot be different alternatives - like Google vs. Bing, or Android
vs. Linux vs. OSX, etc... Ford vs. BMW, Audi, etc...

Rather, in consideration of the scope PCI speaks about - perhaps broadening
the scope of the use-cases specifically about benefiting human beings'
lives in ways other than as a consumer, or as 'human resources', or broadly
perhaps https://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs -
because if humans aren't OK, then they're not really able to perform to the
best of their ability, and nor are the 'things' they serve and/or are
corpus members of (ie: corporation = a body of people, which is a different
kind of structure that may not be well supported by PCI???
https://www.youtube.com/playlist?list=PLCbmz0VSZ_vrdO4fybsy5gsJUPqwmhC17 ).
(Also kinda the way things like yacht-clubs operate, from memory - this
might have a few useful points: https://www.youtube.com/watch?v=Yb4JhkF1240
???)


There are different 'value accounting' (
https://docs.google.com/document/d/1UK9eWFjQRbGtF7wAMBNyntT7kZNpvSIe5cSY_VAuE0Q/edit#heading=h.hnkfq6amf134
) methodologies emerging - beyond 'git' for code / contributions, or
'likes' as social currencies, etc. - irrespective of which technologies
offer the best energy profile for supporting different types of 'verifiable
claims' (in effect). Innovation in Silicon Valley is said, historically, to
have started in garages, by people building stuff they were later able to
start to sell, long before they ended up building or leasing any of the big
offices. So, nowadays, how are such people able to have some sort of value
attributed to their useful works??? Certainly the USPTO is going to
continue to care about priority dates, and the copyright clause is in the
US Constitution - but how will these sorts of 'sources of light' shine upon
the people of our world...
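As a toy illustration of what a 'value accounting' methodology might record (entirely my own hypothetical sketch, not any of the linked methodologies): contributions are credited with a weight regardless of medium, rather than only counting commits or likes.

```python
from collections import defaultdict

# Toy value-accounting ledger: balances per contributor, plus an audit trail
# recording what kind of contribution each credit was for.
ledger = defaultdict(float)
audit_trail = []

def record(contributor, kind, weight):
    """Credit a weighted contribution and keep an auditable record of it."""
    ledger[contributor] += weight
    audit_trail.append((contributor, kind, weight))

record("alice", "code", 3.0)
record("bob", "explanation", 5.0)  # the one-hour explanation that took months to earn
record("alice", "review", 1.0)

print(dict(ledger))  # {'alice': 4.0, 'bob': 5.0}
```

The hard part - which this sketch deliberately ignores - is who assigns the weights, and whether underlying knowledge travels with the derivative.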

How can i help, and what do you think should be worked on to ensure some
level of interoperability, etc...?


> On Fri, May 19, 2023 at 2:26 PM Paul Werbos <pwerbos@gmail.com> wrote:
>
> Thanks, Timothy, for updating our awareness and asking us to think about
> the implications:
>
> On Fri, May 19, 2023 at 9:39 AM Timothy Holborn <timothy.holborn@gmail.com>
> wrote:
>
> I was alerted to: https://twitter.com/FT/status/1659481447428751360
>
> “We reaffirm that AI policies and regulations should be *human centric*
> and based on nine democratic values, including protection of human rights
> and fundamental freedoms and the protection of privacy and personal data.We
> also reassert that AI policies and regulations should be risk-based and
> forward-looking to preserve an open and enabling environment for AI
> development and deployment that maximises the benefits of the technology
> for people and the planet while mitigating its risks,” the ministers’
> communique stated.
>
> Source is from:
>
> https://g7digital-tech-2023.go.jp/topics/pdf/pdf_20230430/ministerial_declaration_dtmm.pdf
>
> FWIW: personally, i think of many of these requirements as 'safety
> protocols', but am open and interested to hear the views of others...
>
>
> My views: I see an analogy to great pronouncements and even goals on
> climate change a few years ago,
>
There are so many solutions - here's some work done earlier, noting it's
not an exhaustive illustration of the receipts associated with my interest
in the broader biosphere / sociosphere wellbeing / health areas of
interest, etc.:
https://docs.google.com/spreadsheets/d/1TF7zoU3jZrgpDt222Wa0yX3g6moy2tPaDyFehk73XwQ/edit#gid=1470401579



> WITHOUT the kind of groundwork needed to get the great goals implemented.
> Useful implementation is MORE URGENT here,
> because the worst case pathways to extinction run even
> faster with internet/AGI/IOT than with climate. It is far more difficult,
> because the physical details are harder for people to understand. (For
> example, H2S in atmosphere is a lot easier to visualize than QAGI.)
>
>
[image: Fwdw2LAWIAAbbud.jpg]

My diagram from some time ago:

https://docs.google.com/drawings/d/1oUsSlPEh8erOdkQJCLzFHBaqp7AYOJCqDw82YrCg9f4/edit


https://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs
vs. https://press.un.org/en/2018/sc13493.doc.htm

So, in this fictional example, that young lady, over the coming decade or
so, may well want to establish herself in a safe home, in a loving
relationship, and have children - whereby the focus becomes: happy kids...

How does free work help her achieve this goal - whether it be her own free
work, or the free work sought to be done by the person she seeks to share
her life and have children with, whom, one might assume, it is hoped the
children will depend upon also...

https://twitter.com/DemocracyAus/status/1630569008104873985

nb: https://en.wikipedia.org/wiki/Quality-adjusted_life_year
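Per the linked definition, QALY arithmetic is simply years lived in a health state multiplied by that state's utility weight (1.0 = full health, 0.0 = death), summed over states. A minimal sketch, with purely illustrative numbers:

```python
def qalys(periods):
    """Sum quality-adjusted life years over (years, utility_weight) pairs."""
    return sum(years * weight for years, weight in periods)

# e.g. two years in full health, then three years at utility 0.7:
total = qalys([(2, 1.0), (3, 0.7)])
print(round(total, 2))  # 4.1
```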


>
> The design requirements are simply not under this open discussion. I hope
> Jerry's effort can help close this life or death gap.
>
>
IMO: if work isn't done to ensure an option is created to address the
issues of moral poverty - an alternative 'operating system' that people
could choose to use, and be supported and protected by, in effect - then I
fear the economic consequences will lead to terrible outcomes. The worst
part is that in many of those scenarios there's a set or series of
qualities, implemented via designs, that have the effect of leading to an
outcome where it'll be considered nobody's fault. By design.

One might think that at least doing the work on the basis that the hedging
bet diversifies the risk substantively enough to mitigate losses would be
enough - to get on with the job of figuring out how to be the most
effective competitor in the world in the field of furnishing capabilities
that best improve productivity and, consequently, as a focused effort,
deliver meaningful results better than others - for transformationally
improving the lives of billions of people around the world...

I remember the news about China delivering a hospital in 10 days:
https://www.abc.net.au/news/2020-02-03/china-completes-wuhan-makeshift-hospital-to-treat-coronavirus/11923000

So, when looking at the economic forces at play - when associating the
concept of economics with the production of useful outcomes:
https://www.youtube.com/playlist?list=PLCbmz0VSZ_vpYsIEl_gb9BUwFnp7EHh08

What is the true cost of corruption??
https://press.un.org/en/2018/sc13493.doc.htm
In ESG / CO2 terms???


>
>
>
> Cheers,
>
> Timothy Holborn
> www.humancentricai.org
>
>
T.C.H.

>
> --
> You received this message because you are subscribed to the Google Groups
> "The Peace infrastructure Project" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to peace-infrastructure-project+unsubscribe@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/peace-infrastructure-project/CAM1Sok19z8kZ0NPyCqwGX_sxhPAqW%2BK8Fmdm%3DiMsGVzsv7j4kA%40mail.gmail.com
> <https://groups.google.com/d/msgid/peace-infrastructure-project/CAM1Sok19z8kZ0NPyCqwGX_sxhPAqW%2BK8Fmdm%3DiMsGVzsv7j4kA%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
> For more options, visit https://groups.google.com/d/optout.
>
> --
> Get the i4j book 'The People-Centered Economy' on Amazon in Paperback and
> for Kindle.
> https://www.amazon.com/People-Centered-Economy-Ecosystem-Work/dp/1729145922
> ============================
> If you don't want to receive more emails click "UNSUBSCRIBE" or send an
> email to i4j@peoplecentered.net for questions and comments.
> ---
> To unsubscribe from this group and stop receiving emails from it, send an
> email to i4j+unsubscribe@i4jsummit.org.
>
>
>

Received on Saturday, 20 May 2023 14:26:04 UTC