Re: Open letter urging to pause AI

"In parallel, AI developers must work with policymakers to dramatically
accelerate development of robust AI governance systems. These should at a
minimum include: new and capable regulatory authorities dedicated to AI;
oversight and tracking of highly capable AI systems and large pools of
computational capability; provenance and watermarking systems to help
distinguish real from synthetic and to track model leaks; a robust auditing
and certification ecosystem; liability for AI-caused harm; robust public
funding for technical AI safety research; and well-resourced institutions
for coping with the dramatic economic and political disruptions (especially
to democracy) that AI will cause."  source:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/

IMO:

Converting https://github.com/unicode-org/udhr into verifiable credentials
that could be used with these 'wallets' when legal entities engage in
online contracts with one another would be a start.
I've put many of the other UN instruments into a table:
https://docs.google.com/spreadsheets/d/17WfvOyoVQDv8wwPYroX6xrKLM7n3stD9vFv4rejqmo8/edit?usp=sharing
The broader underlying concept I call 'values credentials', as it helps
people define their values to one another and, in turn, understand what
values people present themselves as being committed to.
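As a rough illustration, such a 'values credential' might be shaped after the W3C Verifiable Credentials data model. This is a minimal sketch only; the DIDs and the "commitsTo" vocabulary inside credentialSubject are hypothetical placeholders, not an existing ontology:

```python
import json

# Illustrative only: a "values credential" referencing a UDHR article,
# shaped after the W3C Verifiable Credentials data model (v1 context).
values_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "ValuesCredential"],
    "issuer": "did:example:issuer123",        # hypothetical issuer DID
    "issuanceDate": "2023-03-31T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder456",        # hypothetical holder DID
        "commitsTo": {                        # hypothetical vocabulary
            "instrument": "Universal Declaration of Human Rights",
            "article": "Article 12",
            "source": "https://github.com/unicode-org/udhr",
        },
    },
    # A real credential would also carry a "proof" block produced by
    # the holder's wallet software (e.g. an Ed25519 signature).
}

print(json.dumps(values_credential, indent=2))
```

A wallet could then present such a credential when two parties negotiate an online contract, so each side can see which instruments the other claims commitment to.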

Conversely, imagine if only the king had been able to define the terms of
the Magna Carta (which, fwiw, was defined in a church in Holborn!)...

other notes:
https://docs.google.com/document/d/1KwMdyGDPZ-9NZS8EyCreDKf-Px81w0ADZEokKNHZAJw/edit#heading=h.54kz1wuee7pv

alongside other comments made:
https://lists.w3.org/Archives/Public/public-cogai/2023Mar/

There's a lot W3C communities could do; I'm just not sure stakeholders
want to...  IDK.

Apparently Solid supports personal / private AI, which can in turn support
a personal ontology:

https://www.cnbc.com/2023/02/17/tim-berners-lee-thinks-we-will-have-our-own-ai-assistants-like-chatgpt.html

So, perhaps it's all been done already?  IDK...  But there is certainly a
role the W3C community could choose to take, perhaps even revisiting the
W3C Webizen work attempted some years ago.

Timothy Holborn.

On Fri, 31 Mar 2023 at 19:00, adasal <adam.saltiel@gmail.com> wrote:

> The status of scientific publication is relevant to this discussion.
>
> I do not understand how a system that evolved organically and has its
> critics can be successfully protected in law.
>
> Since RDF, and particularly Linked Data, is the focus of these mailing
> lists and is relevant to the discussion of the possibly deleterious
> disruption that AI is causing, this seems germane.
>
> Adam Saltiel
>
>
>
> On Fri, Mar 31, 2023 at 7:03 AM ProjectParadigm-ICT-Program <
> metadataportals@yahoo.com> wrote:
>
>> CORRECTION.
>>
>> Here is an answer to the question of how to direct and guide the
>> unstoppable process.
>>
>>
>> https://www.unesco.org/en/articles/artificial-intelligence-unesco-calls-all-governments-implement-global-ethical-framework-without
>>
>> Since it is a given that all of us, as members of the W3C mailing
>> lists, actually *READ* scientific publications, there are two groups
>> *IN PARTICULAR* that we should dialogue with: academic publishers and
>> (scientific) libraries.
>>
>> They are the gatekeepers and the keepers of scientific knowledge,
>> respectively, and both play a crucial role in the UNESCO global ethical
>> framework for AI.
>>
>> Any direction and steering processes should start with these two groups
>> and international standards bodies, professional regulatory bodies and
>> professional associations.
>>
>> Perhaps it would be an idea to organize a conference on this?
>>
>>
>>
>>
>> Milton Ponson
>> GSM: +297 747 8280
>> PO Box 1154, Oranjestad
>> Aruba, Dutch Caribbean
>> Project Paradigm: Bringing the ICT tools for sustainable development to
>> all stakeholders worldwide through collaborative research on applied
>> mathematics, advanced modeling, software and standards development
>>
>>
>>
>> On Thursday, March 30, 2023 at 11:36:18 PM AST, Georg Rehm <
>> georg.rehm@dfki.de> wrote:
>>
>>
>> This letter is nothing but hype. Emily Bender dissected the letter on
>> Twitter and put together the essence of it on Medium. I’d like to invite
>> everyone to have a look:
>>
>>
>> https://medium.com/@emilymenonbender/policy-makers-please-dont-fall-for-the-distractions-of-aihype-e03fa80ddbf1
>>
>> Best regards,
>> Georg
>>
>>
>> On 29. Mar 2023, at 21:46, ProjectParadigm-ICT-Program <
>> metadataportals@yahoo.com> wrote:
>>
>> This letter speaks for itself.
>>
>>
>> https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/
>>
>>
>> I may not want to put it as bluntly as Elon Musk, who cautioned against
>> unregulated AI, which he called "more dangerous than nukes", but when Nick
>> Bostrom, the late Stephen Hawking, and dozens, no, hundreds of international
>> experts, scientists and industry leaders start ringing the bell, it is time
>> to pause and reflect.
>>
>> Every aspect of daily life, every industry, education systems, academia
>> and even our cognitive rights will be impacted.
>>
>> I would also like to point out that some science fiction authors have
>> done a great job of very accurately predicting a dystopian future ruled by
>> technology, perhaps the greatest of them all being Philip K. Dick.
>>
>> But there are dozens of other authors as well, and they all give a fairly
>> good impression of what awaits us if we do not regulate and control the
>> further development of AI now.
>>
>>
>> Milton Ponson
>> GSM: +297 747 8280
>> PO Box 1154, Oranjestad
>> Aruba, Dutch Caribbean
>> Project Paradigm: Bringing the ICT tools for sustainable development to
>> all stakeholders worldwide through collaborative research on applied
>> mathematics, advanced modeling, software and standards development
>>
>>
>> --
>>
>> *Prof. Dr. Georg Rehm <http://georg-re.hm>*
>> Principal Researcher and Research Fellow
>> [image: DFKI] <http://www.dfki.de>
>>
>> DFKI GmbH <http://www.dfki.de>, Alt-Moabit 91c, 10559 Berlin, Germany
>> Phone: +49 30 23895-1833 – Fax: -1810 – Mobile: +49 173 2735829
>> georg.rehm@dfki.de
>> Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
>> Firmensitz: Trippstadter Strasse 122, D-67663 Kaiserslautern
>> Geschäftsführung: Prof. Dr. Antonio Krüger (Vorsitzender), Helmut Ditzer
>> Vorsitzender des Aufsichtsrats: Dr. Ferri Abolhassan
>> Amtsgericht Kaiserslautern, HRB 2313
>>
>>

Received on Friday, 31 March 2023 09:27:54 UTC