Re: Interest in a zoom meeting on safe AGI?

Re: fake news, I did this back in 2017

https://drive.google.com/file/d/1fvrYlTYUCYQMjW4pTel6QOIvJ03GtQT7/view?usp=drivesdk

But I've been focused on trying to create some basic preliminary docs
suggesting a review / creation of a new instrument to better consider the
importance of ICT from an international humanitarian law perspective.  A lot
has changed since the Geneva Conventions, etc.

https://docs.google.com/document/d/1215uqSsLbj3KACqXZIPSI-SQbg43AQ4VyixMK9K2InE/edit?usp=drivesdk

If the internet is simply turned off, the implications could be significant.
Yet designing solutions that have a humanitarian-only mode, if required to
ensure tooling for peace / humanitarian purposes only, would seemingly
require specific designs... and then also, means to support it.  There's a
lot to do to achieve the SDGs; what if something happens and we don't have
any way to work with humanitarian technology professionals to advance
development of the peace infrastructure projects we need to do, due to
where they were born?  What if the only way ICT (inc. AI) works means that
there are few options other than to turn it off, irrespective of the costs,
due to design issues...

Broadly otherwise,

The theory behind the Human Centric AI stuff is to define "safety protocols"
that in turn provide a means to address various forms of "social attack
vectors".

IMO: a problem has been that it's difficult to get into sensitive use cases
(or their repercussions) regarding some of the most important issues to
address, via public mailing lists.

Here is a recent work providing a relatively innocuous use case,

https://youtu.be/_C8ru8DVwHg?si=zYVnCyUzUMKalIn-

of a problem that has some horrific associated issues.  Modern slavery has
many very significant associated problems, far beyond the economic factors
alone.

My hope is that an ISOC Human Centric AI SIG will be set up to help address
these production requirements; however, there are some great topics for
members to pick from, so we will have to see what happens...

A note about it (and a video of me):

https://lists.w3.org/Archives/Public/public-humancentricai/2023Nov/0005.html

Regardless, I would like to make progress on the safety protocols stuff via
the Human Centric AI CG...

But I think it might be important to bake a solid, basic "social web" into
the browser via an extension, so as to illustrate, whilst only in a limited
way, the difference between the "human centric" approaches and what people
are otherwise familiar with.

https://github.com/WebCivics/SocialWeb-WebExtensionDev-v4
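
Purely as a sketch of what that could look like (not taken from the linked
repo; the selectors, URLs and names below are all hypothetical), a
WebExtension content script in TypeScript might decorate posts in an
existing feed with links back to a user-controlled profile:

  // Illustrative content script; selectors and URLs are placeholders.
  const POST_SELECTOR = "[data-post-id]";

  // Hypothetical resolver: maps a platform handle to a user-controlled profile.
  function resolveHumanCentricProfile(handle: string): string {
    return `https://example.org/profiles/${encodeURIComponent(handle)}`;
  }

  function annotatePosts(): void {
    document.querySelectorAll<HTMLElement>(POST_SELECTOR).forEach((post) => {
      if (post.dataset.hcAnnotated === "true") return; // avoid double annotation
      const handle = post.dataset.authorHandle ?? "unknown";
      const link = document.createElement("a");
      link.href = resolveHumanCentricProfile(handle);
      link.textContent = "view human-centric profile";
      post.appendChild(link);
      post.dataset.hcAnnotated = "true";
    });
  }

  // Re-run as the feed lazily loads more posts.
  new MutationObserver(annotatePosts).observe(document.body, {
    childList: true,
    subtree: true,
  });
  annotatePosts();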

FWIW: I don't see a future where there's no work.  Perhaps far less paid
work, but that's a different problem, and one that most certainly also
relates to both safety & socio-economics.

With respect to the "values credentials" (which are a form of safety
protocol), I would be very interested to learn how CogAI could be applied
supportively to UN instruments.
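
As a minimal sketch of what such a credential might carry (the shape below
is purely illustrative, not a defined format; the identifiers are made up),
in TypeScript:

  // Hypothetical shape for a "values credential": a recorded commitment to
  // clauses of a UN instrument, with the time it was asserted.
  interface InstrumentClause {
    instrument: string;  // e.g. "Universal Declaration of Human Rights"
    article: string;     // e.g. "Article 4" (prohibition of slavery and servitude)
  }

  interface ValuesCredential {
    subject: string;                // who is making the commitment
    commitsTo: InstrumentClause[];  // the clauses they commit to uphold
    assertedAt: string;             // ISO 8601 timestamp of the claim
  }

  const example: ValuesCredential = {
    subject: "did:example:alice",   // hypothetical decentralised identifier
    commitsTo: [
      { instrument: "Universal Declaration of Human Rights", article: "Article 4" },
    ],
    assertedAt: new Date().toISOString(),
  };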

I think this is one of the works that I hope to advance soon, so as to
support the use of these credentials when people are defining their
relationships with one another.

I did some example code-of-conduct work, quickly (it needs a lot of work),

https://github.com/WebCivics/isoc-hc-ai-sig-prep/tree/main/ontology

Whilst noting that there are temporal differences between systems that cast
judgements based upon claims vs. those designed to take into consideration
(and perhaps also calculate the costs of) cases where assumptions are later
found to be incorrect, i.e. in a court of law...

This has meaningful relationships with human rights and, moreover, with
behaviour and social equity characteristics & related considerations...
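
To illustrate the temporal point above (all types and names here are
hypothetical), a system could record when a claim was asserted and, if it
is later found incorrect, estimate what relying on it cost in the interim:

  type ClaimStatus = "asserted" | "upheld" | "found-incorrect";

  interface Claim {
    statement: string;
    assertedAt: Date;
    status: ClaimStatus;
    resolvedAt?: Date;  // set once adjudicated, e.g. by a court
  }

  // Cost of having relied on a claim between assertion and the finding that
  // it was incorrect; zero if it was never found incorrect.
  function costOfIncorrectAssumption(claim: Claim, costPerDay: number): number {
    if (claim.status !== "found-incorrect" || !claim.resolvedAt) return 0;
    const days =
      (claim.resolvedAt.getTime() - claim.assertedAt.getTime()) / 86_400_000;
    return days * costPerDay;
  }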

Facts should stand the test of time, without undue interference, etc...  As
noted, sensitive use cases are difficult to process on public lists,
particularly those pertaining to human rights related issues &
considerations.

Safety and support for human rights are a top priority for me; that was the
underlying reason for me to start getting involved in W3C work.

Still, there's a lot to do. But also a lot done.

Best wishes.

Tim.H


On Wed, 8 Nov 2023, 7:41 pm Dave Raggett, <dsr@w3.org> wrote:

> Based upon the responses, I think we are better off sticking with email at
> least for now.
>
> I am surprised that more attention hasn’t been given to applying AI to
> combat disinformation and inflammatory content on social media, which seems
> to be the biggest threat to society right now after climate change.  Social
> media companies probably need regulations imposed on them to make this work
> and those regulations will only happen if people make a fuss and lobby for
> them.
>
> Further out, I would like to see more discussion on how AI can boost
> worker productivity. Elon Musk’s recent proclamation that AI would take
> over all jobs wasn’t helpful.  Talk about a fixed universal income is
> likewise naive and ignores human values.
>
> On 1 Nov 2023, at 15:07, Dave Raggett <dsr@w3.org> wrote:
>
> I would like to gauge the level of potential interest in scheduling a call
> on safe AGI.  This would be recorded and shared publicly after the event.
> Please let me know if you are interested in attending and what your time
> zone is.
>
>
> Dave Raggett <dsr@w3.org>
>
>
>
>

Received on Wednesday, 8 November 2023 10:13:37 UTC