Re: Interest in a zoom meeting on safe AGI?

The solutions I sought to advance back then were far more about semantic
labelling, overall.

Semantics on the web, at the moment, often have a lot of problems. It's been
a difficult area to address.

If wallets were issued to things rather than people, there would be some
ecosystem differences; but I consider this approach to be different from
others that many have been so passionate about over the years...

FWIW also,

I'm assuming support for spatiotemporal relations, alongside logic
programming and computational geometry, so as to structure the inputs that
may then be employed by neural-network-related tools, etc.
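As a rough illustration of the kind of preprocessing I mean — purely a sketch, where the event records, field names, and feature layout are all illustrative assumptions, not any existing system — spatiotemporal relations could be derived symbolically and then flattened into feature vectors for a neural network:

```python
from dataclasses import dataclass
import math

@dataclass
class Event:
    # Hypothetical event record: a label plus where and when it occurred.
    name: str
    x: float      # position, e.g. metres east
    y: float      # position, e.g. metres north
    start: float  # start time, seconds
    end: float    # end time, seconds

def temporal_relation(a: Event, b: Event) -> str:
    """A tiny subset of Allen's interval relations, as symbolic logic."""
    if a.end < b.start:
        return "before"
    if b.end < a.start:
        return "after"
    return "overlaps"

def features(a: Event, b: Event) -> list[float]:
    """Flatten symbolic and geometric relations into an NN input vector."""
    rel = temporal_relation(a, b)
    distance = math.hypot(a.x - b.x, a.y - b.y)   # computational geometry
    gap = max(b.start - a.end, a.start - b.end, 0.0)
    # One-hot encode the symbolic relation, then append numeric features.
    return [float(rel == "before"), float(rel == "after"),
            float(rel == "overlaps"), distance, gap]

breakfast = Event("breakfast", 0.0, 0.0, 0.0, 600.0)
commute = Event("commute", 3.0, 4.0, 700.0, 2500.0)
print(features(breakfast, commute))  # [1.0, 0.0, 0.0, 5.0, 100.0]
```

The point is only that the symbolic layer (interval logic, geometry) does the relational work, and the neural network consumes the resulting vectors.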

That isn't really possible when people are defined by wallets, which are
better suited to thin-client models and seemingly have different
mainframe/cloud-side requirements.

I don't know how a personal ontology can be supported via wallet-mode
clients.

Therefore human centric --> AI.

But I do understand you're working on many different use cases / end-user
paradigms.

FWIW, whilst it's got nothing to do with my W3C work, etc., I wrote this
recently:

https://docs.google.com/document/d/1Vl446ylY4qzJs0rN8SaKhWMPNTMAYgA3g6XN6s_dk34/edit?usp=drivesdk

If people are able to bet on whether a claim on a website will end up being
true or false, and entity relations enable tracking of authors, etc...

You could have awards for the most accurate journalist and for whoever
produced the most fake news... That, alongside the economic mechanisms, may
well help sort out the fake-news problem via human factors, given that
other, more faithful attempts have historically failed... But I'm also not
sure whether I'll pursue it.
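As a sketch of how such an accuracy ranking might be computed once claims resolve — the author names, data, and function are entirely hypothetical, assuming each published claim can be matched back to its author via entity relations:

```python
from collections import defaultdict

# Hypothetical resolved claims: (author, claimed_true, actually_true).
resolved_claims = [
    ("alice", True, True),
    ("alice", False, False),
    ("alice", True, False),
    ("bob", True, False),
    ("bob", False, True),
]

def accuracy_by_author(claims):
    """Fraction of each author's claims that resolved as stated."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for author, claimed, actual in claims:
        total[author] += 1
        if claimed == actual:
            correct[author] += 1
    return {a: correct[a] / total[a] for a in total}

scores = accuracy_by_author(resolved_claims)
# Rank best to worst: the top of the list earns the "most accurate" award,
# the bottom the "most fake news" one.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # alice ≈ 0.67, bob = 0.0
```

The betting odds themselves could weight these scores, but a plain hit rate is enough to show the mechanism.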

I'm looking into how to add webcredits / rwwcoin-like tooling to the web
extension.

Then I'll figure out how to provide hooks for developers. As noted, there's
a bunch of safety-protocol work that's a priority, to make systems that
reduce intermediaries in human relationships safe.

Tim.h.


On Wed, 8 Nov 2023, 8:29 pm Dave Raggett, <dsr@w3.org> wrote:

> Reverting to public-cogai only to avoid cross posting ...
>
> On 8 Nov 2023, at 10:13, Timothy Holborn <timothy.holborn@gmail.com>
> wrote:
>
> Re: fake news, I did this back in 2017
>
>
> Current generative AI is now very much better than then, and can be
> designed to understand text, images and a variety of other media formats.
> Training such a system to recognise disinformation and inflammatory content
> is non-trivial, and it will be expensive for social media companies to run
> this on all posts.
>
> This is why the discussion should be focused on how to pressure
> governments to regulate to force social media companies to introduce and
> maintain such defences.
>
>
> On Wed, 8 Nov 2023, 7:41 pm Dave Raggett, <dsr@w3.org> wrote:
>
>> Based upon the responses, I think we are better off sticking with email
>> at least for now.
>>
>> I am surprised that more attention hasn’t been given to applying AI to
>> combat disinformation and inflammatory content on social media, which seems
>> to be the biggest threat to society right now after climate change.  Social
>> media companies probably need regulations imposed on them to make this work
>> and those regulations will only happen if people make a fuss and lobby for
>> them.
>>
>
> Dave Raggett <dsr@w3.org>

Received on Wednesday, 8 November 2023 10:51:11 UTC