Re: Interest in a zoom meeting on safe AGI?

I wrote this a while back; it's still a draft.

https://docs.google.com/document/d/1Fwx3-YYyKgeigaScoMVoTFc3V2p-0jVwOg0IvMr8TZs/edit?usp=drivesdk

It draws on the historical rww/webpayments/creds/(your WoT) works, etc.

The point was about producing the tooling for a local agent: the
ability to run things "at home" with the support of a basic hosting service,
or via a "knowledge bank" in a similar way, but containerised.

So, my theory is that NN models, etc. are effectively extensions of a
comprehensively well-defined semantics stack, plus a bit of vector wizardry
(which I think will also be important for future systems); thereby
improving the feedstock (structured data) into LLMs, etc.
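As a minimal sketch of what I mean by "feedstock" (pure Python; the triples and the function name are purely illustrative, not any particular standard): well-structured semantic data can be serialised into plain-text facts that ground an LLM prompt.

```python
# Illustrative only: subject-predicate-object triples standing in for
# whatever well-defined semantic data the local agent holds.
triples = [
    ("alice", "worksFor", "ExampleCo"),
    ("alice", "knows", "bob"),
    ("ExampleCo", "locatedIn", "Sydney"),
]

def triples_to_context(triples):
    """Render triples as one fact per line, ready to prepend to a prompt."""
    return "\n".join(f"{s} {p} {o}." for s, p, o in triples)

context = triples_to_context(triples)
print(context)
```

The point of the sketch is only that the structured layer, not the model, is the source of truth; the LLM consumes it as grounded context.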

So, the objective is that it should largely run locally on a laptop or
similar.  If there's a networked box in the house, great, and corporate
systems would have more grunt; but it should also support the separation
between entity relations & roles, etc...  rww style...

Expert systems could be produced for businesses of different kinds (i.e.
like NeXT), but it all works better if human agency is supported first;
considered in two constituencies:

Selfhood = like a personal "vault".
Personhood = social informatics, including a permissive commons (DLT-structured
material that is the property of the "commons" relationships, which
could be very broad, or could be the informatics pertaining to a personal
intimate relationship, etc.).

The network grabs components when needed, depending on what the user is doing.

I made this in December 2022,

https://docs.google.com/presentation/d/1nFZUVL3Uh4zB82F3ViU189Rz05ZPKmtZU3tDq4w8OZQ/edit?usp=drivesdk

Then I ended up getting involved in the W3C work, etc., as the human rights
"values credentials" were needed, along with support for languages in a
different way; so I didn't want to make proprietary solutions that would
interfere with my purpose, which is fundamentally about supporting human
rights as a basis upon which we can produce the tooling to support moral
socio-economic development, etc.

So, fundamentally, I think the approach I've been seeking to pursue is
different to the wallet/identity approach, while understanding that there
needs to be interoperability.

Re: media, note also,

https://www.mico-project.eu/portfolio/sparql-mm/

But part of the rationale for investigating how HDF5 can be used is
that it doesn't need the whole container to be loaded into memory, and it
can store all sorts of things...
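For example, a rough sketch with h5py (the file name and dataset layout are just illustrative) showing that a single slice can be read back without loading the whole container:

```python
import h5py
import numpy as np

# Illustrative only: write a container holding a matrix plus an attribute.
with h5py.File("vault.h5", "w") as f:
    dset = f.create_dataset(
        "embeddings", data=np.arange(1_000_000, dtype="f4").reshape(1000, 1000)
    )
    dset.attrs["source"] = "example"

# Reopen it and read a single row; h5py pulls only that region from disk,
# so the dataset as a whole never has to fit in memory.
with h5py.File("vault.h5", "r") as f:
    row = f["embeddings"][42, :]   # partial read of one row
    print(row.shape)               # (1000,)
```

The same container can also hold images, audio, metadata attributes and so on alongside the arrays, which is part of the appeal.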

Noting lastly that, whilst I have particular designs that I'm pursuing
due to logic (IMO), if there are better solutions, great 👍

But what the Human Centric Communities seek to do isn't up to me.  I'm
just putting in my view of how an alternative ecosystem for managing our
"thoughtware" can be produced, etc.  I need to focus more on
implementation work than the other stuff that's taken up a lot of time this
year.

Although I was pleased to hear the World Bank speak about supporting human
rights.

https://youtube.com/clip/UgkxNNoH3uluw9VWVmwNrcLTkSXp6l4gCjQy?si=fHKV2g0S4DQEdDP8

Budget is difficult; I don't have much in terms of resources, and the
struggles of poverty are a continuous problem.

But I do hope this helps.  I hope to employ your CogAI works in the
proposed outcomes. But there are a lot of people who don't understand
anything about the old meaning of Web 3.0, rww, etc..

Also sad about bblfish, although he is amongst others.

I think there needs to be an update from FOAF. I was hoping for natural
language ontologies.

Hope this helps.

Tim.H.🙏


On Wed, 8 Nov 2023, 9:31 pm Dave Raggett, <dsr@w3.org> wrote:

> Hmm, what we can do now is a lot more sophisticated than you seem to
> imply.  LLMs can be trained to recognise harmful content and to describe
> what’s wrong. Disinformation is harder, as it involves fact checking. This
> can be addressed by connecting the LLM to external services, e.g. using
> retrieval augmented generation (RAG). Images, audio and video are harder to
> deal with, but this is more a matter of effort than of technical barriers.
>
> A further challenge is privacy. This is where it is better to be able to
> execute the checks locally without having to send personally sensitive
> information to the cloud.  LLMs can be distilled to run on local systems,
> and I would see this as an important area of research.
>
> The UK government recently sought to require social media companies to
> prevent harmful content from being seen by children, but there was a lot of
> push back by the companies and privacy folks. This requires a) governments
> to impose regulatory burdens on social media companies, b) to encourage
> research on privacy preserving solutions and c) to educate the public to
> understand how their privacy and safety is to be ensured.
>
> On 8 Nov 2023, at 10:50, Timothy Holborn <timothy.holborn@gmail.com>
> wrote:
>
> The solutions sought to be advanced back then were far more about semantic
> labelling, overall.
>
> Semantics on the web ATM often have a lot of problems.  It's been a
> difficult problem to address.
> ...
>
> On Wed, 8 Nov 2023, 8:29 pm Dave Raggett, <dsr@w3.org> wrote:
>
>> Reverting to public-cogai only to avoid cross posting ...
>>
>> On 8 Nov 2023, at 10:13, Timothy Holborn <timothy.holborn@gmail.com>
>> wrote:
>>
>> Re: fake news, I did this back in 2017
>>
>>
>> Current generative AI is now very much better than then, and can be
>> designed to understand text, images and a variety of other media formats.
>> Training such a system to recognise disinformation and inflammatory content
>> is non-trivial, and it will be expensive for social media companies to run
>> this on all posts.
>>
>> This is why the discussion should be focused on how to pressure
>> governments to regulate to force social media companies to introduce and
>> maintain such defences.
>>
>>
>> On Wed, 8 Nov 2023, 7:41 pm Dave Raggett, <dsr@w3.org> wrote:
>>
>>> Based upon the responses, I think we are better off sticking with email
>>> at least for now.
>>>
>>> I am surprised that more attention hasn’t been given to applying AI to
>>> combat disinformation and inflammatory content on social media, which seems
>>> to be the biggest threat to society right now after climate change.  Social
>>> media companies probably need regulations imposed on them to make this work
>>> and those regulations will only happen if people make a fuss and lobby for
>>> them.
>>>
>>
> Dave Raggett <dsr@w3.org>

Received on Wednesday, 8 November 2023 11:59:39 UTC