Re: Lived experience

On Fri, 21 Oct 2022, 7:34 pm Dave Raggett, <dsr@w3.org> wrote:

>
> > On 20 Oct 2022, at 19:20, Timothy Holborn <timothy.holborn@gmail.com>
> > wrote:
> >
> > Can we look to build a model that seeks to constantly explain the use
> > of a human subject's time (work, as distinct from recreation or sleep,
> > whilst extensible)?
> >
> > It might take into consideration intent vs. outcome, and plausible
> > reasoning as to why.
>
> That could be massively abused by authoritarian regimes seeking to spy on,
> and control, their subjects’ behaviour.


Well, then they'd have to explain the use of their time in a court of
law...

> However, in everyday conversations, people use their understanding of each
> other to guide what they say, something that is described by Grice’s
> conversational maxims.
>
> Before we get there, we first need working implementations of language
> understanding, reasoning and learning.


> My work on plausible reasoning is a step in that direction, and I hope to
> next show how it can be extended to demonstrate simple language
> understanding and common sense reasoning.
>

I'm still working on the values stuff, although I feel decidedly
overwhelmed at times. I'm fairly sure AI will be run by previously passive
artificial entities (corporations / government / military), and that the
near future of "democracy" is doomed due to greedy fools; I don't think
it's capable of supporting peace.

I think your work is important, but I do fear we're doomed. Taking
responsibility for harmful behaviours is generally discouraged,
particularly for those who are wealthy or acting in a senior capacity for
large entities.

Personally, I wanted to ensure that child abusers involved in the
post-family-separation industry operated in an environment that would
produce evidence for courts; but the work on verifiable claims seemingly
sought to ensure those sorts of use cases were not part of what was
achieved, which in turn demonstrates the prevailing ideology about whose
interests are preferentially supported.

These sorts of things should be easily identifiable by "artificial minds";
as such, whom will they serve?

Anyhow, I'm not sure it's feasible to protect work in the interest of
ensuring it's not used for evil, so perhaps it's better to build the evil
use cases - or is that consideration already well established?

How do we store records about how evil robots ruin people's lives, or is
that discouraged? Starting with autocorrect, and leading to situations of
documents being rewritten to record different versions of events, and / or
data loss...

Totalitarian AI tyranny, led by slaves...

But even then, tracking what they've been doing during the day would still
be interesting, even if the concept of law becomes decommissioned / made
into a historical concept. AFAIK, that would breach W3C licensing rules.
How many people in the world are legal aliens anyway...

There are no "values" available for people to add to electronic contracts;
only "qualifications", payment instruments, etc.

I think the idea that there's some sort of ethical status that regularly
exists and is worth protecting today is a fallacy.

It seems to me the most honest form of new media is memes, probably because
they've not started blocking them as "fake art" yet.

Sorry for the whinge.

Yet there's this idea that we can get a good outcome for humanity by
deciding that all anyone should reasonably own, for their own private
means, is a few public keys; and that this is somehow going to lead to any
capacity to do good.

Apparently all blockchains are now "web", go figure...  They must all be so
honest.

In any case, if we could work on more demos / practical examples, that
would be good. At this stage, maybe illustrating the ramifications of the
otherwise hidden evil might be a faster path to good, notwithstanding how
it makes my heart sink, particularly given my view of how successful any
similar strategies have historically been for others.

Kind regards & best wishes,


Timothy Holborn.




> Dave Raggett <dsr@w3.org>

Received on Saturday, 22 October 2022 10:27:08 UTC