Re: Deep Fakes, Phishing & Epistemological War - how we can help combat these.

> On 5 Jun 2019, at 11:39, Charles 'chaals' (McCathie) Nevile <chaals@yandex.ru> wrote:
> 
> On Wed, 05 Jun 2019 10:54:15 +0200, Graham Klyne <gk@ninebynine.org> wrote:
> 
>> On 04/06/2019 09:37, Henry Story wrote:
>>> In a recent article on Deep Fakes in the Washington Post,
>>> Assistant Prof. of Global Politics Dr. Brian Klaas, University
>>> College London, wrote
>>> "You thought 2016 was a mess? You ain't seen nothing yet."
>>> https://www.washingtonpost.com/opinions/2019/05/14/deepfakes-are-coming-were-not-ready/
>>> 
>>> ... There is no turning back this technology, and this will bring us
>>> back to a pre-photographic world, where trust in the coherence and
>>> authorship of a story is all we have to go by for believability.
>> 
>> This, from Tim Bray?:
>> 
>> "How about camera companies install a signed cert on each device, and the device signs each photo/video-clip before saving? #TruthTech"
>> 
>> -- https://twitter.com/timbray/status/942176960632971264
> 
> In a world with a couple of billion existing cameras, where "citizen journalism" is the answer to the capture of advertising away from organisations that provide trusted information, that seems unrealistic.

I agree, this type of answer would require huge overhead in laws, processes,
and more to be put in place, so it is not something we can expect to arrive soon. And,
as I argued, it would in fact require an institutional web of trust to deploy it in an
open manner globally in the first place.
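To make the signing idea concrete, here is a minimal sketch of what a camera attesting to its own captures might look like. Everything in it is an assumption for illustration: a real deployment would use an asymmetric per-device key whose public half is certified by the manufacturer, whereas here an HMAC over a shared secret stands in for the signature so the sketch runs with the Python standard library alone.

```python
import hashlib
import hmac

# Hypothetical per-device secret, provisioned at manufacture (assumption;
# a real scheme would use an asymmetric key pair plus a certificate chain).
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def sign_capture(image_bytes: bytes) -> bytes:
    """Signature the camera would attach to the file at capture time."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Check that the bytes are exactly what the device signed."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

photo = b"...raw sensor output..."
sig = sign_capture(photo)
assert verify_capture(photo, sig)             # the untouched file verifies
assert not verify_capture(photo + b"x", sig)  # any alteration breaks it
```

Even this toy version shows the core property under discussion: the signature binds the file to the capturing device, so any post-capture change is detectable, which is exactly why legitimate editing becomes a problem, as the next point raises.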

> 
> When the deployment of cameras that do sign content happens at sufficient scale, will we be doomed to a world of unedited videos?

That indeed adds a complicated wrinkle to the simple cryptographic "NotFake"
signature type of answer. One would need to require photo-retouching software
to add a signature for each type of alteration applied to a photo or video!
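One way to picture that requirement is as a chain of edit records, where each retouching operation commits to the hash of the previous state. This is only a sketch under assumed names (the functions and record fields are invented for illustration, and real provenance systems would additionally sign each entry with the editing tool's key), but it shows why tampering with any intermediate step becomes detectable:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the start of the chain

def record_edit(chain: list, operation: str, image_bytes: bytes) -> list:
    """Append a record of one retouching operation (hypothetical format).

    Each entry commits to the previous entry's hash and to the image
    state after the edit, so the full edit history can be replayed.
    """
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    entry = {
        "op": operation,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

def chain_is_intact(chain: list) -> bool:
    """Verify every link points at the hash of its predecessor."""
    prev = GENESIS
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

history = []
record_edit(history, "capture", b"raw")
record_edit(history, "crop", b"raw-cropped")
record_edit(history, "colour-balance", b"raw-cropped-balanced")
assert chain_is_intact(history)
history[1]["op"] = "face-swap"       # silently rewriting history...
assert not chain_is_intact(history)  # ...breaks the chain
```

The point of the sketch is the trade-off raised above: honest edits stay verifiable only if every editing tool cooperates by extending the chain, which is a large demand on the software ecosystem.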

> Or will we just do the same as with other information, and base our trust decisions on what we already believe?

I am quite willing to trust statements that go against my previous beliefs if
the information comes from people at institutions that have the processes
and knowledge to make such statements. And I don't think I am alone.
To take a drastic example, every day (too) many people discover that they
have cancer, AIDS, or other diseases that require them to fundamentally
rethink their lives. Information theory teaches us that the value of information
lies, in large part, in how "surprising" it is.

At present the problem is that there is, in general, no way for us to tell what
type of institution stands behind the web site we are getting information from. This is
what makes phishing so easy, as a small change to a URL can fool most of
us. It is easy to make web sites that look like those of existing news agencies,
universities, churches, or even governments. Such institutions used to have large buildings
that could not be built overnight and that were visible to all and recognizable.
If we are to have knowledge, and not end up with the web as a dream machine
connecting subconscious prejudices, we need to be able to build the infrastructure
that allows people to recognize real institutions and the people working for them.

Regarding the web as a dream machine, it is worth reading the recent talk
given by Pierre Bellanger to the French army's cyber division,
"The Internet is the Rainbow Serpent",
which develops the problem of deep fakes.
https://pierrebellanger.skyrock.com/3322616570-Internet-est-le-serpent-arc-en-ciel.html

His vision is, I think, too dark, because he does not consider that
we could change the game with an institutional web of trust
built on open standards, a light addition to the current web.

(A very good online translator is https://www.deepl.com/translator )

Henry

> 
> cheers
> 
> -- 
> Using Opera's mail client: http://www.opera.com/mail/
> 

Received on Wednesday, 5 June 2019 20:51:00 UTC