Re: Deep Fakes, Phishing & Epistemological War - how we can help combat these.

> On 5 Jul 2019, at 19:07, adasal <> wrote:
> Yes. So two angles. 
> The “what is truth” angle and the UN of the internet. 
> I’m beginning to think the former may be easier. 

“What is truth?” is a philosophical question that has some pretty good answers
by now, though it keeps being worked on. (The ones I like most are those of Lewis,
which tie truth to possibilities and so to dreaming.) We can assume there is truth,
even if getting an overview of the different ways of thinking about it is a huge
project. In any case, it is not one we need to explore here.

As for the Web of Nations — which is not a UN but a decentralised mesh of nations
based on Linked Data, whose members ‘only’ need to agree on some ontologies and
understand the Linked Data publication mechanisms, so that these can be used by
browsers, operating systems, and software — things are clearly progressing, as
witnessed by the discussion here.
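To make the idea concrete, here is a minimal Python sketch of how a nation's Linked Data registry could back (or fail to back) a web site's claimed legal space. All URIs and predicate names here are hypothetical illustrations, not real vocabularies:

```python
# Sketch of the "Web of Nations" idea: each nation publishes a Linked
# Data registry of the institutions it recognises, and a client
# cross-checks a site's claimed legal space against that registry.
# URIs and the "ex:registeredIn" predicate are invented for illustration.

# A nation's registry, as a set of (subject, predicate, object) triples.
NATION_REGISTRY = {
    ("https://bank.example", "ex:registeredIn", "https://nation.example/registry"),
}

# What each web site claims about itself (e.g. in its served Linked Data).
SITE_CLAIMS = {
    "https://bank.example": "https://nation.example/registry",
    "https://phishy.example": "https://nation.example/registry",  # false claim
}

def claim_is_backed(site: str) -> bool:
    """A site's claim counts only if the registry it points to also lists it."""
    registry = SITE_CLAIMS.get(site)
    if registry is None:
        return False
    return (site, "ex:registeredIn", registry) in NATION_REGISTRY

print(claim_is_backed("https://bank.example"))    # backed by the registry
print(claim_is_backed("https://phishy.example"))  # claim is not backed
```

The point of the sketch is the direction of verification: a phishing site can claim anything about itself, but it cannot make a nation's registry vouch for it.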

The first people who need to understand, or at least be aware of, the concept are
the semantic web people. The next are institutions, and the existence of a
Linked Data version of <> shows that things are moving
along there too. <>

The next step is making political scientists aware of this techno/geopolitical
system, and then getting governments involved through their cyber-security
branches, which are desperate for answers, in a way that lets them understand
how important the technologies already standardized at the W3C are to this effort.

The technical side is quite simple, but the fact that it has not been done indicates
an inability to think through the organology of political societies: the role of
institutions, what nations are, how they relate to one another, and what all this
has to do with technology, especially writing (of which the semantic web is the
latest transformation), and epistemology.

Philosophers like Bernard Stiegler, who put technology at the center of their
philosophy, may be very helpful in creating a conceptual frame in which this can
be thought. He is currently working on the notion of the “Internation”, as
developed by Marcel Mauss in the 1930s, which tries to think the local and the
global and their relation. <>
They are actually planning to present a “Memorandum of Understanding” to the UN
in 2020.

In the English tradition, Prof. Nigel Shadbolt, who co-wrote the book “The Digital Ape”
and runs the ODI, also takes the techno/philosophical aspects seriously. So does
Kieron O’Hara.

It’s a process that will be very large and go beyond any single individual, crossing
philosophical boundaries (e.g. between the analytic and continental traditions),
as well as countries, cultures, etc…

I know there are people here who have experience getting big projects like this
going. Perhaps they can contact me and teach me how to do this, or let me know
how I can help out. 


> Any luck with the latter?
> Adam
> On Fri, 5 Jul 2019 at 09:06, Henry Story < <>> wrote:
>> On 5 Jul 2019, at 05:37, Paola Di Maio < <>> wrote:
>> What I want to say Henry
>> is that misrepresentation of truth is already widespread, and takes many forms
>> (as in advertising and propagandas of all sorts)
> There are rules in the UK against false advertising, and in many other countries.
> If you analyze what adverts say, you will notice they rarely say something false;
> rather, they try to build projections of how their product can help.
> Propaganda is also a form of projection into an idealized future.
> I’d need to study those.
>> this technology makes it easier and can spread misrepresentation faster at a mass
>> information level; but the underlying need for fact checking, understanding data bias,
>> interpretation and context, and scratching beyond the surface of information are
>> historical issues that existed well before this new capability.
> Each new technology requires new structures to be put in place to regulate its
> healthy use. These regulatory structures must come after the appearance of 
> a technology, as it is difficult to legislate in anticipation of something that is new
> and so not quite known (and usually not even taken seriously, as few are
> good at futurology).
> The problem with internet regulation is that it goes beyond the national and so
> lacks strong enforcement rules. After all, the danger of nations imposing their
> regulations outside of their borders is that if everyone does this there will
> be conflicting laws, and so conflicting judgements, and so more and more 
> reason for conflict.
> Instead of trying to go for one World Order, I suggest it would be better to
> make it possible for actors to make visible their ties to legal spaces under which
> they fall in order to allow good actors to distinguish themselves from those that
> do not want to make themselves responsible, and for these legal spaces to be
> diplomatically tied together in a web of nations, which could change as alliances
> change.
>> PDM
>> On Fri, Jul 5, 2019 at 11:20 AM Paola Di Maio < <>> wrote:
>> Thanks Henry 
>> for the extensive reply
>> >>>The mayhem appears when things
>> >>>are published as true by sites that look like official ones.  
>> But this happens already, quite a lot, everywhere
>> From political websites to institutional and even scientific sources. (A lot of gibberish
>> that nobody understands, nobody can reproduce or verify, and nobody has the time
>> to investigate further is published as science and taken as fact; it is only when someone,
>> often by chance, stumbles across some issue that certain things come to light.)
>> Official websites are full of lies or partial truths
>> Even omission of facts is a misrepresentation of truth
>> Telling the truth is actually perceived as silly and is stigmatized
>> (people are ridiculed when they tell things as they are, so there is great
>> fear in telling the truth). So I would say it is first and foremost a cultural thing
>> that we implicitly trust information that comes from institutions.
>> But institutions have hidden agendas and use information not for information’s
>> sake, to make people more knowledgeable, but to influence and stir
>> opinions and behaviours
>> p
>> On Thu, Jul 4, 2019 at 10:09 PM Henry Story < <>> wrote:
>> On 4 Jul 2019, at 10:33, Paola Di Maio < <>> wrote:
>> > Reality is so manipulated (at all levels) that humans have lost (maybe
>> > never had) the ability to understand what is real beyond doubt.
>> That is actually the subject of epistemology, which comes in two parts: 1)
>> the problem of definition: what is knowledge? 2) the sceptical problem:
>> how can we know anything, given that we can always find reason to doubt?
>> Knowledge was defined by Socrates, according to the reports by Plato, as
>> justified true belief. More than two thousand years later, after the development
>> of modern quantified logic with Frege and Russell/Whitehead, the question
>> became how to find a logically necessary and sufficient definition of knowledge.
>> This led to the well-known problems identified by the American philosopher
>> Edmund Gettier <>
>> Around the same time, modal logic came to have a mathematical formalisation,
>> and Hintikka used this to define:
>>  S knows that P iff
>>    in all the worlds compatible with the information S has, P is true.
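That definition can be sketched as a toy Kripke-style model in Python; the worlds, the compatibility relation, and the proposition are all invented for illustration:

```python
# Toy model of Hintikka's definition: S knows that P iff P is true in
# every world compatible with the information S has.

WORLDS = {"w1", "w2", "w3"}

# For each actual world, the worlds S cannot rule out given S's information.
COMPATIBLE = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w1", "w2", "w3"}}

def knows(actual_world, proposition):
    """S knows `proposition` at `actual_world` iff it holds in every
    world compatible with S's information there."""
    return all(proposition(w) for w in COMPATIBLE[actual_world])

# The proposition P: "we are not in w3".
p = lambda w: w != "w3"

print(knows("w1", p))  # S's information rules out w3, so S knows P
print(knows("w3", p))  # S cannot rule out w3, so S does not know P
```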
>> Robert Nozick, in his award-winning book "Philosophical Explanations", showed
>> that there was a problem with this definition by updating Descartes'
>> Meditations to the science-fiction realm: we could always
>> imagine that aliens from Alpha Centauri had come at night, kidnapped S,
>> attached his brain to a super-alien-computer, and induced in him fake but realistic
>> sense impressions. Since this doubt can always be brought up in that form, or in
>> the more ancient one of dreaming, the question becomes how we can know at 
>> all, since that possibility cannot be excluded.
>> The answer comes by way of David Lewis's later logic of counterfactuals,
>> which organises possible worlds by a distance relation. Redefining knowledge
>> using counterfactuals, as Nozick does, it turns out that one does not need 
>> to consider the more distant and outrageous possible worlds in order to know
>> some everyday fact, such as how much money one has in one's pocket.
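The counterfactual move can be sketched by extending the toy model with a distance relation: only nearby worlds where P fails are allowed to defeat knowledge. The distances and belief assignments below are made up for illustration:

```python
# Sketch of Nozick-style sensitivity restricted by Lewis's distance
# ordering: in the *closest* worlds where P is false, S does not
# believe P. Remote sceptical worlds are simply too far away to count.

# (world, distance from the actual world, P="S has $10", "S believes P")
WORLDS = [
    ("actual",        0, True,  True),
    ("dropped_coin",  1, False, False),  # nearby: S would notice the loss
    ("alien_deceit", 99, False, True),   # remote sceptical scenario
]

def sensitively_knows(threshold):
    """S knows P (relative to `threshold`) iff in every not-P world
    within the threshold distance, S does not believe P."""
    return all(not believes
               for _, dist, p, believes in WORLDS
               if not p and dist <= threshold)

print(sensitively_knows(threshold=10))   # only nearby worlds count
print(sensitively_knows(threshold=100))  # the sceptic counts every world
```

The threshold parameter plays the role of Lewis's distance relation: everyday knowledge survives as long as the alien world stays outside the relevant range.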
>> I give an overview of that in "Epistemology in the Cloud - on Fake News
>> and Digital Sovereignty" (And if you don't want to read the paper you will
>> find two presentations with slides, one of which I gave at the Chaos
>> Computer Club Vienna's Privacy Week)
>> <> There
>> I add a cloud-computing-related twist to it, leading us to take seriously
>> the locality of information.
>> > Given the vastness of widespread deceit (about news, history, and even science!)
>> > and our limited resources to verify everything that we hear, we need to limit
>> > our fact checking to the strictly necessary facts that support our
>> > decision making. So when I read or hear some fact, I do my best to verify
>> > it is true.
>> Yes, so if you are going to verify the truth of a statement quickly you
>> may need to use the internet to do so.
>> In the pre-internet world, you would do so by finding someone knowledgeable
>> on the subject, which in many cases would be someone educated in the area,
>> or working for a company that is known to be able to make knowledgeable
>> statements on a topic. So you would go to a dentist to get a prescription
>> for your tooth pain, or to get a tooth pulled, not to someone you just met
>> in a bar, even if they can speak very convincingly on the subject. Or
>> you could read a book published by an expert in the area, and that expertise
>> would be verifiable by knowing which institution they were speaking from.
>> Of course if you are a mathematician reading a mathematical proof you would
>> just need to verify the proof for yourself, but you may yet want to filter
>> the things you read by knowing where the person writing things came from.
>> This thinking gets one to understand the role of institutions and legal
>> systems in our claims to knowledge. To make statements in a factual context
>> is to make oneself responsible for what one says, and requires one
>> not to follow up by saying something contradictory. To make a promise
>> requires one to be able to follow up on it, and then to try to follow up,
>> and so limits one's future possible lives to those compatible with one's
>> promises. Entering an institution is to make a certain promise to uphold
>> its values.
>> But the web currently has no useful information about what institution
>> is behind a web site. A little typo, or a click on a phishing link, can
>> land you on a web site that looks very much like the one you were
>> expecting but is a fake. This was very unlikely to happen when buildings
>> in a town gave you a way to recognise the institution you were talking to.
>> That building would in any case mean the presence of people on legally
>> delimited soil.
>> So before the general public can even get around to fact checking, we need
>> to build an institutional Web of Trust (WoT), which can play the role of
>> buildings in local life, by letting people know the legal framework a web
>> site is tied to.  I describe how to do that in the blog post "Stopping
>> (https) Phishing"
>> <>
>> This can be done with Linked Data because we do not require global consensus,
>> and so we can allow different nations to have different points of view
>> on each other, and even on how to map ontologies when disagreements arise.
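The "no global consensus" point can be sketched as pairwise ontology mappings: each nation keeps its own vocabulary and only maps the foreign terms it cares about. The vocabularies and mapping below are invented for illustration:

```python
# Sketch of nation-local ontologies with pairwise mappings. Nation B
# does not need to adopt Nation A's vocabulary wholesale; it only maps
# the terms it recognises, and silently drops the rest.

# Nation A publishes data in its own (hypothetical) vocabulary.
NATION_A_DATA = {("https://bank.example", "a:licensedBank", "true")}

# Nation B's mapping of Nation A's terms into its own vocabulary.
B_MAPS_A = {"a:licensedBank": "b:charteredBank"}

def translate(triples, mapping):
    """Re-express another nation's triples in the local vocabulary,
    keeping only the terms the local mapping covers."""
    return {(s, mapping[p], o) for s, p, o in triples if p in mapping}

print(translate(NATION_A_DATA, B_MAPS_A))
```

A disagreement then simply shows up as a mapping entry one nation declines to add, rather than as a fork of some single world ontology.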
>> > Deepfakes add another layer to that manipulation and falsification of 
>> > reality, by leveraging new technology.  
>> > I see two areas of concern
>> >  a) technology ethics - a fun technology developed 
>> > to animate fictional output is used to falsify reality (making people say 
>> > what they have not) with potentially devastating consequences. This is not 
>> > entirely new: manipulation has always occurred by twisting, falsifying, 
>> > or taking out of context what people may say. Misinformation and 
>> > misrepresentation are less technologically sophisticated, but have 
>> > similar consequences (manipulating public opinion and behaviours). This 
>> > already happened with email. Deepfakes are a progression of spoofing 
>> > tech, where someone fakes another person's email address.
>> Deep fakes are not a problem if they are annotated as fictional. Terminator
>> 1, 2 and 3 did not cause global mayhem, because they appeared in cinemas
>> and were clearly labelled as science fiction. The mayhem appears when things
>> are published as true by sites that look like official ones.
>> > b) the increased value of authenticity, and authentication tech
>> That will be important, especially for allowing private citizens to also
>> make clear which legal space they are speaking from when, say, they publish
>> a photo or film of something happening.
>> > From a systems viewpoint, another layer of risk can be addressed
>> > with another layer of architecture (a strengthened authentication layer?)
>> Yes, we need a new layer, but not the authentication one. We have that
>> already. The domain-name-to-DNS authentication layer does 
>> its job well enough if one uses X509 certificates and DANE on DNSSEC.
>> What is missing is the institutional web of trust that can then be used by the
>> browser to display rich information on a secured screen, such as the Apple
>> Touch Bar, in a seamless but helpful way. The information contained
>> in X509 certificates is much too poor to be of interest, and hence
>> of use.
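The DANE part of this stack is simple to illustrate: a TLSA record in a DNSSEC-signed zone can pin, for example, the SHA-256 digest of the certificate a site is expected to present (matching type 1 over the full certificate). The certificate bytes below are dummies standing in for real DER data:

```python
import hashlib

# Sketch of DANE/TLSA checking: the zone publishes a digest of the
# expected certificate, and the client compares it against the digest
# of the certificate actually presented over TLS.

def tlsa_sha256(cert_der: bytes) -> str:
    """Certificate association data for a TLSA record using
    matching type 1 (SHA-256 over the full certificate)."""
    return hashlib.sha256(cert_der).hexdigest()

# What the (DNSSEC-signed) zone publishes. Dummy bytes, not real DER.
published = tlsa_sha256(b"dummy certificate DER bytes")

def cert_matches(presented_der: bytes) -> bool:
    """Does the certificate presented in the TLS handshake match
    the digest published in DNS?"""
    return tlsa_sha256(presented_der) == published

print(cert_matches(b"dummy certificate DER bytes"))  # genuine certificate
print(cert_matches(b"attacker certificate"))         # substituted certificate
```

This shows why the missing piece is not authentication: the digest check binds a certificate to a domain name, but says nothing about which institution, in which legal space, stands behind that name.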
>> For an example of how this institutional web of trust could be tied to
>> hardware see the blog post "Phishing in Context - Epistemology of the
>> Screen" <>
>> As for authentication of citizens using Verifiable Claims, so that they too can
>> make claims (such as location claims if they were a witness to something): this
>> needs the institutional web of trust in order to work for networks that go beyond
>> a few degrees of separation, since if you go a few more jumps you have the
>> whole world in your network.
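The shape of such a citizen's claim can be sketched with stdlib primitives. Real Verifiable Credentials use public-key signatures and a richer data model; the shared HMAC key here is a deliberate simplification standing in for an issuer's key material:

```python
import hashlib
import hmac
import json

# Toy sketch of an issuer-signed claim: an issuer signs a citizen's
# statement, and a verifier checks that the claim was not altered.
# The shared key is a stand-in for real (public-key) signing material.

ISSUER_KEY = b"demo-issuer-key"  # hypothetical, for illustration only

def sign_claim(claim: dict) -> str:
    """Canonicalise the claim as sorted JSON and sign it."""
    payload = json.dumps(claim, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()

def verify_claim(claim: dict, signature: str) -> bool:
    """Accept the claim only if the signature matches its content."""
    return hmac.compare_digest(sign_claim(claim), signature)

claim = {"subject": "citizen42", "location": "Vienna", "date": "2019-07-05"}
sig = sign_claim(claim)

print(verify_claim(claim, sig))                          # untampered claim
print(verify_claim({**claim, "location": "Oslo"}, sig))  # altered claim fails
```

The signature only tells you the claim is the issuer's; deciding whether to trust that issuer is exactly where the institutional web of trust has to come in.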
>> Henry Story

Received on Friday, 5 July 2019 17:45:19 UTC