W3C home > Mailing lists > Public > semantic-web@w3.org > June 2019

Re: Deep Fakes, Phishing & Epistemological War - how we can help combat these.

From: Henry Story <henry.story@bblfish.net>
Date: Wed, 12 Jun 2019 08:00:01 +0200
To: semantic-web <semantic-web@w3.org>
Message-Id: <73B70A51-DED7-4B86-BEC3-763401184E42@bblfish.net>
Just yesterday Vice published an article on DeepFakes with two example
videos posted to Instagram:

1. Mark Zuckerberg saying something about how Spectre showed him that whoever
controls the data controls the future.

2. Kim Kardashian saying that she got rich because of Spectre, and how she loves
to manipulate people online for money.

https://www.vice.com/en_us/article/ywyxex/deepfake-of-mark-zuckerberg-facebook-fake-video-policy

The videos are very realistic.
The article makes clear the context in which these need to be interpreted, namely as fiction/deep-fake.
This shows that there can be legitimate reasons to publish such videos: namely to make people
aware of their existence.

How could one make the publishing of deep-fake content, and other fictional content or data for that
matter, allowable? We can't live without fiction after all: it is how we explore possibilities that are not
actual, if only to be able to avoid them becoming so. [1]

One suggestion is that we would need a fiction ontology. An HTTP server
serving such content should, it seems:

  1. have a Link relation that specifies the fictional type of such content
  2. only serve the content to clients that recognize and display such information clearly [2]
     (requiring thus an agent capability ontology)
  3. perhaps embed that relation in the video metadata too, along with a link to the original
     author (which can be verified by an HTTP GET), so that copying the content does not
     remove the metadata.
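The first two steps could be sketched roughly as follows. This is only an illustration: the fiction vocabulary at https://example.org/ns/fiction# is hypothetical (no such ontology exists yet), and the Link header follows the RFC 8288 syntax with an extension relation type.

```python
# Sketch of steps 1-2 above. The fiction ontology URIs are hypothetical:
# no such vocabulary has been standardized.

import re

FICTION_REL = "https://example.org/ns/fiction#genre"
DEEP_FAKE = "https://example.org/ns/fiction#DeepFake"

def fiction_link_header(genre_uri: str) -> str:
    """Step 1: build an HTTP Link header (RFC 8288) tagging a response
    as fiction of the given genre."""
    return f'<{genre_uri}>; rel="{FICTION_REL}"'

def declared_fiction_genres(link_header: str) -> list:
    """Extract the fiction-genre target URIs a server declared, if any."""
    genres = []
    for target, params in re.findall(r'<([^>]+)>([^,]*)', link_header):
        if FICTION_REL in params:
            genres.append(target)
    return genres

def may_serve(client_understands: set, genre_uri: str) -> bool:
    """Step 2: only serve if the client has advertised (via some
    hypothetical agent-capability ontology) that it can display
    this kind of fiction label clearly."""
    return genre_uri in client_understands

header = fiction_link_header(DEEP_FAKE)
print(header)
print(declared_fiction_genres(header))
```

A client that does not advertise support for the DeepFake genre would then get a refusal (for example a 406) rather than the raw video.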
   

Henry

[1] For some good logical/philosophical literature on the topic, one can start with
David Lewis's "Truth in Fiction":
https://pdfs.semanticscholar.org/3708/9f9d514e41ebc215ad51306a51125a9ac175.pdf



> On 4 Jun 2019, at 10:37, Henry Story <henry.story@bblfish.net> wrote:
> 
> In a recent article on Deep Fakes in the Washington Post, 
> Assistant Prof. of Global Politics Dr. Brian Klaas, University 
> College London, wrote
> "You thought 2016 was a mess? You ain't seen nothing yet.” 
> https://www.washingtonpost.com/opinions/2019/05/14/deepfakes-are-coming-were-not-ready/
> 
> Deep fakes are produced by new technological breakthroughs that allow one to
> create realistic videos of real people, making them say whatever one wants
> them to say, with the right tone of voice too. There is no turning back this
> technology, and it will bring us back to a pre-photographic world, where
> trust in the coherence and authorship of a story is all we have to go by
> for believability.
> 
> But we have no good system of trust on the web. X509 certificates are much
> too uninformative to be of interest. With the deployment of Let's Encrypt,
> anyone can get a free certificate. That is actually great, because it solves
> the problem that TLS can solve: namely, that one has reached the web server
> named by the domain. But it cannot tell us anything interesting about where
> we landed: what company it is, what jurisdiction it is under, what legal
> system it is responsible to, and how that relates diplomatically to the
> country in which the web surfer is embedded. We do not know if that entity is
> in legal trouble or not. We know nothing, really. Is it surprising that
> fake news and scams have completely overwhelmed us?
> 
> The tremendous growth of phishing is just one aspect of the fake-news problem
> that has been plaguing us recently. And the only answer is to tie the legal
> institutions, in an open way, into the browsing experience of everyday users.
> 
> I have detailed how this can be done in my second-year PhD report, and have
> also written it up as a couple of blog posts:
> 
> "Stopping (https) phishing"
> https://medium.com/cybersoton/stopping-https-phishing-42226ca9e7d9
> 
> In the thesis I have started using Abadi's logic of "saying that", which is
> both a modal logic and a strong monad from category theory, to work out how
> one can formalize the intuitions of the Linked Data community.
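As a rough illustration of the monadic structure of the "says" modality (the encoding below is my own toy sketch, not Abadi's formalization, and all names are illustrative):

```python
# Toy sketch of the "A says s" modality as a monad. `unit` lifts a
# statement into the modality; `bind` chains derivations while keeping
# track of which principal is speaking. The string encoding of nested
# assertions is illustrative only.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Says:
    principal: str   # who is asserting
    statement: str   # what is asserted

def unit(principal: str, statement: str) -> Says:
    """s  =>  principal says s."""
    return Says(principal, statement)

def bind(m: Says, f: Callable) -> Says:
    """From `A says s` and a derivation s -> (B says t):
    if B == A, collapse A says (A says t) to A says t;
    otherwise keep the inner assertion nested under A."""
    inner = f(m.statement)
    if inner.principal == m.principal:
        return Says(m.principal, inner.statement)
    return Says(m.principal, f"{inner.principal} says: {inner.statement}")

claim = unit("alice", "the video is fiction")
derived = bind(claim, lambda s: Says("alice", s + ", so it is not news"))
print(derived)   # still an assertion attributed to alice
```

The point of tracking the principal is exactly the trust question above: a statement fetched from some server is never a bare fact, only something that server says.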
> 
> One thing this allows us to do is to reason logically about user interfaces
> too, and to make very coherent and enticing proposals for how one can make
> this information available in our everyday browsing experience, in
> "Phishing in Context - Epistemology of the screen"
> https://medium.com/cybersoton/phishing-in-context-9c84ca451314
> 
> The semantic web, as a decentralised knowledge representation language, is
> exactly the right tool to use here: it can help us weave nations together
> into a web without requiring impossible global centralisation.
> 
> We have all the technology to do this. We just need to bring the right people together,
> a task that the W3C excels at.
> 
> 
> Henry Story
> 
Received on Wednesday, 12 June 2019 06:00:29 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 12 June 2019 06:00:31 UTC