- From: Paola Di Maio <paola.dimaio@gmail.com>
- Date: Thu, 4 Jul 2019 16:33:04 +0800
- To: Henry Story <henry.story@bblfish.net>
- Cc: semantic-web <semantic-web@w3.org>
- Message-ID: <CAMXe=Sp9SY6T3F1EnGt+6NA+FuKmyCOC-dPj4fxf0bkzuBSiUA@mail.gmail.com>
Henry - I had some further thoughts on deepfakes; I cannot remember whether I already posted this email or not (sorry, too much info). If I have already said this, just ignore it.

Reality is so manipulated (at all levels) that humans have lost (or maybe never had) the ability to understand what is real beyond doubt. Given the vastness of widespread deceit (about news, history, and even science!) and our limited resources for verifying everything we hear, we need to limit our fact checking to the facts strictly necessary to support our decision making. So when I read or hear some fact, I do my best to verify that it is true. (I remember posting about truth and fact checking as a technical requirement for accurate systems development on a prior occasion.)

Deepfakes add another layer to that manipulation and falsification of reality by leveraging new technology. I see two areas of concern:

a) Technology ethics - a fun technology developed to animate fictional output being used to falsify reality (making people say what they have not said), with potentially devastating consequences. This is not entirely new - manipulation has always occurred by twisting, falsifying, or taking out of context what people say. Misinformation and misrepresentation are less technologically sophisticated but have similar consequences (manipulating public opinion and behaviour). This already happened with email: deepfakes are a progression of spoofing techniques in which someone fakes another person's email address.

b) The increased value of authenticity, and of authentication technology. From a systems viewpoint, another layer of risk can be addressed with another layer of architecture (a strengthened authentication layer?).

On Wed, Jun 12, 2019 at 2:08 PM Henry Story <henry.story@bblfish.net> wrote:

> Just yesterday Vice published an article on DeepFakes with two example
> videos posted to Instagram:
>
> 1. Mark Zuckerberg saying something about how Spectre showed him that
> whoever controls the data controls the future.
>
> 2.
> Kim Kardashian saying that she got rich because of Spectre, and how
> she loves to manipulate people online for money.
>
> https://www.vice.com/en_us/article/ywyxex/deepfake-of-mark-zuckerberg-facebook-fake-video-policy
>
> The videos are very realistic. The article makes clear the context in
> which these need to be interpreted, namely as fiction/deepfake. This
> shows that there can be legitimate reasons to publish such videos:
> namely, to make people aware of their existence.
>
> How could one make the publishing of deepfake content, and other
> fictional content or data for that matter, allowable? We can’t live
> without fiction after all; it is how we explore possibilities that
> are not actual, if only to be able to avoid them becoming so. [1]
>
> One suggestion is that we would need a fiction ontology. An HTTP
> server that served these should, it seems:
>
> 1. have a Link relation that specifies the fictional type of such
> content,
> 2. only serve the content to clients that recognize and display such
> information clearly [2] (requiring thus an agent capability ontology),
> 3. perhaps embed that relation in the video metadata too, along with
> a link to the original author (which can be verified by an HTTP GET),
> so that copying the content does not remove the metadata.
>
> Henry
>
> [1] For some good logical/philosophical literature on the topic one
> can start with David Lewis’ ”Truth in Fiction”:
> https://pdfs.semanticscholar.org/3708/9f9d514e41ebc215ad51306a51125a9ac175.pdf
>
> On 4 Jun 2019, at 10:37, Henry Story <henry.story@bblfish.net> wrote:
>
> In a recent article on deep fakes in the Washington Post, Assistant
> Prof. of Global Politics Dr. Brian Klaas, University College London,
> wrote: "You thought 2016 was a mess?
> You ain't seen nothing yet.”
>
> https://www.washingtonpost.com/opinions/2019/05/14/deepfakes-are-coming-were-not-ready/
>
> Deep fakes are produced by new technological breakthroughs that allow
> one to realistically create live videos of real people, making them
> say whatever one wants them to say, with the right tone of voice too.
> There is no turning back this technology, and it will bring us back
> to a pre-photographic world, where trust in the coherence and
> authorship of a story is all we have to go by for believability.
>
> But we have no good system of trust on the web. X.509 certificates
> are much too uninformative to be of interest. With the deployment of
> Let’s Encrypt, anyone can get a free certificate. That is actually
> great, because it solves the problem that TLS can solve: namely, that
> one has reached the web server named by the domain. But it cannot
> tell us anything interesting about where we landed: what company it
> is, what jurisdiction it is under, what legal system it is
> responsible to, and how that is related diplomatically to the country
> in which the web surfer is embedded. We do not know if that entity is
> in legal trouble or not. We know nothing, really. Is it surprising
> that fake news and scams have completely overwhelmed us?
>
> The tremendous growth of phishing is just one aspect of the fake news
> problem that has been plaguing us recently. And the only answer is to
> tie the legal institutions in an open way into the browsing
> experience of everyday users.
>
> I have detailed how this can be done in my 2nd year PhD report, and
> have also written it up as a couple of blog posts:
>
> "Stopping (https) phishing"
> https://medium.com/cybersoton/stopping-https-phishing-42226ca9e7d9
>
> In the thesis I have started using Abadi’s logic of "saying that”,
> which is both a modal logic and a strong monad from category theory,
> to work out how one can formalize the intuitions of the Linked Data
> community.
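[An illustrative aside on the point about domain-validated certificates. The dictionary below mimics the structure that Python's `ssl.SSLSocket.getpeercert()` returns for a typical Let's Encrypt style certificate; the field values are made up for illustration, not taken from a real site. It shows that the only thing such a certificate vouches for is the domain name - none of the legally interesting fields are present.]

```python
# Sketch: what a domain-validated (DV) certificate actually tells us.
# The dict mirrors the RDN structure returned by
# ssl.SSLSocket.getpeercert(); values here are illustrative only.
dv_cert = {
    "subject": ((("commonName", "example.org"),),),
    "issuer": ((("countryName", "US"),),
               (("organizationName", "Let's Encrypt"),),
               (("commonName", "R3"),)),
}

def subject_fields(cert):
    """Flatten the nested RDN tuples into a plain dict of subject fields."""
    return {name: value for rdn in cert["subject"] for (name, value) in rdn}

fields = subject_fields(dv_cert)

# The certificate names the server we reached...
print("domain:", fields.get("commonName"))
# ...but says nothing about who operates it or under which jurisdiction:
for wanted in ("organizationName", "jurisdictionCountryName", "businessCategory"):
    print(wanted, "->", fields.get(wanted))
```

[Extended-validation certificates do carry `organizationName` and jurisdiction fields, which is exactly the extra information the argument above says is missing from the everyday browsing experience.]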
> One thing this allows us to do is to think logically also about user
> interfaces, and to make very coherent and enticing proposals for how
> one can make this information available in our everyday browsing
> experience, in:
>
> "Phishing in Context - Epistemology of the screen"
> https://medium.com/cybersoton/phishing-in-context-9c84ca451314
>
> The Semantic Web, as a decentralised knowledge representation
> language, is exactly the right tool to use here, as it can help us
> weave nations together into a web without requiring impossible global
> centralisation.
>
> We have all the technology to do this. We just need to bring the
> right people together, a task that the W3C excels at.
>
> Henry Story
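[The "saying that" modality mentioned above can be given a quick illustrative shape in code. This is only a toy sketch of the monadic structure - a `Says` type with `unit` and `bind` - not the formalization from the thesis; the principal names and statements are invented, and the real logic indexes the monad by principal rather than carrying it as a plain string.]

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

S = TypeVar("S")
T = TypeVar("T")

@dataclass(frozen=True)
class Says(Generic[S]):
    """`Says(p, s)` represents the statement "principal p says s"."""
    principal: str
    statement: S

def unit(p: str, s: S) -> Says[S]:
    """Monadic unit: attribute a statement to a principal."""
    return Says(p, s)

def bind(m: Says[S], f: Callable[[S], Says[T]]) -> Says[T]:
    """Monadic bind: if p says s, and from s one derives a further
    statement, then p says the derived statement too (the attribution
    stays with the same principal)."""
    return Says(m.principal, f(m.statement).statement)

# Example: a server asserts a claim, and a consequence stays attributed
# to that same server.
claim = unit("https://example.org", "the video is fiction")
derived = bind(claim, lambda s: unit("https://example.org",
                                     f"display '{s}' as fiction"))
print(derived.principal, "says:", derived.statement)
```

[The point of the monadic reading is exactly what the message argues: statements fetched from the web never float free, but remain wrapped in "so-and-so says", and reasoning with them preserves that wrapper.]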
Received on Thursday, 4 July 2019 08:34:08 UTC