- From: Paola Di Maio <paola.dimaio@gmail.com>
- Date: Sat, 29 Oct 2022 11:45:33 +0800
- To: W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CAMXe=SqnubsQVdp15DejKKXvngwK+n_Zgfr0MTOpSPCj-ArF7A@mail.gmail.com>
Dave R shared on the cogai mailing list a link to Stable Diffusion: https://huggingface.co/spaces/stabilityai/stable-diffusion

It is apparently one of the many, many versions available; I played with it and commented on a related post. By pure synchronicity, in the very remote village where I live, last night I ran into someone at random (feeding the village cats) and the topic of Stable Diffusion came up. I was told it is becoming very popular, in different flavours, although some people are apparently upset.

It is relevant to KR. From an AI KR perspective, however innocent the fun of Stable Diffusion may be, its output can be misleading. By not exposing the source data and the algorithm used to generate the image, its output can be used to mislead people into thinking it is some kind of original artwork. In a good world, this can be amusing and intriguing; in our wicked world, deepfakes are used maliciously.

In the media: https://www.bbc.com/news/technology-42912529
In the scholarly literature, the CNN behind the mechanism: https://arxiv.org/abs/1905.00582

Only KR can identify, expose and prevent deepfakes.
Received on Saturday, 29 October 2022 03:47:35 UTC