- From: Adeel <aahmad1811@gmail.com>
- Date: Sat, 29 Oct 2022 05:01:54 +0100
- To: paoladimaio10@googlemail.com
- Cc: W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CALpEXW1tMKQ0nqBLQP4FgViO-vw1JOSaMS8zNBHjsT=C1oAk6A@mail.gmail.com>
Hello,

You can also use StyleGAN to create fake faces, or even fake paintings.

Thanks,
Adeel

On Sat, 29 Oct 2022 at 04:48, Paola Di Maio <paola.dimaio@gmail.com> wrote:

> Dave R shared on the cogai mailing list a link to Stable Diffusion:
> https://huggingface.co/spaces/stabilityai/stable-diffusion
> It is apparently one of the many versions available.
> I played with it and commented on a related post.
>
> Totally by synchronicity, in the very remote village where I live, last night
> I ran into someone at random (feeding the village cats) and the topic of
> stable diffusion came up. I was told it's becoming very popular, in
> different flavours, although some people are apparently upset.
>
> It is relevant to KR.
>
> From an AI KR perspective, however innocent the fun of Stable Diffusion may
> be, its output can be misleading. By not exposing the source data and the
> algorithm used to generate the image, its output can be used to mislead
> people into thinking it is some kind of original artwork.
>
> In a good world, this can be amusing and intriguing.
>
> In our wicked world, deepfakes are used maliciously.
> In the media:
> https://www.bbc.com/news/technology-42912529
>
> In the scholarly literature, the CNN behind the mechanism:
> https://arxiv.org/abs/1905.00582
>
> Only KR can identify, expose and prevent deepfakes.
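P.S. For anyone who would rather call the model directly than use the web
Space linked above, here is a minimal sketch using the Hugging Face diffusers
library. The model id and prompt are illustrative assumptions on my part, not
details taken from that Space; any Stable Diffusion checkpoint should work in
the same way.

    # Minimal Stable Diffusion text-to-image sketch with the diffusers library.
    # Assumes a CUDA-capable GPU and that the chosen checkpoint has been
    # accepted/downloaded from the Hugging Face Hub.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed checkpoint, for illustration only
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    # The prompt is arbitrary; the pipeline returns PIL images.
    image = pipe("an oil painting of a village cat at night").images[0]
    image.save("generated.png")

Note how nothing in the saved file records the prompt, the checkpoint, or the
training data that produced it, which is exactly the provenance gap Paola is
pointing at.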
Received on Saturday, 29 October 2022 04:02:18 UTC