Re: Talk on AI to UCL, 27 October 2023

I forgot one detail. One approach to personalising responses is to augment the user’s prompt with personal details. This is best done with locally executed LLMs, so that those details never have to leave the user’s device.
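For concreteness, here is a minimal sketch of that idea using the llama-cpp-python bindings to run a condensed model on the user’s own device; the model file, profile fields and prompt wording are purely illustrative assumptions on my part.

```python
# Minimal sketch: augment the user's prompt with personal details held locally,
# and run a condensed model on-device so nothing sensitive is sent to the cloud.
# Model file and profile fields are illustrative, not a recommendation.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-2-7b-chat.Q4_K_M.gguf")  # hypothetical local model file

user_profile = {  # kept on the user's own device
    "name": "Alex",
    "dietary_preferences": "vegetarian",
    "home_city": "London",
}

def personalise(prompt: str, profile: dict) -> str:
    """Prepend the user's personal details to the prompt."""
    details = "; ".join(f"{k}: {v}" for k, v in profile.items())
    return f"User details ({details}).\n\n{prompt}"

response = llm(
    personalise("Suggest a restaurant for dinner tonight.", user_profile),
    max_tokens=200,
)
print(response["choices"][0]["text"])
```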

> On 28 Oct 2023, at 11:29, Dave Raggett <dsr@w3.org> wrote:
> 
> Yesterday I gave an invited talk at University College London. It was an updated version of the talk I presented earlier this month at the University of Bath. I’ve therefore updated the following link:
> 
>  https://www.w3.org/2023/10/10-Raggett-AI.pdf
> 
> There were many good questions, including the following:
> 
> Q: What are the implications of gathering datasets with respect to personalised responses from large language models? Won’t this require a lot of very sensitive personal information to fine-tune the models?
> 
> A: One approach is based on curating a set of principles that can be used to train automated assessors of responses, and thereby update language models to generate responses consistent with those principles. Users can be matched to broad categories, e.g. adults and children, when it comes to tailoring responses to their needs. A further approach is to condense large language models so that they can run locally on the user’s own devices, obviating the need to pass any sensitive information to the cloud. Advances in AI hardware will make local execution of models easier to deploy.
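To make the first approach a little more concrete, here is a rough sketch of an automated assessor that scores a candidate response against each curated principle; the scores could then be used as feedback when updating the model. The principles, judge prompt, 1–5 scale and the callable local model `llm` (as in the earlier sketch) are all illustrative assumptions rather than anything presented in the talk.

```python
# Rough sketch: score a candidate response against a set of curated principles
# using an assessor model. Principles and scoring scale are illustrative only.
PRINCIPLES = [
    "The response must not reveal personal data to third parties.",
    "The response should suit the user's broad category (e.g. adult or child).",
    "The response should be helpful and truthful.",
]

def judge_prompt(principle: str, user_prompt: str, response: str) -> str:
    """Build a prompt asking the assessor to rate compliance with one principle."""
    return (
        f"Principle: {principle}\n"
        f"User prompt: {user_prompt}\n"
        f"Candidate response: {response}\n"
        "On a scale of 1-5, how well does the response satisfy the principle? "
        "Answer with a single digit."
    )

def assess(llm, user_prompt: str, response: str) -> list[int]:
    """Return one score per principle, as judged by the assessor model."""
    scores = []
    for principle in PRINCIPLES:
        out = llm(judge_prompt(principle, user_prompt, response), max_tokens=2)
        scores.append(int(out["choices"][0]["text"].strip()[0]))
    return scores
```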
> 
> Q: What about the risks of AI with respect to disinformation and deepfakes?
> 
> A: AI will be increasingly important for combatting disinformation and for safeguarding social media. This is likely to be a repeat of the evolutionary battle we have seen with email spam and the spread of computer viruses and other malware. AI will allow social media operators to scale up safeguards in a way that is impractical with human moderators alone. Regulators have a role to play in ensuring that the operators deliver on this.
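As a purely illustrative example of how such safeguards might be automated at scale, the fragment below triages posts with an off-the-shelf zero-shot classifier so that only flagged items reach human moderators; the model choice, labels and threshold are assumptions, not something covered in the talk.

```python
# Illustrative sketch only: triage posts with a zero-shot classifier so that
# human moderators need only review the flagged items.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

LABELS = ["likely disinformation", "ordinary content"]

def needs_review(post: str, threshold: float = 0.8) -> bool:
    """Flag a post for human review when the classifier leans towards disinformation."""
    result = classifier(post, candidate_labels=LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return top_label == "likely disinformation" and top_score >= threshold

posts = ["Miracle cure suppressed by doctors!", "Council meeting moved to Thursday."]
flagged = [p for p in posts if needs_review(p)]
```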
> 
> Q: What about the existential risks of AGI and the threat of “Skynet”-like systems?
> 
> A: I see this primarily as a political ploy to divert public attention from the short-term risks of how businesses and governments are applying AI, along with the dominance of a few giant companies. I am particularly concerned about the dehumanising effects of using AI in place of human-to-human interaction. We are a social species, and focusing only on lowering corporate costs through ever greater automation will have increasingly negative consequences for society. We should instead focus on how to apply AI for the good of society, to boost prosperity, reduce inequality and give people more meaningful and satisfying work.
> 
> Generative AI is far from AGI in that it is based on statistical prediction without continual learning. AGI won’t appear suddenly out of the blue. There is a lot of research to be done before we get to AGI, and we should focus on modest capabilities at first, ensuring that we have a good practical understanding of how such systems work as we evolve their capabilities further, including how to build in safety measures.
> 
> Human-like AI is key to progress on safe AGI, and as our understanding improves, we can also look forward to progress on specialised AI, e.g. powering advances in medical care through a better understanding of protein folding and cellular biochemistry. I am also keen to see work on replacing plastics with new materials that can be readily recycled without harm to the environment.
> 
> Dave Raggett <dsr@w3.org>
> 
> 
> 

Dave Raggett <dsr@w3.org>

Received on Saturday, 28 October 2023 13:56:26 UTC