- From: Albert Meroño Peñuela <albert.meronyo@gmail.com>
- Date: Tue, 9 Nov 2021 11:06:47 +0000
- To: Albert Meroño Peñuela <albert.meronyo@gmail.com>
- Message-ID: <CAL=mkuz0YxS20Z6DvVJTK7kcAbFUdZbpejZnpC=WG6PaiHXbYw@mail.gmail.com>
Are you interested in pursuing a PhD in safe and trusted artificial intelligence? 12 fully funded PhD studentships are available for entry in September 2022. The Round A application deadline is 8 December 2021. Apply here <https://safeandtrustedai.org/apply-now/>!

Register to meet us at an Online Information Session on Tue 16 Nov at 13:00 GMT <https://www.eventbrite.co.uk/e/ukri-centre-for-doctoral-training-in-safe-and-trusted-ai-info-session-tickets-199899182837>.

About Us

The UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI) <https://safeandtrustedai.org/> brings together world-leading experts from King’s College London and Imperial College London to train a new generation of researchers in safe and trusted artificial intelligence (AI).

AI technologies are increasingly ubiquitous in modern society, with the potential to fundamentally change all aspects of our lives. While there is great interest in deploying AI in existing and new applications, serious concerns remain about the safety and trustworthiness of current AI technologies. These concerns are well-founded: there is now ample evidence in several application domains (autonomous vehicles, image recognition, etc.) that AI systems may currently be unsafe because of the lack of assurance over their behaviour.

The overarching aim of the CDT is to train the first generation of AI scientists and engineers in methods of safe and trusted AI. An AI system is considered safe when we can provide some assurance about the correctness of its behaviour, and it is considered trusted if the average user can have confidence in the system and its decision making.

What we offer

The CDT offers a unique four-year PhD programme <https://safeandtrustedai.org/programme/>, focussed on the use of model-based AI techniques for ensuring the safety and trustworthiness of AI systems. Model-based AI techniques provide an explicit language for representing, analysing and reasoning about systems and their behaviours.

As a student at the CDT, you will engage in various training activities alongside your individual PhD project, ensuring not only that you are trained in state-of-the-art AI techniques, but also that you acquire a deep understanding of the ethical, societal and legal implications of AI in research and industrial settings. You will graduate as an expert in safe and trusted AI, able to consider the implications of AI systems in a deep and serious fashion, to recognise this as a key part of the AI development process, and to meet the needs of industrial and public sector organisations.

Funding

The CDT will fund approximately 12 students to join the programme in September 2022. Our fully funded studentships are 4-year awards that include tuition fees, a tax-free stipend set at the UKRI rate <https://www.ukri.org/our-work/developing-people-and-skills/find-studentships-and-doctoral-training/get-a-studentship-to-fund-your-doctorate/> plus London weighting, and a generous allowance for research consumables and conference travel.

How to Apply

Applications are now open! The CDT will consider applications in several rounds until all places have been filled. The Round A deadline is 8 December 2021.

Apply Now - Safe & Trusted AI (safeandtrustedai.org) <https://safeandtrustedai.org/apply-now/>

We look forward to meeting you online; register here: UKRI Centre for Doctoral Training in Safe and Trusted AI - Info Session Tickets, Tue 16 Nov 2021 at 13:00 GMT | Eventbrite <https://www.eventbrite.co.uk/e/ukri-centre-for-doctoral-training-in-safe-and-trusted-ai-info-session-tickets-199899182837>

Questions? Email stai-cdt-admissions@kcl.ac.uk
Received on Tuesday, 9 November 2021 11:07:17 UTC