- New grants offered to researchers to push the boundaries of AI safety research
- Funding program launched as the UK Government seeks to explore new methods to boost the safe and reliable deployment of AI
- Funded research will aim to understand how society can adapt to the transformations brought about by the advent of AI.
At the AI summit in Seoul today (22 May), co-hosted by the UK and the Republic of Korea, Technology Secretary Michelle Donelan announced that the UK government will offer grants to researchers to study how to protect society from AI risks such as deepfakes and cyberattacks, while helping to harness its benefits, such as increased productivity.
The most promising proposals will be developed into longer-term projects and may receive additional funding.
The program (published on www.aisi.gov.uk) will be led within the UK Government's pioneering AI Safety Institute by Shahar Avin, an AI safety researcher who will join the Institute on secondment, and Christopher Summerfield, the UK AI Safety Institute's Research Director. The research program will be delivered in partnership with UK Research and Innovation and the Alan Turing Institute, and the UK AI Safety Institute will aim to collaborate with other international AI safety institutes. Applicants will need to be based in the UK but will be encouraged to collaborate with other researchers around the world.
The UK Government's pioneering AI Safety Institute is a world leader in the testing and evaluation of AI models, advancing the cause of safe and reliable AI. Earlier this week, the AI Safety Institute released its first set of public results from tests of AI models. It also announced the opening of a new office in the United States and a partnership with the Canadian AI Safety Institute – building on a landmark agreement with the United States earlier this year.
The new grants program is designed to broaden the Institute's remit to include the emerging field of "systemic AI safety", which aims to understand how to mitigate the impacts of AI at a societal level and to study how our institutions, systems and infrastructure can adapt to the transformations brought about by this technology.
Examples of in-scope proposals would include ideas on how to curb the spread of fake images and disinformation by intervening on the platforms that spread them, rather than on the AI models that generate them.
Technology Secretary Michelle Donelan said:
When the UK launched the first AI Safety Institute last year, we committed to an ambitious but urgent mission to reap the positive benefits of AI by advancing the cause of AI safety.
With evaluation systems for AI models now in place, phase 2 of my plan to safely harness the opportunities of AI must be about making AI safe across the whole of society.
This is exactly what we are making possible with this funding, which will enable our Institute to partner with academia and industry to ensure we remain proactive in developing new approaches that can help ensure AI continues to be a transformative force for good.
I am keenly aware that we can only address this momentous challenge by tapping into a broad and diverse pool of talent and disciplines and moving forward with new approaches that push the boundaries of existing knowledge and methodologies.
The Honorable François-Philippe Champagne, Minister of Innovation, Science and Industry, said:
Canada continues to play a leading role in the global governance and responsible use of AI.
From championing the creation of the Global Partnership on AI (GPAI), to establishing a national AI strategy, to being among the first to propose a legislative framework to regulate AI, we will continue to collaborate with the global community to shape the international discourse and build confidence around this transformational technology.
The AISI Systemic Safety program aims to attract proposals from a wide range of researchers across the public and private sectors, who will work closely with the UK Government to ensure their ideas have maximum impact.
It will run alongside the Institute's evaluation and testing of AI models, as the Institute continues to work with AI labs to establish development standards and help steer AI towards a positive impact.
Christopher Summerfield, Research Director of the UK AI Safety Institute, said:
This new grant program is a major step in ensuring that AI is deployed safely in society.
We need to think carefully about how to adapt our infrastructure and systems to a new world in which AI is ingrained in everything we do. This program is designed to generate a large body of ideas on how to solve this problem and to ensure that the big ideas can be put into practice.
The AI Seoul Summit builds on the inaugural AI Safety Summit hosted by the UK at Bletchley Park in November last year and is one of the largest ever gatherings of nations, businesses and civil society on AI.
UKRI Director General, Professor Dame Ottoline Leyser, said:
The work of the AI Safety Institute is essential for understanding AI risks and creating solutions to maximise the societal and economic value of AI for all citizens. UKRI is delighted to work closely with the Institute on this new program to ensure that the UK's institutions, systems and infrastructure can safely benefit from AI.
This program builds on the UK's world-leading AI expertise, and on UKRI's AI investment portfolio spanning skills, research, infrastructure and innovation, to ensure the effective governance of AI deployment in society and the economy.
The program will put safety research at the heart of government, underpinning the innovation-friendly regulation that will shape the UK's digital future.
Professor Helen Margetts, Director of Public Policy at the Alan Turing Institute, said:
We are delighted to be part of this important initiative, which we hope will have a significant impact on the UK's ability to deal with threats from AI technology and to keep people safe. Rapid technological advances are driving profound changes in the information environment, shaping our social, economic and democratic interactions.
This is why funding AI safety is vital – to ensure we are all protected from the potential risks of misuse while maximising the benefits of AI for a positive impact on society.
Notes to editors
AI researcher Shahar Avin will lead the grants program from the UK AI Safety Institute, bringing a wealth of knowledge and experience to ensure that the proposals achieve their full potential in protecting the public from the risks of AI while harnessing its benefits. He is a senior research associate at the Centre for the Study of Existential Risk (CSER) and previously worked at Google.
The program will be delivered in partnership with UK Research and Innovation and the Alan Turing Institute.
You can learn more about the recent announcements regarding the opening of the Institute's San Francisco office, its AI model testing results, and the UK AISI's partnerships with the United States and Canadian AI safety institutes.