Government Launches AI Safety Scheme to Tackle Deepfakes

Summary: The UK government has launched a new AI safety research program aimed at enhancing resilience against AI-related threats such as deepfakes and misinformation, with grants of up to £200,000 available for researchers. This initiative seeks to foster public trust in AI technologies while supporting innovative solutions to mitigate potential risks in critical sectors.

Threat Actor: UK Government
Victim: Public and Industries

Key Points:

  • The AI Safety Institute’s Systemic Safety Grants Programme will fund up to 20 projects to address AI threats.
  • Research aims to identify critical risks associated with frontier AI adoption in sectors like healthcare and finance.
  • The initiative is part of a broader strategy to boost public trust in AI technologies and their applications.
  • 30% of information security professionals reported experiencing a deepfake-related incident in the past year.

The UK government has announced a new AI safety research program that it hopes will accelerate adoption of the technology by improving resilience to deepfakes, misinformation, cyber-attacks and other AI threats.

The first phase of the AI Safety Institute’s Systemic Safety Grants Programme will provide researchers with up to £200,000 ($260,000) in grants.

Launched in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, it will support research into mitigating AI threats and potentially major systemic failures.

The hope is that this scientific scrutiny will identify the most critical risks of so-called “frontier AI adoption” in sectors like healthcare, energy and financial services, alongside potential solutions which will aid the development of practical tools to mitigate these risks.

Read more on AI safety: AI Seoul Summit: 16 AI Companies Sign Frontier AI Safety Commitments.

Science, innovation and technology secretary, Peter Kyle, said that his focus is to accelerate AI adoption in order to boost growth and improve public services.

“Central to that plan though is boosting public trust in the innovations which are already delivering real change. That’s where this grants programme comes in,” he added.

“By tapping into a wide range of expertise from industry to academia, we are supporting the research which will make sure that as we roll AI systems out across our economy, they can be safe and trustworthy at the point of delivery.”

The Systemic Safety Grants Programme will ultimately back around 20 projects with funding of up to £200,000 each in this first phase. That’s around half of the £8.5m announced by the previous government at May’s AI Seoul Summit. Additional funding will become available as further phases are launched.

“By bringing together researchers from a wide range of disciplines and backgrounds into this process of contributing to a broader base of AI research, we’re building up empirical evidence of where AI models could pose risks so we can develop a rounded approach to AI safety for the global public good,” said AI Safety Institute chair, Ian Hogarth.

Research released in May revealed that 30% of information security professionals had experienced a deepfake-related incident in the previous 12 months, making it the second most commonly reported incident type after malware infection.

Source: https://www.infosecurity-magazine.com/news/uk-government-launches-ai-safety