OpenAI preparedness framework: Enhancing global AI safety

OpenAI has released details of its new preparedness framework that aims to mitigate AI risks and prioritise safe and responsible model development

OpenAI has this week (18th December 2023) released an initial version of its preparedness framework, designed to support the safe and responsible development of AI models.

As part of the AI company's expansion of its safety processes, a new safety advisory group has been established to make recommendations to leadership. Most notably, the board will retain veto power and can block the release of an AI model even if leadership declares it safe.

This news comes at the end of what has been a very eventful year for OpenAI. In addition to experiencing fast-paced development, the company has also seen turbulence in its executive board, with Sam Altman having been ousted and then reinstated as company CEO in the space of one week in November 2023.

Advancing the study into AI risk

The ChatGPT developer says in its framework: “The study of frontier AI risks has fallen far short of what is possible and where we need to be. To address this gap and systematise our safety thinking, we are adopting the initial version of our Preparedness Framework.”

The framework describes OpenAI’s processes to track, evaluate, forecast and protect against risks posed by increasingly powerful AI models.

“By catastrophic risk, we mean any risk that could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals - this includes, but is not limited to, existential risk,” the company says.

As reported by The Washington Post, Sam Altman says that regulation aimed at preventing the harmful impacts of AI shouldn’t make it harder for smaller companies to compete. The report also highlighted that, at the same time, Altman has pushed the company to commercialise its technology to drive faster growth.

OpenAI’s decision to publicise its framework highlights how every company developing AI needs to hold itself to account - balancing business growth with responsibility. Given the immense popularity that ChatGPT has seen in just one year, the company clearly recognises the importance of minimising the risks its AI could pose.

Eliminating bias and mitigating global concerns

Its framework will focus on mitigating the misuse of current AI models and products such as ChatGPT. The Preparedness team, led by Professor Aleksander Madry, will hire AI researchers, computer scientists, national security experts and policy professionals to monitor the technology, test it continually and warn the company if it believes any of its AI capabilities are becoming dangerous.

The Preparedness team will also map out the emerging risks of frontier models, with the company investing in capability evaluations and forecasting to better detect emerging risks. In particular, the company wishes to go beyond the hypothetical and work with data-driven predictions. 

In addition, the company has said that it will run evaluations and continually update ‘scorecards’ for its models. All of its frontier models will be evaluated to help the team assess risks and develop protocols for added safety and outside accountability. This will include preventing racial biases, for instance, to ensure that AI systems do not develop to the point of causing harm.

Previously, the company helped form the Frontier Model Forum with Google, Anthropic and Microsoft, with the goal of ensuring AI is developed and harnessed responsibly.

The forum aims to advance research into AI safety, identify safety best practices for frontier models, and share knowledge with policymakers and academics to advance responsible AI development and leverage AI to address social challenges.
