UK government gains research access to big tech AI models

In a significant move, the UK government secures access to AI models from tech giants, aiming to enhance research and safety measures in the field of artificial intelligence.

UK Prime Minister Rishi Sunak recently unveiled a significant development in the country's AI oversight strategy during his speech at London Tech Week.

In an effort to enhance research and safety measures, leading technology companies including Google DeepMind, OpenAI and Anthropic have committed to granting the UK government "early or priority access" to their AI models.

The extent and nature of the data that will be shared with the UK government remain undisclosed for now; Google, OpenAI and Anthropic have been asked to clarify the details of the arrangement.

UK government seeks accountability, safety and transparency

The announcement follows the government's recent commitment to conduct an initial evaluation of AI model accountability, safety, transparency and other ethical concerns.

The country's Competition and Markets Authority will play a pivotal role in this evaluation.

Moreover, the UK has demonstrated its dedication to investing an initial sum of £100m (approximately US$125.5m) to establish the Foundation Model Taskforce.

This taskforce will be responsible for developing "sovereign" AI intended to drive economic growth in Britain while minimising ethical and technical risks.

Industry leaders call for temporary suspension of AI development

The move by the UK government comes in response to mounting calls from industry leaders and experts for a temporary suspension of AI development.

These calls stem from concerns that developers are forging ahead without adequate consideration for safety.

Although generative AI models like OpenAI's GPT-4 and Anthropic's Claude have garnered praise for their vast potential, they have also sparked apprehension due to issues such as inaccuracies, misinformation, and potential abuses, including cheating.

Consequently, the UK's decision to gain direct access to these models represents a significant step towards mitigating these concerns and proactively addressing problematic models before they cause substantial harm.

It is important to note that this arrangement does not grant the UK government unfettered access to the entirety of these models and their underlying code.

Furthermore, it does not guarantee that every major issue will be promptly identified by the government. However, the access provided does hold the promise of yielding valuable insights.

At the very least, this initiative signifies a marked increase in transparency within the AI landscape, a crucial development given the current uncertainty surrounding the long-term impact of these systems.
