US to require tech companies to share AI safety results

The executive order will require technology companies to share data on AI safety, aiming to set a global precedent for responsible AI systems

The White House has announced what it calls “the most significant actions ever taken by any government to advance the field of AI safety.”

An executive order from President Biden requires AI developers to share safety results with the United States (US) government, aiming to place the country at the centre of the global debate on AI governance.

Inevitably, continued widespread adoption of AI has raised concerns relating to trust, bias, privacy and safety. This news from the US comes in the midst of multiple countries developing new regulations and strategies to handle AI and ensure the safety of these new systems. 

An ‘aggressive strategy’ to ensure responsible AI

The order, which was issued on Monday (30th October 2023), is by far the most significant step taken by the US to tackle AI risk in relation to national security.

Measures include new safety and security standards for AI, among them regulations that require technology companies to share safety test results with the federal government.

In addition, the order aims to create programs to evaluate potentially harmful AI practices, including those within the healthcare sector, and to produce resources on how to use AI tools responsibly.

These standards will expect businesses to work with their international partners to create ‘best practice’ solutions, as well as to protect the valuable data of both companies and customers.

“President Biden is rolling out the strongest set of actions any government in the world has ever taken on AI safety, security and trust,” said Bruce Reed, White House Deputy Chief of Staff, as reported by The Financial Times. “It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks.”

Technology giants like Google, Anthropic, Microsoft and OpenAI have already voluntarily partnered to champion responsible AI systems and ensure safe development. Prior to the announcement from the White House, their goals were to advance AI safety research to promote responsible development of frontier models, minimise risks and enable independent, standardised evaluations of capabilities and safety.

The ‘race’ to become global AI leaders?

This news comes in the wake of the United Kingdom (UK) announcing that it will hold the world's first-ever Global AI Safety Summit this week (1st November 2023) at Bletchley Park, signifying a level of commitment and dedication towards safeguarding the world from AI threats.

In his speech last week, UK Prime Minister Rishi Sunak touched on how “AI doesn’t respect borders,” citing the importance of collaboration between the world’s leading powers.

The summit aims to bring together global leaders and technology experts to discuss appropriate regulations for AI technology in detail. It will focus on the potential misuse of AI by malicious actors, including cyberattacks and bioweapon development, as well as the risks associated with losing control of AI systems.

US Vice President Kamala Harris, as well as top executives from the US tech giants, will be arriving in the UK to attend the summit.

“We're bringing together the world's leading representatives from civil society to the companies pioneering AI, and the countries most advanced in using it. And yes, we’ve invited China,” said Sunak.

The Prime Minister also pointed out that there can be no serious strategy for AI without at least trying to engage all of the world's leading AI powers. He said: “That might not have been the easy thing to do, but it was the right thing to do.”

The US and the People’s Republic of China have had a complicated relationship over AI this year, with the US imposing restrictions to limit investment in Chinese technology sectors.

Set to take effect in 2024, that order would increase regulation of China's AI, semiconductor and quantum computing sectors, inevitably impacting international business relations.
