Laws required as machines make friends and influence people

Abuse of AI is rising, and humans need the protection of proactive laws and regulations as machines forge relationships with humans and with other machines

A transatlantic team of researchers argues that the Fourth Industrial Revolution focuses overwhelmingly on technology-enabled business benefits, and has called on global industry to act quickly to regulate emerging machine capabilities, including how machines forge relationships with humans and with other machines.

Focusing on business outcomes rather than societal benefit means far less attention is paid to the risks and harms to humans. “We’re talking about AI the wrong way — focusing on technology, not people — moving us away from the things we want, such as better medications, elder care and safety regulations, and toward the things we don’t, like harmful deep fakes, job losses and biased decision making,” says Nigel Melville, associate professor of technology and operations at U-M’s Ross School of Business and design science program director.

“Our new framework is a theory-based attempt to go in a new direction by recentering humans in actions and outcomes in the discourse about ever-smarter machines.”

Melville co-authored the study, which appears in the Information Systems Journal, with Lionel Robert, professor at the U-M School of Information, and Xiao Xiao, associate professor at the Copenhagen Business School.

Taking ChatGPT as an example, the researchers say developers likely never intended to enable student cheating or AI-generated recommendation letters. But it’s clear they didn’t fully consider the social implications of their AI chatbot, focusing instead on benefits such as reduced operational costs.

Their study aims to help society, from regulators weighing AI safety rules to organisations adopting AI, understand the societal implications of ever-smarter machines.

For example, the results suggest a shift in how policy is made. Lawmakers currently react to technologies already entering the market, so laws tend to adopt a narrow view of AI. Instead, the study suggests they turn their attention to the bigger picture and write proactive laws that address four emerging machine capabilities:

Decision-making

Where do we draw the line on machine decision-making, how do we move that line as machines advance, and how do we proactively manage bias and other forms of invalid decisions?

Creativity automation

How will we manage job losses in creative fields such as visual illustration and music, who owns the rights to AI-generated work, and what should be done about cheap, convincing deep fakes of anyone?

Machine-human relationships

As humans increasingly interact with machines that have human-like capabilities, how will workers react to AI bosses, how might “AI friends” influence democracy, and how might human relationships themselves change over time as a result?

Machines teaming with other machines

How do we manage the compounding capabilities of intermachine teaming, craft rules for machine interaction that balance positive outcomes with risk, and determine how much autonomy is appropriate given that intermachine teams behave as complex systems?

The researchers believe that framing AI as human-like machine capabilities of cognition and communication could help the research community address the societal implications of ever-smarter machines. This human-centred framing steps away from the obscurity of the technology itself, focusing instead on what machines do in human terms and on the implications for humans, society and business. Framed this way, the potential social implications and ethical considerations of AI become more apparent and easier to address.

However, the study recognises that much more research is needed to build on these ideas, such as developing ethical frameworks to underpin policies for machine capabilities. The authors believe it is essential to start thinking about and addressing the implications of AI before smart machines that emulate human capabilities are operating in the world. This proactive approach can help correct issues early or even prevent them from occurring in the first place.

“In the short run, our approach may simply limit their negative impacts, but in the long run, it may lead to the development and deployment of AI systems based on their benefits and costs for our society,” says Robert.
