AI ethics model could emerge as “quasi-government” watchdog

A framework for guarding against potential AI harms that sidesteps the inflexibility of government regulation is in the works, but it will need global buy-in

US researchers are working on a governance model for ethical AI that combines industry speed and flexibility with real enforcement powers, without relying on a government watchdog to crack down on those who refuse to follow the industry’s lead.

However, the researchers accept that, without a government body taking overall responsibility, the new model will only work with the involvement of a significant portion of the AI community, and that it would empower the model’s controllers as a quasi-government regulator of artificial intelligence.

The model, named Copyleft AI with Trusted Enforcement (CAITE), was developed by researchers at the Texas A&M University School of Public Health, who believe it can safeguard against the potential harms of AI while still promoting technological progress. Cason Schmit, an Assistant Professor at the School of Public Health and Director of the Program in Health Law and Policy, along with Megan Doerr of Sage Bionetworks and Jennifer Wagner of Penn State, recently described the model in an article in the journal Science.

Rapid pace of change demands ethical guidance

Artificial intelligence has the potential to revolutionise almost every aspect of our daily lives, but its misuse could result in harm, particularly to communities already facing inequality and discrimination. That possibility calls for ethical guidance through regulation and policy. However, the rapid pace of AI advancement and the often rigid nature of government regulation have made such guidance challenging to create.

Schmit, Doerr, and Wagner developed the CAITE model to address these challenges. CAITE combines copyleft licensing and the patent-troll model, two methods of managing intellectual property rights that can appear incompatible.

Copyleft licensing permits the sharing of intellectual property under specific conditions, such as attributing the original creator or restricting use to non-commercial purposes, and requires that derived works carry the same licence terms as the original. Creative Commons ShareAlike licences are a familiar example of copyleft licensing. However, copyleft licensing schemes usually have little enforcement power.

The patent-troll approach, on the other hand, uses enforcement rights to guarantee compliance. A patent troll is an organisation that owns intellectual property rights and earns revenue by suing others in court rather than by creating or licensing technology.

The CAITE model is built on an ethical use licence. This licence would restrict certain unethical AI applications and require users to follow a code of conduct. It would use a copyleft approach to ensure that developers creating derivative models and data must adopt the same licence terms as the parent works. The enforcement rights under the licence would be assigned to a designated third party known as a CAITE host. Because the enforcement rights of all these ethical use licences would pool in a single organisation, the CAITE host would be empowered as a quasi-government regulator of AI.
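To make the licensing mechanics concrete, here is a minimal sketch in Python of how such a structure might behave: a derivative model is forced to inherit its parent’s licence terms, and every licence points its enforcement rights at a single host. The names used (EthicalUseLicence, AIModel, derive, and the host address) are hypothetical illustrations, not part of the authors’ proposal.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EthicalUseLicence:
    """Hypothetical CAITE-style ethical use licence."""
    code_of_conduct: str                # conduct every user must follow
    restricted_uses: tuple[str, ...]    # unethical applications ruled out
    enforcement_host: str               # single CAITE host holding enforcement rights

@dataclass
class AIModel:
    name: str
    licence: EthicalUseLicence

def derive(parent: AIModel, name: str) -> AIModel:
    """Copyleft step: a derivative work inherits the parent's licence
    terms unchanged, including the enforcement rights already assigned
    to the CAITE host."""
    return AIModel(name=name, licence=parent.licence)

# Usage: every model in the chain ends up enforceable by the same host.
base = AIModel(
    name="base-model",
    licence=EthicalUseLicence(
        code_of_conduct="community code of conduct v1",
        restricted_uses=("mass surveillance", "discriminatory scoring"),
        enforcement_host="caite-host.example.org",  # hypothetical host
    ),
)
fine_tuned = derive(base, "fine-tuned-model")
assert fine_tuned.licence == base.licence  # terms propagate unchanged
```

The design point the sketch captures is that propagation is automatic: no derivative can opt out of the terms, so enforcement never fragments across individual licensors.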

“This approach combines the best of two worlds: a model that is as fast and flexible as industry, but with enforcement teeth and power of a traditional government regulator,” says Schmit.

Significant portion of the AI community required

Entrusting enforcement to a non-government party designated by the AI developer community could offer greater flexibility in enforcement and increase trust in oversight, the authors say. CAITE hosts can establish consequences for unethical behaviour, including financial penalties or reporting violations of consumer protection laws.

At the same time, the CAITE approach allows for leniency policies that encourage self-reporting, a flexibility often lacking in typical government enforcement schemes. For example, incentives for AI users who disclose biases in their models would let the CAITE host alert other users who depend on those potentially hazardous models.

Schmit and colleagues acknowledge that the CAITE approach, while adaptable, will demand the involvement of a significant portion of the AI community. Pilot implementations of ethical policies developed using the CAITE approach will require additional research and funding. Implementing this model will also rely on AI community members from many different disciplines to design its features and address challenges that arise.

Although it will require substantial community support and perhaps government incentives, Schmit and colleagues suggest that the industry will likely prefer the more flexible CAITE framework to the strict and slow-to-adjust regulations that governments may eventually impose.

“Efforts to promote ethical and trustworthy AI must go beyond what is legally mandated as the baseline for acceptable conduct,” says Wagner. “We can and should strive to do better than what is minimally acceptable.”
