Monday, November 25, 2024

A Way To Govern Ethical Use Of AI


Researchers have developed a governance model for Artificial Intelligence (AI) that guards against the technology's potential harms without hindering its advancement.

Artificial intelligence promises to revolutionize nearly every aspect of our daily lives, but misuse of AI-based tools could cause harm. That potential for harm calls for ethical guidance through regulation and policy. However, the rapid advancement of AI and the often inflexible nature of government regulation have made creating such guidance challenging.

Researchers from the Texas A&M University School of Public Health have developed a new governance model for ethical guidance and enforcement in AI, known as Copyleft AI with Trusted Enforcement (CAITE). The model combines aspects of copyleft licensing and the patent-troll model, two approaches to managing intellectual property rights that are usually considered to be at odds with one another.


The CAITE model is built on an ethical use license. This license would restrict certain unethical uses of AI and require users to abide by a code of conduct. Importantly, it takes a copyleft approach: developers who create derivative models or data must release them under the same license terms as the parent work. The license would also assign its enforcement rights to a designated third party, known as a CAITE host. In this way, the enforcement rights of all these ethical use licenses would pool in a single organization, empowering the CAITE host to act as a quasi-governmental regulator of AI.
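The copyleft mechanics described above can be pictured in code. The sketch below is purely illustrative, using hypothetical class and field names (the paper does not specify any implementation): a derivative work inherits the parent's license terms unchanged, so enforcement rights stay assigned to the single CAITE host.

```python
# Illustrative sketch only: a minimal model of how CAITE-style license terms
# might propagate from a parent AI work to its derivatives, with enforcement
# rights pooled in a single host. All names here are hypothetical assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class EthicalUseLicense:
    """License restricting unethical uses and binding users to a code of conduct."""
    code_of_conduct: str
    restricted_uses: tuple[str, ...]
    enforcement_host: str  # the designated CAITE host holding enforcement rights


@dataclass
class AIWork:
    """An AI model or dataset distributed under an ethical use license."""
    name: str
    license: EthicalUseLicense

    def derive(self, new_name: str) -> "AIWork":
        # Copyleft behaviour: a derivative work carries the same license terms
        # as the parent, so enforcement stays pooled with the CAITE host.
        return AIWork(name=new_name, license=self.license)


# Example: the derivative inherits the parent's terms unchanged.
parent_license = EthicalUseLicense(
    code_of_conduct="CAITE code of conduct v1",
    restricted_uses=("mass surveillance", "discriminatory profiling"),
    enforcement_host="CAITE Host Org",
)
parent = AIWork("base-model", parent_license)
child = parent.derive("fine-tuned-model")
assert child.license == parent.license
```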

A CAITE host can set consequences for unethical actions, such as financial penalties or reporting violations of consumer protection law. At the same time, the approach allows for leniency policies that promote self-reporting, offering a flexibility that typical government enforcement schemes often lack. For example, incentives for AI users to report biases they discover in their models could enable the CAITE host to warn other users who rely on those potentially dangerous models.
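To make the enforcement and leniency logic concrete, here is a minimal sketch under stated assumptions: the penalty amount, leniency discount, and all method names are hypothetical, not part of the published model. It shows a host warning other users of a model once a bias is reported, and reducing the penalty when the report is a self-report.

```python
# Illustrative sketch only: one way a CAITE host's enforcement and leniency
# logic could be expressed. Figures and names are assumptions for illustration.
from collections import defaultdict


class CAITEHost:
    """Pools enforcement rights and tracks reported issues across licensed models."""

    BASE_PENALTY = 10_000        # assumed baseline fine for a violation
    SELF_REPORT_DISCOUNT = 0.8   # assumed leniency: 80% reduction for self-reporting

    def __init__(self) -> None:
        self.model_users: dict[str, set[str]] = defaultdict(set)
        self.warnings: list[str] = []

    def register_user(self, model: str, user: str) -> None:
        self.model_users[model].add(user)

    def report_bias(self, model: str, reporter: str, self_reported: bool) -> float:
        # Warn every other user relying on the potentially dangerous model.
        for user in self.model_users[model] - {reporter}:
            self.warnings.append(f"warn {user}: bias reported in {model}")
        # Leniency policy: self-reporters face a reduced penalty.
        penalty = float(self.BASE_PENALTY)
        if self_reported:
            penalty *= (1 - self.SELF_REPORT_DISCOUNT)
        return penalty


host = CAITEHost()
host.register_user("credit-scoring-model", "bank_a")
host.register_user("credit-scoring-model", "bank_b")
fine = host.report_bias("credit-scoring-model", reporter="bank_a", self_reported=True)
print(fine)           # 2000.0 after the assumed leniency discount
print(host.warnings)  # bank_b is warned about the reported bias
```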

The researchers say that the model, while flexible, will require participation from a large portion of the AI community. Pilot implementations of ethical policies built using the CAITE approach will also require further research and funding, and implementing the model will depend on AI community members from many different disciplines to develop its features and overcome the challenges that arise.

Reference: C. D. Schmit et al., Leveraging IP for AI governance, Science (2023). DOI: 10.1126/science.add2202

