Although their attempts to keep up with developments in artificial intelligence have mostly failed, regulators around the world are taking very different approaches to controlling the technology. The result is a highly fragmented and confusing global regulatory landscape for a borderless technology that promises to transform labor markets, contribute to the spread of disinformation or even pose a risk to humanity.
The main regulatory frameworks for AI include:
European law based on risk: The European Union’s AI Act, which was under negotiation on Wednesday, would impose regulations proportional to the level of risk posed by an AI tool. The idea is a sliding scale of rules, with the heaviest restrictions reserved for the riskiest AI systems. The law would sort AI tools into four categories: unacceptable, high, limited and minimal risk.
Unacceptable risks include AI systems that perform social scoring of individuals or real-time facial recognition in public places; these would be banned. Tools that pose less risk, such as software that generates manipulated videos and “deepfake” images, would have to disclose that people are viewing AI-generated content. Violators could be fined 6% of their global sales. Minimal-risk systems include spam filters and AI-generated video games.
American voluntary codes of conduct: The Biden administration has given companies leeway to voluntarily police themselves for safety and security risks. In July, the White House announced that several AI makers, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, had agreed to self-regulate their systems.
The voluntary commitments included third-party security testing of tools, known as red-teaming, research into bias and privacy issues, sharing information about risks with governments and other organizations, and developing tools to fight societal challenges such as climate change, along with transparency measures to identify AI-generated material. The companies were already meeting many of these commitments.
American law based on technology: Any substantive regulation of AI will have to come from Congress. Senate Majority Leader Chuck Schumer, Democrat of New York, has promised a comprehensive AI bill, perhaps next year.
But so far, lawmakers have introduced bills focused on the production and deployment of AI systems. Proposals include creating an agency, like the Food and Drug Administration, that could issue regulations for AI providers, approve licenses for new systems and set standards. Sam Altman, the chief executive of OpenAI, has supported the idea. Google, however, has proposed that the National Institute of Standards and Technology, which was founded more than a century ago and has no regulatory authority, serve as the hub of government oversight.
Other bills focus on copyright violations by AI systems that gobble up intellectual property to build their models. Proposals on election security and on limiting the use of “deepfakes” have also been put forward.
China is moving quickly on speech regulation: Since 2021, China has moved swiftly to roll out regulations on recommendation algorithms, synthetic content such as deepfakes, and generative AI. The rules prohibit price discrimination by recommendation algorithms on social networks, for example. AI makers must label AI-generated synthetic content. And draft rules for generative AI systems, such as OpenAI’s chatbot, would require that both the training data and the content the technology creates be “true and precise,” which many see as an attempt to censor what the systems say.
Global cooperation: Many experts say that effective regulation of AI will require global collaboration. So far, such diplomatic efforts have produced few concrete results. One idea that has been floated is the creation of an international agency along the lines of the International Atomic Energy Agency, which was established to limit the proliferation of nuclear weapons. The challenge will be overcoming the geopolitical distrust, economic competition and nationalist impulses that have become so closely intertwined with the development of AI.