AI is likely to make our world more divided
Asian governments are split into two camps: Pro-innovation and pro-security
From California to Beijing, and Tokyo to Taipei, global technology leaders are racing to advance artificial intelligence, including by developing new large language models (LLMs) and making rapid progress in computing infrastructure and chipmaking.
At the same time, however, policymakers around the world have grown increasingly anxious about maintaining a handle on the societal and economic impact of AI and other emerging technologies. The European Union may be proud of having the world's first comprehensive AI law, but the EU AI Act, effective from August this year, should not be viewed as the best role model for others. Many have argued that side effects of the bloc's overregulating approach will slow down innovation as well as increase legal and compliance costs, especially for startups.
According to a recent research report published by The Asia Group (TAG) on Asian governments' different regulatory approaches to AI, most Asian governments do not plan to simply copy the EU model. The so-called Brussels effect -- i.e., the EU's influence on shaping global regulation -- doesn't seem to apply to Asia's AI governance landscape.
Instead, Asian governments are now divided into two camps on AI governance: one group eager to focus on fostering innovation and economic benefits, and another keen to prioritize national security concerns. The more pro-innovation camp (which includes Japan, Taiwan and Singapore) is likely to take a more guidance-oriented and self-regulatory approach. The more pro-security group (which includes China and Vietnam) is already adopting much stricter policies, including many mandatory requirements to approve AI models before they are put into commercial use, as well as heavy financial penalties on AI developers if they violate certain rules.
In the U.S., the Biden administration has adopted a combination of executive orders and self-regulatory guidance to give Silicon Valley more flexibility as it continues to lead AI development globally. In comparison, China has issued dozens of rules and regulations to keep AI under the tight control of the government -- so it can manage to mitigate social and political risks.
Japan and South Korea seem likely to come up with their own ways to govern AI, while India is expected to increase government scrutiny and incorporate AI-related regulations into the nation's forthcoming Digital India Act.
Given these diverse and divided dynamics, it is unlikely that there will be a broad alignment or any uniform AI governance framework with global reach -- although various national approaches will certainly influence one another.
Meanwhile, the AI industry itself is being dragged into a debate similar to the ongoing smartphone competition between Apple -- the owner of the closed iOS ecosystem -- and Google, the parent of the open-source Android system. OpenAI's co-founder and CEO Sam Altman published an opinion piece in the Washington Post in July, in which he asked a politically charged question: Who should control AI? The U.S. or China?
"The challenge of who will lead on AI is not just about exporting technology, it's about exporting the values that the technology upholds," Altman argued in his opinion piece, also signaling the potential risks of an open-source AI model, of which Meta founder and CEO Mark Zuckerberg is a strong advocate.
The U.S. and China are already locked in a de facto "AI war." Soon this U.S.-China rivalry will also extend to global AI standard-setting. China has been assertive in drafting its own AI laws and aims to shape global AI standards through United Nations frameworks and other channels.
Some countries in Asia, Africa and the Middle East will perhaps wait and see which model of AI governance -- the U.S. or the Chinese one -- works best for their own social and economic context. A divided world of AI governance may pose more economic challenges to the Global South, as some of these countries already feel left behind in the AI race.
Despite the recent trend of regulatory fragmentation -- a major concern highlighted by the United Nations' latest report on global AI governance -- there remains hope that global leaders will join hands to explore and govern AI, treating the technology as both the biggest challenge and the biggest opportunity for humankind rather than the property of any single nation.
Some experts have highlighted the growing demand to create something akin to the International Atomic Energy Agency for AI, or something like the Internet Corporation for Assigned Names and Numbers, also known as ICANN, which was founded to standardize how we navigate the internet. We need international standards for AI, akin to those we already have for internet domains and country codes for telephones.
Looking ahead, however, the AI regulatory landscape will likely become more -- rather than less -- diverse as both developed and emerging economies advocate for their own AI governance frameworks. Those frameworks are likely to keep shifting and to remain inconsistent across jurisdictions, creating significant compliance and operational risks for multinational, data-driven businesses.
(This article, co-bylined with Nick Ackert, was first published in Nikkei Asia on October 7, 2024)