What the U.S. Can Learn From China About Regulating AI
Over the past two years, China has enacted some of the world’s earliest and most sophisticated rules for AI.
By Matt Sheehan, a fellow at the Carnegie Endowment for International Peace.
SEPTEMBER 12, 2023, 3:04 PM
On Sept. 13, U.S. Senate Majority Leader Chuck Schumer will hold a
closed-door AI Insight Forum to inform how Congress should approach
regulating artificial intelligence. Among the attendees will be Alphabet
CEO Sundar Pichai and Tesla CEO Elon Musk, as well as representatives
from U.S. labor and civil society organizations.
But for grounded insights into how to regulate AI, Schumer’s team should look in an unlikely place: China.
Over the past two years, China has enacted some of the world’s earliest
and most sophisticated regulations targeting AI. On the surface, these
regulations are often anathema to what U.S. leaders hope to achieve. For
instance, China’s recent generative AI regulation mandates that
companies uphold “core socialist values,” whereas Schumer has called for
legislation requiring that U.S. AI systems “align with our democratic
values.”
Yet those headline ideological differences blind us to an uncomfortable
reality: The United States can actually learn a lot from China’s
approach to governing AI. Of course, Washington shouldn’t require that
AI systems “adhere to the correct political direction,” as one Chinese
regulation mandates. But if we can look beyond the ideological content
of the rules, we can learn from the underlying structure of the
regulations and the process by which China has rolled them out. If taken
seriously, those structure- and process-oriented lessons could be
invaluable as U.S. leaders navigate a morass of AI issues over the
coming years.
The clearest difference between the nascent congressional approach and
China’s regulations lies in their scope. Schumer is leading a push for
comprehensive AI legislation that would address the technology’s impact
on national security, jobs, misinformation, bias, democratic values, and
more. That approach is praiseworthy for its ambition, but cramming
solutions to all of these problems into a single piece of legislation is
almost impossible. The contours of these problems are just coming into
focus, and the interventions needed to address each issue may prove
wildly different.
By contrast, the Chinese government has taken a targeted and iterative
approach to AI governance. Instead of immediately pursuing a single
all-encompassing law, China has picked out
specific applications that it was concerned about and developed a series
of regulations to tackle those concerns. That has allowed it to
steadily build up new policy tools and regulatory know-how with each new
regulation. And when China’s initial regulations proved insufficient
for a fast-moving technology like AI, it quickly iterated on them.
The Chinese government started with a pair of regulations targeting AI
applications that threatened a core priority: control over the creation
and dissemination of information online. Algorithm-driven news apps,
including one created by TikTok’s parent company, ByteDance, were
eroding the Chinese Communist Party’s ability to control which news
stories were placed in front of Chinese readers.
So, in 2021, China’s cyberspace regulator rolled out new rules governing
the recommendation algorithms used to personalize content for users. It
ordered companies to ensure recommended content didn’t violate
censorship controls and gave Chinese users new rights, such as the right
to turn off algorithmic recommendations and to delete certain tags used
to personalize content for them. The regulations even reinforced the
rights of gig workers whose schedules and compensation are determined by
algorithms—an attempt by Chinese regulators to address public outcry
over exploitative labor practices by algorithm-driven food delivery
companies.
At the same time, the Chinese government was growing concerned about the
impact of deepfakes. So, in 2022, it imposed a set of rules covering
“deep synthesis,” a Chinese term for synthetically generated images,
video, audio, and text—what we today call generative AI. The regulation
contained many boilerplate ideological controls, but it also mandated
that companies apply digital watermarks and conspicuous labels to
synthetically generated content, a policy idea recently pushed by the
White House.
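To make the labeling idea concrete, here is a minimal sketch of what a “conspicuous label” requirement might look like in practice, using Python’s Pillow library to stamp a visible notice on a synthetic image. The label text, placement, and library choice are assumptions for illustration only; neither the Chinese regulation nor the White House proposal prescribes an implementation.

```python
# Illustrative sketch only: one possible way to satisfy a "conspicuous label"
# mandate for AI-generated images. The label text and placement are assumed
# for illustration, not taken from any regulation.
from PIL import Image, ImageDraw

def stamp_ai_label(in_path: str, out_path: str, text: str = "AI-GENERATED") -> None:
    """Stamp a visible provenance notice in the lower-left corner of an image."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    x, y = 10, img.height - 30  # lower-left corner
    box = draw.textbbox((x, y), text)  # bounding box of the rendered text
    # A solid backing rectangle keeps the label readable on any background.
    draw.rectangle([box[0] - 4, box[1] - 4, box[2] + 4, box[3] + 4], fill="black")
    draw.text((x, y), text, fill="white")
    img.save(out_path)

# Hypothetical file names, for illustration.
stamp_ai_label("synthetic.png", "synthetic_labeled.png")
```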
However, just five days after China’s deep synthesis regulation was
finalized, OpenAI changed the game when it released ChatGPT. The Chinese
regulation technically covered AI-generated text, but it was designed
with visual content in mind. Large language models such as ChatGPT
presented new issues, so Chinese regulators quickly set to work crafting
a new generative AI regulation addressing those concerns. They released
a draft regulation in April and a finalized version in July. Even the
final rule, which took effect in August, is labeled “interim,” leaving
room for further iteration as the technology evolves.
That quick turnaround was made easier because China had used the
previous two regulations to begin building out its regulatory toolkit
for AI. Key among these tools was the algorithm registry, a government
database for gathering basic information on algorithmic systems.
Companies deploying algorithms in regulated fields must disclose what
datasets their systems were trained on, whether those systems use
biometric information, and the results of a “security self-assessment.”
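As a rough illustration of the kind of record such a registry collects, here is a minimal sketch of a filing as a Python dataclass. The field names are guesses based on the disclosures described above, not China’s actual filing schema.

```python
# Minimal sketch of an algorithm-registry filing. Field names are illustrative
# guesses based on the disclosures described above, not the actual schema used
# by China's cyberspace regulator.
from dataclasses import dataclass

@dataclass
class AlgorithmFiling:
    provider: str                  # company deploying the algorithm
    system_name: str               # e.g., a recommendation or generative model
    application_domain: str        # regulated field the system operates in
    training_datasets: list[str]   # datasets the system was trained on
    uses_biometric_data: bool      # whether biometric information is used
    security_self_assessment: str  # results of the company's own review

# Hypothetical example filing.
filing = AlgorithmFiling(
    provider="ExampleCo",
    system_name="feed-ranker-v2",
    application_domain="content recommendation",
    training_datasets=["user-clicks-2023"],
    uses_biometric_data=False,
    security_self_assessment="summary of the company's self-review",
)
```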
The registry was created by the regulation on recommendation algorithms,
and it was reused in the deep synthesis and generative AI regulations.
Similarly, the requirement to label AI-generated content first appeared
in the deep synthesis regulation and was then included in the generative
AI regulation.
Along the way, Chinese regulators have been learning from and iterating
on these requirements. They’re figuring out what they don’t know and
what information is actually useful for regulators to gather. It’s a
learning-by-doing approach that prioritizes getting started on specific
problems before trying to craft one all-encompassing regulation.
And that’s where Schumer and his colleagues can learn something from
China’s approach. Instead of trying to pass one massive piece of
umbrella AI legislation, Congress should pick one or two concrete issues
to tackle first—for example, misinformation threats from highly
realistic deepfakes.
In crafting that targeted regulation, policymakers can build up their
understanding of the technology and effective interventions. They can
begin creating new regulatory tools, such as technical watermarking
requirements or model audits, that can be reused and iterated on in
future legislation. Accumulating that know-how and building up
regulatory tools will then allow policymakers to respond more quickly to
new AI challenges that lie ahead.
As countries around the world begin to experiment with how to govern AI,
there is an enormous opportunity to learn from one another. In my
conversations, members of China’s AI policy community consistently ask
about the latest proposals in the United States and
Europe, analyzing these debates and picking out ideas that can be
adapted for the Chinese context. Despite the political and ideological
differences, China’s policy community remains committed to understanding
the U.S. approach to governing AI and learning from it where it can.
That willingness to learn from a rival can be a major advantage in
geopolitics. If policymakers in the United States can manage to do the
same, it might just give them a leg up in the competition to shape the
future of AI, both at home and abroad.
Matt Sheehan is a fellow at the Carnegie Endowment for International Peace. Twitter: @mattsheehan88