China Races To Regulate AI After Playing Catchup To ChatGPT

After playing catchup to ChatGPT, China is racing to regulate the rapidly advancing field of artificial intelligence (AI).

Under draft regulations released this week, Chinese tech companies will need to register generative AI products with China’s cyberspace agency and submit them to a security assessment before they can be released to the public.

The regulations cover practically all aspects of generative AI, from how it is trained to how users interact with it, in an apparent bid by Beijing to control the at-times unwieldy technology, whose breakneck development has prompted warnings from tech leaders including Elon Musk and Apple co-founder Steve Wozniak.

Under the rules unveiled by the Cyberspace Administration of China on Tuesday, tech companies will be responsible for the “legitimacy of the source of pre-training data” to ensure content reflects the “core value of socialism”.

Companies must ensure AI does not call for the “subversion of state power” or the overthrow of the ruling Chinese Communist Party (CCP), incite moves to “split the country” or “undermine national unity”, produce content that is pornographic, or encourage violence, extremism, terrorism or discrimination.

They are also restricted from using personal data as part of their generative AI training material and must require users to verify their real identity before using their products.

Those who violate the rules will face fines of between 10,000 yuan ($1,454) and 100,000 yuan ($14,545) as well as a possible criminal investigation.

While China has yet to match the success of California-based OpenAI’s groundbreaking ChatGPT, its push to regulate the nascent field has moved faster than elsewhere.

AI in the United States is still largely unregulated outside of the recruiting industry. AI regulation has yet to gain much traction in the US Congress, although privacy-related regulations around AI are expected to start rolling out at the state level this year.

The European Union has proposed sweeping legislation known as the AI Act that would classify which kinds of AI are “unacceptable” and banned, “high risk” and regulated, and unregulated.

The law would follow up on the EU’s General Data Protection Regulation, passed in 2018, which is considered one of the toughest data privacy-protection laws in the world.

The EU is preparing legislation to designate certain AI as “unacceptable” and “high risk” [File: Johanna Geron/Reuters]

Beyond the US and the EU, Brazil is also working towards AI regulation, with a draft law under consideration by the country’s Senate.

The proposed rules, which are still in draft form and open to public feedback until May, come on the heels of a broader regulatory crackdown on China’s tech industry that began in 2020, targeting everything from anti-competitive behaviour to how user data is handled and stored.

Since then, Chinese regulators have introduced data privacy rules, created a registry of algorithms and, most recently, begun to regulate deep synthesis, or “deepfake”, technology.

The regulatory push ensures that “big tech companies in China are following a direction that the party-state wants,” Chim Lee, a Chinese technology analyst at the Economist Intelligence Unit, told Al Jazeera.

Compared with other tech, generative AI poses a particularly sticky problem for the CCP, which is “concerned about the potential for these large language models to generate politically sensitive content”, Jeffrey Ding, an assistant professor at George Washington University who studies the Chinese tech sector, told Al Jazeera.

Human-like chatbots like ChatGPT, which is restricted in China, scrape millions of data points from across the internet, including on topics deemed taboo by Beijing, such as Taiwan’s disputed political status and the 1989 Tiananmen Square crackdown.

In 2017, two early Chinese chatbots were taken offline after they told users they did not love the CCP and wanted to move to the US.

ChatGPT, which was released in November, has also generated controversy in the West, from telling a user posing as a mental health patient to take their own life to encouraging a New York Times journalist to leave his wife.

While the answers produced by ChatGPT have impressed many users, they have also included inaccurate information and other hiccups such as broken URLs.

Chinese competitors to ChatGPT, like Baidu’s ERNIE, are trained on data from outside China’s “Great Firewall”, including information gleaned from banned websites such as Wikipedia and Reddit. Despite its access to information deemed sensitive by Beijing, ERNIE has been widely viewed as inferior to ChatGPT.

Beijing’s rules around AI could be a major headache for companies like Baidu and Alibaba, which this week released its ChatGPT rival Tongyi Qianwen, to implement, Matt Sheehan, a researcher at the Carnegie Endowment for International Peace, told Al Jazeera.

Sheehan said the regulations set an “extremely high bar” and it was unclear whether companies would be able to meet them with currently available technology.

Regulators may choose to not enforce the rules strictly in the beginning, unless they find particularly egregious violations or decide to make an example of a particular company, Sheehan added.

“Like a lot of Chinese regulation, they define things pretty broadly, so it essentially shifts the power to the regulators and the enforcers such that they can enforce, and they can punish companies when they choose to,” he said, adding this may particularly be the case if companies produce “inaccurate” outputs that contradict the official government narrative.
