This has been reported heavily over the last week – here’s Tech Monitor’s take.
Six years ago, memes comparing Xi Jinping to Winnie the Pooh spread like wildfire across China’s internet before being snuffed out by the country’s censors. Creating and disseminating more sophisticated digital imagery of the honey-loving bear could now earn you a prison term in the country, as a new deepfakes law called the ‘Provisions on the Administration of Deep Synthesis of Internet Information Services’ comes into effect this week. As nations around the world mull over regulations to target one of the most disruptive media technologies in recent years, Beijing is preparing to wage a new war on any online content it considers to be a threat to its stability and legitimacy in the eyes of the Chinese people.
China is not the only nation to consider new regulations on deepfakes. Both the UK and Taiwanese governments have announced their intention to ban the creation and sharing of deepfake pornographic videos without consent, with similar legislation being proposed in the US at the federal level (several states have already passed such laws). The latest regulations in China, however, extend to any deepfake content, imposing new rules on its creation, dissemination and labelling – in effect, going much further in scope and detail than most other existing national legislation concerning synthetic audio and video.
China first
Part of the reason why China has decided to press ahead with such wide-ranging regulations on deepfakes is its desire to set the agenda for regulating the next generation of disruptive technologies. The CCP has always been aggressive about content regulation, explains Rui Ma, a Chinese technology analyst and co-host of the Tech Buzz China podcast. “China has made it clear it wants to be a regulatory leader in emerging tech,” says Ma. “It realizes that if large markets make the rules first, those rules tend to stick and become a reference point for other countries – assuming it is well-researched and reasonable, of course.”
She adds that China is well aware of how norms surrounding emerging technologies are established through regulations, as “many of its own laws are based upon precedent in the United States and the European Union”. One of the more prominent examples is China’s Personal Information Protection Law introduced in November 2021, which largely mirrors the EU’s landmark General Data Protection Regulation. With its new regulations on deepfakes, China is taking an even greater step in establishing itself as a reference point, rather than following the lead of other jurisdictions.
But the scope of the incoming deepfakes law goes far beyond what most people assume deepfakes are, which might include artificially created videos of public figures overlaid with audio from someone else: an eventuality the Chinese state had to contend with in 2019 with the emergence of ZAO, a popular deepfaking app shut down three days after release for privacy violations. According to China’s Cyberspace Administration, the government views deepfakes as a wide-ranging medium to conduct all sorts of crimes and mischief, from spreading ‘illegal and bad information’, to ‘harming the legitimate rights and interests of the people, endangering national security and social stability’.
But for Henry Ajder, a prominent adviser on generative AI and synthetic media, the wide-ranging scope of the deepfakes law makes sense from the perspective of the Chinese state. “Having a short-term horizon of looking at what is possible now will mean that these laws are going to be rapidly outdated,” says Ajder. “Given how long it takes for these laws to get passed, it makes sense to try and future-proof it by covering as many different kinds of synthetic [media] as possible, particularly [given] how 2022 saw a rapid change in the accessibility of these tools.”
Some aspects of China’s deepfakes law also mirror the areas being discussed in international regulation around synthetic content. These include building secure data pipelines to protect user privacy, fostering algorithmic transparency to understand security vulnerabilities and how bias creeps in, and clearly labelling deepfake content – all of which have been included in the EU’s AI Act and Digital Services Act, as well as in various state and federal legislation in the US, explains Ajder.
“Having a responsibility, either as an end user or as a platform, to label fake content is probably going to be something we’re going to have to start relying on,” he says. After all, Ajder explains, “it’s only a matter of time before more sophisticated technology comes in the form of gamified and low to no-code kinds of applications that anyone can use.”