
China follows EU’s lead in labeling AI-generated content

  • Staff Writer
  • Sep 16, 2024
  • 3 min read

Updated: Dec 14, 2024



China wants digital platforms to label all content generated with artificial intelligence (AI). The Cyberspace Administration of China, the country’s top internet regulator, has released a new draft seeking comments from the tech industry on how AI-generated online content should be labeled.


The draft proposes that platforms add an AI label to all AI-generated text, images and videos, as well as to the embedded metadata of the file. For audio files, a voice prompt telling listeners the content is AI-generated must also be added at both the start and the end.

Online users will also have to label their AI-generated content and name the AI software used to generate it. If a file's metadata is missing, the platform is required to verify whether the content is AI- or user-generated and to add a label if it is the former.


Lawmakers in the European Union have already taken similar steps to ensure users know when they are interacting with AI-generated content. The EU's AI Act, adopted in April 2024, also requires AI service providers to label their output as AI-generated.


Many tech firms have already started taking steps to align with these new regulations. For instance, Meta requires its users to label video or audio content that has been digitally generated or altered using AI. Although users are not required to label AI-generated images, Meta automatically adds a label if its systems detect that AI was used to create or alter them. Moreover, Meta has warned that it will penalize users who violate the AI labeling rules.


Last week, Meta announced that it is rolling out a change to its “AI info” labels to reflect the extent of AI used in content on Facebook and Instagram.


Recent breakthroughs in generative AI and the subsequent rollout of applications that can create human-like text, art and images have flooded social media networks with AI-generated content, which is often difficult to tell apart from user-generated content.


Further, the use of AI-generated deepfakes to manipulate voters in the lead-up to elections in several countries, along with the rise in online fraud, has prompted lawmakers and online platforms to take action against such content.


In January 2024, an AI-generated robocall impersonating the voice of US president Joe Biden was made to thousands of New Hampshire voters with a false message urging them to skip the state's primary and save their votes for the November election.


Unlike the US, where regulators have banned AI-generated voices in robocalls, politicians in many countries, including India, have used audio and video deepfakes of themselves to reach voters and deliver messages in different languages and dialects.

According to a May 2024 report in Wired, more than 50 million AI-generated voice-clone calls were made to voters in the two months before the elections in April.


After the release of ChatGPT in November 2022, many users turned to the generative AI chatbot to write messages, emails, briefs and articles. Many students were also caught using it to write assignments, drawing a backlash from colleges and teachers.


OpenAI, for its part, has been mulling the idea of adding a watermark to text generated with ChatGPT. While the company claims its text watermarking tool is accurate and effective even against paraphrasing, there is a lack of internal consensus on making it public.



Image credit: Pixabay

