What can the EU learn from China’s generative AI regulations before passing its AI law?

By Dr Kris Shrishak, Senior Researcher, Irish Council for Civil Liberties
If the EU is wise, it could adopt two specific requirements from China’s generative AI bill: prohibiting the use of copyrighted content and personal data to train AI without consent, writes Dr Kris Shrishak.
Companies are in a rush to deploy new versions of generative AI. They are also integrating these AI systems into various products.
Google recently announced that it would use generative AI to serve search results, mirroring Microsoft’s approach. Other companies, meanwhile, have banned their employees from using ChatGPT.
As companies rush both to promote and to ban generative AI, the risks associated with these systems grow.
These generative AI systems are being run as “experiments” on the world despite their known flaws. What should regulators and legislators do about it?
The EU tries to confront the risks
Competition and data protection regulators can and do use the tools at their disposal to address AI risks.
Lina Khan, chair of the US Federal Trade Commission (FTC), the US competition and consumer protection regulator, has written about how its existing powers can be enforced. At the same time, the UK competition authority has launched an investigation into generative AI.
The European Data Protection Board has launched a task force to exchange information on the enforcement of data protection rules in relation to ChatGPT. The Italian data protection authority took action and forced OpenAI to make limited data protection improvements.
These disparate attempts have only amplified calls for the regulation of AI systems.
On May 11, 2023, lawmakers in the European Parliament voted on a draft AI Regulation, also known as the AI Act, which attempts to regulate generative AI systems like ChatGPT by setting minimal requirements.
Until now, EU lawmakers have either ignored or only superficially attempted to address the risks of generative AI, despite the fact that some of these risks were already known in 2021.
Meanwhile, China is developing its own regulations
However, this new attempt is not the first in the world: China proposed draft regulations tailored to generative AI in April 2023.
The EU and China have taken opposing approaches to regulating generative AI. On the surface, the EU’s approach may seem the harsher one, but it is China’s that imposes stringent requirements on the development of generative AI.
It might come as a surprise, but two requirements in China’s draft law on the development of generative AI could strengthen protections for people: stronger copyright and data protection.
The lawsuits against Stability AI have raised the question of whether copyrighted content can be used to develop generative AI without consent.
China’s bill has an answer to that. It prohibits the use of data that infringes intellectual property rights in the development of generative AI.
China may not have a strong track record when it comes to IP enforcement, but that seems to be changing. This change is evident in its draft law.
The EU, on the other hand, does not take a firm position on the matter and only requires developers to provide “a sufficiently detailed summary of the use of training data protected by copyright law”. As in the case of Stability AI, rights holders will have to file complaints themselves.
You alone should have control over your personal data
The General Data Protection Regulation (GDPR) is the EU’s flagship data protection regulation, and the AI Act does not impose additional requirements on the processing of personal data by generative AI.
As OpenAI and Google gobble up personal data in their development of generative AI, EU citizens will have to wait for data protection regulators and courts to decide the legality of this.
China, on the other hand, will only allow the use of personal data for the development of generative AI with consent.
Baidu, then, should not be able to harvest the personal data of people in China from the internet for its upcoming generative AI system. What about citizens of the EU and the rest of the world?
Perhaps the most disappointing part of GDPR has been its lack of enforcement.
One would have thought the EU had learned from this mistake and made the AI Act easier to enforce. Alas, this is not the case.
What about the truth?
The irresponsible deployment of generative AI should have prompted lawmakers to require third-party evaluation of these technologies before they are rolled out around the world.
Instead, the EU continues to rely on the self-assessments of the developers of these systems. This is concerning and could create another enforcement debacle, as with the GDPR.
Text-generating AI systems like ChatGPT are sometimes referred to as “bullshit generators” that can generate and spread misinformation. They have no internal notion of truth.
If developers were required to build truth into these systems, whose truth would it be?
If China has its way, these AI systems will generate results where the oppression of Uyghurs in Xinjiang is labeled a “fight against separatism, extremism and terrorism.”
The EU bill, at this point, has nothing to say about the truth.
Dr Kris Shrishak is Senior Researcher at the Irish Council for Civil Liberties, Ireland’s oldest independent human rights watchdog.
At Euronews, we believe that all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.