AI legislation accelerates
A display of books on law at Wangfujing Bookstore in Beijing Photo: Yang Lanlan/CSST
The current wave of artificial intelligence (AI) is sweeping across the globe, with the sector gradually developing characteristics of scale and industrialization. However, the rapid development of AI has also introduced challenges such as data privacy and security concerns, algorithmic bias and discrimination, and the increasing complexity of human-machine relationships. These issues are continually testing the limits of existing social order and legislative systems.
On August 1, the world’s first comprehensive AI regulation—the European Union’s Artificial Intelligence Act—officially came into effect. This legislation aims to promote the widespread use of trustworthy AI while safeguarding democracy, human rights, and the rule of law. In fact, many countries and regions around the world are actively developing regulatory frameworks for AI in search of the best way for the technology to coexist with human society. Examples include China’s Interim Measures for the Management of Generative Artificial Intelligence Services and Singapore’s Model AI Governance Framework.
New development stage
Current domestic AI governance is advancing into a new phase of innovative development, particularly with the formulation and release of the world’s first generative AI governance regulations, namely the above-mentioned “Interim Measures,” said Zhang Linghan, a professor from the Data Law Research Institute at China University of Political Science and Law (CUPL). China has in fact become the only country in the world to establish a comprehensive and multi-level AI governance system that covers all aspects, including policy planning, science and technology ethics, laws, administrative regulations, departmental rules, and technical standards.
With the accelerated pace of AI governance in China, some scholars observe that the country’s legislative efforts in the field of AI can be characterized as “small steps, rapid progress.” At present, China has already established a multi-level AI compliance system combining laws with regulations. In the view of Gong Tingtai, deputy director of the China Institute of Law and Modernization at Nanjing Normal University, current AI legislation in China presents three major trends: systematization, unification of rule of law and rule of virtue, and the coordination of domestic rule of law and foreign-related rule of law.
From the perspective of coordinating domestic and foreign-related rule of law, the rapid development of AI presents both opportunities and challenges for global development, necessitating a global legal response. Therefore, the combination of domestic rule of law and foreign-related governance is seen as an effective strategy for global AI governance. China has already introduced initiatives such as the Global Initiative on Data Security and the Global Artificial Intelligence Governance Initiative, and is actively participating in the negotiation and implementation of international AI cooperation agreements, including the Regional Comprehensive Economic Partnership Agreement and the Bletchley Declaration. These moves represent a global trend in AI governance and legislation, contributing to the broader goal of building a human community with a shared future.
Academic exploration
China is actively advancing the development of its legal framework for AI systems. However, in practice, problems remain, such as the obvious lack of a systematic approach, poor coordination between different legal provisions, conflicts in the hierarchy of legal norms, and the fragmentation of legal documents. In response, the academic community has engaged in efforts to address these issues through AI legislation. To date, two proposed versions of an “AI law” have been publicly released by scholars. One is the Model Law on Artificial Intelligence (Expert Draft Proposal) Version 2.0, released by the Institute of Law at the Chinese Academy of Social Sciences (CASS) and other institutions. The other is the Artificial Intelligence Law of the People’s Republic of China (Draft of Expert Suggestions), issued by the Data Law Research Institute at CUPL and other institutions.
Zhou Hui, deputy director of the internet and information law research department at the Institute of Law under CASS, highlighted the commonalities between the two versions of the AI law. Both drafts place significant emphasis on establishing frameworks supportive of AI development. Both propose measures to support the AI industry in terms of the use of training data, the provision of computing power, and innovation in algorithm models. Additionally, they emphasize the rational use of public resources and advocate for the development of an open-source AI ecosystem. At the same time, both versions clearly differentiate between the roles of AI developers, providers, and users, and design specific regulatory measures tailored to each of these groups.
The differences between the two versions lie in their respective specific governance schemes. For example, Zhou explained that the Model Law on Artificial Intelligence (Expert Draft Proposal) Version 2.0 proposes the establishment of a dedicated AI regulatory authority to take a more centralized and unified approach to AI governance. This authority would be responsible for overseeing the licensing of AI research and development and would be empowered to create a “negative list” that identifies areas of AI research and development requiring prior regulatory approval. This approach would enhance the predictability of AI industry development and technological application, reduce compliance costs, and ensure strict supervision to avoid regulatory uncertainty.
Zhang noted that the key difference between the two versions lies in the regulatory approach toward AI. The Artificial Intelligence Law of the People’s Republic of China (Draft of Expert Suggestions) introduces regulatory frameworks specifically for critical AI and AI used in special application areas. Unlike the pre-licensing approval system required by the negative list approach, critical AI does not face a predefined entry threshold. Instead, relevant entities need to fulfill obligations, such as registration on regulatory platforms, only after receiving notification from the competent authorities identifying their AI as critical. For AI in special application areas, the draft imposes additional safety obligations tailored to the specific context of the application. This regulatory design aims to lower the entry barriers for innovation in the AI industry while ensuring safety requirements are met. By alleviating the burdens of pre-approval, it seeks to foster the healthy development of China’s AI industry while maintaining necessary safety standards.
Editor: Yu Hui
Copyright © 2023 CSSN. All Rights Reserved.