{"title":"中国负责任人工智能的规范体系:从软法到硬法","authors":"Weixing Shen, Yun Liu","doi":"10.1017/9781009207898.012","DOIUrl":null,"url":null,"abstract":"Progress in Artificial Intelligence (AI) technology has brought us novel experiences in many fields and has profoundly changed industrial production, social governance, public services, business marketing, and consumer experience. Currently, a number of AI technology products or services have been successfully produced in the fields of industrial intelligence, smart cities, self-driving cars, smart courts, intelligent recommendations, facial recognition applications, smart investment consultants, and intelligent robots. At the same time, the risks of fairness, transparency, and stability of AI have also posed widespread concerns among regulators and the public. We might have to endure security risks when enjoying the benefits brought by AI development, or otherwise to bridge the gap between innovation and security for the sustainable development of AI. The Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence declares that China is devoted to becoming one of the world’s major AI innovation centers. It lists four dimensions of construction goals: AI theory and technology systems, industry competitiveness, scientific innovation and talent cultivation, and governance norms and policy framework. Specifically, by 2020, initial steps to build AI ethical norms and policies and legislation in related fields has been completed; by 2025, initial steps to establish AI laws and regulations, ethical norms and policy framework, and to develop AI security assessment and governance capabilities shall be accomplished; and by 2030, more complete AI laws and regulations, ethical norms, and policy systems shall be accomplished. Under the guidance of the plan, all relevant departments in Chinese authorities are actively building a normative governance system with equal emphasis on soft and hard laws. This chapter focuses on China’s efforts in the area of responsible AI, mainly from the perspective of the evolution of the normative system, and it introduces some recent legislative actions. The chapter proceeds mainly in two parts. In the first part, we would present the process of development from soft law to hard law through a comprehensive view on the normative system of responsible AI in China. In the second part, we set out a legal framework for responsible AI with four dimensions: data, algorithms, platforms, and application scenarios, based on statutory requirements for responsible AI in China in terms of existing and developing","PeriodicalId":306343,"journal":{"name":"The Cambridge Handbook of Responsible Artificial Intelligence","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"China's Normative Systems for Responsible AI: From Soft Law to Hard Law\",\"authors\":\"Weixing Shen, Yun Liu\",\"doi\":\"10.1017/9781009207898.012\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Progress in Artificial Intelligence (AI) technology has brought us novel experiences in many fields and has profoundly changed industrial production, social governance, public services, business marketing, and consumer experience. 
Currently, a number of AI technology products or services have been successfully produced in the fields of industrial intelligence, smart cities, self-driving cars, smart courts, intelligent recommendations, facial recognition applications, smart investment consultants, and intelligent robots. At the same time, the risks of fairness, transparency, and stability of AI have also posed widespread concerns among regulators and the public. We might have to endure security risks when enjoying the benefits brought by AI development, or otherwise to bridge the gap between innovation and security for the sustainable development of AI. The Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence declares that China is devoted to becoming one of the world’s major AI innovation centers. It lists four dimensions of construction goals: AI theory and technology systems, industry competitiveness, scientific innovation and talent cultivation, and governance norms and policy framework. Specifically, by 2020, initial steps to build AI ethical norms and policies and legislation in related fields has been completed; by 2025, initial steps to establish AI laws and regulations, ethical norms and policy framework, and to develop AI security assessment and governance capabilities shall be accomplished; and by 2030, more complete AI laws and regulations, ethical norms, and policy systems shall be accomplished. Under the guidance of the plan, all relevant departments in Chinese authorities are actively building a normative governance system with equal emphasis on soft and hard laws. This chapter focuses on China’s efforts in the area of responsible AI, mainly from the perspective of the evolution of the normative system, and it introduces some recent legislative actions. The chapter proceeds mainly in two parts. In the first part, we would present the process of development from soft law to hard law through a comprehensive view on the normative system of responsible AI in China. In the second part, we set out a legal framework for responsible AI with four dimensions: data, algorithms, platforms, and application scenarios, based on statutory requirements for responsible AI in China in terms of existing and developing\",\"PeriodicalId\":306343,\"journal\":{\"name\":\"The Cambridge Handbook of Responsible Artificial Intelligence\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Cambridge Handbook of Responsible Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1017/9781009207898.012\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Cambridge Handbook of Responsible Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/9781009207898.012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
China's Normative Systems for Responsible AI: From Soft Law to Hard Law
Progress in Artificial Intelligence (AI) technology has brought novel experiences to many fields and has profoundly changed industrial production, social governance, public services, business marketing, and consumer experience. A number of AI products and services have now been successfully deployed in areas such as industrial intelligence, smart cities, self-driving cars, smart courts, intelligent recommendation, facial recognition, smart investment consultants, and intelligent robots. At the same time, risks to the fairness, transparency, and stability of AI have raised widespread concern among regulators and the public. We may either have to tolerate security risks while enjoying the benefits of AI development, or bridge the gap between innovation and security to put AI on a path of sustainable development. The Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence declares that China is devoted to becoming one of the world’s major AI innovation centers. It sets out construction goals along four dimensions: AI theory and technology systems, industrial competitiveness, scientific innovation and talent cultivation, and governance norms and policy frameworks. Specifically, by 2020, initial steps to build AI ethical norms and to adopt policies and legislation in related fields were to be completed; by 2025, AI laws and regulations, ethical norms, and a policy framework were to be initially established, and AI security assessment and governance capabilities developed; and by 2030, more complete systems of AI laws and regulations, ethical norms, and policies are to be in place. Under the guidance of the plan, the relevant Chinese government departments are actively building a normative governance system that places equal emphasis on soft law and hard law. This chapter focuses on China’s efforts in the area of responsible AI, mainly from the perspective of the evolution of its normative system, and introduces some recent legislative actions. The chapter proceeds in two parts. In the first part, we present the development from soft law to hard law through a comprehensive view of China’s normative system for responsible AI. In the second part, we set out a legal framework for responsible AI along four dimensions: data, algorithms, platforms, and application scenarios, based on China’s existing and developing statutory requirements for responsible AI.