China's Normative Systems for Responsible AI: From Soft Law to Hard Law

Weixing Shen, Yun Liu
{"title":"China's Normative Systems for Responsible AI: From Soft Law to Hard Law","authors":"Weixing Shen, Yun Liu","doi":"10.1017/9781009207898.012","DOIUrl":null,"url":null,"abstract":"Progress in Artificial Intelligence (AI) technology has brought us novel experiences in many fields and has profoundly changed industrial production, social governance, public services, business marketing, and consumer experience. Currently, a number of AI technology products or services have been successfully produced in the fields of industrial intelligence, smart cities, self-driving cars, smart courts, intelligent recommendations, facial recognition applications, smart investment consultants, and intelligent robots. At the same time, the risks of fairness, transparency, and stability of AI have also posed widespread concerns among regulators and the public. We might have to endure security risks when enjoying the benefits brought by AI development, or otherwise to bridge the gap between innovation and security for the sustainable development of AI. The Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence declares that China is devoted to becoming one of the world’s major AI innovation centers. It lists four dimensions of construction goals: AI theory and technology systems, industry competitiveness, scientific innovation and talent cultivation, and governance norms and policy framework. Specifically, by 2020, initial steps to build AI ethical norms and policies and legislation in related fields has been completed; by 2025, initial steps to establish AI laws and regulations, ethical norms and policy framework, and to develop AI security assessment and governance capabilities shall be accomplished; and by 2030, more complete AI laws and regulations, ethical norms, and policy systems shall be accomplished. Under the guidance of the plan, all relevant departments in Chinese authorities are actively building a normative governance system with equal emphasis on soft and hard laws. This chapter focuses on China’s efforts in the area of responsible AI, mainly from the perspective of the evolution of the normative system, and it introduces some recent legislative actions. The chapter proceeds mainly in two parts. In the first part, we would present the process of development from soft law to hard law through a comprehensive view on the normative system of responsible AI in China. In the second part, we set out a legal framework for responsible AI with four dimensions: data, algorithms, platforms, and application scenarios, based on statutory requirements for responsible AI in China in terms of existing and developing","PeriodicalId":306343,"journal":{"name":"The Cambridge Handbook of Responsible Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Cambridge Handbook of Responsible Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/9781009207898.012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Progress in Artificial Intelligence (AI) technology has brought novel experiences to many fields and has profoundly changed industrial production, social governance, public services, business marketing, and the consumer experience. A number of AI products and services have already been successfully deployed in the fields of industrial intelligence, smart cities, self-driving cars, smart courts, intelligent recommendation, facial recognition, smart investment consulting, and intelligent robots. At the same time, risks to the fairness, transparency, and stability of AI have raised widespread concern among regulators and the public. We may have to endure security risks while enjoying the benefits brought by AI development, or else bridge the gap between innovation and security to ensure the sustainable development of AI. The Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence declares that China is devoted to becoming one of the world’s major AI innovation centers. It lists construction goals along four dimensions: AI theory and technology systems, industry competitiveness, scientific innovation and talent cultivation, and governance norms and policy frameworks. Specifically, by 2020, the initial steps to build AI ethical norms and to develop policies and legislation in related fields have been completed; by 2025, AI laws and regulations, ethical norms, and a policy framework shall be initially established, and AI security assessment and governance capabilities shall be developed; and by 2030, more complete AI laws and regulations, ethical norms, and policy systems shall be in place. Under the guidance of the plan, the relevant departments of the Chinese authorities are actively building a normative governance system that places equal emphasis on soft law and hard law. This chapter focuses on China’s efforts in the area of responsible AI, mainly from the perspective of the evolution of the normative system, and introduces some recent legislative actions. The chapter proceeds in two parts. In the first part, we present the development from soft law to hard law through a comprehensive view of the normative system for responsible AI in China. In the second part, we set out a legal framework for responsible AI along four dimensions: data, algorithms, platforms, and application scenarios, based on China’s statutory requirements for responsible AI in existing and developing legislation.