
Latest publications: The Cambridge Handbook of Responsible Artificial Intelligence

Artificial Intelligence and the Right to Data Protection
Pub Date : 2021-01-19 DOI: 10.2139/SSRN.3769159
Ralf Poscher
One way in which the law is often related to new technological developments is as an external restriction. Lawyers are frequently asked whether a new technology is compatible with the law. This implies an asymmetry between technology and the law. Technology appears dynamic, the law stable. We know, however, that this image of the relationship between technology and the law is skewed. The right to data protection itself is an innovative reaction to the law from the early days of mass computing and automated data processing. The paper explores how an essential aspect of AI-technologies, their lack of transparency, might support a different understanding of the right to data protection. From this different perspective, the right to data protection is not regarded as a fundamental right of its own but rather as a doctrinal enhancement of each fundamental right against the abstract dangers of digital data collection and processing. This understanding of the right to data protection shifts the perspective from the individual data processing operation to the data processing system and the abstract dangers connected with it. The systems would not be measured by how they can avoid or justify the processing of some personal data but by the effectiveness of the mechanisms employed to avert the abstract dangers associated with a specific system. This shift in perspective should also allow an assessment of AI-systems despite their lack of transparency.
Citations: 0
Data Governance and Trust: Lessons from South Korean Experiences Coping with COVID-19
Pub Date : 1900-01-01 DOI: 10.1017/9781009207898.024
Sangchul Park, Yong Lim, Haksoo Ko
Citations: 0
China's Normative Systems for Responsible AI: From Soft Law to Hard Law
Pub Date : 1900-01-01 DOI: 10.1017/9781009207898.012
Weixing Shen, Yun Liu
Progress in Artificial Intelligence (AI) technology has brought us novel experiences in many fields and has profoundly changed industrial production, social governance, public services, business marketing, and consumer experience. A range of AI products and services have already been successfully deployed in industrial intelligence, smart cities, self-driving cars, smart courts, intelligent recommendation, facial recognition, smart investment advisory, and intelligent robotics. At the same time, the risks AI poses to fairness, transparency, and stability have raised widespread concern among regulators and the public. We may have to endure security risks while enjoying the benefits of AI development, or else bridge the gap between innovation and security to ensure AI's sustainable development. The Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence declares that China is devoted to becoming one of the world's major AI innovation centers. It sets construction goals along four dimensions: AI theory and technology systems, industry competitiveness, scientific innovation and talent cultivation, and governance norms and policy frameworks. Specifically, by 2020, initial steps toward AI ethical norms and related policies and legislation were to be completed; by 2025, AI laws and regulations, ethical norms, and a policy framework were to be initially established, together with AI security assessment and governance capabilities; and by 2030, more complete AI laws and regulations, ethical norms, and policy systems were to be in place. Under the guidance of the plan, the relevant departments of the Chinese authorities are actively building a normative governance system that places equal emphasis on soft and hard law.
This chapter examines China's efforts in the area of responsible AI, mainly from the perspective of the evolution of the normative system, and introduces some recent legislative actions. The chapter proceeds in two parts. In the first part, we present the development from soft law to hard law through a comprehensive view of the normative system of responsible AI in China. In the second part, we set out a legal framework for responsible AI along four dimensions: data, algorithms, platforms, and application scenarios, based on China's statutory requirements for responsible AI, both existing and developing
Citations: 0
Fostering the Common Good: An Adaptive Approach Regulating High-Risk AI-Driven Products and Services
Pub Date : 1900-01-01 DOI: 10.1017/9781009207898.011
Thorsten Schmidt, S. Voeneky
Citations: 1
Artificial Intelligence: Key Technologies and Opportunities
Pub Date : 1900-01-01 DOI: 10.1017/9781009207898.003
Wolfram Burgard
Citations: 0
Forward to the Past: A Critical Evaluation of the European Approach to Artificial Intelligence in Private International Law
Pub Date : 1900-01-01 DOI: 10.1017/9781009207898.017
J. Hein
Citations: 0
Discriminatory AI and the Law: Legal Standards for Algorithmic Profiling
Pub Date : 1900-01-01 DOI: 10.1017/9781009207898.020
A. Ungern-Sternberg
Citations: 0
Risk Imposition by Artificial Agents: The Moral Proxy Problem
Pub Date : 1900-01-01 DOI: 10.1017/9781009207898.006
J. Thoma
It seems undeniable that the coming years will see an ever-increasing reliance on artificial agents that are, on the one hand, autonomous in the sense that they process information and make decisions without continuous human input, and, on the other hand, fall short of the kind of agency that would warrant ascribing moral responsibility to the artificial agent itself. What I have in mind here are artificial agents such as self-driving cars, artificial trading agents in financial markets, nursebots, or robot teachers. As these examples illustrate, many such agents make …
Citations: 5
Artificial Intelligence, Law, and National Security
Pub Date : 1900-01-01 DOI: 10.1017/9781009207898.035
Ebrahim Afsah
Citations: 0
Autonomization and Antitrust: On the Construal of the Cartel Prohibition in the Light of Algorithmic Collusion
Pub Date : 1900-01-01 DOI: 10.1017/9781009207898.027
Stefan Thomas
Citations: 0