Latest articles in Ethics and Information Technology

Design for values and conceptual engineering
IF 3.6 CAS Tier 2 (Philosophy) Q1 ETHICS Pub Date: 2023-01-03 DOI: 10.1007/s10676-022-09675-6
Herman Veluwenkamp, J. van den Hoven
{"title":"Design for values and conceptual engineering","authors":"Herman Veluwenkamp, J. van den Hoven","doi":"10.1007/s10676-022-09675-6","DOIUrl":"https://doi.org/10.1007/s10676-022-09675-6","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"25 1","pages":"1-12"},"PeriodicalIF":3.6,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44191435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Correction to: the Ethics of AI in Human Resources
IF 3.6 CAS Tier 2 (Philosophy) Q1 ETHICS Pub Date: 2023-01-03 DOI: 10.1007/s10676-022-09671-w
M. Dennis, Evgeni Aizenberg
{"title":"Correction to: the Ethics of AI in Human Resources","authors":"M. Dennis, Evgeni Aizenberg","doi":"10.1007/s10676-022-09671-w","DOIUrl":"https://doi.org/10.1007/s10676-022-09671-w","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"25 1","pages":"1"},"PeriodicalIF":3.6,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44078413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Who is controlling whom? Reframing "meaningful human control" of AI systems in security.
IF 3.6 CAS Tier 2 (Philosophy) Q1 ETHICS Pub Date: 2023-01-01 DOI: 10.1007/s10676-023-09686-x
Markus Christen, Thomas Burri, Serhiy Kandul, Pascal Vörös

Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of "meaningful human control" of intelligent systems. In this opinion paper, we outline generic configurations of control of AI and we present an alternative to human control of AI, namely the inverse idea of having AI control humans, and we discuss the normative consequences of this alternative.
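Purely as an illustration (not taken from the paper), the generic control configurations the authors contrast, including the inverse case in which the AI gates the human, might be sketched as follows; the mode names and the function are hypothetical:

```python
from enum import Enum, auto

class ControlMode(Enum):
    DECISION_SUPPORT = auto()    # AI recommends, the human decides
    DECISION_MAKING = auto()     # AI decides, the human can only veto
    AI_CONTROLS_HUMAN = auto()   # inverse case: the AI gates the human's action

def executed_action(mode: ControlMode, ai_choice: str, human_choice: str,
                    human_vetoes_ai: bool = False,
                    ai_approves_human: bool = True) -> str:
    """Return whichever action actually gets carried out under a given control mode."""
    if mode is ControlMode.DECISION_SUPPORT:
        return human_choice                      # human retains final authority
    if mode is ControlMode.DECISION_MAKING:
        return human_choice if human_vetoes_ai else ai_choice
    # AI_CONTROLS_HUMAN: the human proposes, the AI disposes.
    return human_choice if ai_approves_human else "no action (blocked by AI)"

# In the inverse configuration, locating moral responsibility becomes difficult:
print(executed_action(ControlMode.AI_CONTROLS_HUMAN, "engage", "hold fire",
                      ai_approves_human=False))  # -> "no action (blocked by AI)"
```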

{"title":"Who is controlling whom? Reframing \"meaningful human control\" of AI systems in security.","authors":"Markus Christen,&nbsp;Thomas Burri,&nbsp;Serhiy Kandul,&nbsp;Pascal Vörös","doi":"10.1007/s10676-023-09686-x","DOIUrl":"https://doi.org/10.1007/s10676-023-09686-x","url":null,"abstract":"<p><p>Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of \"meaningful human control\" of intelligent systems. In this opinion paper, we outline generic configurations of control of AI and we present an alternative to human control of AI, namely the inverse idea of having AI control humans, and we discuss the normative consequences of this alternative.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"25 1","pages":"10"},"PeriodicalIF":3.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9918557/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10773375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Selling visibility-boosts on dating apps: a problematic practice?
IF 3.6 CAS Tier 2 (Philosophy) Q1 ETHICS Pub Date: 2023-01-01 Epub Date: 2023-05-18 DOI: 10.1007/s10676-023-09704-y
Bouke de Vries

Love, sex, and physical intimacy are some of the most desired goods in life and they are increasingly being sought on dating apps such as Tinder, Bumble, and Badoo. For those who want a leg up in the chase for other people's attention, almost all of these apps now offer the option of paying a fee to boost one's visibility for a certain amount of time, which may range from 30 min to a few hours. In this article, I argue that there are strong moral grounds and, in countries with laws against unconscionable contracts, legal ones for thinking that the sale of such visibility boosts should be regulated, if not banned altogether. To do so, I raise two objections against their unfettered sale, namely that it exploits the impaired autonomy of certain users and that it creates socio-economic injustices.
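A minimal sketch of how a time-limited paid boost could enter a match-ranking score; the field names, multiplier, and logic are assumptions made for illustration only, not a description of any actual app:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    base_score: float          # relevance from the app's ordinary matching logic
    boost_expires_at: float    # UNIX timestamp; 0.0 means no boost was purchased
    boost_multiplier: float = 3.0

def visibility_score(profile: Profile, now: Optional[float] = None) -> float:
    """Rank boosted profiles above comparable unboosted ones while the boost lasts."""
    now = time.time() if now is None else now
    if now < profile.boost_expires_at:
        return profile.base_score * profile.boost_multiplier
    return profile.base_score

# A paying user with a weaker profile can outrank a stronger non-paying one,
# which is the attention asymmetry the article objects to.
paid = Profile(base_score=0.4, boost_expires_at=time.time() + 30 * 60)  # 30-minute boost
free = Profile(base_score=0.8, boost_expires_at=0.0)
print(visibility_score(paid) > visibility_score(free))  # True while the boost is active
```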

{"title":"Selling visibility-boosts on dating apps: a problematic practice?","authors":"Bouke de Vries","doi":"10.1007/s10676-023-09704-y","DOIUrl":"10.1007/s10676-023-09704-y","url":null,"abstract":"<p><p>Love, sex, and physical intimacy are some of the most desired goods in life and they are increasingly being sought on dating apps such as Tinder, Bumble, and Badoo. For those who want a leg up in the chase for other people's attention, almost all of these apps now offer the option of paying a fee to boost one's visibility for a certain amount of time, which may range from 30 min to a few hours. In this article, I argue that there are strong moral grounds and, in countries with laws against unconscionable contracts, legal ones for thinking that the sale of such visibility boosts should be regulated, if not banned altogether. To do so, I raise two objections against their unfettered sale, namely that it exploits the impaired autonomy of certain users and that it creates socio-economic injustices.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"25 2","pages":"30"},"PeriodicalIF":3.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10191813/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9515727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare.
IF 3.6 CAS Tier 2 (Philosophy) Q1 ETHICS Pub Date: 2023-01-01 DOI: 10.1007/s10676-023-09676-z
Giorgia Pozzi

Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it has gone largely unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients' likelihood of opioid addiction and misuse (PDMP algorithmic platforms). Drawing on this analysis, I aim to show that the wrong inflicted on epistemic agents involved in and affected by these systems' decision-making processes can be captured through the lens of Miranda Fricker's account of hermeneutical injustice. I further argue that ML-induced hermeneutical injustice is particularly harmful due to what I define as an automated hermeneutical appropriation on the side of the ML system. The latter occurs if the ML system establishes meanings and shared hermeneutical resources without allowing for human oversight, impairing understanding and communication practices among the stakeholders involved in medical decision-making. Furthermore, and crucially, an automated hermeneutical appropriation can be recognized if physicians are strongly limited in their possibilities to safeguard patients from ML-induced hermeneutical injustice. Overall, my paper should expand the analysis of ethical issues raised by ML systems that are to be considered epistemic in nature, thus contributing to bridging the gap between these two dimensions in the ongoing debate.
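The point about human oversight can be pictured with a hypothetical review hook in which a clinician can contest and override the automated label; this is a sketch under invented thresholds and field names, not the logic of any actual PDMP platform:

```python
from dataclasses import dataclass
from typing import Optional

HIGH_RISK_THRESHOLD = 0.7   # invented cut-off, for illustration only

@dataclass
class RiskAssessment:
    automated_score: float                      # e.g. an ML model's output in [0, 1]
    clinician_override: Optional[bool] = None   # None = never reviewed by a human

def flagged_high_risk(assessment: RiskAssessment) -> bool:
    """The automated label stands unless a clinician has explicitly reviewed it."""
    if assessment.clinician_override is not None:
        return assessment.clinician_override    # human judgment takes precedence
    return assessment.automated_score >= HIGH_RISK_THRESHOLD

# Without the override hook, the model's label silently fixes how the patient is
# understood downstream -- the kind of unchecked meaning-setting the paper calls
# automated hermeneutical appropriation.
print(flagged_high_risk(RiskAssessment(automated_score=0.82)))                            # True
print(flagged_high_risk(RiskAssessment(automated_score=0.82, clinician_override=False)))  # False
```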

{"title":"Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare.","authors":"Giorgia Pozzi","doi":"10.1007/s10676-023-09676-z","DOIUrl":"https://doi.org/10.1007/s10676-023-09676-z","url":null,"abstract":"<p><p>Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be timely addressed. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it is going unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients' likelihood of opioid addiction and misuse (PDMP algorithmic platforms). Drawing on this analysis, I aim to show that the wrong inflicted on epistemic agents involved in and affected by these systems' decision-making processes can be captured through the lenses of Miranda Fricker's account of <i>hermeneutical injustice</i>. I further argue that ML-induced hermeneutical injustice is particularly harmful due to what I define as an <i>automated hermeneutical appropriation</i> from the side of the ML system. The latter occurs if the ML system establishes meanings and shared hermeneutical resources without allowing for human oversight, impairing understanding and communication practices among stakeholders involved in medical decision-making. Furthermore and very much crucially, an automated hermeneutical appropriation can be recognized if physicians are strongly limited in their possibilities to safeguard patients from ML-induced hermeneutical injustice. Overall, my paper should expand the analysis of ethical issues raised by ML systems that are to be considered epistemic in nature, thus contributing to bridging the gap between these two dimensions in the ongoing debate.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"25 1","pages":"3"},"PeriodicalIF":3.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9869303/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9255824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Value Sensitive Design for autonomous weapon systems - a primer
IF 3.6 CAS Tier 2 (Philosophy) Q1 ETHICS Pub Date: 2023-01-01 DOI: 10.1007/s10676-023-09687-w
C. Boshuijzen-van Burken
{"title":"Value Sensitive Design for autonomous weapon systems - a primer","authors":"C. B. Burken","doi":"10.1007/s10676-023-09687-w","DOIUrl":"https://doi.org/10.1007/s10676-023-09687-w","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"1 1","pages":"11"},"PeriodicalIF":3.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"52259699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The seven troubles with norm-compliant robots.
IF 3.6 CAS Tier 2 (Philosophy) Q1 ETHICS Pub Date: 2023-01-01 DOI: 10.1007/s10676-023-09701-1
Tom N Coggins, Steffen Steinert

Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism, (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically-induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent from the robot and machine ethics literature, this paper fills an important research gap. We argue that it is critical for researchers to take these issues into account if they wish to make norm-compliant robots.
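As a toy illustration of troubles (1), (3), and (6), a robot that simply adopts the most frequently observed behavior as "the norm" inherits whatever biases the observed majority has (a hypothetical sketch; the paper proposes no implementation):

```python
from collections import Counter
from typing import List

def learn_norm(observed_behaviors: List[str]) -> str:
    """Naively treat the most frequently observed behavior as the norm to comply with."""
    return Counter(observed_behaviors).most_common(1)[0][0]

# Tyranny of the majority: a minority practice can never register as acceptable,
# and an outdated or unjust majority practice is entrenched rather than questioned.
observations = ["queue politely"] * 5 + ["let elders go first"] * 2
print(learn_norm(observations))  # -> 'queue politely'
```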

{"title":"The seven troubles with norm-compliant robots.","authors":"Tom N Coggins,&nbsp;Steffen Steinert","doi":"10.1007/s10676-023-09701-1","DOIUrl":"https://doi.org/10.1007/s10676-023-09701-1","url":null,"abstract":"<p><p>Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically-induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent from the robot and machine ethics literature, this paper fills an important research gap. We argue that it is critical for researchers to take these issues into account if they wish to make norm-compliant robots.</p>","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"25 2","pages":"29"},"PeriodicalIF":3.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10130815/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9398061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Conceptualizations of user autonomy within the normative evaluation of dark patterns
IF 3.6 CAS Tier 2 (Philosophy) Q1 ETHICS Pub Date: 2022-12-01 DOI: 10.1007/s10676-022-09672-9
Sanju Ahuja, Jyotish Kumar
{"title":"Conceptualizations of user autonomy within the normative evaluation of dark patterns","authors":"Sanju Ahuja, Jyotish Kumar","doi":"10.1007/s10676-022-09672-9","DOIUrl":"https://doi.org/10.1007/s10676-022-09672-9","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":" ","pages":""},"PeriodicalIF":3.6,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43494344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Reasons for Meaningful Human Control
IF 3.6 CAS Tier 2 (Philosophy) Q1 ETHICS Pub Date: 2022-11-23 DOI: 10.1007/s10676-022-09673-8
Herman Veluwenkamp
{"title":"Reasons for Meaningful Human Control","authors":"Herman Veluwenkamp","doi":"10.1007/s10676-022-09673-8","DOIUrl":"https://doi.org/10.1007/s10676-022-09673-8","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":" ","pages":""},"PeriodicalIF":3.6,"publicationDate":"2022-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46428593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Explanation and Agency: exploring the normative-epistemic landscape of the “Right to Explanation”
IF 3.6 CAS Tier 2 (Philosophy) Q1 ETHICS Pub Date: 2022-11-11 DOI: 10.1007/s10676-022-09654-x
Fleur Jongepier, Esther Keymolen
{"title":"Explanation and Agency: exploring the normative-epistemic landscape of the “Right to Explanation”","authors":"Fleur Jongepier, Esther Keymolen","doi":"10.1007/s10676-022-09654-x","DOIUrl":"https://doi.org/10.1007/s10676-022-09654-x","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":" ","pages":""},"PeriodicalIF":3.6,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41759186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5