
Ethics and Information Technology: Latest Publications

Trustworthiness of voting advice applications in Europe.
IF 3.4 | CAS Tier 2 (Philosophy) | Q1 ETHICS | Pub Date: 2024-01-01 | Epub Date: 2024-08-12 | DOI: 10.1007/s10676-024-09790-6
Elisabeth Stockinger, Jonne Maas, Christofer Talvitie, Virginia Dignum

Voting Advice Applications (VAAs) are interactive tools used to assist in one's choice of a party or candidate to vote for in an upcoming election. They have the potential to increase citizens' trust and participation in democratic structures. However, there is no established ground truth for one's electoral choice, and VAA recommendations depend strongly on architectural and design choices. We assessed several representative European VAAs against the Ethics Guidelines for Trustworthy AI provided by the European Commission, using publicly available information. We found scores to be comparable across VAAs and low on most requirements, with differences reflecting the kind of developing institution. Across VAAs, we identify the need for improvement in (i) transparency regarding the subjectivity of recommendations, (ii) diversity of stakeholder participation, (iii) user-centric documentation of algorithms, and (iv) disclosure of the underlying values and assumptions.

Supplementary information: The online version contains supplementary material available at 10.1007/s10676-024-09790-6.
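The guidelines cited here enumerate seven requirements for trustworthy AI. As a rough illustration of the requirement-by-requirement scoring such an assessment involves, the Python sketch below averages per-requirement scores for a single VAA. The 0-4 scale and the demo scores are assumptions made for illustration; this is not the authors' instrument.

```python
# Illustrative sketch only; not the scoring instrument used in the paper.
from statistics import mean

# The seven requirements listed in the EU Ethics Guidelines for Trustworthy AI.
REQUIREMENTS = (
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
)

def overall_score(assessment: dict[str, float]) -> float:
    """Average hypothetical 0-4 per-requirement scores for one VAA."""
    missing = set(REQUIREMENTS) - assessment.keys()
    if missing:
        raise ValueError(f"unscored requirements: {sorted(missing)}")
    return mean(assessment[r] for r in REQUIREMENTS)

# Fabricated scores for a fictional VAA, for demonstration only.
demo = {r: 1.0 for r in REQUIREMENTS}
demo["transparency"] = 2.0
print(f"overall: {overall_score(demo):.2f}")  # -> overall: 1.14
```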

Citations: 0
Large language models and their big bullshit potential.
IF 3.4 | CAS Tier 2 (Philosophy) | Q1 ETHICS | Pub Date: 2024-01-01 | Epub Date: 2024-10-04 | DOI: 10.1007/s10676-024-09802-5
Sarah A Fisher

Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by treating it as entirely irrelevant). So, do large language models really bullshit? I argue that they can, in the sense of issuing propositional content in response to fact-seeking prompts, without having first assessed that content for truth or falsity. However, I further argue that they need not bullshit, given appropriate guardrails. So, just as with human speakers, the propensity for a large language model to bullshit depends on its own particular make-up.
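To make the point about guardrails concrete, the sketch below shows one possible guardrail pattern: a wrapper that releases generated propositional content only after it has passed some truth-assessment step, and abstains otherwise. Both `generate` and `verify` are hypothetical stand-ins, not an existing API.

```python
# Minimal guardrail sketch: release generated content only after a
# truth-assessment step, and abstain otherwise. `generate` and `verify`
# are hypothetical stand-ins, not a real library API.
from typing import Callable

def guarded_answer(
    prompt: str,
    generate: Callable[[str], str],  # e.g. a call into some language model
    verify: Callable[[str], bool],   # e.g. retrieval-backed fact checking
) -> str:
    draft = generate(prompt)
    if verify(draft):
        # Content was assessed for truth or falsity before being issued.
        return draft
    # Abstaining, rather than asserting unchecked content, is what
    # separates this pattern from bullshitting in the paper's sense.
    return "I cannot verify an answer to that question."
```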

Citations: 0
How to teach responsible AI in Higher Education: challenges and opportunities
IF 3.6 | CAS Tier 2 (Philosophy) | Q1 ETHICS | Pub Date: 2023-12-13 | DOI: 10.1007/s10676-023-09733-7
Andrea Aler Tubella, Marçal Mora-Cantallops, Juan Carlos Nieves
{"title":"How to teach responsible AI in Higher Education: challenges and opportunities","authors":"Andrea Aler Tubella, Marçal Mora-Cantallops, Juan Carlos Nieves","doi":"10.1007/s10676-023-09733-7","DOIUrl":"https://doi.org/10.1007/s10676-023-09733-7","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"12 3","pages":""},"PeriodicalIF":3.6,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139005686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Can machine learning make naturalism about health truly naturalistic? A reflection on a data-driven concept of health
IF 3.6 | CAS Tier 2 (Philosophy) | Q1 ETHICS | Pub Date: 2023-12-12 | DOI: 10.1007/s10676-023-09734-6
A. Guersenzvaig
{"title":"Can machine learning make naturalism about health truly naturalistic? A reflection on a data-driven concept of health","authors":"A. Guersenzvaig","doi":"10.1007/s10676-023-09734-6","DOIUrl":"https://doi.org/10.1007/s10676-023-09734-6","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"226 6","pages":""},"PeriodicalIF":3.6,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139010041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Digital twins, big data governance, and sustainable tourism
IF 3.6 | CAS Tier 2 (Philosophy) | Q1 ETHICS | Pub Date: 2023-11-16 | DOI: 10.1007/s10676-023-09730-w
E. Rahmadian, Daniel Feitosa, Yulia Virantina
{"title":"Digital twins, big data governance, and sustainable tourism","authors":"E. Rahmadian, Daniel Feitosa, Yulia Virantina","doi":"10.1007/s10676-023-09730-w","DOIUrl":"https://doi.org/10.1007/s10676-023-09730-w","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"28 3","pages":""},"PeriodicalIF":3.6,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139270569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Public health measures and the rise of incidental surveillance: Considerations about private informational power and accountability
IF 3.6 | CAS Tier 2 (Philosophy) | Q1 ETHICS | Pub Date: 2023-11-16 | DOI: 10.1007/s10676-023-09732-8
Bart Kamphorst, Adam Henschke
{"title":"Public health measures and the rise of incidental surveillance: Considerations about private informational power and accountability","authors":"Bart Kamphorst, Adam Henschke","doi":"10.1007/s10676-023-09732-8","DOIUrl":"https://doi.org/10.1007/s10676-023-09732-8","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"38 12","pages":""},"PeriodicalIF":3.6,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139268942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Conceptualising and regulating all neural data from consumer-directed devices as medical data: more scope for an unnecessary expansion of medical influence?
IF 3.6 | CAS Tier 2 (Philosophy) | Q1 ETHICS | Pub Date: 2023-11-15 | DOI: 10.1007/s10676-023-09735-5
Brad Partridge, Susan Dodds
{"title":"Conceptualising and regulating all neural data from consumer-directed devices as medical data: more scope for an unnecessary expansion of medical influence?","authors":"Brad Partridge, Susan Dodds","doi":"10.1007/s10676-023-09735-5","DOIUrl":"https://doi.org/10.1007/s10676-023-09735-5","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"51 4","pages":""},"PeriodicalIF":3.6,"publicationDate":"2023-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139272673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Right to Break the Law? Perfect Enforcement of the Law Using Technology Impedes the Development of Legal Systems
IF 3.6 | CAS Tier 2 (Philosophy) | Q1 ETHICS | Pub Date: 2023-11-15 | DOI: 10.1007/s10676-023-09737-3
Bart Custers
{"title":"The Right to Break the Law? Perfect Enforcement of the Law Using Technology Impedes the Development of Legal Systems","authors":"Bart Custers","doi":"10.1007/s10676-023-09737-3","DOIUrl":"https://doi.org/10.1007/s10676-023-09737-3","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"27 3","pages":""},"PeriodicalIF":3.6,"publicationDate":"2023-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139273216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Should we embrace “Big Sister”? Smart speakers as a means to combat intimate partner violence
CAS Tier 2 (Philosophy) | Q1 ETHICS | Pub Date: 2023-11-04 | DOI: 10.1007/s10676-023-09727-5
Robert Sparrow, Mark Andrejevic, Bridget Harris
It is estimated that one in three women experience intimate partner violence (IPV) across the course of their life. The popular uptake of “smart speakers” powered by sophisticated AI means that surveillance of the domestic environment is increasingly possible. Correspondingly, there are various proposals to use smart speakers to detect or report IPV. In this paper, we clarify what might be possible when it comes to combatting IPV using existing or near-term technology, and begin the task of evaluating this proposal both ethically and politically. We argue that the ethical landscape looks different depending on whether one is considering the decision to develop the technology or the decision to use it once it has been developed. If activists and governments wish to avoid the privatisation of responses to IPV, ubiquitous surveillance of domestic spaces, increasing the risk posed to members of minority communities by police responses to IPV, and the danger that more powerful smart speakers will be co-opted by men to control and abuse women, then they should resist the development of this technology rather than wait until these systems are developed. If it is judged that the moral urgency of IPV justifies exploring what might be possible by developing this technology, even in the face of these risks, then it will be imperative that victim-survivors from a range of demographics, as well as government and non-government stakeholders, are engaged in shaping this technology and the legislation and policies needed to regulate it.
Citations: 0
Generative AI models should include detection mechanisms as a condition for public release
CAS Tier 2 (Philosophy) | Q1 ETHICS | Pub Date: 2023-10-28 | DOI: 10.1007/s10676-023-09728-4
Alistair Knott, Dino Pedreschi, Raja Chatila, Tapabrata Chakraborti, Susan Leavy, Ricardo Baeza-Yates, David Eyers, Andrew Trotman, Paul D. Teal, Przemyslaw Biecek, Stuart Russell, Yoshua Bengio
The new wave of “foundation models” (general-purpose generative AI models for producing text, e.g., ChatGPT, or images, e.g., MidJourney) represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content the model generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool's design, and summarize a number of points where further input from policymakers and researchers would be required.
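The proposal turns on a publicly queryable tool. The sketch below shows one possible shape for that query interface, assuming nothing about the detector internals (watermark checks, generation-log lookup, or classifiers); all names are illustrative, since no such standard API exists yet.

```python
# Sketch of a publicly queryable provenance tool of the kind proposed:
# given an arbitrary item of content, report whether the model generated
# it, wholly or partly. All names are illustrative, not an existing API.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class DetectionResult:
    generated: bool    # produced, wholly or partly, by the model?
    confidence: float  # detector confidence in [0, 1]

class FoundationModelDetector(Protocol):
    """Provider-supplied detector: watermark check, log lookup, classifier, etc."""
    def query(self, content: bytes) -> DetectionResult: ...

def provenance_report(detector: FoundationModelDetector, content: bytes) -> str:
    result = detector.query(content)
    label = "model-generated" if result.generated else "no model provenance found"
    return f"{label} (confidence {result.confidence:.2f})"
```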
Citations: 0