
Latest publications in PsychRN: Attitudes & Social Cognition (Topic)

COVID-19: Level of Concern Explained
Pub Date: 2020-04-23 · DOI: 10.2139/ssrn.3713602
Erjon Gjoci
COVID-19 sentiment survey data show that respondents' level of concern has a strong inverse relationship with their confidence in the government's response to the virus: the lower the confidence in the government response, the higher the level of concern.
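The abstract reports an inverse relationship between two survey variables. As a minimal sketch of how such a relationship could be checked, the snippet below computes a rank correlation on made-up Likert-scale data; the column names and values are hypothetical and are not taken from the paper.

```python
# Minimal sketch: testing for an inverse relationship between concern about
# COVID-19 and confidence in the government response. Column names and data
# are hypothetical; the paper's actual survey variables are not shown here.
import pandas as pd
from scipy import stats

# Hypothetical survey responses (e.g., 1-5 Likert scales).
df = pd.DataFrame({
    "gov_confidence": [5, 4, 4, 3, 3, 2, 2, 1, 1, 1],
    "concern":        [1, 2, 1, 3, 2, 4, 3, 5, 4, 5],
})

# Spearman's rho is a reasonable default for ordinal Likert data;
# a strongly negative rho indicates the inverse relationship described.
rho, p = stats.spearmanr(df["gov_confidence"], df["concern"])
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```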
{"title":"COVID-19: Level of Concern Explained","authors":"Erjon Gjoci","doi":"10.2139/ssrn.3713602","DOIUrl":"https://doi.org/10.2139/ssrn.3713602","url":null,"abstract":"COVID19 Sentiment survey data shows that the level of concern has a strong inverse relationship with the respondents' confidence in government response to the virus. The results show that the lower the confidence in government response, the higher the level of concern.","PeriodicalId":369029,"journal":{"name":"PsychRN: Attitudes & Social Cognition (Topic)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134290461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Primary and Secondary Emotions as an Instrument to Measure Implicit Prejudice
Pub Date: 2019-11-28 · DOI: 10.2139/ssrn.3495082
E. Agadullina, O. Gulevich, M. Terskova
The article presents the results of the selection of relevant to the Russian context emotions perceived as primary (which humans share with animals) or secondary (experienced only by humans). Three stages of the selection and evaluation of emotions made it possible to distinguish 12 emotions: primary positive emotions (Joy, Pleasure, and Interest), primary negative emotions (Anger, Irritation, and Rage), secondary positive emotions (Inspiration, Afflatus, and Enthusiasm), and secondary negative emotions (Disappointment, Regret, and Devastation). The results of confirmatory and multigroup confirmatory factor analyses demonstrated that these emotions are well grouped into primary-secondary subgroups and that their valence is important to grouping. The highlighted emotions can be used to study implicit prejudices towards various social groups.
本文介绍了与俄罗斯语境相关的情感选择的结果,这些情感被认为是主要的(人类与动物共享)或次要的(只有人类经历)。选择和评估情绪的三个阶段使得区分12种情绪成为可能:主要的积极情绪(喜悦、愉悦和兴趣),主要的消极情绪(愤怒、恼怒和愤怒),次要的积极情绪(鼓舞、激动和热情),以及次要的消极情绪(失望、遗憾和破坏)。验证性和多组验证性因素分析的结果表明,这些情绪可以很好地分为主要-次要亚组,并且它们的效价对分组很重要。突出的情绪可以用来研究对不同社会群体的内隐偏见。
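The abstract describes a four-factor structure (primary/secondary crossed with positive/negative) validated by confirmatory factor analysis. The sketch below shows how such a structure could be specified in lavaan-style syntax, assuming the semopy package; the indicator names are hypothetical English placeholders for the paper's Russian-language items.

```python
# Sketch of the four-factor structure described in the abstract, written in
# lavaan-style measurement syntax for the semopy package. Indicator names are
# hypothetical placeholders; df is a DataFrame of ratings for the 12 emotions.
import pandas as pd
from semopy import Model

MODEL_DESC = """
PrimaryPositive   =~ joy + pleasure + interest
PrimaryNegative   =~ anger + irritation + rage
SecondaryPositive =~ inspiration + afflatus + enthusiasm
SecondaryNegative =~ disappointment + regret + devastation
"""

def fit_cfa(df: pd.DataFrame) -> Model:
    model = Model(MODEL_DESC)
    model.fit(df)           # estimates loadings and factor covariances
    print(model.inspect())  # parameter estimates
    return model
```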
{"title":"Primary and Secondary Emotions as an Instrument to Measure Implicit Prejudice","authors":"E. Agadullina, O. Gulevich, M. Terskova","doi":"10.2139/ssrn.3495082","DOIUrl":"https://doi.org/10.2139/ssrn.3495082","url":null,"abstract":"The article presents the results of the selection of relevant to the Russian context emotions perceived as primary (which humans share with animals) or secondary (experienced only by humans). Three stages of the selection and evaluation of emotions made it possible to distinguish 12 emotions: primary positive emotions (Joy, Pleasure, and Interest), primary negative emotions (Anger, Irritation, and Rage), secondary positive emotions (Inspiration, Afflatus, and Enthusiasm), and secondary negative emotions (Disappointment, Regret, and Devastation). The results of confirmatory and multigroup confirmatory factor analyses demonstrated that these emotions are well grouped into primary-secondary subgroups and that their valence is important to grouping. The highlighted emotions can be used to study implicit prejudices towards various social groups.","PeriodicalId":369029,"journal":{"name":"PsychRN: Attitudes & Social Cognition (Topic)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130209095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Joint Distributional Choices: An Experimental Analysis
Pub Date: 2019-10-27 · DOI: 10.2139/ssrn.3528762
N. Olekalns, Hugh Sibly, Amy Cormany
This paper uses an economic experiment to identify descriptive norms associated with the jointly determined division of a surplus. We consider the results of a two-stage experiment in which participants contribute to a common pool that is then divided through a coordination game between participants. Treatment effects are introduced by varying the context in which individual contributions to the pool are established. We find no simple universal principle (or norm) guiding participants when they make these joint distributional choices. Rather, self-interest, self-serving bias, and a participant's contribution relative to others play a role in determining their distributive choices, their expectations about their partners' likely choices, and their evaluation of the fairness of the resulting outcomes. Additionally, we find that the way these determinants influence choice depends on whether individual contributions are determined randomly rather than by individuals' skill or effort.
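To make "division of a jointly produced surplus" concrete, the toy sketch below contrasts two candidate division norms a coordination game could settle on: an equal split versus a split proportional to each participant's contribution. The contributions and multiplier are illustrative; the paper's actual game and treatments are not reproduced here.

```python
# Toy illustration of two candidate norms for dividing a common pool.
# Contributions and the pool multiplier are made up for this example.

def equal_split(contributions: list[float], multiplier: float = 1.5) -> list[float]:
    pool = sum(contributions) * multiplier
    return [pool / len(contributions)] * len(contributions)

def proportional_split(contributions: list[float], multiplier: float = 1.5) -> list[float]:
    pool = sum(contributions) * multiplier
    total = sum(contributions)
    return [pool * c / total for c in contributions]

contributions = [10.0, 30.0, 60.0]
print(equal_split(contributions))         # [50.0, 50.0, 50.0]
print(proportional_split(contributions))  # [15.0, 45.0, 90.0]
```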
{"title":"Joint Distributional Choices: An Experimental Analysis","authors":"N. Olekalns, Hugh Sibly, Amy Cormany","doi":"10.2139/ssrn.3528762","DOIUrl":"https://doi.org/10.2139/ssrn.3528762","url":null,"abstract":"This paper uses an economic experiment to identify the presence of descriptive norms associated with the jointly determined division of a surplus. We consider the results from a two-stage experiment in which participants contribute to a common pool which is then divided using a coordination game between participants. Treatments effects are introduced by varying the context in which individual contributions to the pool are established. We find there is no simple universal principle (or norm) guiding participants when they make these joint distributional choices. Rather, we find self-interest, self-serving bias and a participant's contribution relative to others play a role in determining their distributive choices, their expectations regarding their partners’ likely choices and their evaluation of the fairness of the resulting outcomes. Additionally, we find that the way in which these determinants influence choice depends on whether individual contributions are determined randomly instead of by individuals’ skill or effort.","PeriodicalId":369029,"journal":{"name":"PsychRN: Attitudes & Social Cognition (Topic)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132966043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Artificial Intelligence Needs Human Rights: How the Focus on Ethical AI Fails to Address Privacy, Discrimination and Other Concerns
Pub Date: 2019-09-30 · DOI: 10.2139/ssrn.3589473
Kate Saslow, Philippe Lorenz
AI has been a catalyst for automation and efficiency in numerous ways, but it has also had harmful consequences, including: unforeseen algorithmic bias that affects already marginalized communities, as with Amazon's AI recruiting algorithm that showed bias against women; accountability and liability coming into question when an autonomous vehicle injures or kills, as seen with Uber's self-driving car casualties; and even the notion of democracy being challenged, as the technology enables authoritarian and democratic states like China and the United States to practice surveillance at an unprecedented scale.

The risks, as well as the need for some form of basic rules, have not gone unnoticed, and governments, tech companies, research consortia, and advocacy groups have broached the issue. In fact, this has been a topic of local, national, and supranational discussion for some years now, as can be seen in new legislation popping up to ban facial recognition software in public spaces. The problem with these discussions, however, is that they have been heavily dominated by the question of how we can make AI more "ethical". Companies, states, and even international organizations discuss ethical principles such as fair, accountable, responsible, or safe AI in numerous expert groups or ad hoc committees, such as the High-Level Expert Group on AI in the European Commission, the group on AI in Society of the Organization for Economic Co-operation and Development (OECD), or the select committee on Artificial Intelligence of the United Kingdom House of Lords.

This may sound like a solid approach to tackling the dangers that AI poses, but to actually be impactful, these discussions must be grounded in rhetoric that is focused and actionable. Not only may the principles be defined differently depending on the stakeholders, but there are also overwhelming differences in how principles are interpreted and what requirements are necessary for them to materialize. In addition, ethical debates on AI are often dominated by American or Chinese companies, each propagating its own idea of ethical AI, which may in many cases conflict with the values of other cultures and nations. Not only do different countries have different ideas about which "ethics" principles need to be protected, but different countries also play starkly different roles in developing AI. Another problem is that when ethical guidelines are discussed, suggestions often come from tech companies themselves, while the voices of citizens or even governments are marginalized.

Self-regulation around ethical principles is too weak to address the spreading implications of AI technologies. Ethical principles lack clarity and enforcement capabilities. We must stop focusing the discourse on ethical principles and instead shift the debate to human rights. Debates must be louder at the supranational level. International pressure must be put on states and companies that fail to protect individuals by propagating risky AI technologies. Leadership should be defined not by the actors who propose new ethical guidelines, but by those who craft legal obligations around AI that are grounded in, and derived from, a human rights perspective. One way to do this is to reaffirm the human-centric nature of AI development and deployment and to follow the actionable standards of human rights law. Human rights legal frameworks have existed for decades and have played an important role in combating abuses and compelling states to amend their domestic laws. Nelson Mandela invoked the obligations enshrined in the Universal Declaration of Human Rights in the struggle to end apartheid in South Africa; in 1973, with Roe v. Wade, the US Supreme Court followed a broader global trend toward recognizing women's human rights, protecting individuals from undue government interference in private matters and enabling women to participate fully and equally in society; more recently, open access to the Internet has been recognized as a human right, essential not only to the freedoms of opinion, expression, association, and assembly, but also to mobilizing people to call for equality, justice, and accountability in support of global respect for human rights. These examples show how human rights standards have been applied across a variety of domestic and international rules. Such standards are actionable and enforceable, which makes them well suited to regulating the cross-border nature of AI technologies. AI systems must be examined through a human rights lens, analyzing the current and future harms that AI causes or exacerbates and acting to avoid any harm. The adoption of AI technologies already crosses borders and affects societies around the world in different ways. Globalized technologies require international obligations to mitigate the social problems they pose faster and at greater scale. Companies and states should strive to develop AI technologies that uphold human rights. Framing the AI discourse around human rights, rather than ethics alone, would provide a clearer legal basis for the development and deployment of AI technologies. The international community must raise awareness, build consensus, analyze in depth how AI technologies can violate human rights in different contexts, and develop effective avenues for legal remedy. Focusing the discussion on human rights rather than ethical principles would impose more accountability measures and obligations on state and private actors, and would shift the debate toward consistent and widely accepted legal principles developed over decades.
{"title":"Artificial Intelligence Needs Human Rights: How the Focus on Ethical AI Fails to Address Privacy, Discrimination and Other Concerns","authors":"Kate Saslow, Philippe Lorenz","doi":"10.2139/ssrn.3589473","DOIUrl":"https://doi.org/10.2139/ssrn.3589473","url":null,"abstract":"AI has been a catalyst for automation and efficiency in numerous ways, but has also had harmful consequences, including: unforeseen algorithmic bias that affects already marginalized communities, as with Amazon’s AI recruiting algorithm that showed bias against women; accountability and liability coming into question if an autonomous vehicle injures or kills, as seen with Uber’s self-driving car casualties; even the notion of democracy is being challenged as the technology enables authoritarian and democratic states like China and the United States to practice surveillance at an unprecedented scale.<br><br>The risks as well as the need for some form of basic rules have not gone unnoticed and governments, tech companies, research consortiums or advocacy groups have broached the issue. In fact, this has been the topic of local, national, and supranational discussion for some years now, as can be seen with new legislation popping up to ban facial recognition software in public spaces. The problem with these discussions, however, is that they have been heavily dominated by how we can make AI more “ethical”. Companies, states, and even international organizations discuss ethical principles, such as fair, accountable, responsible, or safe AI in numerous expert groups or ad hoc committees, such as the High-Level Expert Group on AI in the European Commission, the group on AI in Society of the Organization for Economic Co-operation and Development (OECD), or the select committee on Artificial Intelligence of the United Kingdom House of Lords.<br><br>This may sound like a solid approach to tackling the dangers that AI poses, but to actually be impactful, these discussions must be grounded in rhetoric that is focused and actionable. Not only may the principles be defined differently depending on the stakeholders, but there are overwhelming differences in how principles are interpreted and what requirements are necessary for them to materialize. In addition, ethical debates on AI are often dominated by American or Chinese companies, which are both propagating their own idea of ethical AI, but which may in many cases stand in conflict with the values of other cultures and nations. Not only do different countries have different ideas of which “ethics” principles need to be protected, but different countries play starkly different roles in developing AI. Another problem is when ethical guidelines are discussed, suggestions often come from tech companies themselves, while voices from citizens or even governments are marginalized.<br><br>Self-regulation around ethical principles is too weak to address the spreading implications that AI technologies have had. Ethical principles lack clarity and enforcement capabilities. We must stop focusing the discourse on ethical principles, and instead shift the debate to human rights. Debates must be louder at the supranational level. 
International pressure must be put on states and companies who fail to protect individuals b","PeriodicalId":369029,"journal":{"name":"PsychRN: Attitudes & Social Cognition (Topic)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124205980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
'Cure or Poison?' Identity Verification and the Spread of Fake News on Social Media
Pub Date: 2018-09-14 · DOI: 10.2139/ssrn.3249479
S. Wang, Min-Seok Pang, P. Pavlou
Fake news is increasingly prevalent on social media, and the anonymity of the Internet is a major enabler. Social media platforms seek to reduce online anonymity through identity verification, confirming user identities with email addresses, phone numbers, or government-issued photo identification. However, we ask: is identity verification effective in deterring fake news? Using a unique dataset (spanning 2009 to 2016) from a large-scale social media platform, we empirically investigate the impact of identity verification on the creation and sharing of fake news. In doing so, we exploit an exogenous policy change in identity verification on the platform as a natural experiment. Notably, our results show that identity verification may not actually deter fake news. We find that, in contrast to verification with a regular badge (a badge designed to signal a verified status), verification with an enhanced badge (a badge designed to signal a superior verified status and allegedly endow higher credibility) may even fuel the proliferation of fake news. An enhanced verification badge proliferates fake news not only by encouraging verified users to create more fake news, but also by misleading other users into sharing fake news created by verified users. This study contributes to the literature on online anonymity and on information diffusion on social media. It also informs leaders in social media that a costless-to-cheat identity verification system can have unintended negative effects, and that a misleading design of verification badges may amplify the influence of fake news created by verified users and elicit more effort from strategic fake-news creators.
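The abstract exploits an exogenous policy change as a natural experiment. A common way to estimate effects in such a setup is a difference-in-differences regression; the sketch below shows one with statsmodels, where the variable names and the specification itself are hypothetical and not the authors' actual model.

```python
# Sketch of a difference-in-differences setup of the kind often used with an
# exogenous policy change. 'verified' marks treated (badge-verified) users,
# 'post' marks observations after the policy change, and the interaction
# term verified:post carries the estimated effect on fake-news creation.
# All names are hypothetical, not the authors' specification.
import pandas as pd
import statsmodels.formula.api as smf

def did_estimate(df: pd.DataFrame) -> float:
    # df columns assumed: fake_news_count, verified (0/1), post (0/1)
    model = smf.ols("fake_news_count ~ verified * post", data=df).fit()
    print(model.summary())
    return model.params["verified:post"]  # the DiD estimate
```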
{"title":"'Cure or Poison?' Identity Verification and the Spread of Fake News on Social Media","authors":"S. Wang, Min-Seok Pang, P. Pavlou","doi":"10.2139/ssrn.3249479","DOIUrl":"https://doi.org/10.2139/ssrn.3249479","url":null,"abstract":"Fake news is increasingly prevalent on social media, and the anonymity of the Internet is a major enabler. Social media platforms seek to reduce online anonymity with identity verification by verifying user identities with email addresses, phone numbers, or government-issued photo identification. However, we ask: Is identity verification effective in deterring fake news? Using a unique dataset (spanning from 2009 to 2016) from a large-scale social media platform, we empirically investigate the impact of identity verification on the creation and sharing of fake news. In doing so, we exploit an exogenous policy change in identity verification on the social media platform as a natural experiment. Notably, our results show that identity verification may actually not deter fake news. We find that in contrast to verification with a regular badge (a badge that is \u0000designed to signal a verified status), verification with an enhanced badge (a badge that is designed to signal a superior verified status and allegedly endow higher credibility) may even fuel the proliferation of fake news. An enhanced badge for verification proliferates fake news, not only by encouraging verified users to create more fake news, but also by misleading other users into sharing fake news created by verified users. This study contributes to the literature on online anonymity and work on information diffusion on social media, while it informs leaders in social media that a costless-to-cheat identity verification system can have unintended negative effects, and that a misleading design of verification badges may amplify the influence of fake news created by verified users and incentivize more effort elicited from the strategic fake-news creators.","PeriodicalId":369029,"journal":{"name":"PsychRN: Attitudes & Social Cognition (Topic)","volume":"12 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123659140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10