COVID-19 Sentiment survey data show that the level of concern has a strong inverse relationship with respondents' confidence in the government response to the virus: the lower the confidence in the government response, the higher the level of concern.
{"title":"COVID-19: Level of Concern Explained","authors":"Erjon Gjoci","doi":"10.2139/ssrn.3713602","DOIUrl":"https://doi.org/10.2139/ssrn.3713602","url":null,"abstract":"COVID19 Sentiment survey data shows that the level of concern has a strong inverse relationship with the respondents' confidence in government response to the virus. The results show that the lower the confidence in government response, the higher the level of concern.","PeriodicalId":369029,"journal":{"name":"PsychRN: Attitudes & Social Cognition (Topic)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134290461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The article presents the results of selecting emotions, relevant to the Russian context, that are perceived as primary (shared by humans and animals) or secondary (experienced only by humans). Three stages of selection and evaluation made it possible to distinguish 12 emotions: primary positive (Joy, Pleasure, and Interest), primary negative (Anger, Irritation, and Rage), secondary positive (Inspiration, Afflatus, and Enthusiasm), and secondary negative (Disappointment, Regret, and Devastation). Confirmatory and multigroup confirmatory factor analyses demonstrated that these emotions group well into the primary-secondary subgroups and that their valence matters for the grouping. The selected emotions can be used to study implicit prejudice towards various social groups.
{"title":"Primary and Secondary Emotions as an Instrument to Measure Implicit Prejudice","authors":"E. Agadullina, O. Gulevich, M. Terskova","doi":"10.2139/ssrn.3495082","DOIUrl":"https://doi.org/10.2139/ssrn.3495082","url":null,"abstract":"The article presents the results of the selection of relevant to the Russian context emotions perceived as primary (which humans share with animals) or secondary (experienced only by humans). Three stages of the selection and evaluation of emotions made it possible to distinguish 12 emotions: primary positive emotions (Joy, Pleasure, and Interest), primary negative emotions (Anger, Irritation, and Rage), secondary positive emotions (Inspiration, Afflatus, and Enthusiasm), and secondary negative emotions (Disappointment, Regret, and Devastation). The results of confirmatory and multigroup confirmatory factor analyses demonstrated that these emotions are well grouped into primary-secondary subgroups and that their valence is important to grouping. The highlighted emotions can be used to study implicit prejudices towards various social groups.","PeriodicalId":369029,"journal":{"name":"PsychRN: Attitudes & Social Cognition (Topic)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130209095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper uses an economic experiment to identify the presence of descriptive norms associated with the jointly determined division of a surplus. We consider the results of a two-stage experiment in which participants contribute to a common pool that is then divided through a coordination game between participants. Treatment effects are introduced by varying the context in which individual contributions to the pool are established. We find that there is no simple universal principle (or norm) guiding participants when they make these joint distributional choices. Rather, self-interest, self-serving bias, and a participant's contribution relative to others play a role in determining their distributive choices, their expectations regarding their partners' likely choices, and their evaluation of the fairness of the resulting outcomes. Additionally, we find that the way these determinants influence choice depends on whether individual contributions are determined randomly rather than by individuals' skill or effort.
{"title":"Joint Distributional Choices: An Experimental Analysis","authors":"N. Olekalns, Hugh Sibly, Amy Cormany","doi":"10.2139/ssrn.3528762","DOIUrl":"https://doi.org/10.2139/ssrn.3528762","url":null,"abstract":"This paper uses an economic experiment to identify the presence of descriptive norms associated with the jointly determined division of a surplus. We consider the results from a two-stage experiment in which participants contribute to a common pool which is then divided using a coordination game between participants. Treatments effects are introduced by varying the context in which individual contributions to the pool are established. We find there is no simple universal principle (or norm) guiding participants when they make these joint distributional choices. Rather, we find self-interest, self-serving bias and a participant's contribution relative to others play a role in determining their distributive choices, their expectations regarding their partners’ likely choices and their evaluation of the fairness of the resulting outcomes. Additionally, we find that the way in which these determinants influence choice depends on whether individual contributions are determined randomly instead of by individuals’ skill or effort.","PeriodicalId":369029,"journal":{"name":"PsychRN: Attitudes & Social Cognition (Topic)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132966043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI has been a catalyst for automation and efficiency in numerous ways, but it has also had harmful consequences, including unforeseen algorithmic bias that affects already marginalized communities, as with Amazon’s AI recruiting algorithm that showed bias against women; accountability and liability coming into question when an autonomous vehicle injures or kills, as seen with Uber’s self-driving car casualties; and challenges even to the notion of democracy, as the technology enables authoritarian and democratic states like China and the United States to practice surveillance at an unprecedented scale.
The risks, as well as the need for some form of basic rules, have not gone unnoticed, and governments, tech companies, research consortia, and advocacy groups have broached the issue. In fact, this has been a topic of local, national, and supranational discussion for some years now, as can be seen in new legislation emerging to ban facial recognition software in public spaces. The problem with these discussions, however, is that they have been heavily dominated by the question of how we can make AI more “ethical”. Companies, states, and even international organizations discuss ethical principles such as fair, accountable, responsible, or safe AI in numerous expert groups and ad hoc committees, such as the High-Level Expert Group on AI of the European Commission, the group on AI in Society of the Organization for Economic Co-operation and Development (OECD), or the Select Committee on Artificial Intelligence of the United Kingdom House of Lords.
This may sound like a solid approach to tackling the dangers that AI poses, but to actually be impactful, these discussions must be grounded in rhetoric that is focused and actionable. Not only may the principles be defined differently depending on the stakeholders, but there are overwhelming differences in how the principles are interpreted and in what is required for them to materialize. In addition, ethical debates on AI are often dominated by American or Chinese companies, each propagating its own idea of ethical AI, which may in many cases conflict with the values of other cultures and nations. Not only do different countries have different ideas of which “ethics” principles need to be protected, but they also play starkly different roles in developing AI. Another problem is that, when ethical guidelines are discussed, suggestions often come from tech companies themselves, while the voices of citizens or even governments are marginalized.
Self-regulation around ethical principles is too weak to address the spreading implications that AI technologies have had. Ethical principles lack clarity and enforcement capabilities. We must stop focusing the discourse on ethical principles and instead shift the debate to human rights. Debates must be louder at the supranational level. International pressure must be put on states and companies that fail to protect individuals by propagating risky AI technologies. Leadership should be defined not by the actors who propose new ethical guidelines, but by those who create legal obligations around AI that are grounded in and derived from a human rights perspective. One way to do this is to reaffirm the human-centered nature of AI development and deployment and to follow the actionable standards of human rights law.
The human rights legal framework has existed for decades and has played an important role in challenging states and compelling them to amend domestic laws. Nelson Mandela invoked the obligations set out in the Universal Declaration of Human Rights in the struggle to end apartheid in South Africa; in 1973, through Roe v. Wade, the United States Supreme Court followed a broader global trend of recognizing women's human rights, protecting individuals from undue government interference in private matters and enabling women to participate fully and equally in society; and, more recently, open access to the Internet has been recognized as a human right, essential not only to freedom of opinion, expression, association, and assembly, but also to mobilizing people to call for equality, justice, and accountability in the service of global respect for human rights. These examples show how human rights standards have been applied to a wide range of domestic and international rules. The standards are actionable and enforceable, which makes them well suited to regulating the cross-border nature of AI technologies.
AI systems must be examined through a human rights lens: the current and future harms that AI causes or exacerbates must be analyzed, and action must be taken to avoid any harm. The adoption of AI technologies already crosses borders and affects societies around the world in different ways. A globalized technology requires international obligations to mitigate the social problems it raises more quickly and at a larger scale. Companies and states should strive to develop AI technologies that uphold human rights. Framing the AI discourse around human rights, and not merely ethics, provides a clearer legal basis for the development and deployment of AI technologies. The international community must raise awareness, build consensus, analyze in depth how AI technologies can violate human rights in different contexts, and develop effective avenues for legal remedy. Focusing the discussion on human rights rather than ethical principles provides more accountability measures and obligations for state and private actors, and shifts the debate toward consistent and widely accepted legal principles that have developed over decades.
{"title":"Artificial Intelligence Needs Human Rights: How the Focus on Ethical AI Fails to Address Privacy, Discrimination and Other Concerns","authors":"Kate Saslow, Philippe Lorenz","doi":"10.2139/ssrn.3589473","DOIUrl":"https://doi.org/10.2139/ssrn.3589473","url":null,"abstract":"AI has been a catalyst for automation and efficiency in numerous ways, but has also had harmful consequences, including: unforeseen algorithmic bias that affects already marginalized communities, as with Amazon’s AI recruiting algorithm that showed bias against women; accountability and liability coming into question if an autonomous vehicle injures or kills, as seen with Uber’s self-driving car casualties; even the notion of democracy is being challenged as the technology enables authoritarian and democratic states like China and the United States to practice surveillance at an unprecedented scale.<br><br>The risks as well as the need for some form of basic rules have not gone unnoticed and governments, tech companies, research consortiums or advocacy groups have broached the issue. In fact, this has been the topic of local, national, and supranational discussion for some years now, as can be seen with new legislation popping up to ban facial recognition software in public spaces. The problem with these discussions, however, is that they have been heavily dominated by how we can make AI more “ethical”. Companies, states, and even international organizations discuss ethical principles, such as fair, accountable, responsible, or safe AI in numerous expert groups or ad hoc committees, such as the High-Level Expert Group on AI in the European Commission, the group on AI in Society of the Organization for Economic Co-operation and Development (OECD), or the select committee on Artificial Intelligence of the United Kingdom House of Lords.<br><br>This may sound like a solid approach to tackling the dangers that AI poses, but to actually be impactful, these discussions must be grounded in rhetoric that is focused and actionable. Not only may the principles be defined differently depending on the stakeholders, but there are overwhelming differences in how principles are interpreted and what requirements are necessary for them to materialize. In addition, ethical debates on AI are often dominated by American or Chinese companies, which are both propagating their own idea of ethical AI, but which may in many cases stand in conflict with the values of other cultures and nations. Not only do different countries have different ideas of which “ethics” principles need to be protected, but different countries play starkly different roles in developing AI. Another problem is when ethical guidelines are discussed, suggestions often come from tech companies themselves, while voices from citizens or even governments are marginalized.<br><br>Self-regulation around ethical principles is too weak to address the spreading implications that AI technologies have had. Ethical principles lack clarity and enforcement capabilities. We must stop focusing the discourse on ethical principles, and instead shift the debate to human rights. Debates must be louder at the supranational level. 
International pressure must be put on states and companies who fail to protect individuals b","PeriodicalId":369029,"journal":{"name":"PsychRN: Attitudes & Social Cognition (Topic)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124205980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fake news is increasingly prevalent on social media, and the anonymity of the Internet is a major enabler. Social media platforms seek to reduce online anonymity through identity verification, confirming user identities with email addresses, phone numbers, or government-issued photo identification. However, we ask: is identity verification effective in deterring fake news? Using a unique dataset (spanning 2009 to 2016) from a large-scale social media platform, we empirically investigate the impact of identity verification on the creation and sharing of fake news. In doing so, we exploit an exogenous policy change in identity verification on the platform as a natural experiment. Notably, our results show that identity verification may not actually deter fake news. We find that, in contrast to verification with a regular badge (a badge designed to signal a verified status), verification with an enhanced badge (a badge designed to signal a superior verified status and allegedly confer higher credibility) may even fuel the proliferation of fake news. An enhanced verification badge spreads fake news not only by encouraging verified users to create more of it, but also by misleading other users into sharing fake news created by verified users. This study contributes to the literature on online anonymity and to work on information diffusion on social media. It also informs leaders in social media that a costless-to-cheat identity verification system can have unintended negative effects, and that a misleading design of verification badges may amplify the influence of fake news created by verified users and elicit more effort from strategic fake-news creators.
{"title":"'Cure or Poison?' Identity Verification and the Spread of Fake News on Social Media","authors":"S. Wang, Min-Seok Pang, P. Pavlou","doi":"10.2139/ssrn.3249479","DOIUrl":"https://doi.org/10.2139/ssrn.3249479","url":null,"abstract":"Fake news is increasingly prevalent on social media, and the anonymity of the Internet is a major enabler. Social media platforms seek to reduce online anonymity with identity verification by verifying user identities with email addresses, phone numbers, or government-issued photo identification. However, we ask: Is identity verification effective in deterring fake news? Using a unique dataset (spanning from 2009 to 2016) from a large-scale social media platform, we empirically investigate the impact of identity verification on the creation and sharing of fake news. In doing so, we exploit an exogenous policy change in identity verification on the social media platform as a natural experiment. Notably, our results show that identity verification may actually not deter fake news. We find that in contrast to verification with a regular badge (a badge that is \u0000designed to signal a verified status), verification with an enhanced badge (a badge that is designed to signal a superior verified status and allegedly endow higher credibility) may even fuel the proliferation of fake news. An enhanced badge for verification proliferates fake news, not only by encouraging verified users to create more fake news, but also by misleading other users into sharing fake news created by verified users. This study contributes to the literature on online anonymity and work on information diffusion on social media, while it informs leaders in social media that a costless-to-cheat identity verification system can have unintended negative effects, and that a misleading design of verification badges may amplify the influence of fake news created by verified users and incentivize more effort elicited from the strategic fake-news creators.","PeriodicalId":369029,"journal":{"name":"PsychRN: Attitudes & Social Cognition (Topic)","volume":"12 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123659140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}