{"title":"消费者对人工智能评价的偏见:缺乏透明度焦虑的中介效应","authors":"Alberto Lopez, Ricardo Garza","doi":"10.1108/jrim-07-2021-0192","DOIUrl":null,"url":null,"abstract":"PurposeWill consumers accept artificial intelligence (AI) products that evaluate them? New consumer products offer AI evaluations. However, previous research has never investigated how consumers feel about being evaluated by AI instead of by a human. Furthermore, why do consumers experience being evaluated by an AI algorithm or by a human differently? This research aims to offer answers to these questions.Design/methodology/approachThree laboratory experiments were conducted. Experiments 1 and 2 test the main effect of evaluator (AI and human) and evaluations received (positive, neutral and negative) on fairness perception of the evaluation. Experiment 3 replicates previous findings and tests the mediation effect.FindingsBuilding on previous research on consumer biases and lack of transparency anxiety, the authors present converging evidence that consumers who got positive evaluations reported nonsignificant difference on the level of fairness perception on the evaluation regardless of the evaluator (human or AI). Contrarily, consumers who got negative evaluations reported lower fairness perception when the evaluation was given by AI. Further moderated mediation analysis showed that consumers who get a negative evaluation by AI experience higher levels of lack of transparency anxiety, which in turn is an underlying mechanism driving this effect.Originality/valueTo the best of the authors' knowledge, no previous research has investigated how consumers feel about being evaluated by AI instead of by a human. This consumer bias against AI evaluations is a phenomenon previously overlooked in the marketing literature, with many implications for the development and adoption of new AI products, as well as theoretical contributions to the nascent literature on consumer experience and AI.","PeriodicalId":47116,"journal":{"name":"Journal of Research in Interactive Marketing","volume":" ","pages":""},"PeriodicalIF":9.6000,"publicationDate":"2023-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Consumer bias against evaluations received by artificial intelligence: the mediation effect of lack of transparency anxiety\",\"authors\":\"Alberto Lopez, Ricardo Garza\",\"doi\":\"10.1108/jrim-07-2021-0192\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"PurposeWill consumers accept artificial intelligence (AI) products that evaluate them? New consumer products offer AI evaluations. However, previous research has never investigated how consumers feel about being evaluated by AI instead of by a human. Furthermore, why do consumers experience being evaluated by an AI algorithm or by a human differently? This research aims to offer answers to these questions.Design/methodology/approachThree laboratory experiments were conducted. Experiments 1 and 2 test the main effect of evaluator (AI and human) and evaluations received (positive, neutral and negative) on fairness perception of the evaluation. Experiment 3 replicates previous findings and tests the mediation effect.FindingsBuilding on previous research on consumer biases and lack of transparency anxiety, the authors present converging evidence that consumers who got positive evaluations reported nonsignificant difference on the level of fairness perception on the evaluation regardless of the evaluator (human or AI). 
Contrarily, consumers who got negative evaluations reported lower fairness perception when the evaluation was given by AI. Further moderated mediation analysis showed that consumers who get a negative evaluation by AI experience higher levels of lack of transparency anxiety, which in turn is an underlying mechanism driving this effect.Originality/valueTo the best of the authors' knowledge, no previous research has investigated how consumers feel about being evaluated by AI instead of by a human. This consumer bias against AI evaluations is a phenomenon previously overlooked in the marketing literature, with many implications for the development and adoption of new AI products, as well as theoretical contributions to the nascent literature on consumer experience and AI.\",\"PeriodicalId\":47116,\"journal\":{\"name\":\"Journal of Research in Interactive Marketing\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":9.6000,\"publicationDate\":\"2023-02-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Research in Interactive Marketing\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.1108/jrim-07-2021-0192\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"BUSINESS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Research in Interactive Marketing","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1108/jrim-07-2021-0192","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BUSINESS","Score":null,"Total":0}
Consumer bias against evaluations received by artificial intelligence: the mediation effect of lack of transparency anxiety
Purpose
Will consumers accept artificial intelligence (AI) products that evaluate them? New consumer products offer AI evaluations. However, previous research has not investigated how consumers feel about being evaluated by AI rather than by a human, nor why the two experiences differ. This research aims to answer these questions.

Design/methodology/approach
Three laboratory experiments were conducted. Experiments 1 and 2 test the main effects of evaluator (AI vs human) and evaluation received (positive, neutral or negative) on the perceived fairness of the evaluation. Experiment 3 replicates the earlier findings and tests the mediation effect.

Findings
Building on previous research on consumer biases and lack-of-transparency anxiety, the authors present converging evidence that consumers who received positive evaluations reported no significant difference in perceived fairness regardless of the evaluator (human or AI). In contrast, consumers who received negative evaluations reported lower perceived fairness when the evaluation was given by AI. A further moderated mediation analysis showed that consumers who receive a negative evaluation from AI experience higher lack-of-transparency anxiety, which is the underlying mechanism driving this effect.

Originality/value
To the best of the authors' knowledge, no previous research has investigated how consumers feel about being evaluated by AI instead of by a human. This consumer bias against AI evaluations is a phenomenon previously overlooked in the marketing literature, with many implications for the development and adoption of new AI products, as well as theoretical contributions to the nascent literature on consumer experience and AI.
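As a rough illustration of the kind of analysis the Findings describe, the sketch below shows a Hayes-style moderated mediation test on simulated data: evaluator and evaluation valence predict the mediator (lack-of-transparency anxiety), which in turn predicts perceived fairness. This is not the authors' code or data; the variable names (evaluator, valence, anxiety, fairness), effect sizes and model specification are illustrative assumptions only.

```python
# Hypothetical sketch of a moderated mediation analysis on simulated data,
# using pandas/statsmodels. Assumes a 2 (evaluator) x 3 (valence) between-subjects design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Simulated conditions: evaluator (0 = human, 1 = AI),
# evaluation valence (-1 = negative, 0 = neutral, 1 = positive).
evaluator = rng.integers(0, 2, n)
valence = rng.integers(-1, 2, n)

# Assumed data-generating process: lack-of-transparency anxiety rises when
# an AI gives a negative evaluation; fairness perception falls with anxiety.
anxiety = 3 + 1.2 * evaluator * (valence == -1) + rng.normal(0, 1, n)
fairness = 5 - 0.8 * anxiety + 0.3 * valence + rng.normal(0, 1, n)

df = pd.DataFrame({"evaluator": evaluator, "valence": valence,
                   "anxiety": anxiety, "fairness": fairness})

# Path a: evaluator x valence interaction predicting the mediator (anxiety).
model_a = smf.ols("anxiety ~ evaluator * C(valence)", data=df).fit()
# Paths b and c': mediator and predictors together predicting fairness perception.
model_b = smf.ols("fairness ~ anxiety + evaluator * C(valence)", data=df).fit()

print(model_a.summary())
print(model_b.summary())
```

In practice, the index of moderated mediation would also be tested with bootstrapped confidence intervals (e.g., a PROCESS-style procedure), which this sketch omits for brevity.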
About the journal:
The mission of the Journal of Research in Interactive Marketing is to address substantive issues in interactive, relationship, electronic, direct and multi-channel marketing and marketing management.
ISSN: 2040-7122
With its origins in the discipline and practice of direct marketing, the Journal of Research in Interactive Marketing (JRIM) aims to publish progressive, innovative and rigorous scholarly research for marketing academics and practitioners.