Ali I. Ozkes, Nobuyuki Hanaki, Dieter Vanderelst, Jurgen Willems
Ultimatum bargaining: Algorithms vs. Humans
Economics Letters, Volume 244, Article 111979
DOI: 10.1016/j.econlet.2024.111979
Published: 2024-09-21
https://www.sciencedirect.com/science/article/pii/S0165176524004634
Citations: 0
Abstract
We study human behavior in the ultimatum game when interacting with either human or algorithmic opponents. We examine how the type of AI algorithm (mimicking human behavior, optimising gains, or providing no explanation) and the presence of a human beneficiary affect sending and accepting behaviors. Our experimental data reveal that subjects generally do not differentiate between human and algorithmic opponents, between different algorithms, or between an explained and an unexplained algorithm. However, they are more willing to forgo higher payoffs when the algorithm's earnings benefit a human.
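For readers unfamiliar with the game studied here, the standard ultimatum-game payoff rule can be sketched as follows. This is a generic illustration of the textbook rules, not the authors' experimental implementation; the function name and pie size are hypothetical.

```python
def ultimatum_payoffs(pie: float, offer: float, accepted: bool) -> tuple:
    """Standard ultimatum-game payoffs: a proposer offers the responder
    a share of a fixed pie; if the responder accepts, the split is paid
    out, and if the responder rejects, both players earn nothing."""
    if not 0 <= offer <= pie:
        raise ValueError("offer must be between 0 and the pie size")
    if accepted:
        return pie - offer, offer  # (proposer payoff, responder payoff)
    return 0.0, 0.0  # rejection destroys the entire surplus
```

For example, with a pie of 10 and an accepted offer of 4, the proposer keeps 6 and the responder receives 4; rejecting the same offer leaves both with zero. This all-or-nothing feature is what makes rejecting positive offers costly, which is the payoff subjects forgo in the condition where the algorithm's earnings benefit a human.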