Zahra Ashktorab, Qian Pan, Werner Geyer, Michael Desmond, Marina Danilevsky, James M. Johnson, Casey Dugan, Michelle Bachman
In this paper, we investigate the impact of hallucinations and cognitive forcing functions in human-AI collaborative text generation tasks, focusing on the use of Large Language Models (LLMs) to assist in generating high-quality conversational data. LLMs require data for fine-tuning, a crucial step in enhancing their performance. In the context of conversational customer support, the data takes the form of a conversation between a human customer and an agent and can be generated with an AI assistant. In our study, 11 users each completed 8 tasks, for a total of 88 tasks; we find that the presence of hallucinations negatively impacts data quality. We also find that, although cognitive forcing functions do not always mitigate the detrimental effects of hallucinations on data quality, the combination of cognitive forcing functions and hallucinations affects data quality and influences how users leverage the AI responses presented to them. Our analysis of user behavior reveals distinct patterns of reliance on AI-generated responses, highlighting the importance of managing hallucinations in AI-generated content within conversational AI contexts.
"Emerging Reliance Behaviors in Human-AI Text Generation: Hallucinations, Data Quality Assessment, and Cognitive Forcing Functions" (arXiv:2409.08937, arXiv - CS - Human-Computer Interaction, 2024-09-13).
Variational Autoencoders are widespread in Machine Learning, but are typically explained with dense math notation or static code examples. This paper presents VAE Explainer, an interactive Variational Autoencoder running in the browser to supplement existing static documentation (e.g., Keras Code Examples). VAE Explainer adds interactions to the VAE summary with interactive model inputs, latent space, and output. VAE Explainer connects the high-level understanding with the implementation: annotated code and a live computational graph. The VAE Explainer interactive visualization is live at https://xnought.github.io/vae-explainer and the code is open source at https://github.com/xnought/vae-explainer.
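The core mechanic that VAE Explainer visualizes is the latent sampling step of a variational autoencoder. As a rough NumPy sketch of that step (this is our illustration, not the VAE Explainer or Keras example code), the reparameterization trick and the KL regularizer look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    The reparameterization trick makes the stochastic latent sample
    differentiable with respect to the encoder outputs mu and log_var.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, I)), summed over latent dimensions."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)

mu = np.zeros((1, 2))       # encoder mean for one input, 2 latent dims
log_var = np.zeros((1, 2))  # encoder log-variance
z = reparameterize(mu, log_var)
print(z.shape)  # (1, 2)
print(kl_divergence(mu, log_var))  # KL vanishes for a standard-normal posterior
```

In a full VAE these two pieces sit between the encoder and decoder networks; the interactive latent space in VAE Explainer corresponds to the `z` produced here.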
"VAE Explainer: Supplement Learning Variational Autoencoders with Interactive Visualization" by Donald Bertucci, Alex Endert (arXiv:2409.09011, arXiv - CS - Human-Computer Interaction, 2024-09-13).
Saku Sourulahti, Christian P Janssen, Jussi PP Jokinen
Efficient attention deployment in visual search is limited by human visual memory, yet this limitation can be offset by exploiting the environment's structure. This paper introduces a computational cognitive model that simulates how the human visual system uses visual hierarchies to prevent refixations in sequential attention deployment. The model adopts computational rationality, positing behaviors as adaptations to cognitive constraints and environmental structures. In contrast to earlier models that predict search performance for hierarchical information, our model does not include predefined assumptions about particular search strategies. Instead, our model's search strategy emerges as a result of adapting to the environment through reinforcement learning algorithms. In an experiment with human participants, we test the model's prediction that structured environments reduce visual search times compared to random layouts. Our model's predictions correspond well with human search performance across various set sizes for both structured and unstructured visual layouts. Our work improves understanding of the adaptive nature of visual search in hierarchically structured environments and informs the design of optimized search spaces.
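The idea of a search strategy emerging from reward rather than being hand-coded can be illustrated with a toy tabular Q-learning sketch. This is a drastic simplification of the paper's model; the two-region task, rewards, and parameters below are invented stand-ins:

```python
import random

random.seed(0)

# Toy task: the agent repeatedly picks one of two regions to fixate.
# The target always sits in region 1, so a preference for region 1
# should emerge from rewards alone, with no predefined strategy.
ACTIONS = [0, 1]
q = {a: 0.0 for a in ACTIONS}   # action-value estimates
alpha, epsilon = 0.1, 0.1       # learning rate, exploration rate

def reward(action):
    return 1.0 if action == 1 else 0.0  # fixating region 1 finds the target

for _ in range(2000):
    # Epsilon-greedy choice: mostly exploit, occasionally explore.
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    # One-step (bandit-style) Q update toward the observed reward.
    q[a] += alpha * (reward(a) - q[a])

print(max(q, key=q.get))  # index of the region the agent learned to prefer
```

After training, `q[1]` dominates `q[0]`: the adaptive fixation policy is a product of the reward structure, mirroring (in miniature) how the paper's strategies emerge from the environment.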
"Modeling Rational Adaptation of Visual Search to Hierarchical Structures" (arXiv:2409.08967, arXiv - CS - Human-Computer Interaction, 2024-09-13).
As artificial intelligence (AI) technologies, including generative AI, continue to evolve, concerns have arisen about over-reliance on AI, which may lead to human deskilling and diminished cognitive engagement. Over-reliance on AI can also lead users to accept information given by AI without critical examination, with negative consequences such as being misled by hallucinated content. This paper introduces extraheric AI, a conceptual framework for human-AI interaction that fosters users' higher-order thinking skills, such as creativity, critical thinking, and problem-solving, during task completion. Unlike existing human-AI interaction designs, which replace or augment human cognition, extraheric AI fosters cognitive engagement by posing questions or providing alternative perspectives to users, rather than direct answers. We discuss interaction strategies, evaluation methods aligned with cognitive load theory and Bloom's taxonomy, and future research directions to ensure that human cognitive skills remain a crucial element in AI-integrated environments, promoting a balanced partnership between humans and AI.
"AI as Extraherics: Fostering Higher-order Thinking Skills in Human-AI Interaction" by Koji Yatani, Zefan Sramek, Chi-lan Yang (arXiv:2409.09218, arXiv - CS - Human-Computer Interaction, 2024-09-13).
Documentation plays a crucial role in both external accountability and internal governance of AI systems. Although there are many proposals for documenting AI data, models, systems, and methods, the ways these practices enhance governance as well as the challenges practitioners and organizations face with documentation remain underexplored. In this paper, we analyze 37 proposed documentation frameworks and 21 empirical studies evaluating their use. We identify potential hypotheses about how documentation can strengthen governance, such as informing stakeholders about AI risks and usage, fostering collaboration, encouraging ethical reflection, and reinforcing best practices. However, empirical evidence shows that practitioners often encounter obstacles that prevent documentation from achieving these goals. We also highlight key considerations for organizations when designing documentation, such as determining the appropriate level of detail and balancing automation in the process. Finally, we offer recommendations for further research and for implementing effective documentation practices in real-world contexts.
"Improving governance outcomes through AI documentation: Bridging theory and practice" by Amy A. Winecoff, Miranda Bogen (arXiv:2409.08960, arXiv - CS - Human-Computer Interaction, 2024-09-13).
Pat Pataranutaporn, Chayapatr Archiwaranguprok, Samantha W. T. Chan, Elizabeth Loftus, Pattie Maes
AI is increasingly used to enhance images and videos, both intentionally and unintentionally. As AI editing tools become more integrated into smartphones, users can modify or animate photos into realistic videos. This study examines the impact of AI-altered visuals on false memories--recollections of events that didn't occur or deviate from reality. In a pre-registered study, 200 participants were divided into four conditions of 50 each. Participants viewed original images, completed a filler task, then saw stimuli corresponding to their assigned condition: unedited images, AI-edited images, AI-generated videos, or AI-generated videos of AI-edited images. AI-edited visuals significantly increased false recollections, with AI-generated videos of AI-edited images having the strongest effect (2.05x compared to control). Confidence in false memories was also highest for this condition (1.19x compared to control). We discuss potential applications in HCI, such as therapeutic memory reframing, and challenges in ethical, legal, political, and societal domains.
"Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection" (arXiv:2409.08895, arXiv - CS - Human-Computer Interaction, 2024-09-13).
Jiuyi Xu, Tolulope Sanni, Ziming Liu, Ye Yang, Jiyoung Lee, Wei Song, Yangming Shi
Timely and adequate risk communication before natural hazards can reduce losses from extreme weather events and support more resilient disaster preparedness. However, existing natural hazard risk communications have been abstract, ineffective, not immersive, and sometimes counterproductive. Virtual reality (VR) presents a promising alternative to existing risk communication systems by offering immersive and engaging experiences. However, it remains unknown how different modalities in VR affect individuals' mitigation behaviors in response to incoming natural hazards. It is also unclear how repetitive risk communication across different VR modalities leads to risk habituation. To fill this knowledge gap, we developed a VR system with a tornado risk communication scenario and conducted a mixed-design human subject experiment (N = 24), which we analyze using both quantitative and qualitative methods.
"To Shelter or Not To Shelter: Exploring the Influence of Different Modalities in Virtual Reality on Individuals' Tornado Mitigation Behaviors" (arXiv:2409.09205, arXiv - CS - Human-Computer Interaction, 2024-09-13).
Van Hong Tran, Aarushi Mehrotra, Ranya Sharma, Marshini Chetty, Nick Feamster, Jens Frankenreiter, Lior Strahilevitz
To protect consumer privacy, the California Consumer Privacy Act (CCPA) mandates that businesses provide consumers with a straightforward way to opt out of the sale and sharing of their personal information. However, the control that businesses enjoy over the opt-out process allows them to impose hurdles on consumers aiming to opt out, including by employing dark patterns. Motivated by the enactment of the California Privacy Rights Act (CPRA), which strengthens the CCPA and explicitly forbids certain dark patterns in the opt-out process, we investigate how dark patterns are used in opt-out processes and assess their compliance with CCPA regulations. Our research reveals that websites employ a variety of dark patterns. Some of these patterns are explicitly prohibited under the CCPA; others evidently take advantage of legal loopholes. Despite the initial efforts to restrict dark patterns by policymakers, there is more work to be done.
"Dark Patterns in the Opt-Out Process and Compliance with the California Consumer Privacy Act (CCPA)" (arXiv:2409.09222, arXiv - CS - Human-Computer Interaction, 2024-09-13).
Vincent Guigues, Anton Kleywegt, Victor Hugo Nascimento, Victor Salles Rodrigues, Thais Viana, Edson Medeiros
This paper describes an online tool for the visualization of medical emergency locations, randomly generated sample paths of medical emergencies, and the animation of ambulance movements under the control of various dispatch methods in response to these emergencies. The tool incorporates statistical models for forecasting emergency locations and call arrival times, the simulation of emergency arrivals and ambulance movement trajectories, and the computation and visualization of performance metrics such as ambulance response time distributions. Data for the Rio de Janeiro Emergency Medical Service are available on the website. A user can upload emergency data for any Emergency Medical Service, and can then use the visualization tool to explore the uploaded data. A user can also use the statistical tools and/or the simulation tool with any of the dispatch methods provided, and can then use the visualization tool to explore the computational output. Future enhancements include the ability of a user to embed additional dispatch algorithms into the simulation; the tool can then be used to visualize the simulation results obtained with the newly embedded algorithms.
"Management and Visualization Tools for Emergency Medical Services" (arXiv:2409.09154, arXiv - CS - Human-Computer Interaction, 2024-09-13).
Robert Kaufman, Emi Lee, Manas Satish Bedmutha, David Kirsh, Nadir Weibel
Low trust remains a significant barrier to Autonomous Vehicle (AV) adoption. To design trustworthy AVs, we need to better understand the individual traits, attitudes, and experiences that impact people's trust judgements. We use machine learning to understand the most important factors that contribute to young adult trust based on a comprehensive set of personal factors gathered via survey (n = 1457). Factors ranged from psychosocial and cognitive attributes to driving style, experiences, and perceived AV risks and benefits. Using the explainable AI technique SHAP, we found that perceptions of AV risks and benefits, attitudes toward feasibility and usability, institutional trust, prior experience, and a person's mental model are the most important predictors. Surprisingly, psychosocial and many technology- and driving-specific factors were not strong predictors. Results highlight the importance of individual differences for designing trustworthy AVs for diverse groups and lead to key implications for future design and research.
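SHAP, the explainability technique named above, attributes a model's prediction to its input features via Shapley values. A minimal, library-free sketch of exact Shapley attribution on a toy additive "trust score" model follows; the feature names and weights are invented for illustration and are not the study's data or pipeline:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values by enumerating all coalitions.

    value_fn maps a frozenset of feature names to a model score.
    Brute-force enumeration is only feasible for a handful of features,
    which is why the SHAP library relies on approximations in practice.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Weight of this coalition in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy additive model: Shapley values recover each weight exactly.
weights = {"perceived_risk": -0.4, "institutional_trust": 0.3, "prior_experience": 0.2}
score = lambda coalition: sum(weights[f] for f in coalition)
print(shapley_values(score, list(weights)))
```

For an additive model like this, each feature's attribution equals its weight; for real, non-additive models (as in the study), the attributions capture interaction effects averaged over all orderings.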
"Predicting Trust In Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, And Driving Factors With Machine Learning" (arXiv:2409.08980, arXiv - CS - Human-Computer Interaction, 2024-09-13).