
arXiv - CS - Human-Computer Interaction: Latest Publications

Visualization in Motion in Video Games for Different Types of Data
Pub Date: 2024-09-12 DOI: arXiv:2409.07696
Federica Bucchieri, Lijie Yao, Petra Isenberg
We contribute an analysis of situated visualizations in motion in video games for different types of data, with a focus on quantitative and categorical data representations. Video games convey a lot of data to players to help them succeed in the game. These visualizations frequently move across the screen due to camera changes or because the game elements themselves move. Our ultimate goal is to understand how motion factors affect visualization readability in video games and, subsequently, players' performance in the game. We started our work by surveying how motion currently influences different kinds of data representations in video games. We conducted a systematic review of 160 visualizations in motion in video games and extracted patterns and considerations regarding which visualizations currently exhibit motion factors in video games, and how.
Citations: 0
Explorations in Designing Virtual Environments for Remote Counselling
Pub Date: 2024-09-12 DOI: arXiv:2409.07765
Jiashuo Cao, Wujie Gao, Yun Suen Pai, Simon Hoermann, Chen Li, Nilufar Baghaei, Mark Billinghurst
The advent of technology-enhanced interventions has significantly transformed mental health services, offering new opportunities for delivering psychotherapy, particularly in remote settings. This paper reports on a pilot study exploring the use of Virtual Reality (VR) as a medium for remote counselling. The study involved four experienced psychotherapists who evaluated three different virtual environments designed to support remote counselling. Through thematic analysis of interviews and feedback, we identified key factors that could be critical for designing effective virtual environments for counselling. These include the creation of clear boundaries, customization to meet specific therapeutic needs, and the importance of aligning the environment with various therapeutic approaches. Our findings suggest that VR can enhance the sense of presence and engagement in remote therapy, potentially improving the therapeutic relationship. In the paper we also outline areas for future research based on these pilot study results.
Citations: 0
Co-badge: An Activity for Collaborative Engagement with Data Visualization Design Concepts
Pub Date: 2024-09-12 DOI: arXiv:2409.08175
Damla Çay, Mary Karyda, Kitti Butter
As data visualization gains popularity and projects become more interdisciplinary, there is a growing need for methods that foster creative collaboration and inform diverse audiences about data visualisation. In this paper, we introduce Co-Badge, a 90-minute design activity where participants collaboratively construct visualizations by ideating and prioritizing relevant data types, mapping them to visual variables, and constructing data badges with stationery materials. We conducted three workshops in diverse settings with participants of different backgrounds. Our findings indicate that Co-Badge facilitates a playful and engaging way to gain awareness of data visualization design principles without formal training while navigating the challenges of collaboration. Our work contributes to the field of data visualization education for diverse actors. We believe Co-Badge can serve as an engaging activity that introduces basic concepts of data visualization and collaboration.
Citations: 0
GAZEploit: Remote Keystroke Inference Attack by Gaze Estimation from Avatar Views in VR/MR Devices
Pub Date: 2024-09-12 DOI: arXiv:2409.08122
Hanqiu Wang, Zihao Zhan, Haoqi Shan, Siqi Dai, Max Panoff, Shuo Wang
The advent and growing popularity of Virtual Reality (VR) and Mixed Reality (MR) solutions have revolutionized the way we interact with digital platforms. The cutting-edge gaze-controlled typing methods now prevalent in high-end models of these devices, e.g., Apple Vision Pro, have not only improved user experience but also mitigated traditional keystroke inference attacks that relied on hand gestures, head movements, and acoustic side-channels. However, this advancement has paradoxically given birth to a new, potentially more insidious cyber threat, GAZEploit. In this paper, we unveil GAZEploit, a novel eye-tracking based attack specifically designed to exploit this eye-tracking information by leveraging the common use of virtual appearances in VR applications. This widespread usage significantly enhances the practicality and feasibility of our attack compared to existing methods. GAZEploit takes advantage of this vulnerability to remotely extract gaze estimations and steal sensitive keystroke information across various typing scenarios, including messages, passwords, URLs, emails, and passcodes. Our research, involving 30 participants, achieved over 80% accuracy in keystroke inference. Alarmingly, our study also identified over 15 top-rated apps in the Apple Store as vulnerable to the GAZEploit attack, emphasizing the urgent need for bolstered security measures for this state-of-the-art VR/MR text entry method.
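As a rough illustration of the attack's core step, recovering keystrokes from gaze estimates can be sketched as a nearest-key lookup on a virtual keyboard; the layout, coordinates, and function names below are hypothetical and not taken from the GAZEploit paper.

```python
# Minimal sketch of gaze-to-keystroke inference: map each gaze fixation
# to the nearest key centre on a hypothetical virtual keyboard layout.
# Coordinates and layout are illustrative, not taken from GAZEploit.
import math

KEY_CENTRES = {
    "a": (0.0, 0.0), "s": (1.0, 0.0), "d": (2.0, 0.0),
    "q": (0.0, 1.0), "w": (1.0, 1.0), "e": (2.0, 1.0),
}

def nearest_key(gaze_point):
    """Return the key whose centre is closest to the estimated gaze point."""
    x, y = gaze_point
    return min(KEY_CENTRES, key=lambda k: math.dist((x, y), KEY_CENTRES[k]))

def infer_keystrokes(fixations):
    """Map a sequence of gaze fixations to a candidate keystroke sequence."""
    return "".join(nearest_key(p) for p in fixations)

print(infer_keystrokes([(0.1, 1.1), (1.9, 0.9), (1.2, 0.1)]))  # -> qes
```

A real attack would additionally need to segment fixations from saccades and handle noisy gaze estimates, but the lookup above captures the basic mapping.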
Citations: 0
From Explanations to Action: A Zero-Shot, Theory-Driven LLM Framework for Student Performance Feedback
Pub Date: 2024-09-12 DOI: arXiv:2409.08027
Vinitra Swamy, Davide Romano, Bhargav Srinivasa Desikan, Oana-Maria Camburu, Tanja Käser
Recent advances in eXplainable AI (XAI) for education have highlighted a critical challenge: ensuring that explanations for state-of-the-art AI models are understandable for non-technical users such as educators and students. In response, we introduce iLLuMinaTE, a zero-shot, chain-of-prompts LLM-XAI pipeline inspired by Miller's cognitive model of explanation. iLLuMinaTE is designed to deliver theory-driven, actionable feedback to students in online courses. iLLuMinaTE navigates three main stages - causal connection, explanation selection, and explanation presentation - with variations drawing from eight social science theories (e.g. Abnormal Conditions, Pearl's Model of Explanation, Necessity and Robustness Selection, Contrastive Explanation). We extensively evaluate 21,915 natural language explanations of iLLuMinaTE extracted from three LLMs (GPT-4o, Gemma2-9B, Llama3-70B), with three different underlying XAI methods (LIME, Counterfactuals, MC-LIME), across students from three diverse online courses. Our evaluation involves analyses of explanation alignment to the social science theory, understandability of the explanation, and a real-world user preference study with 114 university students containing a novel actionability simulation. We find that students prefer iLLuMinaTE explanations over traditional explainers 89.52% of the time. Our work provides a robust, ready-to-use framework for effectively communicating hybrid XAI-driven insights in education, with significant generalization potential for other human-centric fields.
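The three-stage flow described in the abstract can be sketched as a chain of prompts in which each stage's output feeds the next; `call_llm` is a placeholder stub and the prompt wording is invented for illustration, not reproduced from iLLuMinaTE.

```python
# Illustrative chain-of-prompts pipeline with three stages, mirroring the
# causal connection -> explanation selection -> explanation presentation
# flow described in the abstract. call_llm is a placeholder stub; a real
# system would query an LLM such as GPT-4o, Gemma2-9B, or Llama3-70B.

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM API call; echoes a tagged prefix of its prompt."""
    return f"[llm-output for: {prompt[:40]}...]"

def illuminate_style_pipeline(xai_output: str, theory: str) -> str:
    # Stage 1: causal connection, linking model evidence to student behaviour.
    causes = call_llm(f"Identify causal connections in: {xai_output}")
    # Stage 2: explanation selection, filtering causes via a social science theory.
    selected = call_llm(f"Using {theory}, select the most relevant of: {causes}")
    # Stage 3: explanation presentation, phrasing actionable feedback for a student.
    return call_llm(f"Present as actionable student feedback: {selected}")

feedback = illuminate_style_pipeline("LIME weights: low forum activity", "Contrastive Explanation")
print(feedback)
```

The design point is that each stage is a separate, inspectable prompt rather than one monolithic request, which is what makes the theory-driven variations swappable.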
Citations: 0
Objection Overruled! Lay People can Distinguish Large Language Models from Lawyers, but still Favour Advice from an LLM
Pub Date: 2024-09-12 DOI: arXiv:2409.07871
Eike Schneiders, Tina Seabrooke, Joshua Krook, Richard Hyde, Natalie Leesakul, Jeremie Clos, Joel Fischer
Large Language Models (LLMs) are seemingly infiltrating every domain, and the legal context is no exception. In this paper, we present the results of three experiments (total N=288) that investigated lay people's willingness to act upon, and their ability to discriminate between, LLM- and lawyer-generated legal advice. In Experiment 1, participants judged their willingness to act on legal advice when the source of the advice was either known or unknown. When the advice source was unknown, participants indicated that they were significantly more willing to act on the LLM-generated advice. This result was replicated in Experiment 2. Intriguingly, despite participants indicating higher willingness to act on LLM-generated advice in Experiments 1 and 2, participants discriminated between the LLM- and lawyer-generated texts significantly above chance-level in Experiment 3. Lastly, we discuss potential explanations and risks of our findings, limitations and future work, and the importance of language complexity and real-world comparability.
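Testing whether discrimination in a study like Experiment 3 exceeds chance is conventionally done against a 50% binomial baseline; a minimal sketch follows, with made-up trial counts rather than the paper's data.

```python
# Sketch of an above-chance discrimination check with an exact one-sided
# binomial test (H0: p = 0.5). The trial counts are hypothetical, not the
# paper's actual Experiment 3 data.
from math import comb

def binomial_p_value(successes: int, trials: int, p0: float = 0.5) -> float:
    """One-sided exact p-value: P(X >= successes) under Binomial(trials, p0)."""
    return sum(
        comb(trials, k) * p0**k * (1 - p0) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# e.g. 70 correct source attributions out of 100 trials
p = binomial_p_value(70, 100)
print(f"p = {p:.6f}")  # well below 0.05, so above chance level
```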
Citations: 0
More than just a Tool: People's Perception and Acceptance of Prosocial Delivery Robots as Fellow Road Users
Pub Date: 2024-09-12 DOI: arXiv:2409.07815
Vivienne Bihe Chi, Elise Ulwelling, Kevin Salubre, Shashank Mehrotra, Teruhisa Misu, Kumar Akash
Service robots are increasingly deployed in public spaces, performing functional tasks such as making deliveries. To better integrate them into our social environment and enhance their adoption, we consider integrating social identities within delivery robots along with their functional identity. We conducted a virtual reality-based pilot study to explore people's perceptions and acceptance of delivery robots that perform prosocial behavior. Preliminary findings from thematic analysis of semi-structured interviews illustrate people's ambivalence about dual identity. We discussed the emerging themes in light of social identity theory, framing effect, and human-robot intergroup dynamics. Building on these insights, we propose that the next generation of delivery robots should use peer-based framing, an updated value proposition, and an interactive design that places greater emphasis on expressing intentionality and emotional responses.
Citations: 0
Eyes on the Phish(er): Towards Understanding Users' Email Processing Pattern and Mental Models in Phishing Detection
Pub Date: 2024-09-12 DOI: arXiv:2409.07717
Sijie Zhuo, Robert Biddle, Jared Daniel Recomendable, Giovanni Russello, Danielle Lottridge
Phishing emails typically masquerade themselves as reputable identities to trick people into providing sensitive information and credentials. Despite advancements in cybersecurity, attackers continuously adapt, posing ongoing threats to individuals and organisations. While email users are the last line of defence, they are not always well-prepared to detect phishing emails. This study examines how workload affects susceptibility to phishing, using eye-tracking technology to observe participants' reading patterns and interactions with tailored phishing emails. Incorporating both quantitative and qualitative analysis, we investigate users' attention to two phishing indicators, email sender and hyperlink URLs, and their reasons for assessing the trustworthiness of emails and falling for phishing emails. Our results provide concrete evidence that attention to the email sender can reduce phishing susceptibility. While we found no evidence that attention to the actual URL in the browser influences phishing detection, attention to the text masking links can increase phishing susceptibility. We also highlight how email relevance, familiarity, and visual presentation impact first impressions of email trustworthiness and phishing susceptibility.
Citations: 0
Measuring the limit of perception of bond stiffness of interactive molecules in VR via a gamified psychophysics experiment
Pub Date: 2024-09-12 DOI: arXiv:2409.07836
Rhoslyn Roebuck Williams, Jonathan Barnoud, Luis Toledo, Till Holzapfel, David R. Glowacki
Molecular dynamics (MD) simulations provide crucial insight into molecular interactions and biomolecular function. With interactive MD simulations in VR (iMD-VR), chemists can now interact with these molecular simulations in real-time. Our sense of touch is essential for exploring the properties of physical objects, but recreating this sensory experience for virtual objects poses challenges. Furthermore, employing haptics in the context of molecular simulation is especially difficult since we do not know what molecules actually feel like. In this paper, we build upon previous work that demonstrated how VR users can distinguish properties of molecules without haptic feedback. We present the results of a gamified two-alternative forced choice (2AFC) psychophysics user study in which we quantify the threshold at which iMD-VR users can differentiate the stiffness of molecular bonds. Our preliminary analysis suggests that participants can sense differences between buckminsterfullerene molecules with different bond stiffness parameters and that this limit may fall within the chemically relevant range. Our results highlight how iMD-VR may facilitate a more embodied way of exploring complex and dynamic molecular systems, enabling chemists to sense the properties of molecules purely by interacting with them in VR.
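A 2AFC discrimination threshold is often read off as the stimulus difference at which accuracy reaches 75%, midway between the 50% chance level and perfect performance; a minimal sketch with invented response data (not the study's measurements):

```python
# Sketch of extracting a 2AFC discrimination threshold: find the bond-stiffness
# difference at which proportion correct crosses 75%, using linear
# interpolation between tested levels. All data below are hypothetical.

# (stiffness difference, proportion of correct responses), invented values
psychometric = [(0.0, 0.50), (0.1, 0.55), (0.2, 0.68), (0.3, 0.82), (0.4, 0.93)]

def threshold_75(points, target=0.75):
    """Linearly interpolate the stimulus level where accuracy hits `target`."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 <= target <= y1:
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    return None  # target never reached within the tested range

print(f"estimated threshold: {threshold_75(psychometric):.3f}")  # -> 0.250
```

Psychophysics studies often fit a full psychometric function instead of interpolating, but the 75% criterion and the threshold readout are the same idea.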
分子动力学(MD)模拟为深入了解分子相互作用和生物分子功能提供了重要依据。通过 VR 交互式 MD 模拟(iMD-VR),化学家现在可以与这些分子模拟进行实时交互。我们的触觉对于探索物理对象的特性至关重要,但要在虚拟对象中重现这种感官体验却面临挑战。此外,在分子模拟中使用触觉尤其困难,因为我们并不知道分子的真实感觉。在本文中,我们以之前的工作为基础,展示了 VR 用户如何通过触觉反馈来区分分子的属性。我们展示了一项游戏化双选项强制选择(2AFC)心理物理学用户研究的结果,在这项研究中,我们量化了 iMD-VR 用户能够区分分子键硬度的阈值。我们的初步分析表明,参与者可以感受到具有不同键硬度参数的巴克明斯特富勒烯分子之间的差异,而且这一极限可能在化学相关范围内。我们的研究结果凸显了 iMD-VR 可以如何促进以一种更直观的方式探索复杂和动态的分子系统,使化学家能够纯粹通过在 VR 中与分子互动来感知分子的特性。
Citations: 0
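The study above estimates a perceptual threshold with a 2AFC psychophysics procedure. The abstract does not specify the exact adaptive rule used, but a common way to measure such a limit is a 1-up/2-down staircase; the sketch below simulates one with an idealized observer (all names, step sizes, and trial counts are illustrative, not taken from the study):

```python
import random

def simulate_2afc_staircase(true_threshold, start_level=1.0, step=0.05,
                            n_trials=60, seed=0):
    """1-up/2-down staircase for a two-alternative forced choice task.

    The simulated observer answers correctly whenever the stimulus
    difference exceeds its hidden perceptual threshold; below it, the
    observer guesses with 50% accuracy, as in a 2AFC design.
    """
    rng = random.Random(seed)
    level = start_level
    correct_streak = 0
    reversals = []          # stimulus levels where the staircase changed direction
    going_down = True
    for _ in range(n_trials):
        correct = level >= true_threshold or rng.random() < 0.5
        if correct:
            correct_streak += 1
            if correct_streak == 2:      # two correct in a row -> make it harder
                correct_streak = 0
                if not going_down:
                    reversals.append(level)
                going_down = True
                level = max(level - step, 0.0)
        else:
            correct_streak = 0           # one wrong -> make it easier
            if going_down:
                reversals.append(level)
            going_down = False
            level += step
    if not reversals:
        return level
    # Threshold estimate: mean of the last few reversal points
    tail = reversals[-6:]
    return sum(tail) / len(tail)
```

The 1-up/2-down rule converges on the stimulus level answered correctly about 70.7% of the time, so the mean of the final reversal points serves as the threshold estimate.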
Online vs Offline: A Comparative Study of First-Party and Third-Party Evaluations of Social Chatbots 在线与离线:社交聊天机器人的第一方和第三方评价比较研究
Pub Date : 2024-09-12 DOI: arxiv-2409.07823
Ekaterina Svikhnushina, Pearl Pu
This paper explores the efficacy of online versus offline evaluation methods for assessing conversational chatbots, specifically comparing first-party direct interactions with third-party observational assessments. By extending a benchmarking dataset of user dialogs with empathetic chatbots with offline third-party evaluations, we present a systematic comparison between the feedback from online interactions and the more detached offline third-party evaluations. Our results reveal that offline human evaluations fail to capture the subtleties of human-chatbot interactions as effectively as online assessments. In comparison, automated third-party evaluations using a GPT-4 model offer a better approximation of first-party human judgments given detailed instructions. This study highlights the limitations of third-party evaluations in grasping the complexities of user experiences and advocates for integrating direct interaction feedback into conversational AI evaluation to enhance system development and user satisfaction.
本文探讨了在线与离线评估方法在评估会话式聊天机器人方面的功效,特别是比较了第一方直接交互与第三方观察评估。通过将用户与移情聊天机器人对话的基准数据集与离线第三方评估进行扩展,我们对来自在线交互的反馈与更加独立的离线第三方评估进行了系统比较。我们的结果表明,离线人工评估无法像在线评估那样有效捕捉人与聊天机器人交互的微妙之处。相比之下,使用GPT-4模型的自动第三方评估能更好地接近第一方人类给出详细说明后做出的判断。本研究强调了第三方评估在把握用户体验复杂性方面的局限性,并主张在对话式人工智能评估中整合直接交互反馈,以提高系统开发水平和用户满意度。
Citations: 0
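The core quantity in a comparison like the one above — how closely third-party scores track first-party judgments — can be expressed as a correlation between two lists of ratings. A minimal sketch with invented toy numbers (the listing reports no raw scores, so these values are purely illustrative):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equally long rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy illustration (invented numbers, not the paper's data):
first_party   = [5, 4, 2, 5, 1, 3]   # users rating their own conversations
third_party   = [4, 4, 2, 5, 2, 3]   # instructed LLM judging the transcripts
offline_human = [3, 2, 4, 3, 3, 2]   # detached offline annotators

print(pearson(first_party, third_party))    # strong agreement
print(pearson(first_party, offline_human))  # weak (here negative) agreement
```

A higher correlation with first-party ratings is what the paper means by a "better approximation of first-party human judgments."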
Journal: arXiv - CS - Human-Computer Interaction