
arXiv - CS - Human-Computer Interaction: Latest Publications

Visualization in Motion in Video Games for Different Types of Data
Pub Date : 2024-09-12 DOI: arxiv-2409.07696
Federica Bucchieri, Lijie Yao, Petra Isenberg
We contribute an analysis of situated visualizations in motion in video games for different types of data, with a focus on quantitative and categorical data representations. Video games convey a lot of data to players, to help them succeed in the game. These visualizations frequently move across the screen due to camera changes or because the game elements themselves move. Our ultimate goal is to understand how motion factors affect visualization readability in video games and subsequently the players' performance in the game. We started our work by surveying the characteristics of how motion currently influences which kind of data representations in video games. We conducted a systematic review of 160 visualizations in motion in video games and extracted patterns and considerations regarding what and how visualizations currently exhibit motion factors in video games.
Citations: 0
Explorations in Designing Virtual Environments for Remote Counselling
Pub Date : 2024-09-12 DOI: arxiv-2409.07765
Jiashuo Cao, Wujie Gao, Yun Suen Pai, Simon Hoermann, Chen Li, Nilufar Baghaei, Mark Billinghurst
The advent of technology-enhanced interventions has significantly transformed mental health services, offering new opportunities for delivering psychotherapy, particularly in remote settings. This paper reports on a pilot study exploring the use of Virtual Reality (VR) as a medium for remote counselling. The study involved four experienced psychotherapists who evaluated three different virtual environments designed to support remote counselling. Through thematic analysis of interviews and feedback, we identified key factors that could be critical for designing effective virtual environments for counselling. These include the creation of clear boundaries, customization to meet specific therapeutic needs, and the importance of aligning the environment with various therapeutic approaches. Our findings suggest that VR can enhance the sense of presence and engagement in remote therapy, potentially improving the therapeutic relationship. In the paper we also outline areas for future research based on these pilot study results.
Citations: 0
Co-badge: An Activity for Collaborative Engagement with Data Visualization Design Concepts
Pub Date : 2024-09-12 DOI: arxiv-2409.08175
Damla Çay, Mary Karyda, Kitti Butter
As data visualization gains popularity and projects become more interdisciplinary, there is a growing need for methods that foster creative collaboration and inform diverse audiences about data visualisation. In this paper, we introduce Co-Badge, a 90-minute design activity where participants collaboratively construct visualizations by ideating and prioritizing relevant data types, mapping them to visual variables, and constructing data badges with stationery materials. We conducted three workshops in diverse settings with participants of different backgrounds. Our findings indicate that Co-Badge facilitates a playful and engaging way to gain awareness about data visualization design principles without formal training while navigating the challenges of collaboration. Our work contributes to the field of data visualization education for diverse actors. We believe Co-Badge can serve as an engaging activity that introduces basic concepts of data visualization and collaboration.
Citations: 0
GAZEploit: Remote Keystroke Inference Attack by Gaze Estimation from Avatar Views in VR/MR Devices
Pub Date : 2024-09-12 DOI: arxiv-2409.08122
Hanqiu Wang, Zihao Zhan, Haoqi Shan, Siqi Dai, Max Panoff, Shuo Wang
The advent and growing popularity of Virtual Reality (VR) and Mixed Reality (MR) solutions have revolutionized the way we interact with digital platforms. The cutting-edge gaze-controlled typing methods, now prevalent in high-end models of these devices, e.g., Apple Vision Pro, have not only improved user experience but also mitigated traditional keystroke inference attacks that relied on hand gestures, head movements and acoustic side-channels. However, this advancement has paradoxically given birth to a new, potentially more insidious cyber threat, GAZEploit. In this paper, we unveil GAZEploit, a novel eye-tracking based attack specifically designed to exploit this eye-tracking information by leveraging the common use of virtual appearances in VR applications. This widespread usage significantly enhances the practicality and feasibility of our attack compared to existing methods. GAZEploit takes advantage of this vulnerability to remotely extract gaze estimations and steal sensitive keystroke information across various typing scenarios, including messages, passwords, URLs, emails, and passcodes. Our research, involving 30 participants, achieved over 80% accuracy in keystroke inference. Alarmingly, our study also identified over 15 top-rated apps in the Apple Store as vulnerable to the GAZEploit attack, emphasizing the urgent need for bolstered security measures for this state-of-the-art VR/MR text entry method.
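To make the attack surface concrete, the sketch below shows the core inference step in its simplest possible form: classifying each estimated gaze fixation as the nearest key on a virtual keyboard layout. The key coordinates, fixation values, and the `nearest_key` helper are illustrative assumptions only, not the authors' actual pipeline (which estimates gaze from the avatar's rendered eye views).

```python
# Minimal, hypothetical sketch of gaze-to-keystroke inference:
# each fixation is assigned to the closest key centre.
# All coordinates below are made-up, normalized screen positions.
KEY_CENTERS = {
    "q": (0.05, 0.2), "w": (0.15, 0.2), "e": (0.25, 0.2),
    "r": (0.35, 0.2), "t": (0.45, 0.2), "y": (0.55, 0.2),
}

def nearest_key(gaze_x: float, gaze_y: float) -> str:
    """Classify one gaze fixation as the closest key centre."""
    return min(
        KEY_CENTERS,
        key=lambda k: (KEY_CENTERS[k][0] - gaze_x) ** 2
        + (KEY_CENTERS[k][1] - gaze_y) ** 2,
    )

# One fixation per suspected keystroke; noisy but near the true keys.
fixations = [(0.24, 0.21), (0.56, 0.19), (0.26, 0.22)]
print("".join(nearest_key(x, y) for x, y in fixations))  # prints "eye"
```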
Citations: 0
From Explanations to Action: A Zero-Shot, Theory-Driven LLM Framework for Student Performance Feedback
Pub Date : 2024-09-12 DOI: arxiv-2409.08027
Vinitra Swamy, Davide Romano, Bhargav Srinivasa Desikan, Oana-Maria Camburu, Tanja Käser
Recent advances in eXplainable AI (XAI) for education have highlighted a critical challenge: ensuring that explanations for state-of-the-art AI models are understandable for non-technical users such as educators and students. In response, we introduce iLLuMinaTE, a zero-shot, chain-of-prompts LLM-XAI pipeline inspired by Miller's cognitive model of explanation. iLLuMinaTE is designed to deliver theory-driven, actionable feedback to students in online courses. iLLuMinaTE navigates three main stages - causal connection, explanation selection, and explanation presentation - with variations drawing from eight social science theories (e.g. Abnormal Conditions, Pearl's Model of Explanation, Necessity and Robustness Selection, Contrastive Explanation). We extensively evaluate 21,915 natural language explanations of iLLuMinaTE extracted from three LLMs (GPT-4o, Gemma2-9B, Llama3-70B), with three different underlying XAI methods (LIME, Counterfactuals, MC-LIME), across students from three diverse online courses. Our evaluation involves analyses of explanation alignment to the social science theory, understandability of the explanation, and a real-world user preference study with 114 university students containing a novel actionability simulation. We find that students prefer iLLuMinaTE explanations over traditional explainers 89.52% of the time. Our work provides a robust, ready-to-use framework for effectively communicating hybrid XAI-driven insights in education, with significant generalization potential for other human-centric fields.
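As a rough illustration of how a zero-shot, chain-of-prompts pipeline over the three named stages could be wired together, here is a minimal sketch. The prompt wording, function names, and the `complete` stub are assumptions for exposition, not the authors' released templates.

```python
def complete(prompt: str) -> str:
    """Stand-in for an LLM call; swap in a real API client."""
    return f"[model output for: {prompt[:40]!r}...]"

def illuminate_style_feedback(xai_output: str, theory: str) -> str:
    # Stage 1 (causal connection): link explainer evidence to plausible causes.
    causes = complete(
        "Given this explainer output for a student's predicted performance:\n"
        f"{xai_output}\nList the plausible causes."
    )
    # Stage 2 (explanation selection): keep the cause a chosen theory favours.
    selected = complete(
        f"Causes:\n{causes}\n"
        f"Apply the selection criteria of '{theory}' and keep the most relevant cause."
    )
    # Stage 3 (explanation presentation): phrase it as actionable student feedback.
    return complete(
        f"Rewrite the following as brief, actionable feedback for the student:\n{selected}"
    )

# Hypothetical usage with one of the eight theories named in the abstract.
print(illuminate_style_feedback(
    "LIME: low quiz regularity (weight 0.42)", "Contrastive Explanation"
))
```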
Citations: 0
Objection Overruled! Lay People can Distinguish Large Language Models from Lawyers, but still Favour Advice from an LLM
Pub Date : 2024-09-12 DOI: arxiv-2409.07871
Eike Schneiders, Tina Seabrooke, Joshua Krook, Richard Hyde, Natalie Leesakul, Jeremie Clos, Joel Fischer
Large Language Models (LLMs) are seemingly infiltrating every domain, and the legal context is no exception. In this paper, we present the results of three experiments (total N=288) that investigated lay people's willingness to act upon, and their ability to discriminate between, LLM- and lawyer-generated legal advice. In Experiment 1, participants judged their willingness to act on legal advice when the source of the advice was either known or unknown. When the advice source was unknown, participants indicated that they were significantly more willing to act on the LLM-generated advice. This result was replicated in Experiment 2. Intriguingly, despite participants indicating higher willingness to act on LLM-generated advice in Experiments 1 and 2, participants discriminated between the LLM- and lawyer-generated texts significantly above chance-level in Experiment 3. Lastly, we discuss potential explanations and risks of our findings, limitations and future work, and the importance of language complexity and real-world comparability.
Citations: 0
More than just a Tool: People's Perception and Acceptance of Prosocial Delivery Robots as Fellow Road Users
Pub Date : 2024-09-12 DOI: arxiv-2409.07815
Vivienne Bihe Chi, Elise Ulwelling, Kevin Salubre, Shashank Mehrotra, Teruhisa Misu, Kumar Akash
Service robots are increasingly deployed in public spaces, performing functional tasks such as making deliveries. To better integrate them into our social environment and enhance their adoption, we consider integrating social identities within delivery robots along with their functional identity. We conducted a virtual reality-based pilot study to explore people's perceptions and acceptance of delivery robots that perform prosocial behavior. Preliminary findings from thematic analysis of semi-structured interviews illustrate people's ambivalence about dual identity. We discussed the emerging themes in light of social identity theory, framing effect, and human-robot intergroup dynamics. Building on these insights, we propose that the next generation of delivery robots should use peer-based framing, an updated value proposition, and an interactive design that places greater emphasis on expressing intentionality and emotional responses.
Citations: 0
Eyes on the Phish(er): Towards Understanding Users' Email Processing Pattern and Mental Models in Phishing Detection
Pub Date : 2024-09-12 DOI: arxiv-2409.07717
Sijie Zhuo, Robert Biddle, Jared Daniel Recomendable, Giovanni Russello, Danielle Lottridge
Phishing emails typically masquerade themselves as reputable identities to trick people into providing sensitive information and credentials. Despite advancements in cybersecurity, attackers continuously adapt, posing ongoing threats to individuals and organisations. While email users are the last line of defence, they are not always well-prepared to detect phishing emails. This study examines how workload affects susceptibility to phishing, using eye-tracking technology to observe participants' reading patterns and interactions with tailored phishing emails. Incorporating both quantitative and qualitative analysis, we investigate users' attention to two phishing indicators, email sender and hyperlink URLs, and their reasons for assessing the trustworthiness of emails and falling for phishing emails. Our results provide concrete evidence that attention to the email sender can reduce phishing susceptibility. While we found no evidence that attention to the actual URL in the browser influences phishing detection, attention to the text masking links can increase phishing susceptibility. We also highlight how email relevance, familiarity, and visual presentation impact first impressions of email trustworthiness and phishing susceptibility.
Citations: 0
OmniQuery: Contextually Augmenting Captured Multimodal Memory to Enable Personal Question Answering
Pub Date : 2024-09-12 DOI: arxiv-2409.08250
Jiahao Nick Li, Zhuohao Jerry Zhang, Jiaju Ma
People often capture memories through photos, screenshots, and videos. While existing AI-based tools enable querying this data using natural language, they mostly only support retrieving individual pieces of information like certain objects in photos and struggle with answering more complex queries that involve interpreting interconnected memories like event sequences. We conducted a one-month diary study to collect realistic user queries and generated a taxonomy of necessary contextual information for integrating with captured memories. We then introduce OmniQuery, a novel system that is able to answer complex personal memory-related questions that require extracting and inferring contextual information. OmniQuery augments single captured memories through integrating scattered contextual information from multiple interconnected memories, retrieves relevant memories, and uses a large language model (LLM) to generate comprehensive answers. In human evaluations, we show the effectiveness of OmniQuery with an accuracy of 71.5%, and it outperformed a conventional RAG system, winning or tying in 74.5% of the time.
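A minimal sketch of the retrieve-then-answer pattern the abstract describes follows. The `Memory` data model, the word-overlap scoring, and the `complete` stub are placeholder assumptions for illustration (a real system would presumably use embedding-based retrieval), not OmniQuery's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    caption: str   # e.g. an auto-generated description of a photo
    context: str   # e.g. an inferred event ("Lisa's birthday dinner")

def complete(prompt: str) -> str:
    """Stand-in for an LLM call; swap in a real API client."""
    return f"[model output for: {prompt[:40]!r}...]"

def retrieve(memories: list[Memory], question: str, k: int = 3) -> list[Memory]:
    # Toy relevance score: word overlap between the question and memory text.
    q_words = set(question.lower().split())
    def score(m: Memory) -> int:
        return len(q_words & set(f"{m.caption} {m.context}".lower().split()))
    return sorted(memories, key=score, reverse=True)[:k]

def answer(memories: list[Memory], question: str) -> str:
    # Context-augmented memories become the evidence the LLM answers over.
    evidence = "\n".join(
        f"- {m.caption} ({m.context})" for m in retrieve(memories, question)
    )
    return complete(f"Memories:\n{evidence}\n\nQuestion: {question}\nAnswer:")

# Hypothetical usage over two captured, context-augmented memories.
memories = [
    Memory("Group photo with a birthday cake", "Lisa's birthday dinner"),
    Memory("Screenshot of a flight booking", "planning a trip to Tokyo"),
]
print(answer(memories, "What dessert did we have at Lisa's birthday dinner?"))
```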
Citations: 0
Exploring Use and Perceptions of Generative AI Art Tools by Blind Artists
Pub Date : 2024-09-12 DOI: arxiv-2409.08226
Gayatri Raman, Erin Brady
The paper explores the intersection of AI art and blindness, as existing AI research has primarily focused on AI art's reception and impact on sighted artists and consumers. To address this gap, the researcher interviewed six blind artists from various visual art mediums and levels of blindness about the generative AI image platform Midjourney. The participants shared text prompts and discussed their reactions to the generated images with the sighted researcher. The findings highlight blind artists' interest in AI images as a collaborative tool but express concerns about cultural perceptions and labeling of AI-generated art. They also underscore unique challenges, such as potential misunderstandings and stereotypes about blindness leading to exclusion. The study advocates for greater inclusion of blind individuals in AI art, emphasizing the need to address their specific needs and experiences in developing AI art technologies.
Citations: 0