
arXiv - CS - Human-Computer Interaction: Latest Publications

Exploring Gaze Pattern in Autistic Children: Clustering, Visualization, and Prediction
Pub Date: 2024-09-18 | DOI: arxiv-2409.11744
Weiyan Shi, Haihong Zhang, Jin Yang, Ruiqing Ding, YongWei Zhu, Kenny Tsu Wei Choo
Autism Spectrum Disorder (ASD) significantly affects the social and communication abilities of children, and eye-tracking is commonly used as a diagnostic tool by identifying associated atypical gaze patterns. Traditional methods demand manual identification of Areas of Interest in gaze patterns, lowering the performance of gaze behavior analysis in ASD subjects. To tackle this limitation, we propose a novel method to automatically analyze gaze behaviors in ASD children with superior accuracy. To be specific, we first apply and optimize seven clustering algorithms to automatically group gaze points to compare ASD subjects with typically developing peers. Subsequently, we extract 63 significant features to fully describe the patterns. These features can describe correlations between ASD diagnosis and gaze patterns. Lastly, using these features as prior knowledge, we train multiple predictive machine learning models to predict and diagnose ASD based on their gaze behaviors. To evaluate our method, we apply our method to three ASD datasets. The experimental and visualization results demonstrate the improvements of clustering algorithms in the analysis of unique gaze patterns in ASD children. Additionally, these predictive machine learning models achieved state-of-the-art prediction performance (81% AUC) in the field of automatically constructed gaze point features for ASD diagnosis. Our code is available at https://github.com/username/projectname.
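The cluster-then-featurize stage the abstract describes can be pictured with a minimal sketch like the one below. This is hypothetical illustration only, not the authors' released code: the k-means is a toy (deterministic initialization from the first k points) and the gaze coordinates are invented.

```python
import math

def kmeans(points, k, iters=20):
    # Simplification: initialize centers from the first k points.
    centers = list(points[:k])
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        # Recompute each center as its group's mean; keep the old center
        # if a group went empty.
        centers = [
            tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

def cluster_features(centers, groups):
    # Per-cluster fixation count and dispersion -- two examples of the
    # kind of gaze-point features a downstream classifier might consume.
    feats = []
    for c, g in zip(centers, groups):
        spread = (sum(math.dist(p, c) ** 2 for p in g) / len(g)) ** 0.5 if g else 0.0
        feats += [len(g), spread]
    return feats

# Invented gaze points forming two obvious fixation clusters.
gaze = [(0, 0), (10, 10), (0.5, 0), (0, 0.5), (10.5, 10), (10, 10.5)]
centers, groups = kmeans(gaze, k=2)
print(cluster_features(centers, groups))
```

In the paper's setting, features like these (63 of them, per the abstract) would feed the predictive models evaluated by AUC.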
Citations: 0
Revealing the Challenge of Detecting Character Knowledge Errors in LLM Role-Playing
Pub Date: 2024-09-18 | DOI: arxiv-2409.11726
Wenyuan Zhang, Jiawei Sheng, Shuaiyi Nie, Zefeng Zhang, Xinghua Zhang, Yongquan He, Tingwen Liu
Large language model (LLM) role-playing has gained widespread attention, where authentic character knowledge is crucial for constructing realistic LLM role-playing agents. However, existing works usually overlook the exploration of LLMs' ability to detect characters' known knowledge errors (KKE) and unknown knowledge errors (UKE) while playing roles, which would lead to low-quality automatic construction of character trainable corpus. In this paper, we propose a probing dataset to evaluate LLMs' ability to detect errors in KKE and UKE. The results indicate that even the latest LLMs struggle to effectively detect these two types of errors, especially when it comes to familiar knowledge. We experimented with various reasoning strategies and propose an agent-based reasoning method, Self-Recollection and Self-Doubt (S2RD), to further explore the potential for improving error detection capabilities. Experiments show that our method effectively improves the LLMs' ability to detect erroneous character knowledge, but it remains an issue that requires ongoing attention.
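The shape of such a probing evaluation can be sketched with a toy loop. Everything here is invented for illustration — the probe items, and the stub function standing in for an actual LLM judgment call:

```python
# Each probe pairs a role-played statement with whether it contains an
# injected character-knowledge error (e.g., a KKE: a known fact stated wrong).
probes = [
    {"statement": "Hermione Granger is in Slytherin.", "has_error": True},
    {"statement": "Harry Potter attends Hogwarts.", "has_error": False},
]

def stub_detector(statement):
    # Stand-in for an LLM call; a real evaluation would prompt the model
    # to flag statements that contradict the character's canon.
    return "Slytherin" in statement

def detection_accuracy(probes, detector):
    # Fraction of probes where the detector's verdict matches ground truth.
    hits = sum(detector(p["statement"]) == p["has_error"] for p in probes)
    return hits / len(probes)

print(detection_accuracy(probes, stub_detector))  # 1.0 on this toy set
```

A real harness would separate KKE from UKE probes and report per-type accuracy, since the abstract notes models struggle more with familiar knowledge.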
Citations: 0
AI paintings vs. Human Paintings? Deciphering Public Interactions and Perceptions towards AI-Generated Paintings on TikTok
Pub Date: 2024-09-18 | DOI: arxiv-2409.11911
Jiajun Wang, Xiangzhe Yuan, Siying Hu, Zhicong Lu
With the development of generative AI technology, a vast array of AI-generated paintings (AIGP) have gone viral on social media like TikTok. However, some negative news about AIGP has also emerged. For example, in 2022, numerous painters worldwide organized a large-scale anti-AI movement because of infringement in generative AI model training. This event reflected a social issue: with the development and application of generative AI, public feedback and feelings towards it may have been overlooked. Therefore, to investigate public interactions and perceptions towards AIGP on social media, we analyzed user engagement levels and comment sentiment scores of AIGP, using human painting videos as a baseline. In analyzing user engagement, we also considered the possible moderating effect of the aesthetic quality of paintings. Utilizing topic modeling, we identified seven reasons, including looks too real, looks too scary, ambivalence, etc., leading to negative public perceptions of AIGP. Our work may provide instructive suggestions for future generative AI technology development and help avoid potential crises in human-AI collaboration.
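The engagement-versus-baseline comparison might look like the sketch below. The lexicon, scoring rule, and counts are invented for the example (the study used topic modeling and proper sentiment scoring, not this toy):

```python
# Tiny invented sentiment lexicon.
POS = {"beautiful", "amazing", "love"}
NEG = {"scary", "fake", "creepy"}

def sentiment(comment):
    # Lexicon score: positive word hits minus negative word hits.
    words = comment.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

def mean(xs):
    return sum(xs) / len(xs)

aigp_likes = [120, 80, 45]     # invented engagement counts for AIGP videos
human_likes = [150, 200, 90]   # invented counts for the human-painting baseline
print(mean(aigp_likes), mean(human_likes))
print(sentiment("looks too scary and fake"))  # -2
```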
Citations: 0
From Data Stories to Dialogues: A Randomised Controlled Trial of Generative AI Agents and Data Storytelling in Enhancing Data Visualisation Comprehension
Pub Date: 2024-09-18 | DOI: arxiv-2409.11645
Lixiang Yan, Roberto Martinez-Maldonado, Yueqiao Jin, Vanessa Echeverria, Mikaela Milesi, Jie Fan, Linxuan Zhao, Riordan Alfredo, Xinyu Li, Dragan Gašević
Generative AI (GenAI) agents offer a potentially scalable approach to supporting comprehension of complex data visualisations, a skill many individuals struggle with. While data storytelling has proven effective, there is little evidence regarding the comparative effectiveness of GenAI agents. To address this gap, we conducted a randomised controlled study with 141 participants to compare the effectiveness and efficiency of data dialogues facilitated by passive GenAI agents (which simply answer participants' questions about visualisations) and proactive GenAI agents (infused with scaffolding questions to guide participants through visualisations) against data storytelling in enhancing comprehension of data visualisations. Comprehension was measured before, during, and after the intervention. Results suggest that passive GenAI agents improve comprehension similarly to data storytelling both during and after the intervention. Notably, proactive GenAI agents significantly enhance comprehension after the intervention compared to both passive GenAI agents and standalone data storytelling, regardless of participants' visualisation literacy, indicating sustained improvements and learning.
Citations: 0
OSINT Clinic: Co-designing AI-Augmented Collaborative OSINT Investigations for Vulnerability Assessment
Pub Date: 2024-09-18 | DOI: arxiv-2409.11672
Anirban Mukhopadhyay, Kurt Luther
Small businesses need vulnerability assessments to identify and mitigate cyber risks. Cybersecurity clinics provide a solution by offering students hands-on experience while delivering free vulnerability assessments to local organizations. To scale this model, we propose an Open Source Intelligence (OSINT) clinic where students conduct assessments using only publicly available data. We enhance the quality of investigations in the OSINT clinic by addressing the technical and collaborative challenges. Over the 2023-24 academic year, we conducted a three-phase co-design study with six students. Our study identified key challenges in OSINT investigations and explored how generative AI could address these performance gaps. We developed design ideas for effective AI integration based on the use of AI probes and collaboration platform features. A pilot with three small businesses highlighted both the practical benefits of AI in streamlining investigations and its limitations, including privacy concerns and difficulty in monitoring progress.
Citations: 0
Equimetrics -- Applying HAR principles to equestrian activities
Pub Date: 2024-09-18 | DOI: arxiv-2409.11989
Jonas Pöhler, Kristof Van Laerhoven
This paper presents the Equimetrics data capture system. The primary objective is to apply HAR principles to enhance the understanding and optimization of equestrian performance. By integrating data from strategically placed sensors on the rider's body and the horse's limbs, the system provides a comprehensive view of their interactions. Preliminary data collection has demonstrated the system's ability to accurately classify various equestrian activities, such as walking, trotting, cantering, and jumping, while also detecting subtle changes in rider posture and horse movement. The system leverages open-source hardware and software to offer a cost-effective alternative to traditional motion capture technologies, making it accessible for researchers and trainers. The Equimetrics system represents a significant advancement in equestrian performance analysis, providing objective, data-driven insights that can be used to enhance training and competition outcomes.
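A classic HAR pipeline of the kind the abstract implies — window the raw sensor stream, extract simple statistics, classify the gait — can be sketched as follows. This is a hypothetical illustration, not the Equimetrics code: the centroids and the one-second window of accelerometer magnitudes are invented.

```python
import math
import statistics

def features(window):
    # Mean and spread of |accel| over one window -- two basic HAR features.
    return (statistics.mean(window), statistics.pstdev(window))

def nearest_centroid(feat, centroids):
    # Assign the window to the activity whose centroid is closest.
    return min(centroids, key=lambda label: math.dist(feat, centroids[label]))

# Toy per-gait centroids in (mean, stdev) feature space -- invented values.
centroids = {"walk": (1.0, 0.2), "trot": (1.5, 0.6), "canter": (2.2, 1.0)}
window = [1.4, 1.6, 1.5, 1.7, 1.3]  # hypothetical |accel| samples, in g
print(nearest_centroid(features(window), centroids))  # "trot"
```

A deployed system would use many more features (per-axis statistics, frequency-domain terms) and a trained classifier rather than hand-set centroids.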
Citations: 0
A Human-Centered Risk Evaluation of Biometric Systems Using Conjoint Analysis
Pub Date: 2024-09-17 | DOI: arxiv-2409.11224
Tetsushi Ohki, Narishige Abe, Hidetsugu Uchida, Shigefumi Yamada
Biometric recognition systems, known for their convenience, are widely adopted across various fields. However, their security faces risks depending on the authentication algorithm and deployment environment. Current risk assessment methods face significant challenges in incorporating the crucial factor of attacker motivation, leading to incomplete evaluations. This paper presents a novel human-centered risk evaluation framework that uses conjoint analysis to quantify the impact of risk factors, such as surveillance cameras, on attacker motivation. Our framework calculates risk values incorporating the False Acceptance Rate (FAR) and attack probability, allowing comprehensive comparisons across use cases. A survey of 600 Japanese participants demonstrates our method's effectiveness, showing how security measures influence attacker motivation. This approach helps decision-makers customize biometric systems to enhance security while maintaining usability.
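The high-level idea — a risk value combining FAR with an attack probability that conjoint-derived motivation utilities modulate — might be sketched as below. The multiplicative form and all numbers are assumptions for illustration, not the authors' exact model:

```python
def attack_probability(base_prob, motivation_utility):
    # Conjoint-analysis utilities shift attacker motivation; clamp to [0, 1].
    return max(0.0, min(1.0, base_prob * motivation_utility))

def risk_value(far, base_prob, motivation_utility):
    # Risk as FAR times the motivation-adjusted attack probability.
    return far * attack_probability(base_prob, motivation_utility)

# Toy comparison: a surveillance camera lowers the motivation utility.
print(risk_value(far=0.001, base_prob=0.5, motivation_utility=1.0))  # no camera
print(risk_value(far=0.001, base_prob=0.5, motivation_utility=0.4))  # camera
```

Even in this toy form, the point of the framework survives: two deployments with identical FAR can carry different risk once attacker motivation is priced in.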
Citations: 0
Exploring Dimensions of Expertise in AR-Guided Psychomotor Tasks
Pub Date: 2024-09-17 | DOI: arxiv-2409.11599
Steven Yoo, Casper Harteveld, Nicholas Wilson, Kemi Jona, Mohsen Moghaddam
This study aimed to explore how novices and experts differ in performing complex psychomotor tasks guided by augmented reality (AR), focusing on decision-making and technical proficiency. Participants were divided into novice and expert groups based on a pre-questionnaire assessing their technical skills and theoretical knowledge of precision inspection. Participants completed a post-study questionnaire that evaluated cognitive load (NASA-TLX), self-efficacy, and experience with the HoloLens 2 and AR app, along with general feedback. We used multimodal data from AR devices and wearables, including hand tracking, galvanic skin response, and gaze tracking, to measure key performance metrics. We found that experts significantly outperformed novices in decision-making speed, efficiency, accuracy, and dexterity in the execution of technical tasks. Novices exhibited a positive correlation between perceived performance in the NASA-TLX and the GSR amplitude, indicating that higher perceived performance is associated with increased physiological stress responses. This study provides a foundation for designing multidimensional expertise estimation models to enable personalized industrial AR training systems.
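The reported novice finding is a correlation between two measured series; a Pearson coefficient over invented data illustrates the computation (the scores below are hypothetical, not the study's data):

```python
import statistics

def pearson(xs, ys):
    # Pearson r: covariance normalized by both population std deviations.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

tlx = [40, 55, 60, 70, 80]        # hypothetical perceived-performance scores
gsr = [0.2, 0.3, 0.35, 0.5, 0.6]  # hypothetical GSR amplitudes
print(round(pearson(tlx, gsr), 2))
```

A positive r here mirrors the abstract's claim: higher perceived performance co-occurring with stronger physiological stress responses.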
Citations: 0
Dark Mode or Light Mode? Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups
Pub Date: 2024-09-17 | DOI: arxiv-2409.10841
Zack While, Ali Sarvghad
This study examines the impact of positive and negative contrast polarities (i.e., light and dark modes) on the performance of younger adults and people in their late adulthood (PLA). In a crowdsourced study with 134 participants (69 below age 60, 66 aged 60 and above), we assessed their accuracy and time performing analysis tasks across three common visualization types (Bar, Line, Scatterplot) and two contrast polarities (positive and negative). We observed that, across both age groups, the polarity that led to better performance and the resulting amount of improvement varied on an individual basis, with each polarity benefiting comparable proportions of participants. However, the contrast polarity that led to better performance did not always match their preferred polarity. Additionally, we observed that the choice of contrast polarity can have an impact on time similar to that of the choice of visualization type, resulting in an average percent difference of around 36%. These findings indicate that, overall, the effects of contrast polarity on visual analysis performance do not noticeably change with age. Furthermore, they underscore the importance of making visualizations available in both contrast polarities to better support a broad audience with differing needs. Supplementary materials for this work can be found at https://osf.io/539a4/.
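The "average percent difference" metric can be illustrated with a toy computation. The definition used here (absolute difference over the mean of the two times) and the completion times are assumptions for the example, not the study's data:

```python
def percent_difference(a, b):
    # Symmetric percent difference: |a - b| relative to the mean of a and b.
    return abs(a - b) / ((a + b) / 2) * 100

dark_s, light_s = 52.0, 36.0  # hypothetical task times (seconds) per polarity
print(round(percent_difference(dark_s, light_s), 1))  # 36.4
```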
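The "average percent difference of around 36%" reported above is a symmetric percent difference between condition means. A minimal sketch of that calculation follows; the completion times used here are invented for illustration and are not taken from the paper:

```python
def percent_difference(a: float, b: float) -> float:
    """Symmetric percent difference between two means,
    e.g. mean task-completion times under two contrast polarities."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0


# Hypothetical mean completion times (seconds) for one participant group:
mean_time_light = 10.0
mean_time_dark = 14.0
diff = percent_difference(mean_time_light, mean_time_dark)
print(f"percent difference: {diff:.1f}%")  # about 33.3% for these made-up values
```

A symmetric formulation (dividing by the average of the two means) avoids having to pick one condition as the baseline.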
ArticulatePro: A Comparative Study on a Proactive and Non-Proactive Assistant in a Climate Data Exploration Task
Pub Date : 2024-09-17 DOI: arxiv-2409.10797
Roderick Tabalba, Christopher J. Lee, Giorgio Tran, Nurit Kirshenbaum, Jason Leigh
Recent advances in Natural Language Interfaces (NLIs) and Large Language Models (LLMs) have transformed our approach to NLP tasks, allowing us to focus more on a Pragmatics-based approach. This shift enables more natural interactions between humans and voice assistants, which have been challenging to achieve. Pragmatics describes how users often talk out of turn, interrupt each other, or provide relevant information without being explicitly asked (maxim of quantity). To explore this, we developed a digital assistant that constantly listens to conversations and proactively generates relevant visualizations during data exploration tasks. In a within-subject study, participants interacted with both proactive and non-proactive versions of a voice assistant while exploring the Hawaii Climate Data Portal (HCDP). Results suggest that the proactive assistant enhanced user engagement and facilitated quicker insights. Our study highlights the potential of Pragmatic, proactive AI in NLIs and identifies key challenges in its implementation, offering insights for future research.
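The abstract describes an assistant that listens to conversation and proactively proposes visualizations without being asked. A minimal sketch of that idea follows; it is not the authors' implementation, and the keyword rules and chart mapping below are invented stand-ins for whatever intent detection the real system uses:

```python
from typing import Optional

# Hypothetical mapping from conversational cues to chart suggestions.
INTENT_KEYWORDS = {
    "trend": "line chart",
    "compare": "bar chart",
    "relationship": "scatterplot",
}


def suggest_visualization(utterance: str) -> Optional[str]:
    """Return a proactive chart suggestion if the utterance implies a
    data-exploration intent; return None to stay silent."""
    lowered = utterance.lower()
    for keyword, chart in INTENT_KEYWORDS.items():
        if keyword in lowered:
            return chart
    return None


# A proactive assistant would run this on every overheard utterance,
# surfacing a visualization only when a cue fires:
print(suggest_visualization("Show me the rainfall trend since 2000"))
print(suggest_visualization("Aloha, how is everyone?"))
```

The key design point the study probes is exactly this silent-versus-proactive decision: the non-proactive condition only responds when addressed, while the proactive condition volunteers a chart whenever it infers a data question.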