Exploring Gaze Pattern in Autistic Children: Clustering, Visualization, and Prediction
Weiyan Shi, Haihong Zhang, Jin Yang, Ruiqing Ding, YongWei Zhu, Kenny Tsu Wei Choo. arXiv:2409.11744 (2024-09-18)

Autism Spectrum Disorder (ASD) significantly affects the social and communication abilities of children, and eye tracking is commonly used as a diagnostic tool that identifies associated atypical gaze patterns. Traditional methods demand manual identification of Areas of Interest in gaze data, which limits the quality of gaze-behavior analysis in ASD subjects. To address this limitation, we propose a novel method that automatically analyzes gaze behaviors in autistic children with superior accuracy. Specifically, we first apply and optimize seven clustering algorithms to automatically group gaze points and compare ASD subjects with typically developing peers. We then extract 63 significant features that fully describe the resulting patterns and capture correlations between gaze behavior and ASD diagnosis. Finally, using these features as prior knowledge, we train multiple machine learning models to predict ASD from gaze behavior. We evaluate the method on three ASD datasets. Experimental and visualization results demonstrate the benefit of clustering algorithms for analyzing the distinctive gaze patterns of autistic children, and our predictive models achieve state-of-the-art performance (81% AUC) among approaches built on automatically constructed gaze-point features. Our code is available at https://github.com/username/projectname.

{"title":"Exploring Gaze Pattern in Autistic Children: Clustering, Visualization, and Prediction","authors":"Weiyan Shi, Haihong Zhang, Jin Yang, Ruiqing Ding, YongWei Zhu, Kenny Tsu Wei Choo","doi":"arxiv-2409.11744","DOIUrl":"https://doi.org/arxiv-2409.11744","url":null,"abstract":"Autism Spectrum Disorder (ASD) significantly affects the social and\u0000communication abilities of children, and eye-tracking is commonly used as a\u0000diagnostic tool by identifying associated atypical gaze patterns. Traditional\u0000methods demand manual identification of Areas of Interest in gaze patterns,\u0000lowering the performance of gaze behavior analysis in ASD subjects. To tackle\u0000this limitation, we propose a novel method to automatically analyze gaze\u0000behaviors in ASD children with superior accuracy. To be specific, we first\u0000apply and optimize seven clustering algorithms to automatically group gaze\u0000points to compare ASD subjects with typically developing peers. Subsequently,\u0000we extract 63 significant features to fully describe the patterns. These\u0000features can describe correlations between ASD diagnosis and gaze patterns.\u0000Lastly, using these features as prior knowledge, we train multiple predictive\u0000machine learning models to predict and diagnose ASD based on their gaze\u0000behaviors. To evaluate our method, we apply our method to three ASD datasets.\u0000The experimental and visualization results demonstrate the improvements of\u0000clustering algorithms in the analysis of unique gaze patterns in ASD children.\u0000Additionally, these predictive machine learning models achieved\u0000state-of-the-art prediction performance ($81%$ AUC) in the field of\u0000automatically constructed gaze point features for ASD diagnosis. Our code is\u0000available at url{https://github.com/username/projectname}.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142252419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Revealing the Challenge of Detecting Character Knowledge Errors in LLM Role-Playing
Wenyuan Zhang, Jiawei Sheng, Shuaiyi Nie, Zefeng Zhang, Xinghua Zhang, Yongquan He, Tingwen Liu. arXiv:2409.11726 (2024-09-18)

Large language model (LLM) role-playing has gained widespread attention, and authentic character knowledge is crucial for constructing realistic LLM role-playing agents. However, existing works usually overlook LLMs' ability to detect characters' known knowledge errors (KKE) and unknown knowledge errors (UKE) while playing roles, which leads to low-quality, automatically constructed training corpora for characters. In this paper, we propose a probing dataset to evaluate LLMs' ability to detect KKE and UKE. The results indicate that even the latest LLMs struggle to detect these two types of errors effectively, especially where familiar knowledge is concerned. We experimented with various reasoning strategies and propose an agent-based reasoning method, Self-Recollection and Self-Doubt (S2RD), to explore how far error-detection capabilities can be improved. Experiments show that our method effectively improves LLMs' ability to detect erroneous character knowledge, but the problem remains an issue requiring ongoing attention.

{"title":"Revealing the Challenge of Detecting Character Knowledge Errors in LLM Role-Playing","authors":"Wenyuan Zhang, Jiawei Sheng, Shuaiyi Nie, Zefeng Zhang, Xinghua Zhang, Yongquan He, Tingwen Liu","doi":"arxiv-2409.11726","DOIUrl":"https://doi.org/arxiv-2409.11726","url":null,"abstract":"Large language model (LLM) role-playing has gained widespread attention,\u0000where the authentic character knowledge is crucial for constructing realistic\u0000LLM role-playing agents. However, existing works usually overlook the\u0000exploration of LLMs' ability to detect characters' known knowledge errors (KKE)\u0000and unknown knowledge errors (UKE) while playing roles, which would lead to\u0000low-quality automatic construction of character trainable corpus. In this\u0000paper, we propose a probing dataset to evaluate LLMs' ability to detect errors\u0000in KKE and UKE. The results indicate that even the latest LLMs struggle to\u0000effectively detect these two types of errors, especially when it comes to\u0000familiar knowledge. We experimented with various reasoning strategies and\u0000propose an agent-based reasoning method, Self-Recollection and Self-Doubt\u0000(S2RD), to further explore the potential for improving error detection\u0000capabilities. Experiments show that our method effectively improves the LLMs'\u0000ability to detect error character knowledge, but it remains an issue that\u0000requires ongoing attention.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142252420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI paintings vs. Human Paintings? Deciphering Public Interactions and Perceptions towards AI-Generated Paintings on TikTok
Jiajun Wang, Xiangzhe Yuan, Siying Hu, Zhicong Lu. arXiv:2409.11911 (2024-09-18)

With the development of generative AI technology, a vast array of AI-generated paintings (AIGP) have gone viral on social media platforms like TikTok. However, negative news about AIGP has also emerged. For example, in 2022, painters worldwide organized a large-scale anti-AI movement over copyright infringement in generative AI model training. This event reflected a broader social issue: as generative AI is developed and deployed, public feedback and feelings toward it may be overlooked. To investigate public interactions with and perceptions of AIGP on social media, we analyzed user engagement levels and comment sentiment scores for AIGP, using human painting videos as a baseline. In analyzing user engagement, we also considered the possible moderating effect of a painting's aesthetic quality. Using topic modeling, we identified seven reasons for negative public perceptions of AIGP, including looking too real, looking too scary, and ambivalence. Our work offers instructive suggestions for future generative AI development and for avoiding potential crises in human-AI collaboration.

{"title":"AI paintings vs. Human Paintings? Deciphering Public Interactions and Perceptions towards AI-Generated Paintings on TikTok","authors":"Jiajun Wang, Xiangzhe Yuan, Siying Hu, Zhicong Lu","doi":"arxiv-2409.11911","DOIUrl":"https://doi.org/arxiv-2409.11911","url":null,"abstract":"With the development of generative AI technology, a vast array of\u0000AI-generated paintings (AIGP) have gone viral on social media like TikTok.\u0000However, some negative news about AIGP has also emerged. For example, in 2022,\u0000numerous painters worldwide organized a large-scale anti-AI movement because of\u0000the infringement in generative AI model training. This event reflected a social\u0000issue that, with the development and application of generative AI, public\u0000feedback and feelings towards it may have been overlooked. Therefore, to\u0000investigate public interactions and perceptions towards AIGP on social media,\u0000we analyzed user engagement level and comment sentiment scores of AIGP using\u0000human painting videos as a baseline. In analyzing user engagement, we also\u0000considered the possible moderating effect of the aesthetic quality of\u0000Paintings. Utilizing topic modeling, we identified seven reasons, including\u0000looks too real, looks too scary, ambivalence, etc., leading to negative public\u0000perceptions of AIGP. Our work may provide instructive suggestions for future\u0000generative AI technology development and avoid potential crises in human-AI\u0000collaboration.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142252158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From Data Stories to Dialogues: A Randomised Controlled Trial of Generative AI Agents and Data Storytelling in Enhancing Data Visualisation Comprehension
Lixiang Yan, Roberto Martinez-Maldonado, Yueqiao Jin, Vanessa Echeverria, Mikaela Milesi, Jie Fan, Linxuan Zhao, Riordan Alfredo, Xinyu Li, Dragan Gašević. arXiv:2409.11645 (2024-09-18)

Generative AI (GenAI) agents offer a potentially scalable way to support the comprehension of complex data visualisations, a skill many individuals struggle with. While data storytelling has proven effective, there is little evidence on how GenAI agents compare. To address this gap, we conducted a randomised controlled study with 141 participants, comparing data storytelling against data dialogues with passive GenAI agents (which simply answer participants' questions about visualisations) and proactive GenAI agents (infused with scaffolding questions that guide participants through visualisations) in enhancing comprehension of data visualisations. Comprehension was measured before, during, and after the intervention. Results suggest that passive GenAI agents improve comprehension similarly to data storytelling both during and after the intervention. Notably, proactive GenAI agents significantly enhance comprehension after the intervention compared with both passive GenAI agents and standalone data storytelling, regardless of participants' visualisation literacy, indicating sustained improvement and learning.

{"title":"From Data Stories to Dialogues: A Randomised Controlled Trial of Generative AI Agents and Data Storytelling in Enhancing Data Visualisation Comprehension","authors":"Lixiang Yan, Roberto Martinez-Maldonado, Yueqiao Jin, Vanessa Echeverria, Mikaela Milesi, Jie Fan, Linxuan Zhao, Riordan Alfredo, Xinyu Li, Dragan Gašević","doi":"arxiv-2409.11645","DOIUrl":"https://doi.org/arxiv-2409.11645","url":null,"abstract":"Generative AI (GenAI) agents offer a potentially scalable approach to support\u0000comprehending complex data visualisations, a skill many individuals struggle\u0000with. While data storytelling has proven effective, there is little evidence\u0000regarding the comparative effectiveness of GenAI agents. To address this gap,\u0000we conducted a randomised controlled study with 141 participants to compare the\u0000effectiveness and efficiency of data dialogues facilitated by both passive\u0000(which simply answer participants' questions about visualisations) and\u0000proactive (infused with scaffolding questions to guide participants through\u0000visualisations) GenAI agents against data storytelling in enhancing their\u0000comprehension of data visualisations. Comprehension was measured before,\u0000during, and after the intervention. Results suggest that passive GenAI agents\u0000improve comprehension similarly to data storytelling both during and after\u0000intervention. Notably, proactive GenAI agents significantly enhance\u0000comprehension after intervention compared to both passive GenAI agents and\u0000standalone data storytelling, regardless of participants' visualisation\u0000literacy, indicating sustained improvements and learning.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142252159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OSINT Clinic: Co-designing AI-Augmented Collaborative OSINT Investigations for Vulnerability Assessment
Anirban Mukhopadhyay, Kurt Luther. arXiv:2409.11672 (2024-09-18)

Small businesses need vulnerability assessments to identify and mitigate cyber risks. Cybersecurity clinics provide one solution, offering students hands-on experience while delivering free vulnerability assessments to local organizations. To scale this model, we propose an Open Source Intelligence (OSINT) clinic in which students conduct assessments using only publicly available data. We enhance the quality of the clinic's investigations by addressing their technical and collaborative challenges. Over the 2023-24 academic year, we conducted a three-phase co-design study with six students. The study identified key challenges in OSINT investigations and explored how generative AI could address these performance gaps. We developed design ideas for effective AI integration based on AI probes and collaboration-platform features. A pilot with three small businesses highlighted both the practical benefits of AI in streamlining investigations and its limitations, including privacy concerns and difficulty monitoring progress.

{"title":"OSINT Clinic: Co-designing AI-Augmented Collaborative OSINT Investigations for Vulnerability Assessment","authors":"Anirban Mukhopadhyay, Kurt Luther","doi":"arxiv-2409.11672","DOIUrl":"https://doi.org/arxiv-2409.11672","url":null,"abstract":"Small businesses need vulnerability assessments to identify and mitigate\u0000cyber risks. Cybersecurity clinics provide a solution by offering students\u0000hands-on experience while delivering free vulnerability assessments to local\u0000organizations. To scale this model, we propose an Open Source Intelligence\u0000(OSINT) clinic where students conduct assessments using only publicly available\u0000data. We enhance the quality of investigations in the OSINT clinic by\u0000addressing the technical and collaborative challenges. Over the duration of the\u00002023-24 academic year, we conducted a three-phase co-design study with six\u0000students. Our study identified key challenges in the OSINT investigations and\u0000explored how generative AI could address these performance gaps. We developed\u0000design ideas for effective AI integration based on the use of AI probes and\u0000collaboration platform features. A pilot with three small businesses\u0000highlighted both the practical benefits of AI in streamlining investigations,\u0000and limitations, including privacy concerns and difficulty in monitoring\u0000progress.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Equimetrics -- Applying HAR principles to equestrian activities
Jonas Pöhler, Kristof Van Laerhoven. arXiv:2409.11989 (2024-09-18)

This paper presents the Equimetrics data capture system. Its primary objective is to apply human activity recognition (HAR) principles to improve the understanding and optimization of equestrian performance. By integrating data from sensors strategically placed on the rider's body and the horse's limbs, the system provides a comprehensive view of their interactions. Preliminary data collection has demonstrated the system's ability to accurately classify equestrian activities such as walking, trotting, cantering, and jumping, while also detecting subtle changes in rider posture and horse movement. The system leverages open-source hardware and software to offer a cost-effective alternative to traditional motion capture technologies, making it accessible to researchers and trainers. Equimetrics represents a significant advance in equestrian performance analysis, providing objective, data-driven insights that can be used to improve training and competition outcomes.

{"title":"Equimetrics -- Applying HAR principles to equestrian activities","authors":"Jonas Pöhler, Kristof Van Laerhoven","doi":"arxiv-2409.11989","DOIUrl":"https://doi.org/arxiv-2409.11989","url":null,"abstract":"This paper presents the Equimetrics data capture system. The primary\u0000objective is to apply HAR principles to enhance the understanding and\u0000optimization of equestrian performance. By integrating data from strategically\u0000placed sensors on the rider's body and the horse's limbs, the system provides a\u0000comprehensive view of their interactions. Preliminary data collection has\u0000demonstrated the system's ability to accurately classify various equestrian\u0000activities, such as walking, trotting, cantering, and jumping, while also\u0000detecting subtle changes in rider posture and horse movement. The system\u0000leverages open-source hardware and software to offer a cost-effective\u0000alternative to traditional motion capture technologies, making it accessible\u0000for researchers and trainers. The Equimetrics system represents a significant\u0000advancement in equestrian performance analysis, providing objective,\u0000data-driven insights that can be used to enhance training and competition\u0000outcomes.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142252157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Human-Centered Risk Evaluation of Biometric Systems Using Conjoint Analysis
Tetsushi Ohki, Narishige Abe, Hidetsugu Uchida, Shigefumi Yamada. arXiv:2409.11224 (2024-09-17)

Biometric recognition systems, known for their convenience, are widely adopted across various fields. However, their security depends on the authentication algorithm and deployment environment. Current risk assessment methods face significant challenges in incorporating the crucial factor of attacker motivation, leading to incomplete evaluations. This paper presents a novel human-centered risk evaluation framework that uses conjoint analysis to quantify the impact of risk factors, such as surveillance cameras, on attacker motivation. The framework computes risk values that incorporate the False Acceptance Rate (FAR) and attack probability, allowing comprehensive comparisons across use cases. A survey of 600 Japanese participants demonstrates the method's effectiveness, showing how security measures influence attacker motivation. This approach helps decision-makers customize biometric systems to enhance security while maintaining usability.

{"title":"A Human-Centered Risk Evaluation of Biometric Systems Using Conjoint Analysis","authors":"Tetsushi Ohki, Narishige Abe, Hidetsugu Uchida, Shigefumi Yamada","doi":"arxiv-2409.11224","DOIUrl":"https://doi.org/arxiv-2409.11224","url":null,"abstract":"Biometric recognition systems, known for their convenience, are widely\u0000adopted across various fields. However, their security faces risks depending on\u0000the authentication algorithm and deployment environment. Current risk\u0000assessment methods faces significant challenges in incorporating the crucial\u0000factor of attacker's motivation, leading to incomplete evaluations. This paper\u0000presents a novel human-centered risk evaluation framework using conjoint\u0000analysis to quantify the impact of risk factors, such as surveillance cameras,\u0000on attacker's motivation. Our framework calculates risk values incorporating\u0000the False Acceptance Rate (FAR) and attack probability, allowing comprehensive\u0000comparisons across use cases. A survey of 600 Japanese participants\u0000demonstrates our method's effectiveness, showing how security measures\u0000influence attacker's motivation. This approach helps decision-makers customize\u0000biometric systems to enhance security while maintaining usability.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142252432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Dimensions of Expertise in AR-Guided Psychomotor Tasks
Steven Yoo, Casper Harteveld, Nicholas Wilson, Kemi Jona, Mohsen Moghaddam. arXiv:2409.11599 (2024-09-17)

This study aimed to explore how novices and experts differ when performing complex psychomotor tasks guided by augmented reality (AR), focusing on decision-making and technical proficiency. Participants were divided into novice and expert groups based on a pre-study questionnaire assessing their technical skills and theoretical knowledge of precision inspection. Participants completed a post-study questionnaire evaluating cognitive load (NASA-TLX), self-efficacy, and experience with the HoloLens 2 and the AR app, along with general feedback. We used multimodal data from AR devices and wearables, including hand tracking, galvanic skin response (GSR), and gaze tracking, to measure key performance metrics. We found that experts significantly outperformed novices in decision-making speed, efficiency, accuracy, and dexterity when executing technical tasks. Novices exhibited a positive correlation between perceived performance on the NASA-TLX and GSR amplitude, indicating that higher perceived performance is associated with stronger physiological stress responses. This study provides a foundation for designing multidimensional expertise-estimation models that enable personalized industrial AR training systems.

{"title":"Exploring Dimensions of Expertise in AR-Guided Psychomotor Tasks","authors":"Steven Yoo, Casper Harteveld, Nicholas Wilson, Kemi Jona, Mohsen Moghaddam","doi":"arxiv-2409.11599","DOIUrl":"https://doi.org/arxiv-2409.11599","url":null,"abstract":"This study aimed to explore how novices and experts differ in performing\u0000complex psychomotor tasks guided by augmented reality (AR), focusing on\u0000decision-making and technical proficiency. Participants were divided into\u0000novice and expert groups based on a pre-questionnaire assessing their technical\u0000skills and theoretical knowledge of precision inspection. Participants\u0000completed a post-study questionnaire that evaluated cognitive load (NASA-TLX),\u0000self-efficacy, and experience with the HoloLens 2 and AR app, along with\u0000general feedback. We used multimodal data from AR devices and wearables,\u0000including hand tracking, galvanic skin response, and gaze tracking, to measure\u0000key performance metrics. We found that experts significantly outperformed\u0000novices in decision-making speed, efficiency, accuracy, and dexterity in the\u0000execution of technical tasks. Novices exhibited a positive correlation between\u0000perceived performance in the NASA-TLX and the GSR amplitude, indicating that\u0000higher perceived performance is associated with increased physiological stress\u0000responses. This study provides a foundation for designing multidimensional\u0000expertise estimation models to enable personalized industrial AR training\u0000systems.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142252161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dark Mode or Light Mode? Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups
Zack While, Ali Sarvghad. arXiv:2409.10841 (2024-09-17)

This study examines the impact of positive and negative contrast polarities (i.e., light and dark modes) on the performance of younger adults and people in late adulthood (PLA). In a crowdsourced study with 134 participants (69 below age 60, 66 aged 60 and above), we assessed their accuracy and time performing analysis tasks across three common visualization types (bar, line, scatterplot) and two contrast polarities (positive and negative). We observed that, across both age groups, the polarity that led to better performance, and the resulting amount of improvement, varied on an individual basis, with each polarity benefiting comparable proportions of participants. However, the contrast polarity that led to better performance did not always match participants' preferred polarity. Additionally, the choice of contrast polarity can affect time on task as much as the choice of visualization type, with an average percent difference of around 36%. These findings indicate that, overall, the effects of contrast polarity on visual analysis performance do not noticeably change with age. Furthermore, they underscore the importance of making visualizations available in both contrast polarities to better support a broad audience with differing needs. Supplementary materials for this work can be found at https://osf.io/539a4/.

{"title":"Dark Mode or Light Mode? Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups","authors":"Zack While, Ali Sarvghad","doi":"arxiv-2409.10841","DOIUrl":"https://doi.org/arxiv-2409.10841","url":null,"abstract":"This study examines the impact of positive and negative contrast polarities\u0000(i.e., light and dark modes) on the performance of younger adults and people in\u0000their late adulthood (PLA). In a crowdsourced study with 134 participants (69\u0000below age 60, 66 aged 60 and above), we assessed their accuracy and time\u0000performing analysis tasks across three common visualization types (Bar, Line,\u0000Scatterplot) and two contrast polarities (positive and negative). We observed\u0000that, across both age groups, the polarity that led to better performance and\u0000the resulting amount of improvement varied on an individual basis, with each\u0000polarity benefiting comparable proportions of participants. However, the\u0000contrast polarity that led to better performance did not always match their\u0000preferred polarity. Additionally, we observed that the choice of contrast\u0000polarity can have an impact on time similar to that of the choice of\u0000visualization type, resulting in an average percent difference of around 36%.\u0000These findings indicate that, overall, the effects of contrast polarity on\u0000visual analysis performance do not noticeably change with age. Furthermore,\u0000they underscore the importance of making visualizations available in both\u0000contrast polarities to better-support a broad audience with differing needs.\u0000Supplementary materials for this work can be found at\u0000url{https://osf.io/539a4/}.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142252427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ArticulatePro: A Comparative Study on a Proactive and Non-Proactive Assistant in a Climate Data Exploration Task
Roderick Tabalba, Christopher J. Lee, Giorgio Tran, Nurit Kirshenbaum, Jason Leigh. arXiv:2409.10797 (2024-09-17)

Recent advances in Natural Language Interfaces (NLIs) and Large Language Models (LLMs) have transformed how we approach NLP tasks, allowing a stronger focus on pragmatics. This shift enables more natural interactions between humans and voice assistants, which have long been challenging to achieve. Pragmatics describes how users often talk out of turn, interrupt each other, or volunteer relevant information without being explicitly asked (the maxim of quantity). To explore this, we developed a digital assistant that continuously listens to conversations and proactively generates relevant visualizations during data exploration tasks. In a within-subjects study, participants interacted with both proactive and non-proactive versions of a voice assistant while exploring the Hawaii Climate Data Portal (HCDP). Results suggest that the proactive assistant enhanced user engagement and facilitated quicker insights. Our study highlights the potential of pragmatic, proactive AI in NLIs, identifies key challenges in its implementation, and offers insights for future research.

{"title":"ArticulatePro: A Comparative Study on a Proactive and Non-Proactive Assistant in a Climate Data Exploration Task","authors":"Roderick Tabalba, Christopher J. Lee, Giorgio Tran, Nurit Kirshenbaum, Jason Leigh","doi":"arxiv-2409.10797","DOIUrl":"https://doi.org/arxiv-2409.10797","url":null,"abstract":"Recent advances in Natural Language Interfaces (NLIs) and Large Language\u0000Models (LLMs) have transformed our approach to NLP tasks, allowing us to focus\u0000more on a Pragmatics-based approach. This shift enables more natural\u0000interactions between humans and voice assistants, which have been challenging\u0000to achieve. Pragmatics describes how users often talk out of turn, interrupt\u0000each other, or provide relevant information without being explicitly asked\u0000(maxim of quantity). To explore this, we developed a digital assistant that\u0000constantly listens to conversations and proactively generates relevant\u0000visualizations during data exploration tasks. In a within-subject study,\u0000participants interacted with both proactive and non-proactive versions of a\u0000voice assistant while exploring the Hawaii Climate Data Portal (HCDP). Results\u0000suggest that the proactive assistant enhanced user engagement and facilitated\u0000quicker insights. Our study highlights the potential of Pragmatic, proactive AI\u0000in NLIs and identifies key challenges in its implementation, offering insights\u0000for future research.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142252429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}