
arXiv - CS - Human-Computer Interaction: Latest Publications

Measuring the limit of perception of bond stiffness of interactive molecules in VR via a gamified psychophysics experiment
Pub Date: 2024-09-12 | arXiv: 2409.07836
Rhoslyn Roebuck Williams, Jonathan Barnoud, Luis Toledo, Till Holzapfel, David R. Glowacki
Molecular dynamics (MD) simulations provide crucial insight into molecular interactions and biomolecular function. With interactive MD simulations in VR (iMD-VR), chemists can now interact with these molecular simulations in real-time. Our sense of touch is essential for exploring the properties of physical objects, but recreating this sensory experience for virtual objects poses challenges. Furthermore, employing haptics in the context of molecular simulation is especially difficult since we do not know what molecules actually feel like. In this paper, we build upon previous work that demonstrated how VR users can distinguish properties of molecules without haptic feedback. We present the results of a gamified two-alternative forced choice (2AFC) psychophysics user study in which we quantify the threshold at which iMD-VR users can differentiate the stiffness of molecular bonds. Our preliminary analysis suggests that participants can sense differences between buckminsterfullerene molecules with different bond stiffness parameters and that this limit may fall within the chemically relevant range. Our results highlight how iMD-VR may facilitate a more embodied way of exploring complex and dynamic molecular systems, enabling chemists to sense the properties of molecules purely by interacting with them in VR.
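The abstract describes quantifying a discrimination threshold with a gamified 2AFC procedure. As a rough sketch of how such a threshold can be estimated (this is not the authors' analysis pipeline; the data points and starting parameters below are invented), one can fit a cumulative-Gaussian psychometric function with a 50% guess rate and read off the 75%-correct point:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical 2AFC data: bond-stiffness difference between the two
# presented buckminsterfullerene models (arbitrary units) vs. the
# proportion of trials in which participants answered correctly.
stiffness_diff = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
p_correct = np.array([0.52, 0.55, 0.63, 0.78, 0.90, 0.97])

def psychometric_2afc(x, mu, sigma):
    """Cumulative-Gaussian psychometric function with a 50% guess
    rate, the usual chance level in two-alternative forced choice."""
    return 0.5 + 0.5 * norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric_2afc, stiffness_diff, p_correct,
                           p0=[0.4, 0.3])

# With this parameterization, performance reaches 75% correct exactly
# at x = mu, a common criterion for the discrimination threshold.
print(f"Estimated 75%-correct threshold: {mu:.3f} (slope sigma: {sigma:.3f})")
```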
Citations: 0
Online vs Offline: A Comparative Study of First-Party and Third-Party Evaluations of Social Chatbots
Pub Date: 2024-09-12 | arXiv: 2409.07823
Ekaterina Svikhnushina, Pearl Pu
This paper explores the efficacy of online versus offline evaluation methods in assessing conversational chatbots, specifically comparing first-party direct interactions with third-party observational assessments. By extending a benchmarking dataset of user dialogs with empathetic chatbots with offline third-party evaluations, we present a systematic comparison between the feedback from online interactions and the more detached offline third-party evaluations. Our results reveal that offline human evaluations fail to capture the subtleties of human-chatbot interactions as effectively as online assessments. In comparison, automated third-party evaluations using a GPT-4 model offer a better approximation of first-party human judgments given detailed instructions. This study highlights the limitations of third-party evaluations in grasping the complexities of user experiences and advocates for the integration of direct interaction feedback in conversational AI evaluation to enhance system development and user satisfaction.
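The automated third-party evaluation the abstract mentions amounts to prompting a GPT-4 model with detailed rating instructions. A minimal sketch of that idea using the OpenAI Python client follows; the rubric text, rating scale, and dialog format are assumptions, not the paper's actual protocol:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical rating instructions; the paper's actual rubric and
# dialog format are not given in the abstract.
INSTRUCTIONS = (
    "You are a third-party evaluator of an empathetic chatbot. "
    "Read the dialog below and rate the chatbot's empathy on a "
    "1-5 scale. Answer with a single digit only."
)

def rate_dialog(dialog_text: str) -> str:
    """Ask a GPT-4 model for a detached, third-party rating."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": dialog_text},
        ],
        temperature=0,  # deterministic output eases comparison with humans
    )
    return response.choices[0].message.content

print(rate_dialog("User: I lost my job today.\nBot: I'm so sorry to hear that."))
```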
Citations: 0
Situated Visualization in Motion for Swimming
Pub Date: 2024-09-12 | arXiv: 2409.07695
Lijie Yao, Anastasia Bezerianos, Romain Vuillemot, Petra Isenberg
Competitive sports coverage increasingly includes information on athlete or team statistics and records. Sports video coverage has traditionally embedded representations of this data in fixed locations on the screen, but more recently also attached representations to athletes or other targets in motion. These publicly used representations so far have been rather simple, and systematic investigations of the research space of embedded visualizations in motion are still missing. Here we report on our preliminary research in the domain of professional and amateur swimming. We analyzed how visualizations are currently added to the coverage of Olympic swimming competitions and then plan to derive a design space for embedded data representations for swimming competitions. We are currently conducting a crowdsourced survey to explore which kinds of swimming-related data general audiences are interested in, in order to identify opportunities for additional visualizations to be added to swimming competition coverage.
Citations: 0
Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts
Pub Date: 2024-09-12 | arXiv: 2409.08101
Seyda Öney, Moataz Abdelaal, Kuno Kurzhals, Paul Betz, Cordula Kropp, Daniel Weiskopf
Various standardized tests exist that assess individuals' visualization literacy. Their use can help to draw conclusions from studies. However, it is not taken into account that the test itself can create a pressure situation where participants might fear being exposed and assessed negatively. This is especially problematic when testing domain experts in design studies. We conducted interviews with experts from different domains performing the Mini-VLAT test for visualization literacy to identify potential problems. Our participants reported that the time limit per question, ambiguities in the questions and visualizations, and missing steps in the test procedure mainly had an impact on their performance and content. We discuss possible changes to the test design to address these issues and how such assessment methods could be integrated into existing evaluation procedures.
Citations: 0
Visual Compositional Data Analytics for Spatial Transcriptomics
Pub Date: 2024-09-11 | arXiv: 2409.07306
David Hägele, Yuxuan Tang, Daniel Weiskopf
For the Bio+Med-Vis Challenge 2024, we propose a visual analytics system as a redesign for the scatter pie chart visualization of cell type proportions of spatial transcriptomics data. Our design uses three linked views: a view of the histological image of the tissue, a stacked bar chart showing cell type proportions of the spots, and a scatter plot showing a dimensionality reduction of the multivariate proportions. Furthermore, we apply a compositional data analysis framework, the Aitchison geometry, to the proportions for dimensionality reduction and k-means clustering. Leveraging brushing and linking, the system allows one to explore and uncover patterns in the cell type mixtures and relate them to their spatial locations on the cellular tissue. This redesign shifts the pattern recognition workload from the human visual system to computational methods commonly used in visual analytics. We provide the code and setup instructions of our visual analytics system on GitHub (https://github.com/UniStuttgart-VISUS/va-for-spatial-transcriptomics).
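The Aitchison geometry treats compositions (non-negative parts summing to one) through log-ratio transforms, after which standard Euclidean tools apply. A minimal sketch of the pipeline the abstract names (centered log-ratio transform, then dimensionality reduction and k-means), with invented cell-type proportions standing in for the real spot data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical cell-type proportions for four spots (rows sum to 1).
proportions = np.array([
    [0.60, 0.25, 0.15],
    [0.10, 0.70, 0.20],
    [0.33, 0.33, 0.34],
    [0.55, 0.30, 0.15],
])

def clr(x, eps=1e-9):
    """Centered log-ratio transform: maps compositions from the simplex
    into the Euclidean space of the Aitchison geometry."""
    logx = np.log(x + eps)  # eps guards against zero proportions
    return logx - logx.mean(axis=1, keepdims=True)

z = clr(proportions)
embedding = PCA(n_components=2).fit_transform(z)  # dimensionality reduction
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
print(embedding)
print(labels)
```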
Citations: 0
Trust Dynamics in Human-Autonomy Interaction: Uncover Associations between Trust Dynamics and Personal Characteristics
Pub Date: 2024-09-11 | arXiv: 2409.07406
Hyesun Chung, X. Jessie Yang
While personal characteristics influence people's snapshot trust towards autonomous systems, their relationships with trust dynamics remain poorly understood. We conducted a human-subject experiment with 130 participants performing a simulated surveillance task aided by an automated threat detector. A comprehensive pre-experimental survey collected data on participants' personal characteristics across 12 constructs and 28 dimensions. Based on data collected in the experiment, we clustered participants' trust dynamics into three types and assessed differences among the three clusters in terms of personal characteristics, behaviors, performance, and post-experiment ratings. Participants were clustered into three groups, namely Bayesian decision makers, disbelievers, and oscillators. Results showed that the clusters differ significantly in seven personal characteristics: masculinity, positive affect, extraversion, neuroticism, intellect, performance expectancy, and high expectations. The disbelievers tend to have high neuroticism and low performance expectancy. The oscillators tend to have higher scores in masculinity, positive affect, extraversion, and intellect. We also found significant differences in the behaviors and post-experiment ratings among the three groups. The disbelievers are the least likely to blindly follow the recommendations made by the automated threat detector. Based on the significant personal characteristics, we developed a decision tree model to predict cluster types with an accuracy of 70%.
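The final modeling step, predicting a participant's trust-dynamics cluster from personal characteristics with a decision tree, could look roughly like the sketch below. The seven feature names come from the abstract; the scores and cluster labels are random stand-ins, so the printed accuracy will not reproduce the reported 70%:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# The seven characteristics the study found significant; the data below
# are randomly generated placeholders for the actual survey responses.
features = ["masculinity", "positive_affect", "extraversion", "neuroticism",
            "intellect", "performance_expectancy", "high_expectations"]
rng = np.random.default_rng(0)
X = rng.normal(size=(130, len(features)))  # 130 participants
y = rng.choice(["bayesian", "disbeliever", "oscillator"], size=130)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```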
Citations: 0
Situated Visualization in Motion
Pub Date: 2024-09-11 | arXiv: 2409.07005
Lijie Yao, Anastasia Bezerianos, Petra Isenberg
We contribute a first design space on visualizations in motion and the design of a pilot study we plan to run in the fall. Visualizations can be useful in contexts where either the observer is in motion or the whole visualization is moving at various speeds. Imagine, for example, displays attached to an athlete or animal that show data about the wearer -- for example, captured from a fitness tracking band; or a visualization attached to a moving object such as a vehicle or a soccer ball. The ultimate goal of our research is to inform the design of visualizations under motion.
Citations: 0
"My Grade is Wrong!": A Contestable AI Framework for Interactive Feedback in Evaluating Student Essays "我的分数错了!":评价学生作文的交互式反馈的可竞争人工智能框架
Pub Date: 2024-09-11 | arXiv: 2409.07453
Shengxin Hong, Chang Cai, Sixuan Du, Haiyue Feng, Siyuan Liu, Xiuyi Fan
Interactive feedback, where feedback flows in both directions between teacher and student, is more effective than traditional one-way feedback. However, it is often too time-consuming for widespread use in educational practice. While Large Language Models (LLMs) have potential for automating feedback, they struggle with reasoning and interaction in an interactive setting. This paper introduces CAELF, a Contestable AI Empowered LLM Framework for automating interactive feedback. CAELF allows students to query, challenge, and clarify their feedback by integrating a multi-agent system with computational argumentation. Essays are first assessed by multiple Teaching-Assistant Agents (TA Agents), and then a Teacher Agent aggregates the evaluations through formal reasoning to generate feedback and grades. Students can further engage with the feedback to refine their understanding. A case study on 500 critical thinking essays with user studies demonstrates that CAELF significantly improves interactive feedback, enhancing the reasoning and interaction capabilities of LLMs. This approach offers a promising solution to overcoming the time and resource barriers that have limited the adoption of interactive feedback in educational settings.
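The abstract outlines a two-stage agent architecture: several TA Agents assess an essay, and a Teacher Agent aggregates their evaluations. A minimal structural sketch follows; the grading stub and the median aggregation are placeholders, since the abstract does not detail CAELF's computational argumentation:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Assessment:
    agent: str
    grade: float      # e.g., on a 0-100 scale (assumed)
    rationale: str

def ta_agent(name: str, essay: str) -> Assessment:
    # Placeholder: in CAELF each Teaching-Assistant Agent would call an
    # LLM to assess the essay; here the call is stubbed with a constant.
    return Assessment(agent=name, grade=75.0, rationale="stub rationale")

def teacher_agent(assessments: list[Assessment]) -> float:
    # Stand-in for the Teacher Agent's formal reasoning: aggregate the
    # TA grades (a median keeps a single outlier agent from dominating).
    return median(a.grade for a in assessments)

essay = "Text of a critical thinking essay ..."
tas = [ta_agent(f"TA-{i}", essay) for i in range(3)]
print("final grade:", teacher_agent(tas))
```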
Citations: 0
Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in Light of Advanced AI
Pub Date: 2024-09-11 | arXiv: 2409.07250
Daniel Weiskopf
This paper revisits the role of quantitative and qualitative methods in visualization research in the context of advancements in artificial intelligence (AI). The focus is on how we can bridge between the different methods in an integrated process of analyzing user study data. To this end, a process model of (potentially iterated) semantic enrichment and transformation of data is proposed. This joint perspective of data and semantics facilitates the integration of quantitative and qualitative methods. The model is motivated by examples of our own prior work, especially in the area of eye tracking user studies and coding data-rich observations. Finally, there is a discussion of open issues and research opportunities in the interplay between AI, human analysts, and qualitative and quantitative methods for visualization research.
Citations: 0
Awaking the Slides: A Tuning-free and Knowledge-regulated AI Tutoring System via Language Model Coordination
Pub Date: 2024-09-11 | arXiv: 2409.07372
Daniel Zhang-Li, Zheyuan Zhang, Jifan Yu, Joy Lim Jia Yin, Shangqing Tu, Linlu Gong, Haohua Wang, Zhiyuan Liu, Huiqin Liu, Lei Hou, Juanzi Li
Vast numbers of pre-existing slides serve as rich and important materials to carry lecture knowledge. However, effectively leveraging lecture slides to serve students is difficult due to the multi-modal nature of slide content and the heterogeneous teaching actions. We study the problem of discovering effective designs that convert a slide into an interactive lecture. We develop Slide2Lecture, a tuning-free and knowledge-regulated intelligent tutoring system that can (1) effectively convert an input lecture slide into a structured teaching agenda consisting of a set of heterogeneous teaching actions; (2) create and manage an interactive lecture that generates responsive interactions catering to student learning demands while regulating the interactions to follow the teaching actions. Slide2Lecture contains a complete pipeline for learners to obtain an interactive classroom experience to learn the slide. For teachers and developers, Slide2Lecture enables customization to cater to personalized demands. The evaluation rated by annotators and students shows that Slide2Lecture is effective in outperforming the remaining implementations. Slide2Lecture's online deployment has produced more than 200K interactions with students across 3K lecture sessions. We open source Slide2Lecture's implementation at https://anonymous.4open.science/r/slide2lecture-4210/.
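What a "structured teaching agenda consisting of a set of heterogeneous teaching actions" might look like as a data structure is sketched below; the action kinds, fields, and scheduling stub are assumptions for illustration, not Slide2Lecture's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class TeachingAction:
    kind: str          # assumed action types: "explain", "quiz", "discuss"
    slide_index: int
    content: str

@dataclass
class TeachingAgenda:
    lecture_title: str
    actions: list[TeachingAction] = field(default_factory=list)

    def next_action(self) -> TeachingAction:
        # Stub scheduler: a real system would adapt responses to student
        # queries while regulating the interaction to follow the agenda.
        return self.actions.pop(0)

agenda = TeachingAgenda(
    lecture_title="Intro to HCI",
    actions=[
        TeachingAction("explain", 0, "Welcome and course overview"),
        TeachingAction("quiz", 1, "What does HCI stand for?"),
    ],
)
print(agenda.next_action().content)
```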
Citations: 0