
Visual Informatics: Latest Publications

FundSelector: A visual analysis system for mutual fund selection
IF 3.8 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-29 | DOI: 10.1016/j.visinf.2025.100258
Fan Yan , Yong Wang , Xuanwu Yue , Kam-Kwai Wong , Ketian Mao , Rong Zhang , Huamin Qu , Haiyang Zhu , Minfeng Zhu , Wei Chen
Mutual funds are one of the most important and popular investment vehicles for ordinary investors to maintain and increase the value of their assets. However, it is challenging for ordinary investors to select optimal mutual funds from thousands of fund choices managed by different managers. Investors often have different personal investment preferences, which are difficult to characterize quickly. Also, mutual fund performance depends on various factors (e.g., market conditions and the decisions of fund managers), and most of these factors change dynamically, making it difficult to compare different mutual funds in detail efficiently. To address these challenges, we propose FundSelector, an interactive multi-view visual analytics system that quantifies user preferences to rank mutual funds and allows ordinary investors to explore mutual fund performance across multiple factors and scales. Two novel visual designs are proposed to enable detailed comparisons of mutual funds. The rank-informed bipartite contribution bar chart provides interpretable fund ranking results by explicitly showing both positive and negative factors. The elastic trend chart allows investors to analyze and compare the temporal evolution of mutual fund performance in a customizable way. We evaluated FundSelector through two case studies and interviews with eight ordinary investors. The results highlight its effectiveness and utility.
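The abstract does not give the exact scoring formula, but a preference-weighted linear score with per-factor contributions is one natural reading of how the ranking and the bipartite (positive/negative) contribution view could fit together. The sketch below is a minimal illustration under that assumption; the factor names, values, and weights are hypothetical.

```python
import numpy as np

def rank_funds(factor_matrix, preference_weights):
    """Score funds by preference-weighted factors and keep the per-factor
    contributions, split into positive and negative parts, so the ranking
    remains interpretable (in the spirit of the bipartite contribution chart)."""
    contributions = factor_matrix * preference_weights   # (n_funds, n_factors)
    scores = contributions.sum(axis=1)                    # one score per fund
    order = np.argsort(-scores)                           # best fund first
    positive = np.clip(contributions, 0, None)            # factors pushing a fund up
    negative = np.clip(contributions, None, 0)            # factors pulling a fund down
    return order, scores, positive, negative

# Hypothetical example: three funds scored on return, volatility, and manager tenure.
funds = np.array([[0.8, -0.2, 0.5],
                  [0.3,  0.1, 0.9],
                  [0.6, -0.7, 0.2]])
weights = np.array([0.5, 0.3, 0.2])   # a user who mostly cares about return
order, scores, pos, neg = rank_funds(funds, weights)
print(order, scores.round(3))
```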
Citations: 0
Visual analysis of LLM-based entity resolution from scientific papers
IF 3.8 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-06-01 | DOI: 10.1016/j.visinf.2025.100236
Siyu Wu , Yi Yang , Weize Wu , Ruiming Li , Yuyang Zhang , Ge Wang , Huobin Tan , Zipeng Liu , Lei Shi
This paper focuses on visual analytics support for extracting domain-specific entities from extensive scientific literature, a task for which traditional named entity resolution methods have inherent limitations. With the advent of large language models (LLMs) such as GPT-4, significant improvements over conventional machine learning approaches have been achieved, because LLMs can integrate abilities such as understanding multiple types of text into entity resolution. This research introduces a new visual analysis pipeline that integrates these advanced LLMs with versatile visualization and interaction designs to support batch entity resolution. Specifically, we focus on the materials science field of Metal-Organic Frameworks (MOFs) and a large data collection, CSD-MOFs. Through collaboration with domain experts in materials science, we obtain well-labeled synthesis paragraphs. We propose human-in-the-loop refinement of the entity resolution process using visual analytics techniques, which allows domain experts to interactively integrate their insights with LLM intelligence, including error analysis and interpretation of the retrieval-augmented generation (RAG) algorithm. Our evaluation, through a case study of example selection for RAG, demonstrates that this visual analysis approach effectively improves the accuracy of single-document entity resolution.
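As a rough illustration of the RAG example-selection step that the evaluation focuses on, the sketch below retrieves the labeled synthesis paragraphs most similar to a query paragraph and assembles them into a few-shot extraction prompt. The embeddings, prompt wording, example data, and call_llm stub are assumptions, not the paper's implementation.

```python
import numpy as np

def retrieve_examples(query_vec, example_vecs, examples, k=3):
    """Pick the k labeled paragraphs most similar to the query paragraph
    (cosine similarity) to serve as in-context examples."""
    sims = example_vecs @ query_vec / (
        np.linalg.norm(example_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    top = np.argsort(-sims)[:k]
    return [examples[i] for i in top]

def build_prompt(paragraph, retrieved):
    """Assemble a few-shot entity-extraction prompt from the retrieved examples."""
    shots = "\n\n".join(
        f"Paragraph: {ex['text']}\nEntities: {ex['entities']}" for ex in retrieved)
    return (f"Extract the MOF synthesis entities from the paragraph.\n\n"
            f"{shots}\n\nParagraph: {paragraph}\nEntities:")

def call_llm(prompt):
    # Placeholder: plug in whichever LLM client is actually available.
    raise NotImplementedError

# Hypothetical labeled paragraphs and toy 2-d embeddings.
examples = [
    {"text": "ZIF-8 was synthesized from zinc nitrate and 2-methylimidazole ...",
     "entities": "metal: Zn; linker: 2-methylimidazole"},
    {"text": "MOF-5 crystals formed in DMF at 120 C ...",
     "entities": "metal: Zn; solvent: DMF; temperature: 120 C"},
]
vecs = np.array([[0.9, 0.1], [0.2, 0.8]])   # hypothetical paragraph embeddings
query = np.array([0.85, 0.15])               # embedding of the new paragraph
picked = retrieve_examples(query, vecs, examples, k=1)
print(build_prompt("Cu-BTC was prepared from copper nitrate ...", picked))
```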
Citations: 0
YOLO-SAATD: An efficient SAR airport and aircraft target detector
IF 3.8 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-04-16 | DOI: 10.1016/j.visinf.2025.100240
Daobin Ma , Zhanhong Lu , Zixuan Dai , Yangyue Wei , Li Yang , Haimiao Hu , Wenqiao Zhang , Dongping Zhang
While object detection performs well on natural images, detecting airports and aircraft in Synthetic Aperture Radar (SAR) images remains challenging due to discrete scattering points, complex backgrounds, and multi-scale targets. Existing methods struggle with computational inefficiency, omission of small targets, and low accuracy. We propose a SAR airport and aircraft target detection model based on YOLO, named YOLO-SAATD (You Only Look Once-SAR Airport and Aircraft Target Detector), which tackles these challenges from three perspectives. 1. Efficiency: a lightweight hierarchical multi-scale backbone reduces parameters and enhances detection speed. 2. Fine granularity: a "ScaleNimble Neck" integrates feature reshaping and scale-aware aggregation to enhance detail detection and feature capture in multi-scale SAR images. 3. Precision: the Wise-IoU loss function is used to optimize bounding box localization and enhance detection accuracy. Experiments on the SAR-Airport-1.0 and SAR-AirCraft-1.0 datasets show that, compared to YOLOv8n, YOLO-SAATD improves precision and mAP50 by 1%-2%, increases detection frame rate by 15%, and reduces model parameters by 25%, validating its effectiveness for SAR airport and aircraft target detection.
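Wise-IoU is an established bounding-box regression loss from the detection literature; the NumPy sketch below follows the general shape of its v1 form (an IoU loss scaled by a centre-distance factor normalised by the smallest enclosing box), simplified for illustration. A real detector computes this on tensors and detaches the enclosing-box term from the gradient; the boxes below are hypothetical.

```python
import numpy as np

def wise_iou_v1(pred, target, eps=1e-9):
    """Simplified Wise-IoU-v1-style loss for one pair of boxes (x1, y1, x2, y2)."""
    # Intersection over union
    ix1, iy1 = max(pred[0], target[0]), max(pred[1], target[1])
    ix2, iy2 = min(pred[2], target[2]), min(pred[3], target[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_t = (target[2] - target[0]) * (target[3] - target[1])
    iou = inter / (area_p + area_t - inter + eps)

    # Centre distance, normalised by the smallest enclosing box
    cx_p, cy_p = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cx_t, cy_t = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    wg = max(pred[2], target[2]) - min(pred[0], target[0])
    hg = max(pred[3], target[3]) - min(pred[1], target[1])
    focus = np.exp(((cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2) / (wg ** 2 + hg ** 2 + eps))

    return focus * (1.0 - iou)

print(wise_iou_v1((10, 10, 50, 50), (12, 8, 52, 48)))
```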
Citations: 0
Photogrammetry engaged automated image labeling approach
IF 3.8 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-04-09 | DOI: 10.1016/j.visinf.2025.100239
Jonathan Boyack , Jongseong Brad Choi
Deep learning models require many training instances to accurately detect the desired object. However, images are currently labeled manually because the original images include irrelevant scenes, especially for data collected in dynamic environments such as drone imagery. In this work, we developed an automated extraction of training datasets using photogrammetry. This approach works with continuous and arbitrary collections of visual data, such as video, encompassing a stationary object. A dense point cloud is first generated using a structure-from-motion (SfM) technique to estimate the geometric relationship between individual images; user-designated regions of interest (ROIs) are then automatically extracted from the original images. An orthophoto mosaic of the façade plane of the building shown in the point cloud is created to ease the user’s selection of the intended labeling region of the object, a one-time process. We verified this method by using the ROIs extracted from a previously obtained dataset to train and test a convolutional neural network modeled to detect damage locations. The method put forward in this work allows a relatively small amount of labeling to generate a large amount of training data. We successfully demonstrate the capabilities of the technique with a dataset previously collected by a drone from an abandoned building in which many of the glass windows had been damaged.
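The core geometric step (propagating a region selected on the façade plane back into every source image) can be sketched as projecting the ROI's 3D corners through each SfM camera pose with a pinhole model. The intrinsics and pose below are hypothetical placeholders; the paper's pipeline additionally builds the orthophoto mosaic and works from real SfM output.

```python
import numpy as np

def project_roi(roi_corners_3d, K, R, t):
    """Project the 3D corners of a facade ROI into one source image
    using its SfM camera pose (pinhole model: x ~ K [R | t] X)."""
    X = np.asarray(roi_corners_3d, dtype=float)   # (4, 3) world coordinates
    cam = R @ X.T + t.reshape(3, 1)                # world -> camera frame
    pix = K @ cam                                  # camera -> homogeneous pixels
    return (pix[:2] / pix[2]).T                    # (4, 2) pixel coordinates

# Hypothetical intrinsics and pose for one drone frame.
K = np.array([[2400.0, 0.0, 960.0],
              [0.0, 2400.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
roi = [[-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0]]   # a window on the facade plane z = 0
print(project_roi(roi, K, R, t))
```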
Citations: 0
Generative object insertion in Gaussian splatting with a multi-view diffusion model
IF 3.8 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-04-08 | DOI: 10.1016/j.visinf.2025.100238
Hongliang Zhong, Can Wang, Jingbo Zhang, Jing Liao
Generating and inserting new objects into 3D content is a compelling approach for achieving versatile scene recreation. Existing methods, which rely on SDS optimization or single-view inpainting, often struggle to produce high-quality results. To address this, we propose a novel method for object insertion in 3D content represented by Gaussian Splatting. Our approach introduces a multi-view diffusion model, dubbed MVInpainter, which is built upon a pre-trained stable video diffusion model to facilitate view-consistent object inpainting. Within MVInpainter, we incorporate a ControlNet-based conditional injection module to enable controlled and more predictable multi-view generation. After generating the multi-view inpainted results, we further propose a mask-aware 3D reconstruction technique to refine the Gaussian Splatting reconstruction from these sparse inpainted views. By leveraging these techniques, our approach yields diverse results, ensures view-consistent and harmonious insertions, and produces better object quality. Extensive experiments demonstrate that our approach outperforms existing methods.
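The abstract does not spell out the mask-aware reconstruction objective, so the sketch below is only one plausible reading: inside the inpainting mask the render is supervised by the multi-view inpainted images, and outside it by the original captured views, so inpainting artifacts do not degrade the untouched scene. The function, weights, and toy data are all assumptions, not the paper's formulation.

```python
import numpy as np

def mask_aware_l1(rendered, inpainted_view, original_view, mask, w_edit=1.0, w_keep=1.0):
    """Hypothetical mask-aware L1 photometric loss for one training view.
    rendered, inpainted_view, original_view: (H, W, 3) images; mask: (H, W) in {0, 1}."""
    m = mask.astype(float)[..., None]
    edit_term = np.abs(rendered - inpainted_view) * m          # supervise the edited region
    keep_term = np.abs(rendered - original_view) * (1.0 - m)   # preserve the rest of the scene
    return w_edit * edit_term.mean() + w_keep * keep_term.mean()

# Toy check with random images.
H, W = 8, 8
rng = np.random.default_rng(0)
rendered = rng.random((H, W, 3))
inpainted = rng.random((H, W, 3))
original = rng.random((H, W, 3))
mask = np.zeros((H, W))
mask[2:5, 2:5] = 1
print(mask_aware_l1(rendered, inpainted, original, mask))
```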
Citations: 0
Visual analysis of multi-subject association patterns in high-dimensional time-varying student performance data
IF 3.8 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-04-07 | DOI: 10.1016/j.visinf.2025.100237
Lianen Ji, Ziyi Wang, Shirong Qiu, Guang Yang, Sufang Zhang
Exploring the association patterns in student performance in depth can help administrators and teachers optimize the curriculum structure and teaching plans more precisely and thereby improve teaching effectiveness in an undergraduate major. However, such high-dimensional time-varying student performance data involve multiple associated subjects, such as students, courses, and teachers, which exhibit complex interrelationships across academic semesters, knowledge categories, and student groups. This makes it challenging to conduct a comprehensive analysis of association patterns. To this end, we construct a visual analysis framework, called MAPVis, to support multi-method and multi-level interactive exploration of the association patterns in student performance. MAPVis consists of two stages: in the first stage, we extract students’ learning patterns and further introduce mutual information to explore the distribution of learning patterns; in the second stage, various learning patterns and subject attributes are integrated based on a hierarchical apriori algorithm to achieve a multi-subject interactive exploration of the association patterns among students, courses, and teachers. Finally, we conduct a case study using real student performance data to verify the applicability and effectiveness of MAPVis.
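To make the association-mining stage concrete, here is a deliberately brute-force, itemset-level sketch of apriori-style support counting over student records. The real system uses a hierarchical apriori algorithm and richer subject attributes; the record values below are hypothetical.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support=0.3, max_size=3):
    """Count co-occurring attributes (e.g. a learning pattern, a course category,
    an outcome band) and keep the itemsets whose support passes the threshold.
    A real apriori implementation prunes candidates level by level instead of
    enumerating every combination."""
    n = len(transactions)
    results = {}
    for size in range(1, max_size + 1):
        counts = Counter()
        for items in transactions:
            for combo in combinations(sorted(items), size):
                counts[combo] += 1
        for combo, c in counts.items():
            if c / n >= min_support:
                results[combo] = c / n
    return results

# Hypothetical student records: {learning pattern, course category, outcome}.
records = [
    {"steady", "math", "high"},
    {"steady", "math", "high"},
    {"cramming", "programming", "mid"},
    {"steady", "programming", "high"},
]
print(frequent_itemsets(records))
```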
Citations: 0
VisMocap: Interactive visualization and analysis for multi-source motion capture data
IF 3.8 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-25 | DOI: 10.1016/j.visinf.2025.100235
Lishuang Zhan , Rongting Li , Rui Cao , Juncong Lin , Shihui Guo
With the rapid advancement of artificial intelligence, research on enabling computers to assist humans in achieving intelligent augmentation—thereby enhancing the accuracy and efficiency of information perception and processing—has been steadily evolving. Among these developments, innovations in human motion capture technology have been emerging rapidly, leading to an increasing diversity in motion capture data types. This diversity necessitates the establishment of a unified standard for multi-source data to facilitate effective analysis and comparison of their capability to represent human motion. Additionally, motion capture data often suffer from significant noise, acquisition delays, and asynchrony, making their effective processing and visualization a critical challenge. In this paper, we utilized data collected from a prototype of flexible fabric-based motion capture clothing and optical motion capture devices as inputs. Time synchronization and error analysis between the two data types were conducted, individual actions from continuous motion sequences were segmented, and the processed results were presented through a concise and intuitive visualization interface. Finally, we evaluated various system metrics, including the accuracy of time synchronization, data fitting error from fabric resistance to joint angles, precision of motion segmentation, and user feedback.
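A common way to estimate the offset between two capture streams is to cross-correlate a shared quantity (for example, the same joint angle seen by the fabric sensors and by the optical system) and take the lag at the correlation peak. The sketch below illustrates that idea on synthetic signals and is an assumption about the general approach, not the paper's exact synchronization procedure.

```python
import numpy as np

def estimate_lag(signal_a, signal_b):
    """Return the integer lag (in samples) that best aligns signal_a with signal_b,
    found at the peak of their normalised cross-correlation."""
    a = (signal_a - signal_a.mean()) / (signal_a.std() + 1e-9)
    b = (signal_b - signal_b.mean()) / (signal_b.std() + 1e-9)
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr) - (len(b) - 1))   # positive lag: a trails b

# Synthetic check: b is the same bump shifted 15 samples earlier, so the lag is 15.
t = np.linspace(0, 10, 500)
a = np.exp(-((t - 4.0) ** 2) / 0.5)
b = np.roll(a, -15)
print(estimate_lag(a, b))
```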
Citations: 0
Contextualized visual analytics for multivariate events
IF 3.8 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-21 | DOI: 10.1016/j.visinf.2025.100234
Lei Peng , Ziyue Lin , Natalia Andrienko , Gennady Andrienko , Siming Chen
For event analysis, the information from both before and after the event can be crucial in certain scenarios. By incorporating a contextualized perspective in event analysis, analysts can gain deeper insights from the events. We propose a contextualized visual analysis framework which enables the identification and interpretation of temporal patterns within and across multivariate events. The framework consists of a design of visual representation for multivariate event contexts, a data processing workflow to support the visualization, and a context-centered visual analysis system to facilitate the interactive exploration of temporal patterns. To demonstrate the applicability and effectiveness of our framework, we present case studies using real-world datasets from two different domains and an expert study conducted with experienced data analysts.
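A minimal version of the context-extraction step in such a workflow is slicing the multivariate series into per-event windows that keep measurements both before and after each event. The data, column names, and window sizes below are hypothetical, intended only to show the shape of the processing.

```python
import pandas as pd

def event_contexts(series, events, before="2h", after="2h"):
    """Slice a multivariate time series into per-event context windows, keeping
    measurements on both sides of each event so pre- and post-event patterns
    can be compared."""
    before, after = pd.Timedelta(before), pd.Timedelta(after)
    contexts = {}
    for eid, ts in events.items():
        window = series.loc[ts - before: ts + after].copy()
        window["offset_min"] = (window.index - ts).total_seconds() / 60.0
        contexts[eid] = window
    return contexts

# Hypothetical sensor readings and two events of interest.
idx = pd.date_range("2024-01-01", periods=24 * 60, freq="min")
series = pd.DataFrame({"temperature": range(len(idx)), "load": range(len(idx))}, index=idx)
events = {"e1": pd.Timestamp("2024-01-01 06:00"), "e2": pd.Timestamp("2024-01-01 18:00")}
windows = event_contexts(series, events)
print(windows["e1"].shape)
```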
Citations: 0
CodeLin: An in situ visualization method for understanding data transformation scripts
IF 3.8 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-19 | DOI: 10.1016/j.visinf.2025.03.002
Xiwen Cai , Kai Xiong , Zhongsu Luo , Di Weng , Shuainan Ye , Yingcai Wu
Understanding data transformation scripts is an essential task for data analysts who write code to process data. However, this can be challenging, especially when encountering unfamiliar scripts. Comments can help users understand data transformation code, but well-written comments are not always present. Visualization methods have been proposed to help analysts understand data transformations, but they generally require a separate view, which may distract users and require extra effort to connect visualizations and code. In this work, we explore the use of in situ program visualization to help data analysts understand data transformation scripts. We present CodeLin, a new visualization method that combines word-sized glyphs for presenting transformation semantics with a lineage graph for presenting data lineage in an in situ manner. Through a use case, code pattern demonstrations, and a preliminary user study, we demonstrate the effectiveness and usability of CodeLin. We further discuss how visualization can help users understand data transformation code.
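CodeLin's lineage view works at a much finer grain, but the basic idea of recovering data lineage from a transformation script can be sketched with Python's ast module: for every assignment, draw an edge from each variable read on the right-hand side to the variable written on the left. The example script and the variable-level granularity are assumptions for illustration.

```python
import ast

def lineage_edges(script):
    """Build a coarse data-lineage graph from a transformation script: one edge
    per (source variable, assigned variable) pair. Real lineage extraction would
    track columns and operations, not just variable names."""
    tree = ast.parse(script)
    edges = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
            target = node.targets[0].id
            sources = {n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)}
            edges += [(src, target) for src in sorted(sources)]
    return edges

# Hypothetical transformation script.
script = """
clean = raw.dropna()
merged = clean.merge(prices, on="fund_id")
summary = merged.groupby("sector").mean()
"""
print(lineage_edges(script))
```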
Citations: 0
A human-centric perspective on interpretability in large language models
IF 3.8 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-01 | DOI: 10.1016/j.visinf.2025.03.001
Zihan Zhou, Minfeng Zhu, Wei Chen
Citations: 0