
Latest Publications in Visual Informatics

From perception to reflection: A layered framework for aesthetic education in the digital design of ancient painting
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-01 | DOI: 10.1016/j.visinf.2025.100290
Xiaojiao Chen, Wenru Qi, Yulian Yang, Xiaosong Wang, Wei Chen
{"title":"From perception to reflection: A layered framework for aesthetic education in the digital design of ancient painting","authors":"Xiaojiao Chen, Wenru Qi, Yulian Yang, Xiaosong Wang, Wei Chen","doi":"10.1016/j.visinf.2025.100290","DOIUrl":"10.1016/j.visinf.2025.100290","url":null,"abstract":"","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 4","pages":"Article 100290"},"PeriodicalIF":3.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145736222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unified 3D Gaussian splatting for motion and defocus blur reconstruction
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-01 | DOI: 10.1016/j.visinf.2025.100270
Li Liu, Jing Duan, Xiaodong Fu, Wei Peng, Lijun Liu
This paper proposes a unified 3D Gaussian splatting framework consisting of three key components for motion and defocus blur reconstruction. First, a dual-blur perception module is designed to generate pixel-wise masks and predict the types of motion and defocus blur, guiding structural feature extraction. Second, a blur-aware Gaussian splatting integrates blur-aware features into the splatting process for accurate modeling of the global and local scene structure. Third, an Unoptimized Gaussian Ratio (UGR)-opacity joint optimization strategy is proposed to refine under-optimized regions, improving reconstruction accuracy under complex blur conditions. Experiments on a newly constructed motion and defocus blur dataset demonstrate the effectiveness of the proposed method for novel view synthesis. Compared with state-of-the-art methods, our framework achieves improvements of 0.28 dB, 2.46% and 39.88% on PSNR, SSIM, and LPIPS, respectively. For deblurring tasks, it achieves improvements of 0.36 dB, 3.24% and 28.96% on the same metrics. These results highlight the robustness and effectiveness of this approach. Additional visual results and video renderings are available on our project webpage: https://sunbeam-217.github.io/Dual-blur-reconstruction/.
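As a rough illustration of the reported numbers (not the authors' evaluation code), the sketch below shows how PSNR and SSIM, two of the metrics cited above, are typically computed for a rendered view against a reference view; LPIPS additionally requires the learned `lpips` network and is omitted. The function names and the [0, 1] image range are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(rendered: np.ndarray, reference: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between a rendered and a reference view in [0, 1]."""
    mse = np.mean((rendered - reference) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

def ssim(rendered: np.ndarray, reference: np.ndarray) -> float:
    """Structural similarity index over (H, W, 3) images in [0, 1]."""
    return float(structural_similarity(rendered, reference, channel_axis=-1, data_range=1.0))

# A 0.28 dB PSNR gain corresponds to the mean squared error shrinking by a
# factor of 10 ** 0.028, i.e. roughly 7%.
```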
Citations: 0
A methodological approach towards human-centered visual analytics
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-01 | DOI: 10.1016/j.visinf.2025.100269
Emmanouil Adamakis, George Margetis, Stavroula Ntoa, Constantine Stephanidis
Visual analytics focuses on amplifying users’ reasoning and understanding by enhancing data analysis procedures with the efficient incorporation of information visualization and data processing techniques. In this study, we conduct an overview of this multidisciplinary field, focusing on both the process that formalizes its primary concepts and the affiliated research areas. We identify key developments in each area, as well as the challenges that arise when these areas are interconnected under the visual analytics process. We consider that to address the identified challenges, an appropriate representation of key user needs is essential. Therefore, inspired by human-centered design and its principles, we propose a novel methodological approach comprising a human-centered definition of visual analytics that expands on models of the field and quantifies the intermediate states of a data analysis. In addition to the theoretical aspects of the definition, we also provide a set of directions that align the process with technical aspects of the development cycle. In this respect, our research endeavor aims to transform the visual analytics process into an essential method for both conceptualizing data analysis systems capable of anticipating user needs and for streamlining their technical implementation.
Citations: 0
A survey of visual insight mining: Connecting data and insights via visualization
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-01 | DOI: 10.1016/j.visinf.2025.100271
Yijie Lian, Jianing Hao, Wei Zeng, Qiong Luo
Insight mining transforms complex data into actionable knowledge, enabling effective decision-making across diverse domains. Given the richness and interpretative power of visualizations, visual insight mining – the process of extracting meaningful insights from raw data through intuitive visual representations – has become increasingly vital. This survey systematically reviews the current landscape of visual insight mining, addressing the critical questions: “How can visualizations be generated from data?” and “How can insights be extracted from visualizations?”. Specifically, we delve into six distinct tasks (i.e., task decomposition, visualization generation, visualization recommendation, chart parsing, chart question answering, and insight generation) in the process of visual insight mining, and provide a comprehensive analysis of rule-based, learning-based, and large-model-based methods for each task. Based on the survey, we discuss current research challenges and outline future opportunities. By viewing visualization as a bridge in the data-to-insight path, this survey offers a structured foundation for further exploration in visual insight mining.
Citations: 0
Attribute guided adversarial editing for face privacy protection
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-01 | DOI: 10.1016/j.visinf.2025.100267
Yu Xu, Ziang Wang, Fan Tang, Juan Cao, Xirong Li, Jintao Li
Nowadays, the proliferation of portraits or photographs containing human faces on the internet has created significant risks of illegal privacy collection and analysis by intelligent systems. Previous attempts to protect against unauthorized identification by face recognition models have primarily involved manipulating or adding adversarial perturbations to photos. However, it remains a challenge to balance privacy protection effectiveness and maintaining image visual quality. That is, to successfully attack real-world black-box face recognition models, significant manipulation is required for the source image, which will obviously damage the image visual quality. To address these issues, we propose an attribute-guided face identity protection (AG-FIP) approach that can protect facial privacy effectively without introducing meaningless or conspicuous artifacts into the source image. The proposed method involves mapping the images to latent space and subsequently implementing an adversarial attack through attribute editing. An attribute selection module followed by an attribute adversarially editing module is proposed to enhance the efficiency and effectiveness of adversarial attacks. Experimental results demonstrate that our approach outperforms SOTAs in terms of confusing black-box face recognition models, commercial face recognition APIs, and image visual quality.
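For readers unfamiliar with latent-space adversarial editing, the following is a minimal, hypothetical sketch of the general idea only (GAN inversion plus attribute-direction optimization against a surrogate face recognizer); it is not the AG-FIP implementation, and `encoder`, `generator`, `attribute_dirs`, and `face_recognizer` are placeholder callables not taken from the paper.

```python
import torch
import torch.nn.functional as F

def adversarial_attribute_edit(image, encoder, generator, attribute_dirs,
                               face_recognizer, steps=50, lr=0.05, reg=0.1):
    # Map the source image to latent space (GAN inversion).
    w = encoder(image)
    # Identity embedding of the original face, which the edit should move away from.
    target_embed = face_recognizer(image).detach()
    # One editing strength per attribute direction; these are the optimized variables.
    alpha = torch.zeros(attribute_dirs.shape[0], requires_grad=True)
    opt = torch.optim.Adam([alpha], lr=lr)
    for _ in range(steps):
        edited = generator(w + (alpha[:, None] * attribute_dirs).sum(dim=0))
        embed = face_recognizer(edited)
        # Reduce similarity to the original identity while keeping edits small.
        loss = F.cosine_similarity(embed, target_embed, dim=-1).mean() + reg * alpha.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(w + (alpha[:, None] * attribute_dirs).sum(dim=0)).detach()
```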
Citations: 0
VirtuNarrator: Crafting museum narratives via spatial layout in creating customized virtual museums
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-01 | DOI: 10.1016/j.visinf.2025.100257
Yonghao Chen, Tan Tang, Xiaojiao Chen, Yueying Li, Qinghua Liu, Xiaosong Wang
Curation in museums involves not only presenting exhibits for visitors but also deeply shaping a systematic narrative experience through deliberate spatial layout design of the museum space. In contrast, the dynamic nature of virtual reality (VR) environments establishes virtual museums as a more potent space for both layout optimization and narrative construction, particularly when integrating visitors’ diverse preferences to optimize the virtual museum and convey narratives. Therefore, we first collaborated with experienced curators to conduct a formative study to understand the workflow of curation and summarize the museum narratives that weave exhibits, galleries, and museum architecture into a compelling story. We then proposed a museum spatial layout framework that clarified three narrative levels (exhibit level, gallery level, and architecture level) to support the controllable spatial layout of the museum’s elements. Based on that, we developed VirtuNarrator, a proof-of-concept prototype designed to assist visitors in choosing different narrative themes, filtering exhibits, creating and adjusting galleries, and freely connecting them. The evaluation results validated that visitors received a more systematic museum narrative experience and perceptions of multi-perspective narrative design in VirtuNarrator. We also provided insights into VR-based museum narrative enhancement beyond spatial layout design.
Citations: 0
Visualizing game dynamics at a specific time: Influence of the players’ poses for tactical analyses in padel
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-01 | DOI: 10.1016/j.visinf.2025.100256
Mohammadreza Javadiha, Carlos Andujar, Enrique Lacasa, Gota Shirato, Natalia Andrienko, Gennady Andrienko
Tactical elements are crucial in team sports. The analysis of hypothetical game situations greatly benefits from positional diagrams showing where the players are. These diagrams often show the layout of the players through simple symbols, which provide no information about their poses. This paper investigates if the visualization of player poses is beneficial for tactical understanding of positional diagrams in padel. We propose a realistic, cartoon-like representation of the players and discuss its integration into a typical positional diagram. To overcome the cost of generating player representations depicting their pose, we propose a method to generate such representations from minimal user input. We conducted a user study to evaluate the effectiveness of our pose-aware diagrams. The tasks for the study were designed to encompass the main in-game scenarios in padel, which include the ballholder at the net with opponents defending, the reverse situation, and transitions between these two states. We found that our representation is preferred over a symbolic one that only indicates player orientation. The proposed method enables coaches to produce such representations within a matter of seconds, thereby significantly facilitating the creation of detailed and easily analyzable depictions of game situations.
Citations: 0
PVeSight: Dimensionality reduction-based anomaly detection and visual analysis of photovoltaic strings
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-01 | DOI: 10.1016/j.visinf.2025.100243
Yurun Yang, Xinjing Yi, Yingqiang Jin, Sen Li, Kang Ma, Shuhan Liu, Dazhen Deng, Di Weng, Yingcai Wu
Efficient and accurate detection of anomalies in photovoltaic (PV) strings is essential for ensuring the normal operation of PV power stations. Most existing studies focus on developing automated anomaly detection models based on temporal abnormalities in PV strings. However, since analyzing anomalies often requires domain knowledge, existing automated methods have significant limitations in assisting experts to understand the causes and impact of these anomalies. In close collaboration with domain experts, this work has summarized the specific user requirements for PV string anomaly detection and designed PVeSight, an interactive visual analysis system to help experts discover and analyze anomalies in PV strings. We use dimensionality reduction techniques to generate string pattern map. These maps are used for anomaly detection, classifying anomalies, comparative analysis between strings, and hierarchical analysis under inverters and combiner boxes. This helps experts trace the causes of anomalies and acquire valuable insights into anomalous PV strings. Through case studies and expert evaluation, we verified the usability and effectiveness of PVeSight for PV string anomaly detection.
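As a hedged illustration of the dimensionality-reduction idea only (PVeSight's actual pipeline and interactive views are richer), the sketch below projects hypothetical per-string power curves into a 2D pattern map with PCA and flags strings with large reconstruction error; `string_curves` and the 3-sigma rule are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def pv_string_pattern_map(string_curves: np.ndarray, n_components: int = 2):
    """string_curves: hypothetical (n_strings, n_samples) array of per-string power output."""
    pca = PCA(n_components=n_components)
    coords = pca.fit_transform(string_curves)        # 2D coordinates for the pattern map
    reconstructed = pca.inverse_transform(coords)
    errors = np.linalg.norm(string_curves - reconstructed, axis=1)
    threshold = errors.mean() + 3 * errors.std()     # simple 3-sigma anomaly rule (an assumption)
    anomalous = np.where(errors > threshold)[0]
    return coords, errors, anomalous
```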
Citations: 0
STEP-LINK: STEP-by-Step Tutorial Editing with Programmable LINKages
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-01 | DOI: 10.1016/j.visinf.2025.100244
Te Li, Junming Ke, Zhen Wen, Yuchen Wu, Junhua Lu, Biao Zhu, Minfeng Zhu, Wei Chen
Programming tutorials serve a crucial role in teaching coding and programming techniques. Creating high-quality programming tutorials remains a laborious task. Authors devote effort to writing step-by-step solutions, creating examples, and editing existing tutorials. We explore the potential of using the text-code connection to improve the authoring experience of programming tutorials. We proposed a mixed-initiative approach to infer, establish, and maintain the latent text-code connections. With a series of interactions, the STEP-LINK (STEP-by-Step Tutorial Editing with Programmable LINKages) prototype leverages text-code connections to assist users in authoring tutorials. The results of our experiment demonstrate the effectiveness of our system in supporting users in the authoring of step-by-step code explanations, the creation of examples, and the iteration of tutorials.
Citations: 0
Dye advection without the blur: ML-based flow visualization
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-01 | DOI: 10.1016/j.visinf.2025.100242
Sebastian Künzel, Daniel Weiskopf
Semi-Lagrangian texture advection (SLTA) enables efficient visualization of 2D and 3D unsteady flow. The major drawback of SLTA-based visualizations is numerical diffusion caused by iterative texture interpolation. We focus on reducing numerical diffusion in techniques that use textures sparsely populated by solid blobs, such as typically in dye advection. A ReLU-based model architecture is the foundation of our ML-based approach. Multiple model configurations are trained to learn a performant interpolation model that reduces numerical diffusion. Our evaluation investigates the models’ ability to generalize concerning the flow and length of the advection process. The model with the best tradeoff between the computational effort to compute, quality of the result, and generality of application is found to be single-layer ReLU-based. This model is further analyzed and explained in-depth and improved using symmetry constraints. Additionally, a metamodel is fitted to predict single-layer ReLU model parameters for advection processes of any length. The metamodel removes the need for any prior training when applying our technique to a new scenario. Additionally, we show that our model is compatible with Back and Forth Error Compensation and Correction to improve the quality of the advection result further. We demonstrate that our model shows excellent diffusion reduction properties in typical examples of 3D steady and unsteady flow visualization. Finally, we utilize the strong diffusion reduction capabilities of our model to compute dye advection with exponential decay, a novel method that we introduce to visualize the extent and evolution of unsteadiness in both 2D and 3D unsteady flow.
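To make the source of numerical diffusion concrete, here is a minimal sketch of one classic semi-Lagrangian dye advection step, not the paper's ML-based interpolation: the linear interpolation at the back-traced positions is what smears the dye, and the optional exponential decay factor echoes the decay-based visualization mentioned above. Array shapes and parameter names are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def semi_lagrangian_step(dye, u, v, dt=1.0, decay=0.0):
    """One advection step: dye is (H, W); u, v are velocity components in grid units per step."""
    h, w = dye.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Trace each grid point backwards along the flow to find where its dye came from.
    src_y = ys - dt * v
    src_x = xs - dt * u
    # Linear interpolation at the back-traced positions: the numerical diffusion source.
    advected = map_coordinates(dye, [src_y, src_x], order=1, mode="nearest")
    # Optional exponential decay, as used for visualizing the extent of unsteadiness.
    return advected * np.exp(-decay * dt)
```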
Citations: 0