
IEEE Computer Graphics and Applications: Latest Publications

VILOD: Combining Visual Interactive Labeling With Active Learning for Object Detection.
IF 1.4 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Software Engineering) | Pub Date: 2026-02-03 | DOI: 10.1109/MCG.2026.3660508
Isac Holm, Rafael M Martins, Claudio D G Linhares, Amilcar Soares

The need for large, high-quality annotated datasets continues to represent a primary limitation in training Object Detection (OD) models. To mitigate this challenge, we present VILOD, a Visual Interactive Labeling tool that integrates Active Learning (AL) with a suite of interactive visualizations to create an effective Human-in-the-Loop (HITL) workflow for OD annotation and training. VILOD is designed to make the AL process more transparent and steerable, empowering expert users to implement diverse, strategically guided labeling strategies that extend beyond algorithmic query strategies. Through comparative case studies, we evaluate three visually guided labeling strategies against a conventional automated AL baseline. The results show that a balanced, human-guided strategy, which leverages VILOD's visual cues to synthesize information about data structure and model uncertainty, not only outperforms the automated baseline but also achieves the highest overall model performance. These findings emphasize the potential of visually guided, interactive annotation to enhance both the efficiency and effectiveness of dataset creation for OD.
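For context on the baseline being compared against: a conventional automated AL loop typically ranks unlabeled images by model uncertainty and queries the top batch for labeling. A minimal sketch of that idea (the detector outputs, image names, and scores below are hypothetical illustrations, not VILOD's code):

```python
import math

def image_uncertainty(confidences):
    """Mean binary entropy of a detector's per-box confidence scores for one
    image; higher means the model is less sure about that image."""
    if not confidences:
        return 1.0  # no detections at all: treat as maximally uncertain
    total = 0.0
    for p in confidences:
        p = min(max(p, 1e-9), 1.0 - 1e-9)  # guard against log(0)
        total += -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
    return total / len(confidences)

def query_batch(pool, k):
    """Return the k most uncertain images: the batch a human labels next."""
    ranked = sorted(pool, key=lambda name: image_uncertainty(pool[name]),
                    reverse=True)
    return ranked[:k]

# Hypothetical detector outputs: image id -> confidence of each predicted box.
pool = {
    "img_a": [0.99, 0.97],  # confident detections
    "img_b": [0.55, 0.48],  # borderline detections
    "img_c": [0.90, 0.52],
}
print(query_batch(pool, 2))  # -> ['img_b', 'img_c']
```

In VILOD, by contrast, such a ranking is only one cue among several visualizations; the human, not the query strategy, decides what to label.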

Citations: 0
Visual Exploration of a Historical Vietnamese Corpus of Captioned Drawings: A Case Study.
IF 1.4 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Software Engineering) | Pub Date: 2026-02-02 | DOI: 10.1109/MCG.2026.3660122
Kailiang Fu, Tyler Gurth, David H Laidlaw, Cindy Anh Nguyen

This paper presents a case study focusing on the exploratory visual analysis of a unique historical dataset consisting of approximately 4000 visual sketches and associated captions from an encyclopedic book published in 1909-1910. The book, which offers insight into Vietnamese crafts and social practices, poses the challenge of extracting cultural meaning and narrative structure from thousands of drawings and multilingual captions. Our research aims to explore and evaluate the effectiveness of multiple visualization techniques in uncovering meaningful relationships within the dataset while working closely with professional historians. The main contributions of this study include refining historical research questions through task and data abstraction, combining and validating visualization techniques for historical data interpretation, and involving a focus group of historians for further evaluation. These contributions offer generalizable insights for the development of domain-specific visualization tools and support interdisciplinary engagement in historical data visualization and critical digital humanities research.

Citations: 0
PLUTO: A Public Value Assessment Tool.
IF 1.4 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Software Engineering) | Pub Date: 2026-01-19 | DOI: 10.1109/MCG.2025.3649342
Laura Koesten, Peter Ferenc Gyarmati, Connor Hogan, Bernhard Jordan, Seliem El-Sayed, Barbara Prainsack, Torsten Moller

We present PLUTO (Public VaLUe Assessment TOol), a framework for assessing the public value of specific instances of data use. Grounded in the concept of data solidarity, PLUTO aims to empower diverse stakeholders, including regulatory bodies, private enterprises, NGOs, and individuals, to critically engage with data projects through a structured assessment of the risks and benefits of data use, and by encouraging critical reflection. This paper discusses the theoretical foundation, development process, and initial user experiences with PLUTO. Key challenges include translating qualitative assessments of benefits and risks into actionable quantitative metrics while maintaining inclusivity and transparency. Initial feedback highlights PLUTO's potential to foster responsible decision-making and shared accountability in data practices.
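To illustrate the qualitative-to-quantitative challenge the abstract mentions, here is a toy weighted scheme that folds criterion ratings into a single signed public-value score. The criteria, weights, and scaling are invented for illustration and are not PLUTO's actual rubric:

```python
def public_value_score(ratings, weights):
    """Fold Likert-style ratings (1 = very low ... 5 = very high) of benefit
    and risk criteria into one signed score in [-1, 1]; positive means the
    assessed benefits outweigh the risks."""
    score = 0.0
    for criterion, rating in ratings.items():
        w = weights[criterion]           # a negative weight marks a risk item
        score += w * (rating - 3) / 2.0  # center on the neutral rating (3)
    return round(score / sum(abs(w) for w in weights.values()), 3)

# Hypothetical assessment of one data-use instance.
weights = {"health_benefit": 1.0, "transparency": 0.5, "privacy_risk": -1.0}
ratings = {"health_benefit": 5, "transparency": 4, "privacy_risk": 4}
print(public_value_score(ratings, weights))  # -> 0.3
```

Any such collapse into one number trades nuance for comparability, which is exactly the inclusivity/transparency tension the abstract points to.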

Citations: 0
Computational Design and Fabrication of Protective Foam.
IF 1.4 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Software Engineering) | Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3556656 | Pages: 81-88
Tsukasa Fukusato, Naoki Kita

This article proposes a method to design protective foam for packaging 3-D objects. Users first load a 3-D object and define a block-based design space by setting the block resolution and the size of each block. The system then constructs a block map in the space using depth textures of the input object, separates the map into two regions, and outputs the regions as foams. The proposed method is fast and stable, allowing the user to interactively make protective foams. The generated foam is a height field in each direction, so the foams can easily be fabricated using various materials, such as LEGO blocks, sponge with slits, glass, and wood. This article shows some examples of fabrication results to demonstrate the robustness of our system. In addition, we conducted a user study and confirmed that our system is effective for manually designing protective foams envisioned by users.
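The pipeline above (depth textures → block map → two foam regions, each a height field) can be sketched as follows. The conventions here, two depth textures measured in block layers, one from above and one from below, are assumptions for illustration, not the paper's exact construction:

```python
def carve_foams(depth_top, depth_bottom, n_layers):
    """Split a block-resolution design space into two complementary foam
    height fields. depth_top[i][j] counts the empty block layers seen from
    above before the object is hit (n_layers for an empty column), and
    depth_bottom[i][j] counts the same from below; stacking the two foams
    therefore encloses the object."""
    rows, cols = len(depth_top), len(depth_top[0])
    top = [[0] * cols for _ in range(rows)]
    bottom = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            dt, db = depth_top[i][j], depth_bottom[i][j]
            if dt + db >= n_layers:           # empty column: no object here
                bottom[i][j] = n_layers // 2  # split it so the halves meet
                top[i][j] = n_layers - bottom[i][j]
            else:                             # object occupies middle layers
                top[i][j], bottom[i][j] = dt, db
    return top, bottom

# A 1x3 strip of block columns, 6 layers tall; the object sits in the middle.
top, bottom = carve_foams([[6, 1, 6]], [[6, 2, 6]], 6)
print(top, bottom)  # -> [[3, 1, 3]] [[3, 2, 3]]
```

Because each output is a single height per grid cell, both halves are height fields, which is what makes fabrication in LEGO blocks, slit sponge, glass, or wood straightforward.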

Citations: 0
Animating Shakespeare: A Case Study in Human-AI Collaboration for Animating Classical Illustration.
IF 1.4 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Software Engineering) | Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3608802 | Pages: 41-51
Hannes Rall, Alice Osinska, Aaron Zhi Qiang Lim

The study investigates the role of human-AI interaction in animating illustration through a case study of John Gilbert's visual interpretation of Shakespeare's play As You Like It. Through a multilayered animation, the research highlighted the irreplaceable role of human direction, particularly as a creator, in achieving narrative and visual coherence in AI-assisted animation. Drawing on theories of creativity and collaborative spaces, this article argues that human guidance is integral to successful AI-empowered animation. It proposes a structured HAI workflow in which the human remains the creative agent and main lead, while AI augments the process. This case study showcased how a cocreative workflow can ensure visual and narrative coherence rather than foster mutual extinction.

Citations: 0
Design Exploration of AI-Assisted Personal Affective Physicalization.
IF 1.4 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Software Engineering) | Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3614686 | Pages: 26-40
Ruishan Wu, Zhuoyang Li, Charles Perin, Sheelagh Carpendale, Can Liu

Personal affective physicalization is the process by which individuals express emotions through tangible forms to record, reflect on, and communicate. Yet such physical data representations can be challenging to design due to the abstract nature of emotions. Given the demonstrated potential of AI in detecting emotion and assisting design, we explore opportunities in AI-assisted design of personal affective physicalization using a research-through-design method. We developed PhEmotion, a tool for embedding LLM-extracted emotion values from human-AI conversations into the parametric design of physical artifacts. A lab study was conducted with 14 participants creating these artifacts based on their personal emotions, with and without AI support. We observed nuances and variations in participants' creative strategies, meaning-making processes, and their perceptions of AI support in this context. We found key tensions in AI-human cocreation that provide a nuanced agenda for future research in AI-assisted personal affective physicalization.
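As an illustration of the embedding step, a toy mapping from LLM-extracted emotion scores to parametric-design inputs might look like this. The score names, parameter names, and value ranges are hypothetical, not PhEmotion's actual mapping:

```python
def emotion_to_params(emotion):
    """Map LLM-extracted emotion scores (each in [0, 1]) to parameters of a
    hypothetical parametric artifact: arousal drives surface spikiness,
    valence drives how rounded the silhouette is, intensity scales size."""
    valence = emotion.get("valence", 0.5)
    arousal = emotion.get("arousal", 0.5)
    intensity = emotion.get("intensity", 0.5)
    return {
        "spike_amplitude_mm": round(2.0 + 8.0 * arousal, 2),
        "roundness": round(valence, 2),  # 0 = angular, 1 = smooth
        "base_radius_mm": round(20.0 + 30.0 * intensity, 2),
    }

# An anxious, high-energy moment extracted from a human-AI conversation.
params = emotion_to_params({"valence": 0.2, "arousal": 0.9, "intensity": 0.7})
print(params)  # -> {'spike_amplitude_mm': 9.2, 'roundness': 0.2, 'base_radius_mm': 41.0}
```

The interesting design questions sit precisely in this function: which emotional dimensions map to which physical features, and whether that mapping feels meaningful to the person whose emotions it encodes.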

Citations: 0
Adaptive Cardiac Dynamics in Surgical Simulation: A Human-AI Interaction Framework for Robotic Internal Mammary Artery Harvesting.
IF 1.4 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Software Engineering) | Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3623124 | Pages: 52-65
Shuo Wang, Tong Ren, Nan Cheng, Rong Wang, Li Zhang

Virtual surgical simulation offers promising training for complex procedures, such as robotic internal mammary artery harvesting. Building upon previous work on dynamic virtual simulation with haptic feedback, we present an adaptive human-AI interaction framework that dynamically adjusts cardiac pulsation parameters based on surgeon behavior analysis. Our system captures surgical tool movements and performance metrics to create personalized training through dynamic difficulty adjustment, context-aware parameter selection, personalized learning paths, and real-time feedback. In a study with three cardiac surgeons across 24 sessions, our adaptive approach showed significant improvements over static simulations: 18% reduction in spatial asymmetry, 22% faster completion, and 48% fewer tissue trauma events. The system demonstrated consistent benefits across different skill levels and sustained learning progression, preventing performance plateaus seen in fixed-difficulty conditions.
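The dynamic difficulty adjustment described above can be sketched as a simple feedback rule on one pulsation parameter. The parameter names, target rate, and gain below are illustrative assumptions, not the paper's controller:

```python
def adapt_pulsation(amplitude, trauma_rate, target=0.15, gain=0.5,
                    lo=0.2, hi=1.0):
    """One step of a hypothetical dynamic-difficulty controller: when the
    trainee's tissue-trauma rate is under the target, the simulated heart
    beats harder (more motion to work around); over the target, it beats
    more gently, keeping the task near the edge of the trainee's ability."""
    amplitude *= 1.0 + gain * (target - trauma_rate)
    return min(hi, max(lo, amplitude))  # clamp to safe simulation bounds

amp = 0.5
print(adapt_pulsation(amp, trauma_rate=0.05))  # skilled session: amplitude rises
print(adapt_pulsation(amp, trauma_rate=0.35))  # struggling session: amplitude falls
```

Running such an update after every session is one simple way to avoid the performance plateaus the study observed under fixed-difficulty conditions.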

Citations: 0
Developing Collaborative Artificial Intelligence in Artistic Practices: Using AI in Creative Explorations.
IF 1.4 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Software Engineering) | Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3634661 | Vol. 46(1), Pages: 99-106
Bruce Donald Campbell, Beatriz Sousa Santos, Alejandra J Magana, Rafael Bidarra

This article describes the design and implementation of a course that evaluated the applicability of an artistic studio model to educating students on the subject of artificial intelligence (AI). Specifically, four sections of an asynchronous, studio-style understanding and exploring AI course ran once per season via the Rhode Island School of Design online learning facility during the 2024-2025 academic year. The artistic studio model engages in bottom-up learning methods that include student-directed exploration, engagement, and artifact creation. As generative AI tools can output artistic artifacts based on human prompting, the research aligned well with typical course objectives. The qualitative study describes students' experiences of integrating Large Language Models in support of their creative process. The results from 36 students are presented as evidence that the studio model is applicable, and case studies from individual students are provided to assist the reader in considering the model for their own needs and interests.

Citations: 0
An Immersive Virtual Reality Platform for First Aid and Emergency Training.
IF 1.4 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Software Engineering) | Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3635750 | Vol. 46(1), Pages: 107-115
Marcello A Carrozzino, Matteo Caponi, Simone Pisani, Bruno Papaleo, Alda Mazzei, Rudy Foddis, Massimo Bergamasco, Mike Potel

Virtual reality (VR) technologies have emerged as valuable tools for medical and emergency training, providing safe, immersive, and repeatable environments where complex procedures can be practiced effectively. This article presents an immersive VR system designed to train workplace first-aid responders, with a particular focus on cardiopulmonary resuscitation (CPR). The platform integrates a physical CPR manikin with virtual patient overlays through mixed-reality calibration, incorporates realistic emergency scenarios with environmental hazards, and enables synchronous multiuser interaction between trainees and instructors. To assess its potential, we provide a detailed description of the system's architecture and functionalities, introducing the results of an extensive user study employing validated questionnaires on usability, performance, and user experience. The proposed framework contributes to the advancement of VR-based medical education, highlighting its benefits, current limitations, and future research opportunities.

Citations: 0
Visualizing the Chain of Thought in Large Language Models.
IF 1.4 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Software Engineering) | Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3624666 | Vol. 46(1), Pages: 89-98
Bahar Ilgen, Georges Hattab, Theresa-Marie Rhyne

This Visualization Viewpoints article explores how visualization helps uncover and communicate the internal chain-of-thought trajectories and generative pathways of large language models (LLMs) in reasoning tasks. As LLMs become increasingly powerful and widespread, a key challenge is understanding how their reasoning dynamics unfold, particularly in natural language processing (NLP) applications. Their outputs may appear coherent, yet the multistep inference pathways behind them remain largely hidden. We argue that visualization offers an effective avenue to illuminate these internal mechanisms. Moving beyond attention weights or token saliency, we advocate for richer visual tools that expose model uncertainty, highlight alternative reasoning paths, and reveal what the model omits or overlooks. We discuss examples, such as prompt trajectory visualizations, counterfactual response maps, and semantic drift flows, to illustrate how these techniques foster trust, identify failure modes, and support deeper human interaction with these systems. In doing so, visualizing the chain of thought in LLMs lays critical groundwork for transparent, interpretable, and truly collaborative human-AI reasoning.
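As a concrete example of the kind of signal such visualizations can build on, one can compute per-step entropy from a model's reported next-token probabilities and color a reasoning trace by it. This is a minimal sketch; the trace structure and probabilities below are hypothetical:

```python
import math

def token_entropies(steps):
    """Per-step Shannon entropy (nats) of a model's next-token distribution,
    a simple uncertainty signal one could color a chain-of-thought trace by.
    `steps` is a list of {token: probability} dicts, as might be assembled
    from an LLM API that reports top-k probabilities per generated token."""
    out = []
    for dist in steps:
        z = sum(dist.values())  # renormalize the reported top-k mass
        h = -sum((p / z) * math.log(p / z) for p in dist.values() if p > 0)
        out.append(h)
    return out

trace = [
    {"Paris": 0.95, "Lyon": 0.05},                    # confident step
    {"therefore": 0.4, "however": 0.35, "so": 0.25},  # fork in the reasoning
]
print([round(h, 3) for h in token_entropies(trace)])  # -> [0.199, 1.081]
```

High-entropy steps mark exactly the forks where alternative reasoning paths diverge, which is where the counterfactual response maps and semantic drift flows discussed above become most informative.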

{"title":"Visualizing the Chain of Thought in Large Language Models.","authors":"Bahar Ilgen, Georges Hattab, Theresa-Marie Rhyne","doi":"10.1109/MCG.2025.3624666","DOIUrl":"https://doi.org/10.1109/MCG.2025.3624666","abstract":"<p><p>This Visualization Viewpoints article explores how visualization helps uncover and communicate the internal chain-of-thought trajectories and generative pathways of large language models (LLMs) in reasoning tasks. As LLMs become increasingly powerful and widespread, a key challenge is understanding how their reasoning dynamics unfold, particularly in natural language processing (NLP) applications. Their outputs may appear coherent, yet the multistep inference pathways behind them remain largely hidden. We argue that visualization offers an effective avenue to illuminate these internal mechanisms. Moving beyond attention weights or token saliency, we advocate for richer visual tools that expose model uncertainty, highlight alternative reasoning paths, and reveal what the model omits or overlooks. We discuss examples, such as prompt trajectory visualizations, counterfactual response maps, and semantic drift flows, to illustrate how these techniques foster trust, identify failure modes, and support deeper human interaction with these systems. In doing so, visualizing the chain of thought in LLMs lays critical groundwork for transparent, interpretable, and truly collaborative human-AI reasoning.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"46 1","pages":"89-98"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
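The abstract above mentions "prompt trajectory visualizations" and exposing "alternative reasoning paths" and "model uncertainty." The following is a minimal illustrative sketch of that idea, not code from the article: a toy chain-of-thought tree with made-up step names and branch probabilities, from which all root-to-leaf trajectories are enumerated and an entropy over trajectories is computed as a simple uncertainty proxy.

```python
import math

# Toy chain-of-thought tree: each step maps to a list of
# (next_step, branch_probability) pairs. All names and numbers
# here are hypothetical, chosen only to illustrate the idea.
STEPS = {
    "prompt": [("parse question", 1.0)],
    "parse question": [("recall formula", 0.7), ("guess directly", 0.3)],
    "recall formula": [("apply formula", 0.9), ("misapply formula", 0.1)],
    "guess directly": [("answer: 42", 1.0)],
    "apply formula": [("answer: 42", 1.0)],
    "misapply formula": [("answer: 17", 1.0)],
}

def enumerate_paths(node="prompt", prob=1.0, trail=()):
    """Yield (trajectory, joint_probability) for every root-to-leaf path."""
    trail = trail + (node,)
    children = STEPS.get(node)
    if not children:  # leaf: a final answer
        yield trail, prob
        return
    for child, p in children:
        yield from enumerate_paths(child, prob * p, trail)

# Sort trajectories by probability, most likely first.
paths = sorted(enumerate_paths(), key=lambda x: -x[1])

# Shannon entropy over trajectories: a crude proxy for how
# uncertain the model is about which reasoning path it takes.
entropy = -sum(p * math.log2(p) for _, p in paths)

for trail, p in paths:
    print(f"{p:5.2f}  " + " -> ".join(trail))
print(f"trajectory entropy: {entropy:.2f} bits")
```

A real prompt-trajectory visualization would render such a tree graphically (edge width by probability, divergent paths highlighted); this sketch only shows the underlying enumeration and uncertainty computation that such a view would be built on.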