VILOD: Combining Visual Interactive Labeling With Active Learning for Object Detection.
Pub Date: 2026-02-03 | DOI: 10.1109/MCG.2026.3660508
Isac Holm, Rafael M Martins, Claudio D G Linhares, Amilcar Soares
The need for large, high-quality annotated datasets remains a primary limitation in training Object Detection (OD) models. To mitigate this challenge, we present VILOD, a Visual Interactive Labeling tool that integrates Active Learning (AL) with a suite of interactive visualizations to create an effective Human-in-the-Loop (HITL) workflow for OD annotation and training. VILOD is designed to make the AL process more transparent and steerable, empowering expert users to implement diverse, strategically guided labeling strategies that extend beyond algorithmic query strategies. Through comparative case studies, we evaluate three visually guided labeling strategies against a conventional automated AL baseline. The results show that a balanced, human-guided strategy, which leverages VILOD's visual cues to synthesize information about data structure and model uncertainty, not only outperforms the automated baseline but also achieves the highest overall model performance. These findings emphasize the potential of visually guided, interactive annotation to enhance both the efficiency and effectiveness of dataset creation for OD.
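To make the "conventional automated AL baseline" concrete, the sketch below shows a minimal uncertainty-sampling query step for object detection: score each unlabeled image by its least-confident detection and ask the annotator to label the most uncertain ones first. This is an illustrative assumption about how such a baseline can work, not VILOD's actual implementation; the `detections` data layout and function names are hypothetical.

```python
# Minimal sketch of an uncertainty-sampling AL query step for object detection.
# `detections` maps image ids to lists of predicted-box confidence scores from
# the current OD model (a hypothetical layout used only for illustration).

def image_uncertainty(confidences):
    """Score an image by its least-confident view: 1 - max detection confidence."""
    if not confidences:          # nothing detected at all: treat as maximally uncertain
        return 1.0
    return 1.0 - max(confidences)

def select_query_batch(detections, batch_size=10):
    """Return the ids of the images the annotator should label next."""
    ranked = sorted(detections,
                    key=lambda img: image_uncertainty(detections[img]),
                    reverse=True)
    return ranked[:batch_size]

if __name__ == "__main__":
    detections = {
        "img_001": [0.95, 0.88],   # confident detections -> low labeling priority
        "img_002": [0.41],         # uncertain detection  -> high priority
        "img_003": [],             # nothing detected     -> highest priority
    }
    print(select_query_batch(detections, batch_size=2))  # ['img_003', 'img_002']
```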
{"title":"VILOD: Combining Visual Interactive Labeling With Active Learning for Object Detection.","authors":"Isac Holm, Rafael M Martins, Claudio D G Linhares, Amilcar Soares","doi":"10.1109/MCG.2026.3660508","DOIUrl":"https://doi.org/10.1109/MCG.2026.3660508","url":null,"abstract":"<p><p>The need for large, high-quality annotated datasets continues to represent a primary limitation in training Object Detection (OD) models. To mitigate this challenge, we present VILOD, a Visual Interactive Labeling tool that integrates Active Learning (AL) with a suite of interactive visualizations to create an effective Human-in-the-Loop (HITL) workflow for OD annotation and training. VILOD is designed to make the AL process more transparent and steerable, empowering expert users to implement diverse, strategically guided labeling strategies that extend beyond algorithmic query strategies. Through comparative case studies, we evaluate three visually guided labeling strategies against a conventional automated AL baseline. The results show that a balanced, human-guided strategy-leveraging VILOD's visual cues to synthesize information about data structure and model uncertainty-not only outperforms the automated baseline but also achieves the highest overall model performance. These findings emphasize the potential of visually guided, interactive annotation to enhance both the efficiency and effectiveness of dataset creation for OD.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.4,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146115081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Exploration of a Historical Vietnamese Corpus of Captioned Drawings: A Case Study.
Pub Date: 2026-02-02 | DOI: 10.1109/MCG.2026.3660122
Kailiang Fu, Tyler Gurth, David H Laidlaw, Cindy Anh Nguyen
This paper presents a case study focusing on the exploratory visual analysis of a unique historical dataset consisting of approximately 4000 visual sketches and associated captions from an encyclopedic book published in 1909-1910. The book, which offers insight into Vietnamese crafts and social practices, poses the challenge of extracting cultural meaning and narrative structure from thousands of drawings and multilingual captions. Our research aims to explore and evaluate the effectiveness of multiple visualization techniques in uncovering meaningful relationships within the dataset while working closely with professional historians. The main contributions of this study include refining historical research questions through task and data abstraction, combining and validating visualization techniques for historical data interpretation, and involving a focus group of historians for further evaluation. These contributions offer generalizable insights for the development of domain-specific visualization tools and support interdisciplinary engagement in historical data visualization and critical digital humanities research.
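As a rough illustration of how a visualization pipeline can surface relationships in such a corpus, the sketch below clusters caption texts with TF-IDF and k-means so that related drawings could be browsed together. This is a generic, hedged example under assumed data (a handful of English caption strings), not the techniques the authors actually used; a real pipeline would have to handle the multilingual captions first.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy captions standing in for the (~4000-item, multilingual) corpus; real
# captions would first need translation or multilingual embeddings.
captions = [
    "A potter shaping a clay vessel",
    "Women weaving silk on a wooden loom",
    "A blacksmith forging farm tools",
    "Dyeing woven cloth in large vats",
    "Casting bronze ritual objects",
    "Spinning thread from raw cotton",
]

vectors = TfidfVectorizer().fit_transform(captions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    members = [c for c, l in zip(captions, labels) if l == cluster]
    print(f"cluster {cluster}: {members}")
```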
{"title":"Visual Exploration of a Historical Vietnamese Corpus of Captioned Drawings: A Case Study.","authors":"Kailiang Fu, Tyler Gurth, David H Laidlaw, Cindy Anh Nguyen","doi":"10.1109/MCG.2026.3660122","DOIUrl":"https://doi.org/10.1109/MCG.2026.3660122","url":null,"abstract":"<p><p>This paper presents a case study focusing on the exploratory visual analysis of a unique historical dataset consisting of approximately 4000 visual sketches and associated captions from an encyclopedic book published in 1909-1910. The book, which offers insight into Vietnamese crafts and social practices, poses the challenge of extracting cultural meaning and narrative structure from thousands of drawings and multilingual captions. Our research aims to explore and evaluate the effectiveness of multiple visualization techniques in uncovering meaningful relationships within the dataset while working closely with professional historians. The main contributions of this study include refining historical research questions through task and data abstraction, combining and validating visualization techniques for historical data interpretation, and involving a focus group of historians for further evaluation. These contributions offer generalizable insights for the development of domain-specific visualization tools and support interdisciplinary engagement in historical data visualization and critical digital humanities research.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.4,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146108286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PLUTO: A Public Value Assessment Tool.
Pub Date: 2026-01-19 | DOI: 10.1109/MCG.2025.3649342
Laura Koesten, Peter Ferenc Gyarmati, Connor Hogan, Bernhard Jordan, Seliem El-Sayed, Barbara Prainsack, Torsten Moller
We present PLUTO (Public VaLUe Assessment TOol), a framework for assessing the public value of specific instances of data use. Grounded in the concept of data solidarity, PLUTO aims to empower diverse stakeholders, including regulatory bodies, private enterprises, NGOs, and individuals, to critically engage with data projects through a structured assessment of the risks and benefits of data use and by encouraging critical reflection. This paper discusses the theoretical foundation, development process, and initial user experiences with PLUTO. Key challenges include translating qualitative assessments of benefits and risks into actionable quantitative metrics while maintaining inclusivity and transparency. Initial feedback highlights PLUTO's potential to foster responsible decision-making and shared accountability in data practices.
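One of the challenges named above, turning qualitative benefit and risk assessments into actionable quantitative metrics, could in principle be approached with a simple weighted aggregation like the sketch below. The dimensions, weights, and rating scale are purely hypothetical illustrations and are not PLUTO's actual assessment rubric.

```python
# Hypothetical aggregation of qualitative ratings (1 = very low ... 5 = very high)
# into a single public-value indicator. Dimension names and weights are
# illustrative assumptions, not PLUTO's instrument.

BENEFIT_WEIGHTS = {"public_health": 0.4, "knowledge_gain": 0.3, "service_quality": 0.3}
RISK_WEIGHTS = {"privacy_harm": 0.5, "group_discrimination": 0.3, "commercial_capture": 0.2}

def weighted_score(ratings, weights):
    """Combine 1-5 ratings per dimension into a weighted average on the same scale."""
    return sum(weights[d] * ratings[d] for d in weights)

def public_value_indicator(benefit_ratings, risk_ratings):
    """Positive values suggest benefits outweigh risks; negative values the opposite."""
    return (weighted_score(benefit_ratings, BENEFIT_WEIGHTS)
            - weighted_score(risk_ratings, RISK_WEIGHTS))

if __name__ == "__main__":
    benefits = {"public_health": 4, "knowledge_gain": 5, "service_quality": 3}
    risks = {"privacy_harm": 2, "group_discrimination": 1, "commercial_capture": 3}
    print(f"indicator: {public_value_indicator(benefits, risks):+.2f}")
```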
{"title":"PLUTO: A Public Value Assessment Tool.","authors":"Laura Koesten, Peter Ferenc Gyarmati, Connor Hogan, Bernhard Jordan, Seliem El-Sayed, Barbara Prainsack, Torsten Moller","doi":"10.1109/MCG.2025.3649342","DOIUrl":"https://doi.org/10.1109/MCG.2025.3649342","url":null,"abstract":"<p><p>We present PLUTO (Public VaLUe Assessment TOol), a framework for assessing the public value of specific instances of data use. Grounded in the concept of data solidarity, PLUTO aims to empower diverse stakeholders-including regulatory bodies, private enterprises, NGOs, and individuals-to critically engage with data projects through a structured assessment of the risks and benefits of data use, and by encouraging critical reflection. This paper discusses the theoretical foundation, development process, and initial user experiences with PLUTO. Key challenges include translating qualitative assessments of benefits and risks into actionable quantitative metrics while maintaining inclusivity and transparency. Initial feedback highlights PLUTO's potential to foster responsible decision-making and shared accountability in data practices.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.4,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146004542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational Design and Fabrication of Protective Foam.
Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3556656 | Pages: 81-88
Tsukasa Fukusato, Naoki Kita
This article proposes a method to design protective foam for packaging 3-D objects. Users first load a 3-D object and define a block-based design space by setting the block resolution and the size of each block. The system then constructs a block map in the space using depth textures of the input object, separates the map into two regions, and outputs the regions as foams. The proposed method is fast and stable, allowing the user to interactively make protective foams. The generated foam is a height field in each direction, so the foams can easily be fabricated using various materials, such as LEGO blocks, sponge with slits, glass, and wood. This article shows some examples of fabrication results to demonstrate the robustness of our system. In addition, we conducted a user study and confirmed that our system is effective for manually designing protective foams envisioned by users.
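The block-map idea can be pictured with a small numeric sketch: if the object's lower and upper surfaces are sampled per block (the "depth textures"), each foam half is a height field that fills the box up to, or down from, the nearest whole block. The code below is a simplified reading of that process under stated assumptions, not the paper's algorithm; the array layout and the handling of cells the object does not cover are illustrative choices.

```python
import numpy as np

# Sketch: the packaging volume is a grid of blocks; `lower_surface` and
# `upper_surface` hold, per block column, the heights of the object's lower
# and upper surfaces above the box floor (assumed depth-texture samples).
# The bottom foam fills up to the object's lower surface, the top foam fills
# down to its upper surface, each snapped outward to whole blocks.

def split_into_foams(lower_surface, upper_surface, box_height, block_size=1.0):
    """Return per-block heights of the bottom and top foam pieces."""
    bottom_foam = np.floor(lower_surface / block_size) * block_size
    top_foam = box_height - np.ceil(upper_surface / block_size) * block_size
    # Columns the object never touches: split the box evenly between the halves
    # (an arbitrary illustrative choice).
    empty = upper_surface <= 0
    bottom_foam[empty] = box_height / 2.0
    top_foam[empty] = box_height / 2.0
    return bottom_foam, top_foam

if __name__ == "__main__":
    lower = np.array([[0.0, 2.0], [2.0, 3.0]])   # object's lower surface per block
    upper = np.array([[0.0, 6.0], [6.0, 7.0]])   # object's upper surface per block
    bottom, top = split_into_foams(lower, upper, box_height=10.0)
    print(bottom)   # heights of the bottom foam piece
    print(top)      # heights of the top foam piece (measured from the lid)
```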
{"title":"Computational Design and Fabrication of Protective Foam.","authors":"Tsukasa Fukusato, Naoki Kita","doi":"10.1109/MCG.2025.3556656","DOIUrl":"10.1109/MCG.2025.3556656","url":null,"abstract":"<p><p>This article proposes a method to design protective foam for packaging 3-D objects. Users first load a 3-D object and define a block-based design space by setting the block resolution and the size of each block. The system then constructs a block map in the space using depth textures of the input object, separates the map into two regions, and outputs the regions as foams. The proposed method is fast and stable, allowing the user to interactively make protective foams. The generated foam is a height field in each direction, so the foams can easily be fabricated using various materials, such as LEGO blocks, sponge with slits, glass, and wood. This article shows some examples of fabrication results to demonstrate the robustness of our system. In addition, we conducted a user study and confirmed that our system is effective for manually designing protective foams envisioned by users.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"81-88"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143765957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Animating Shakespeare: A Case Study in Human-AI Collaboration for Animating Classical Illustration.
Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3608802 | Pages: 41-51
Hannes Rall, Alice Osinska, Aaron Zhi Qiang Lim
The study investigates the role of human-AI interaction in animating illustration through a case study of John Gilbert's visual interpretation of Shakespeare's play As You Like It. Through a multilayered animation, the research highlights the irreplaceable role of human direction, particularly as a creator, in achieving narrative and visual coherence in AI-assisted animation. Drawing on theories of creativity and collaborative spaces, this article argues that human guidance is essential to successful AI-empowered animation. It proposes a structured human-AI interaction (HAI) workflow in which the human remains the creative agent and main lead, while AI augments the process. This case study shows how a cocreative workflow can ensure visual and narrative coherence rather than foster mutual extinction.
{"title":"Animating Shakespeare: A Case Study in Human-AI Collaboration for Animating Classical Illustration.","authors":"Hannes Rall, Alice Osinska, Aaron Zhi Qiang Lim","doi":"10.1109/MCG.2025.3608802","DOIUrl":"10.1109/MCG.2025.3608802","url":null,"abstract":"<p><p>The study investigates the role of human-AI interaction in animating illustration with a case study of John Gilbert's visual interpretation of Shakespeare's play As You Like It. Through a multilayered animation, the research highlighted the irreplaceable role of human direction, particularly as a creator, to achieve narrative and visual coherence in AI-assisted animation. Drawing on theories of creativity and collaborative spaces, this article argues that human guidance is inherent for successful AI-empowered animation. It proposes a structured HAI workflow where the human remains the creative agent and main lead, while AI augments the process. This case study showcased how cocreative workflow can ensure visual and narrative coherence rather than foster mutual extinction.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"41-51"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145356852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design Exploration of AI-Assisted Personal Affective Physicalization.
Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3614686 | Pages: 26-40
Ruishan Wu, Zhuoyang Li, Charles Perin, Sheelagh Carpendale, Can Liu
Personal affective physicalization is the process by which individuals express emotions through tangible forms in order to record, reflect on, and communicate them. Yet such physical data representations can be challenging to design due to the abstract nature of emotions. Given the demonstrated potential of AI in detecting emotion and assisting design, we explore opportunities for AI-assisted design of personal affective physicalization using a research-through-design method. We developed PhEmotion, a tool for embedding LLM-extracted emotion values from human-AI conversations into the parametric design of physical artifacts. A lab study was conducted with 14 participants creating these artifacts based on their personal emotions, with and without AI support. We observed nuances and variations in participants' creative strategies, meaning-making processes, and their perceptions of AI support in this context. We found key tensions in human-AI cocreation that provide a nuanced agenda for future research on AI-assisted personal affective physicalization.
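To illustrate the kind of pipeline described above, the sketch below maps two assumed emotion values (valence and arousal on a -1..1 scale) onto parameters of a simple radial outline that a CAD tool could extrude into a physical artifact. The parameter names, ranges, and the mapping itself are hypothetical and do not describe PhEmotion's actual design space.

```python
import math

# Hypothetical mapping from LLM-extracted emotion values to shape parameters.
def emotion_to_shape_params(valence, arousal):
    """Map valence/arousal in [-1, 1] to parameters of a spiky radial profile."""
    return {
        "spikes": int(round(3 + 9 * (arousal + 1) / 2)),                    # higher arousal -> more spikes
        "spike_depth": 0.4 * (arousal + 1) / 2 * (1 - (valence + 1) / 4),   # positive valence softens them
    }

def radial_profile(params, samples=72):
    """Sample a closed 2-D outline r(theta) that could be extruded into an artifact."""
    pts = []
    for i in range(samples):
        theta = 2 * math.pi * i / samples
        r = 1.0 + params["spike_depth"] * math.sin(params["spikes"] * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

if __name__ == "__main__":
    params = emotion_to_shape_params(valence=0.6, arousal=-0.3)
    outline = radial_profile(params)
    print(params, outline[0])
```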
{"title":"Design Exploration of AI-Assisted Personal Affective Physicalization.","authors":"Ruishan Wu, Zhuoyang Li, Charles Perin, Sheelagh Carpendale, Can Liu","doi":"10.1109/MCG.2025.3614686","DOIUrl":"10.1109/MCG.2025.3614686","url":null,"abstract":"<p><p>Personal affective physicalization is the process by which individuals express emotions through tangible forms to record, reflect on, and communicate. Yet such physical data representations can be challenging to design due to the abstract nature of emotions. Given the shown potential of AI in detecting emotion and assisting design, we explore opportunities in AI-assisted design of personal affective physicalization using a research-through-design method. We developed PhEmotion, a tool for embedding LLM-extracted emotion values from human-AI conversations into the parametric design of physical artifacts. A lab study was conducted with 14 participants creating these artifacts based on their personal emotions, with and without AI support. We observed nuances and variations in participants' creative strategies, meaning-making processes, and their perceptions of AI support in this context. We found key tensions in AI-human cocreation that provide a nuanced agenda for future research in AI-assisted personal affective physicalization.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"26-40"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145180380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Cardiac Dynamics in Surgical Simulation: A Human-AI Interaction Framework for Robotic Internal Mammary Artery Harvesting.
Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3623124 | Pages: 52-65
Shuo Wang, Tong Ren, Nan Cheng, Rong Wang, Li Zhang
Virtual surgical simulation offers promising training for complex procedures, such as robotic internal mammary artery harvesting. Building upon previous work on dynamic virtual simulation with haptic feedback, we present an adaptive human-AI interaction framework that dynamically adjusts cardiac pulsation parameters based on surgeon behavior analysis. Our system captures surgical tool movements and performance metrics to create personalized training through dynamic difficulty adjustment, context-aware parameter selection, personalized learning paths, and real-time feedback. In a study with three cardiac surgeons across 24 sessions, our adaptive approach showed significant improvements over static simulations: 18% reduction in spatial asymmetry, 22% faster completion, and 48% fewer tissue trauma events. The system demonstrated consistent benefits across different skill levels and sustained learning progression, preventing performance plateaus seen in fixed-difficulty conditions.
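The dynamic difficulty adjustment mentioned above can be pictured as a simple feedback rule on the pulsation parameters: raise the cardiac motion amplitude after clean, fast sessions and lower it after trauma events or slow completions. The update rule, thresholds, and metric names below are assumptions made for illustration, not the authors' controller.

```python
# Schematic dynamic-difficulty sketch: nudge pulsation amplitude toward a level
# that keeps the trainee challenged but not overwhelmed. All thresholds and
# step sizes are illustrative assumptions.

def update_pulsation(amplitude, trauma_events, completion_time,
                     target_time=120.0, step=0.05,
                     min_amp=0.2, max_amp=1.0):
    """Return the pulsation amplitude for the next training session."""
    performed_well = trauma_events == 0 and completion_time <= target_time
    if performed_well:
        amplitude += step          # make the heart move more: harder task
    else:
        amplitude -= step          # ease off until performance recovers
    return max(min_amp, min(max_amp, amplitude))

if __name__ == "__main__":
    amp = 0.5
    sessions = [(0, 100.0), (0, 95.0), (2, 140.0), (0, 118.0)]  # (trauma events, seconds)
    for trauma, t in sessions:
        amp = update_pulsation(amp, trauma, t)
        print(f"trauma={trauma} time={t:.0f}s -> amplitude={amp:.2f}")
```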
{"title":"Adaptive Cardiac Dynamics in Surgical Simulation: A Human-AI Interaction Framework for Robotic Internal Mammary Artery Harvesting.","authors":"Shuo Wang, Tong Ren, Nan Cheng, Rong Wang, Li Zhang","doi":"10.1109/MCG.2025.3623124","DOIUrl":"10.1109/MCG.2025.3623124","url":null,"abstract":"<p><p>Virtual surgical simulation offers promising training for complex procedures, such as robotic internal mammary artery harvesting. Building upon previous work on dynamic virtual simulation with haptic feedback, we present an adaptive human-AI interaction framework that dynamically adjusts cardiac pulsation parameters based on surgeon behavior analysis. Our system captures surgical tool movements and performance metrics to create personalized training through dynamic difficulty adjustment, context-aware parameter selection, personalized learning paths, and real-time feedback. In a study with three cardiac surgeons across 24 sessions, our adaptive approach showed significant improvements over static simulations: 18% reduction in spatial asymmetry, 22% faster completion, and 48% fewer tissue trauma events. The system demonstrated consistent benefits across different skill levels and sustained learning progression, preventing performance plateaus seen in fixed-difficulty conditions.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"52-65"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145314255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing Collaborative Artificial Intelligence in Artistic Practices: Using AI in Creative Explorations.
Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3634661 | Vol. 46, No. 1, Pages: 99-106
Bruce Donald Campbell, Beatriz Sousa Santos, Alejandra J Magana, Rafael Bidarra
This article describes the design and implementation of a course that evaluated the applicability of an artistic studio model to educating students about artificial intelligence (AI). Specifically, four sections of an asynchronous, studio-style course on understanding and exploring AI ran once per season via the Rhode Island School of Design online learning facility during the 2024-2025 academic year. The artistic studio model engages in bottom-up learning methods that include student-directed exploration, engagement, and artifact creation. As generative AI tools can output artistic artifacts based on human prompting, the research aligned well with typical course objectives. The qualitative study describes students' experiences of integrating large language models in support of their creative process. The results from 36 students are considered evidence that the studio model is applicable, and case studies from individual students are provided to help readers consider the model for their own needs and interests.
{"title":"Developing Collaborative Artificial Intelligence in Artistic Practices: Using AI in Creative Explorations.","authors":"Bruce Donald Campbell, Beatriz Sousa Santos, Alejandra J Magana, Rafael Bidarra","doi":"10.1109/MCG.2025.3634661","DOIUrl":"https://doi.org/10.1109/MCG.2025.3634661","url":null,"abstract":"<p><p>This article describes the design and implementation of a course that evaluated the applicability of an artistic studio model, to educating students on the subject of artificial intelligence (AI). Specifically, four sections of an asynchronous, studio-style understanding and exploring AI course ran once per season via the Rhode Island School of Design online learning facility during the 2024-2025 academic year. The artistic studio model engages in bottom-up learning methods that include student-directed exploration, engagement, and artifact creation. As generative AI tools can output artistic artifacts based on human prompting, the research aligned well with typical course objectives. The qualitative study describes students' experiences of integrating Large Learning Models in support of their creative process. The results from 36 students are considered as evidence that the dominant model is applicable, and case studies from individual students are provided to assist the reader in considering the model for their own needs and interests.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"46 1","pages":"99-106"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146055229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Immersive Virtual Reality Platform for First Aid and Emergency Training.
Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3635750 | Vol. 46, No. 1, Pages: 107-115
Marcello A Carrozzino, Matteo Caponi, Simone Pisani, Bruno Papaleo, Alda Mazzei, Rudy Foddis, Massimo Bergamasco, Mike Potel
Virtual reality (VR) technologies have emerged as valuable tools for medical and emergency training, providing safe, immersive, and repeatable environments where complex procedures can be practiced effectively. This article presents an immersive VR system designed to train workplace first-aid responders, with a particular focus on cardiopulmonary resuscitation (CPR). The platform integrates a physical CPR manikin with virtual patient overlays through mixed-reality calibration, incorporates realistic emergency scenarios with environmental hazards, and enables synchronous multiuser interaction between trainees and instructors. To assess its potential, we describe the system's architecture and functionality in detail and present the results of an extensive user study that employed validated questionnaires on usability, performance, and user experience. The proposed framework contributes to the advancement of VR-based medical education, highlighting its benefits, current limitations, and future research opportunities.
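Aligning a tracked physical manikin with a virtual patient overlay, as in the mixed-reality calibration mentioned above, is commonly done with a rigid landmark fit. The sketch below uses the standard Kabsch/Procrustes solution as a generic example; it is not claimed to be this platform's own calibration routine, and the landmark coordinates are invented.

```python
import numpy as np

# Generic rigid registration sketch: fit rotation R and translation t that map
# landmark points measured on the physical manikin onto the corresponding
# points defined on the virtual patient model.

def rigid_fit(manikin_pts, virtual_pts):
    """Return (R, t) such that R @ p + t maps manikin points onto virtual points."""
    P = np.asarray(manikin_pts, dtype=float)
    Q = np.asarray(virtual_pts, dtype=float)
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

if __name__ == "__main__":
    manikin = [[0, 0, 0], [0.3, 0, 0], [0, 0.2, 0], [0.3, 0.2, 0.05]]  # hypothetical landmarks
    # Virtual landmarks: the same shape rotated 90 degrees about z and shifted.
    Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    virtual = (np.asarray(manikin) @ Rz.T) + np.array([1.0, 2.0, 0.5])
    R, t = rigid_fit(manikin, virtual)
    print(np.allclose(np.asarray(manikin) @ R.T + t, virtual))  # True
```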
{"title":"An Immersive Virtual Reality Platform for First Aid and Emergency Training.","authors":"Marcello A Carrozzino, Matteo Caponi, Simone Pisani, Bruno Papaleo, Alda Mazzei, Rudy Foddis, Massimo Bergamasco, Mike Potel","doi":"10.1109/MCG.2025.3635750","DOIUrl":"https://doi.org/10.1109/MCG.2025.3635750","url":null,"abstract":"<p><p>Virtual reality (VR) technologies have emerged as valuable tools for medical and emergency training, providing safe, immersive, and repeatable environments where complex procedures can be practiced effectively. This article presents an immersive VR system designed to train workplace first-aid responders, with a particular focus on cardiopulmonary resuscitation (CPR). The platform integrates a physical CPR manikin with virtual patient overlays through mixed-reality calibration, incorporates realistic emergency scenarios with environmental hazards, and enables synchronous multiuser interaction between trainees and instructors. To assess its potential, we provide a detailed description of the system's architecture and functionalities, introducing the results of an extensive user study employing validated questionnaires on usability, performance, and user experience. The proposed framework contributes to the advancement of VR-based medical education, highlighting its benefits, current limitations, and future research opportunities.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"46 1","pages":"107-115"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146055245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualizing the Chain of Thought in Large Language Models.
Pub Date: 2026-01-01 | DOI: 10.1109/MCG.2025.3624666 | Vol. 46, No. 1, Pages: 89-98
Bahar Ilgen, Georges Hattab, Theresa-Marie Rhyne
This Visualization Viewpoints article explores how visualization helps uncover and communicate the internal chain-of-thought trajectories and generative pathways of large language models (LLMs) in reasoning tasks. As LLMs become increasingly powerful and widespread, a key challenge is understanding how their reasoning dynamics unfold, particularly in natural language processing (NLP) applications. Their outputs may appear coherent, yet the multistep inference pathways behind them remain largely hidden. We argue that visualization offers an effective avenue to illuminate these internal mechanisms. Moving beyond attention weights or token saliency, we advocate for richer visual tools that expose model uncertainty, highlight alternative reasoning paths, and reveal what the model omits or overlooks. We discuss examples, such as prompt trajectory visualizations, counterfactual response maps, and semantic drift flows, to illustrate how these techniques foster trust, identify failure modes, and support deeper human interaction with these systems. In doing so, visualizing the chain of thought in LLMs lays critical groundwork for transparent, interpretable, and truly collaborative human-AI reasoning.
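As a concrete, dependency-light illustration of one idea named above, prompt trajectory visualization, the sketch below embeds a short chain of reasoning steps and projects the sequence to 2-D so it can be drawn as a path where drift or loops become visible. The reasoning steps, TF-IDF features, and PCA projection are illustrative assumptions; a real tool would likely rely on stronger sentence embeddings and an interactive view.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA

# Embed each intermediate reasoning step and project the chain to 2-D so the
# sequence of steps can be rendered as a trajectory.
steps = [
    "The question asks for the total cost of 3 notebooks at 2.50 each.",
    "Multiply 3 by 2.50 to get 7.50.",
    "Add the 1.00 shipping fee mentioned in the prompt.",
    "The total cost is 8.50.",
]

vectors = TfidfVectorizer().fit_transform(steps).toarray()
trajectory = PCA(n_components=2).fit_transform(vectors)

for i, (x, y) in enumerate(trajectory):
    print(f"step {i}: ({x:+.2f}, {y:+.2f})")  # points of the 2-D reasoning path
```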
{"title":"Visualizing the Chain of Thought in Large Language Models.","authors":"Bahar Ilgen, Georges Hattab, Theresa-Marie Rhyne","doi":"10.1109/MCG.2025.3624666","DOIUrl":"https://doi.org/10.1109/MCG.2025.3624666","url":null,"abstract":"<p><p>This Visualization Viewpoints article explores how visualization helps uncover and communicate the internal chain-of-thought trajectories and generative pathways of large language models (LLMs) in reasoning tasks. As LLMs become increasingly powerful and widespread, a key challenge is understanding how their reasoning dynamics unfold, particularly in natural language processing (NLP) applications. Their outputs may appear coherent, yet the multistep inference pathways behind them remain largely hidden. We argue that visualization offers an effective avenue to illuminate these internal mechanisms. Moving beyond attention weights or token saliency, we advocate for richer visual tools that expose model uncertainty, highlight alternative reasoning paths, and reveal what the model omits or overlooks. We discuss examples, such as prompt trajectory visualizations, counterfactual response maps, and semantic drift flows, to illustrate how these techniques foster trust, identify failure modes, and support deeper human interaction with these systems. In doing so, visualizing the chain of thought in LLMs lays critical groundwork for transparent, interpretable, and truly collaborative human-AI reasoning.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"46 1","pages":"89-98"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}