Pub Date: 2025-03-01 | DOI: 10.1109/MCG.2024.3517293
Katja Bühler, Thomas Höllt, Thomas Schultz, Pere-Pau Vázquez, Theresa-Marie Rhyne
AI is the workhorse of modern data analytics and omnipresent across many sectors. Large language models and multimodal foundation models are today capable of generating code, charts, visualizations, etc. How will these massive developments of AI in data analytics shape future data visualizations and visual analytics workflows? What is the potential of AI to reshape the methodology and design of future visual analytics applications? What will be our role as visualization researchers in the future? What are the opportunities, open challenges, and threats in the context of an increasingly powerful AI? This Visualization Viewpoints discusses these questions in the special context of biomedical data analytics, an example of a domain in which critical decisions are made based on complex and sensitive data, with high requirements on transparency, efficiency, and reliability. We map recent trends and developments in AI onto the elements of interactive visualization and visual analytics workflows and highlight the potential of AI to transform biomedical visualization as a research field. Given that agency and responsibility have to remain with human experts, we argue that it is helpful to keep the focus on human-centered workflows and to use visual analytics as a tool for integrating "AI-in-the-loop." This is in contrast to the more traditional term "human-in-the-loop," which focuses on incorporating human expertise into AI-based systems.
Title: AI-in-the-Loop: The Future of Biomedical Visual Analytics Applications in the Era of AI
IEEE Computer Graphics and Applications, vol. 45, no. 2, pp. 90-99
Pub Date: 2025-03-01 | DOI: 10.1109/MCG.2025.3570722
Sudhir K Routray
Generative artificial intelligence (AI) has immense potential to create diverse computer graphics for various applications, but it also raises significant ethical issues. This article examines the ethical landscape of using generative AI in computer graphics, highlighting key concerns, such as the authenticity of generated content, intellectual property rights, and cultural appropriation. Additional ethical challenges include algorithmic bias in graphics generation, representation, privacy, inclusivity, and the impact on human-computer interaction and artistic integrity. The displacement of creative professionals, erosion of trust in visual media, and psychological effects of AI-generated content further complicate the ethical debate. Addressing these issues requires a comprehensive approach that integrates technological innovation with regulatory oversight, ethical education, and collaboration among stakeholders. By carefully considering these ethical dimensions, we can fully leverage generative AI's potential in computer graphics while mitigating its risks.
Title: Ethical Considerations and Implications of Generative AI in Computer Graphics
IEEE Computer Graphics and Applications, vol. PP (early access), pp. 78-89
Pub Date: 2025-03-01 | DOI: 10.1109/MCG.2025.3528677
Harutaka Matsunaga, Kazunori Miyata, Yukari Nagai, Beatriz Sousa Santos, Alejandra J Magana, Rafael Bidarra
The Japanese entertainment computer graphics (CG) industry, including games, animation, and visual effects, is facing a gap between industry demands and the programs offered by educational institutions. As roles diversify, this gap is becoming more apparent owing to the lack of training in the specialized technical and communication skills that companies require. To address this gap between educational institutions and Japan's CG industry, we propose a new educational model developed through industry-academic collaboration. Centered on a "salad bowl" education framework, the model reflects corporate culture and project characteristics, and it incorporates a mentorship system that promotes the transfer of practical skills, meeting the needs of both students and companies. The initial implementation of the model revealed significant improvement in student skills and satisfaction, indicating the formation of the foundation necessary for workplace success. The study focuses on the training of CG creators in the industry and educational institutions, excluding computational knowledge and programming skills. This approach highlights the importance of long-term cooperation between education and industry in meeting diverse demands. Future research should explore the long-term impact and scalability of this model.
Title: Bridging the Gap: Long-Term Collaboration Between Computer Graphics Production and Educational Institutions in Japan
IEEE Computer Graphics and Applications, vol. 45, no. 2, pp. 152-160
Pub Date: 2025-01-01 | DOI: 10.1109/MCG.2024.3491532
Praneeth Chakravarthula, Sumanta N Pattanaik
Augmented reality (AR) is emerging as the next ubiquitous wearable technology and is expected to significantly transform various industries in the near future. There has been tremendous investment in developing AR eyeglasses in recent years, including roughly $45 billion invested by Meta since 2021. Despite such efforts, existing displays remain bulky, and there is not yet a socially acceptable eyeglasses-style AR display. Such wearable display eyeglasses promise to unlock enormous potential in diverse applications, such as medicine, education, and navigation; but until eyeglass-style AR glasses are realized, those possibilities remain only a dream. My research addresses this problem and makes progress "towards everyday-use augmented reality eyeglasses" through computational imaging, displays, and perception. My dissertation (Chakravarthula, 2021) made advances in three key and seemingly distinct areas: first, digital holography and advanced algorithms for compact, high-quality, true 3-D holographic displays; second, hardware and software for robust and comprehensive 3-D eye tracking via Purkinje images; and third, automatic focus-adjusting AR display eyeglasses for well-focused virtual and real imagery, toward potentially achieving 20/20 vision for users of all ages.
Title: Present and Future of Everyday-Use Augmented Reality Eyeglasses
IEEE Computer Graphics and Applications, vol. 45, no. 1, pp. 56-66
Pub Date: 2025-01-01 | DOI: 10.1109/MCG.2024.3509293
Tom Baumgartl, Mohammad Ghoniem, Tatiana von Landesberger, G Elisabeta Marai, Silvia Miksch, Sibylle Mohr, Simone Scheithauer, Nikita Srivastava, Melanie Tory, Daniel Keefe
Data visualization methodologies were intensively leveraged during the COVID-19 pandemic. We review our design experience working on a set of interdisciplinary COVID-19 pandemic projects. We describe the challenges we met in these projects, characterize the respective user communities, the goals and tasks we supported, and the data types and visual media we worked with. Furthermore, we instantiate these characterizations in a series of case studies. Finally, we describe the visual analysis lessons we learned, considering future pandemics.
Title: Empowering Communities: Tailored Pandemic Data Visualization for Varied Tasks and Users
IEEE Computer Graphics and Applications, vol. 45, no. 1, pp. 130-138
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12075951/pdf/
Pub Date: 2025-01-01 | DOI: 10.1109/MCG.2024.3475188
Elif E Firat, Chandana Srinivas, Colm Lang, Bhumika Srinivas, Robert S Laramee, Alark P Joshi, Beatriz Sousa Santos, Alejandra J Magana, Rafael Bidarra
Constructivist learning is based on the principle that learners construct knowledge based on their prior knowledge and experiences. We explored the impact of a constructivist approach to introducing students to the Treemaps visualization technique. We developed software that helps students understand Treemaps using a synchronized, multiview, interactive node-link representation of the same data. While students in both groups (those who used the node-link diagram alongside the Treemaps and those who used only the interactive Treemaps) demonstrated significant improvement in learning, students who interacted only with the Treemaps representation performed better on a variety of tasks related to reading and interpreting Treemaps.
Title: Evaluating the Impact of a Constructivist Approach to Treemap Literacy
IEEE Computer Graphics and Applications, vol. 45, no. 1, pp. 139-147
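As background for treemap literacy work like the study above, here is a minimal sketch of the classic slice-and-dice treemap layout, the simplest of the standard algorithms behind the technique (this is general background, not the specific software the authors built):

```python
def slice_and_dice(weights, x, y, w, h, horizontal=True):
    """Partition the rectangle (x, y, w, h) into one sub-rectangle per weight,
    sized proportionally to the weights along one axis (slice-and-dice layout).
    In a full treemap, the function recurses into children with the axis flipped."""
    total = sum(weights)
    rects = []
    offset = 0.0
    for wt in weights:
        frac = wt / total
        if horizontal:
            rects.append((x + offset, y, w * frac, h))  # slice left-to-right
            offset += w * frac
        else:
            rects.append((x, y + offset, w, h * frac))  # slice top-to-bottom
            offset += h * frac
    return rects
```

For example, weights [1, 1, 2] in a 100x100 canvas yield three vertical slices of widths 25, 25, and 50; alternating the `horizontal` flag per level of the hierarchy produces the familiar nested layout.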
Pub Date: 2025-01-01 | DOI: 10.1109/MCG.2024.3497672
Bernardo Marques, Diogo Moreira, Martim Neves, Susana Bras, Jose Maria Fernandes, Mike Potel
Advancements in virtual reality (VR) technology have enabled its use to assist in multiple fields. This study introduces a comprehensive framework designed to support exposure therapy through a series of VR serious games and physiological monitoring. It relies on a generic architecture, allowing for the modification of VR stimuli according to different phobias (e.g., arachnophobia, acrophobia), while the remaining modules can be reused for data collection and analysis. Furthermore, the framework incorporates customizable biofeedback mechanisms that trigger specific events or adjust stimulus levels based on physiological responses. Prior to integration into the overall architecture, the proposed VR serious games underwent assessment in various events with a total of 56 participants. In addition, the framework's ability to capture diverse biosignals and synchronize them with other relevant metrics was evaluated through two user studies involving a total of 23 participants.
Title: Battle Against Your Fears: Virtual Reality Serious Games and Physiological Analysis for Phobia Treatment
IEEE Computer Graphics and Applications, vol. 45, no. 1, pp. 67-75
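The biofeedback mechanism described above, adjusting stimulus levels based on physiological responses, can be illustrated with a minimal, hypothetical sketch; the class name, heart-rate thresholds, and update rule here are illustrative placeholders, not the framework's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class BiofeedbackController:
    """Hypothetical biofeedback rule for graded exposure therapy: lower the
    VR stimulus level when heart rate exceeds a ceiling, and raise it when
    the user stays calm below a floor. All values are illustrative."""
    level: int = 1            # current stimulus intensity (e.g., spider proximity)
    hr_ceiling: float = 110.0  # bpm above which the user is considered over-aroused
    hr_floor: float = 85.0     # bpm below which the user tolerates the current level

    def update(self, heart_rate_bpm: float) -> int:
        if heart_rate_bpm > self.hr_ceiling and self.level > 0:
            self.level -= 1   # back off: reduce exposure
        elif heart_rate_bpm < self.hr_floor:
            self.level += 1   # gradual exposure: increase intensity
        return self.level
```

A real system would smooth the biosignal and combine several channels (e.g., electrodermal activity) before triggering events, but the core idea of mapping physiological state to stimulus level is the same.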
Pub Date: 2025-01-01 | Epub Date: 2025-04-14 | DOI: 10.1109/MCG.2025.3548554
Muhammad Zeshan Afzal, Sk Aziz Ali, Didier Stricker, Peter Eisert, Anna Hilsmann, Daniel Perez-Marcos, Marco Bianchi, Sonia Crottaz-Herbette, Roberto De Ioris, Eleni Mangina, Mirco Sanguineti, Ander Salaberria, Oier Lopez de Lacalle, Aitor Garcia-Pablos, Montse Cuadros
Extended reality (XR) is evolving rapidly, offering new paradigms for human-computer interaction. This position paper argues that integrating large language models (LLMs) with XR systems represents a fundamental shift toward more intelligent, context-aware, and adaptive mixed-reality experiences. We propose a structured framework built on three key pillars: first, perception and situational awareness, second, knowledge modeling and reasoning, and third, visualization and interaction. We believe leveraging LLMs within XR environments enables enhanced situational awareness, real-time knowledge retrieval, and dynamic user interaction, surpassing traditional XR capabilities. We highlight the potential of this integration in neurorehabilitation, safety training, and architectural design while underscoring ethical considerations, such as privacy, transparency, and inclusivity. This vision aims to spark discussion and drive research toward more intelligent, human-centric XR systems.
Title: Next Generation XR Systems-Large Language Models Meet Augmented and Virtual Reality
IEEE Computer Graphics and Applications, vol. PP (early access), pp. 43-55
Pub Date: 2025-01-01 | Epub Date: 2025-04-14 | DOI: 10.1109/MCG.2024.3521716
Alberto Cannavò, Giacomo Offre, Fabrizio Lamberti
Technological advancements are prompting the digitization of many industries, including fashion. Many brands are exploring ways to enhance customers' experience, e.g., by offering new shopping-oriented services like virtual fitting rooms (VFRs). However, there are still challenges that prevent customers from effectively using these tools to try on digital garments. These challenges stem from the difficulty of obtaining high-fidelity reconstructions of body shapes and of providing realistic visualizations of animated clothes that follow customers' movements in real time. This article addresses these shortcomings by proposing a semiautomated pipeline supporting the creation of VFR experiences that exploits state-of-the-art techniques for the accurate description and reconstruction of customers' 3-D avatars, motion capture-based animation, and realistic garment design and simulation. A user study in which the resulting VFR experience was compared with those created with two existing tools showed the benefits of the devised solution in terms of usability, embodiment, model accuracy, perceived value, adoption, and purchase intention.
Title: A Semiautomated Pipeline for the Creation of Virtual Fitting Room Experiences Featuring Motion Capture and Cloth Simulation
IEEE Computer Graphics and Applications, vol. PP (early access), pp. 84-98
Pub Date: 2025-01-01 | Epub Date: 2025-04-14 | DOI: 10.1109/MCG.2024.3462926
Faizan Siddiqui, H Bart Brouwers, Geert-Jan Rutten, Thomas Höllt, Anna Vilanova
Fiber tracking is a powerful technique that provides insight into the brain's white matter structure. Despite its potential, inherent uncertainties limit its widespread clinical use. These uncertainties potentially hamper the clinical decisions neurosurgeons have to make before, during, and after surgery. Many techniques have been developed to visualize uncertainties; however, there is limited evidence on whether these uncertainty visualizations influence neurosurgical decision-making. In this article, we evaluate the hypothesis that uncertainty visualization in fiber tracking influences neurosurgeons' decisions and their confidence in those decisions. For this purpose, we designed a user study, conducted through an online interactive questionnaire, to evaluate the influence of uncertainty visualization on neurosurgical decision-making. The results of this study emphasize the importance of uncertainty visualization in clinical decision-making by highlighting the influence of different uncertainty visualization intervals on critical clinical decisions.
Title: Effect of White Matter Uncertainty Visualization in Neurosurgical Decision-Making
IEEE Computer Graphics and Applications, vol. PP (early access), pp. 106-121
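One common way to present uncertainty in intervals, as the study above varies, is to bin a per-tract confidence value into discrete color bands. The following sketch illustrates that general idea only; the interval boundaries and colors are hypothetical and are not the encoding used in the article:

```python
def uncertainty_color(p, bands=((0.25, "red"), (0.5, "orange"),
                                (0.75, "yellow"), (1.0, "green"))):
    """Map a tract-confidence value p in [0, 1] to a discrete color band.
    Coarser or finer `bands` tuples correspond to different interval
    granularities an uncertainty visualization might expose."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("confidence must lie in [0, 1]")
    for upper, color in bands:
        if p <= upper:
            return color
    return bands[-1][1]
```

Varying the number of bands changes how much of the underlying uncertainty a viewer can distinguish, which is exactly the kind of design parameter a study on decision-making would manipulate.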