
Latest Publications in IEEE Computer Graphics and Applications

Low-Barrier Dataset Collection With Real Human Body for Interactive Per-Garment Virtual Try-On.
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-31 | DOI: 10.1109/MCG.2025.3649499
Zaiqiang Wu, Yechen Li, Jingyuan Liu, Yuki Shibata, Takayuki Hori, I-Chao Shen, Takeo Igarashi

Existing image-based virtual try-on methods are limited to frontal views and lack real-time performance. While per-garment virtual try-on methods have tackled these issues by adopting per-garment training, they still encounter practical limitations: (1) the robotic mannequin used for per-garment dataset collection is prohibitively expensive; (2) the synthesized garments often misalign with the human body. To address these challenges, we propose a low-barrier approach to collecting per-garment datasets with real human bodies, eliminating the need for an expensive robotic mannequin and reducing data collection time from two hours to two minutes. Additionally, we introduce a hybrid person representation that ensures precise human-garment alignment. We conducted qualitative and quantitative comparisons with state-of-the-art image-based virtual try-on methods to demonstrate the superiority of our method in image quality and temporal consistency. Furthermore, most participants in our user study found the system effective in supporting garment purchasing decisions.

Citations: 0
Experiencing Data Visualization with Language Disability.
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-10 | DOI: 10.1109/MCG.2025.3642747
Jo Wood, Niamh Devane, Abi Roper, Nicola Botting, Madeline Cruice, Ulfa Octaviani, Stephanie Wilson

Current data visualization research demonstrates very limited inclusion of users with language disabilities. To address this, this paper introduces two language disabilities: Developmental Language Disorder (DLD) and aphasia. We present outcomes from a novel qualitative diary study exploring whether people living with either DLD or aphasia experience and engage with data visualization in their day-to-day lives. Outcomes reveal evidence of both exposure to and engagement with data visualization over a week-long period, alongside experiences of both inclusion in and exclusion from the benefits of data visualization. We report the types of data visualization tasks and application domains encountered and describe the issues participants experienced. Findings highlight a critical need for increased awareness of language access needs within the discipline of data visualization and make a case for further research into design practices inclusive of people with language disabilities.

Citations: 0
HyShare: Hybrid Sample and Shading Reuse for Real-time Photorealistic Rendering.
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-28 | DOI: 10.1109/MCG.2025.3638242
Yubin Zhou, Xiyun Song, Zhiqiang Lao, Yu Guo, Zongfang Lin, Heather Yu, Liang Peng

Real-time path tracing is computationally expensive due to intensive path sampling and shading, especially under high-frame-rate and high-resolution demands. We present HyShare, a hybrid reuse algorithm that integrates ReSTIR-style path sample reuse with adaptive shading reuse across spatial and temporal domains. Unlike prior methods that treat each kind of reuse in isolation, HyShare jointly optimizes both, addressing their interdependencies while maintaining image fidelity. To prevent artifacts caused by stale data and correlation, we introduce per-pixel validation and dynamic refresh mechanisms. Our system adaptively disables reuse in motion-sensitive regions using radiance and geometric change checks. Evaluated on complex dynamic scenes, HyShare outperforms state-of-the-art baselines (including ReSTIR DI, ReSTIR PT, and Area ReSTIR), improving rendering speed by 37.4% and boosting image quality (PSNR +1.8 dB, SSIM +0.17). These results demonstrate the effectiveness and generalizability of HyShare in advancing real-time photorealistic rendering.
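The per-pixel validation described above can be pictured as a screen-space mask over cached shading. Below is a minimal sketch, not the paper's implementation: it assumes luminance, normal, and depth buffers for consecutive frames, and the threshold names and values are illustrative assumptions.

```python
import numpy as np

# Illustrative thresholds; the paper's actual criteria are not specified here.
RADIANCE_TOL = 0.05  # max tolerated relative luminance change
NORMAL_TOL = 0.95    # min cosine similarity between surface normals
DEPTH_TOL = 0.01     # max tolerated relative depth change

def reuse_mask(prev_lum, cur_lum, prev_normal, cur_normal, prev_depth, cur_depth):
    """Return a boolean (H, W) mask: True where cached shading may be reused.

    prev_lum/cur_lum: (H, W) luminance; prev_normal/cur_normal: (H, W, 3)
    unit normals; prev_depth/cur_depth: (H, W) linear depth.
    """
    rad_ok = np.abs(cur_lum - prev_lum) <= RADIANCE_TOL * np.maximum(prev_lum, 1e-6)
    cos_n = np.sum(prev_normal * cur_normal, axis=-1)
    depth_ok = np.abs(cur_depth - prev_depth) <= DEPTH_TOL * np.maximum(prev_depth, 1e-6)
    return rad_ok & (cos_n >= NORMAL_TOL) & depth_ok
```

Pixels failing the check would fall back to fresh sampling and shading, which is how reuse gets disabled in motion-sensitive regions.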

Citations: 0
Deep Learning-based Eddy Segmentation with Vector-Data for Biochemical Analysis in Ocean Simulations.
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-07 | DOI: 10.1109/MCG.2025.3630582
Weiping Hua, Sedat Ozer, Karen Bemis, Zihan Liu, Deborah Silver

Eddies are dynamic, swirling structures in ocean circulation that significantly influence the distribution of heat, nutrients, and plankton, thereby impacting marine biological processes. Accurate eddy segmentation from ocean simulation data is essential for enabling subsequent biological and physical analysis. However, leveraging vector-valued inputs, such as ocean velocity fields, in deep learning-based segmentation models poses unique challenges due to the complexity of representing the vector input in multiple combinations for training. In this paper, we discuss these challenges and provide our solutions. In particular, we present a detailed study of multiple input encoding strategies, including raw velocity components, vector magnitude, and angular direction, and their impact on eddy segmentation performance. We introduce a two-branch attention U-Net architecture that separately encodes vector magnitude and direction. We evaluate seven different network configurations across four large-scale 3D ocean simulation data sets, employing four different segmentation metrics. Our results demonstrate that the proposed two-branch architecture consistently outperforms single-branch variants.
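The three input encodings compared in the abstract can be derived from a 2D velocity field in a few lines. This is a sketch under assumed array shapes, not the authors' code; encoding direction as (sin, cos) is one common way to avoid the discontinuity at +/- pi.

```python
import numpy as np

def encode_velocity(u, v):
    """u, v: (H, W) velocity components -> dict of candidate network inputs."""
    magnitude = np.sqrt(u**2 + v**2)
    angle = np.arctan2(v, u)  # radians in (-pi, pi]
    return {
        "raw":       np.stack([u, v], axis=0),                         # 2 channels
        "magnitude": magnitude[np.newaxis, ...],                       # 1 channel
        "direction": np.stack([np.sin(angle), np.cos(angle)], axis=0)  # 2 channels
    }
```

In a two-branch design like the one described, the magnitude and direction encodings would feed separate encoder branches whose features are fused before segmentation.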

Citations: 0
Enhancing Visual Analysis in Person Reidentification With Vision-Language Models.
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-01 | DOI: 10.1109/MCG.2025.3593227
Wang Xia, Tianci Wang, Jiawei Li, Guodao Sun, Haidong Gao, Xu Tan, Ronghua Liang

Image-based person reidentification aims to match individuals across multiple cameras. Despite advances in machine learning, the effectiveness of existing methods in real-world scenarios remains limited, often leaving users to handle fine-grained matching manually. Recent work has explored textual information as auxiliary cues, but existing methods generate coarse descriptions and fail to integrate them effectively into retrieval workflows. To address these issues, we adopt a vision-language model fine-tuned with domain-specific knowledge to generate detailed textual descriptions and keywords for pedestrian images. We then create a joint search space combining visual and textual information, using image clustering and keyword co-occurrence to build a semantic layout. In addition, we introduce a dynamic spiral word cloud algorithm to improve visual presentation and enhance semantic associations. Finally, we conduct case studies, a user study, and expert feedback sessions, demonstrating the usability and effectiveness of our system.
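Keyword co-occurrence, which the abstract uses to build the semantic layout, reduces to counting keyword pairs across images. A minimal sketch with invented example data (the actual keyword vocabulary comes from the fine-tuned vision-language model):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """keyword_lists: one keyword list per pedestrian image -> pair counts."""
    counts = Counter()
    for keywords in keyword_lists:
        # count each unordered pair once per image
        for a, b in combinations(sorted(set(keywords)), 2):
            counts[(a, b)] += 1
    return counts

pairs = cooccurrence([
    ["red jacket", "backpack", "jeans"],
    ["red jacket", "jeans", "sneakers"],
])
print(pairs[("jeans", "red jacket")])  # -> 2
```

Pair counts like these can then weight the proximity of keywords in the joint visual-textual layout.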

Citations: 0
Do Language Model Agents Align With Humans in Rating Visualizations? An Empirical Study.
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-01 | DOI: 10.1109/MCG.2025.3586461
Zekai Shao, Yi Shan, Yixuan He, Yuxuan Yao, Junhong Wang, Xiaolong Zhang, Yu Zhang, Siming Chen

Large language models (LLMs) show potential in understanding visualizations and may capture design knowledge. However, their ability to predict human feedback remains unclear. To explore this, we conduct three studies evaluating the alignment between LLM-based agents and human ratings in visualization tasks. The first study replicates a human-subject study, showing promising agent performance in human-like reasoning and rating and informing further experiments. The second study simulates six prior studies using agents and finds that alignment correlates with experts' pre-experiment confidence. The third study tests enhancement techniques, such as input preprocessing and knowledge injection, revealing limitations in robustness and potential bias. These findings suggest that LLM-based agents can simulate human ratings when guided by high-confidence hypotheses from expert evaluators. We also demonstrate a usage scenario in rapid prototyping of study designs and discuss future directions. We note that simulation can only serve as a complement to, not a replacement for, user studies.
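Alignment between agent and human ratings is naturally measured as a rank correlation over the same stimuli. A minimal sketch with invented ratings; the paper's exact alignment metric is not specified here:

```python
from scipy.stats import spearmanr

human = [4, 2, 5, 3, 1, 4, 2]  # human ratings, one per visualization
agent = [5, 2, 4, 3, 2, 4, 1]  # LLM-agent ratings of the same stimuli

rho, p = spearmanr(human, agent)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # high rho = close alignment
```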

Citations: 0
Enhancing Pediatric Liver Transplant Therapy With Virtual Reality.
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-01 | DOI: 10.1109/MCG.2025.3613129
Laura Raya, Alberto Sanchez, Carmen Martin, Jose Jesus Garcia Rueda, Erika Guijarro, Mike Potel

Surgery and hospital stays for pediatric transplantation involve frequent interventions requiring complete sedation, along with demands of care and self-care, assimilation of the disease, and anxiety for the patient. This article presents the development of a comprehensive tool called virtual transplant reality (VTR), currently used in a hospital with actual patients. Our tool is intended to aid the psychological support of children who have undergone a liver transplant. VTR consists of two applications: a virtual reality application with a head-mounted display worn by the patient, and a desktop application for the therapist. After tests carried out at the Hospital Universitario La Paz (Madrid, Spain) over a period of one year with 65 patients, the results indicate that our system offers a series of advantages as a complement to the psychological therapy of pediatric transplant patients.

Citations: 0
How to Reject a VIS Paper, or Not?
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-01 | DOI: 10.1109/MCG.2025.3594817
Min Chen, David Ebert, Theresa-Marie Rhyne

While it is necessary for most (if not all) visualization and visual analytics (VIS) publication venues to use peer review processes to assure the quality of the papers to be published, it is also necessary for the VIS community to appraise and improve the quality of peer review processes from time to time. In recent years, rejecting a VIS paper seems to have become rather easy, as many rejection reasons are available to criticize a given paper. In this article, we analyze possible causes of this phenomenon and recommend possible remedies. In particular, over the past decades, the visualization field has rapidly grown to include many types of contributions and specialized research areas. Given this large landscape of topics, we need to ensure that good contributions within each area are reviewed properly, published, and built upon to make significant advancement in the area concerned. Therefore, it is crucial that our review process applies specific criteria for each area and does not expect individual publications to satisfy many review criteria designed for other areas. In this way, we hope VIS review processes will enable more VIS research with X factors (original, innovative, significant, impactful, rigorous, insightful, or inspirational) to be published promptly, allowing VIS researchers and practitioners to make even more impactful contributions to data sciences.

Citations: 0
FashionCook: A Visual Analytics System for Human-AI Collaboration in Fashion E-Commerce Design.
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-01 | DOI: 10.1109/MCG.2025.3597849
Yuheng Shao, Shiyi Liu, Gongyan Chen, Ruofei Ma, Xingbo Wang, Quan Li

Fashion e-commerce design requires the integration of creativity, functionality, and responsiveness to user preferences. While AI offers valuable support, generative models often miss the nuances of user experience, and task-specific models, although more accurate, lack transparency and real-world adaptability, especially with complex multimodal data. These issues reduce designers' trust and hinder effective AI integration. To address this, we present FashionCook, a visual analytics system designed to support human-AI collaboration in the context of fashion e-commerce. The system bridges communication among model builders, designers, and marketers by providing transparent model interpretations, "what-if" scenario exploration, and iterative feedback mechanisms. We validate the system through two real-world case studies and a user study, demonstrating how FashionCook enhances collaborative workflows and improves design outcomes in data-driven fashion e-commerce environments.

Citations: 0
How Visually Literate Are Large Language Models? Reflections on Recent Advances and Future Directions.
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-01 | DOI: 10.1109/MCG.2025.3605029
Alexander Bendeck, John Stasko, Rahul C Basole, Francesco Ferrise

Large language models (LLMs) are now being applied to the tasks of visualization generation and understanding, demonstrating these models' ability to be "visually literate." On the generation side, LLMs have shown promise in powering natural language interfaces for visualization authoring, while also suffering from usability and inconsistency issues. On the interpretation side, models (especially vision-language models) can answer basic questions about visualizations, synthesize visual and textual information, and detect misleading visual designs. However, models also tend to struggle with certain analytic tasks, and their takeaways from reading visualizations often differ from those of humans. We aim both to illuminate the state of the art in LLMs' visualization literacy and to speculate on where such work may, and perhaps ought to, take us next.
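Chart-comprehension probes of the kind surveyed here typically send a rendered chart plus a question to a vision-language model and compare the answer with human readings. A hedged sketch using the OpenAI Python client as one possible backend; the model name, chart file, and question are assumptions for illustration:

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical chart image; any rendered visualization would do.
with open("bar_chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Which category has the highest value, and by roughly how much?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Comparing such answers against human takeaways on the same charts is what exposes the divergences the abstract describes.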

Citations: 0