
Latest publications in Computers & Graphics-Uk

PBF-FR: Partitioning beyond footprints for façade recognition in urban point clouds
IF 2.8 | CAS Q4, Computer Science | JCR Q2, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-15 | DOI: 10.1016/j.cag.2025.104399
Daniela Cabiddu , Chiara Romanengo , Michela Mortara
The identification and recognition of urban features are essential for creating accurate and comprehensive digital representations of cities. In particular, the automatic characterization of façade elements plays a key role in enabling semantic enrichment and 3D reconstruction. It also supports urban analysis and underpins various applications, including planning, simulation, and visualization. This work presents a pipeline for the automatic recognition of façades within complex urban scenes represented as point clouds. The method employs an enhanced partitioning strategy that extends beyond strict building footprints by incorporating surrounding buffer zones, allowing for a more complete capture of façade geometry, particularly in dense urban contexts. This is combined with a primitive recognition stage based on the Hough transform, enabling the detection of both planar and curved façade structures. The proposed partitioning overcomes the limitations of traditional footprint-based segmentation, which often disregards contextual geometry and leads to misclassifications at building boundaries. Integrated with the primitive recognition step, the resulting pipeline is robust to noise and incomplete data, and supports geometry-aware façade recognition, contributing to scalable analysis of large-scale urban environments.
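As an illustration of the buffered-footprint partitioning step described above, here is a minimal sketch assuming 2D building footprints and a fixed buffer distance; the function name, the 3 m default, and the shapely dependency are illustrative choices, not the authors' implementation.

```python
import numpy as np
from shapely.geometry import Point, Polygon
from shapely.prepared import prep

def partition_points(points_xy, footprints, buffer_m=3.0):
    """points_xy: (N, 2) XY coordinates of the point cloud (heights ignored here).
    footprints: list of (M_i, 2) arrays, one closed polygon ring per building.
    Returns one index array per footprint; buffered zones may overlap in dense blocks."""
    partitions = []
    for ring in footprints:
        zone = Polygon(ring).buffer(buffer_m)   # extend beyond the strict footprint
        fast = prep(zone)                       # prepared geometry: faster contains()
        idx = [i for i, (x, y) in enumerate(points_xy) if fast.contains(Point(x, y))]
        partitions.append(np.asarray(idx, dtype=int))
    return partitions
```

The per-point loop is kept for clarity; a production version would vectorize the containment test or use a spatial index.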
Citations: 0
Controllable text-to-3D multi-object generation via integrating layout and multiview patterns
IF 2.8 | CAS Q4, Computer Science | JCR Q2, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-15 | DOI: 10.1016/j.cag.2025.104353
Shaorong Sun , Shuchao Pang , Yazhou Yao , Xiaoshui Huang
The controllability of 3D object generation methods is achieved through textual input. Existing text-to-3D object generation methods focus primarily on generating a single object based on a single object description. However, these methods often face challenges in producing results that accurately correspond to the desired positions when the input text involves multiple objects. To address the issue of controllability in the generation of multiple objects, this paper introduces COMOGen, a COntrollable text-to-3D Multi-Object Generation framework. COMOGen enables the simultaneous generation of multiple 3D objects by distilling layout and multiview prior knowledge. The framework consists of three modules: the layout control module, the multiview consistency control module, and the 3D content enhancement module. Moreover, to integrate these three modules into an integral framework, we propose Layout Multiview Score Distillation, which unifies the two kinds of prior knowledge and further enhances the diversity and quality of generated 3D content. Comprehensive experiments demonstrate the effectiveness of our approach compared to state-of-the-art methods. This represents a significant step forward in enabling more controlled and versatile text-based 3D content generation.
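The unification of the two distilled priors can be pictured with the standard score-distillation surrogate-loss trick; the sketch below is a hedged toy version (the callables, weights, and function name are assumptions, not the COMOGen objective).

```python
import torch

def layout_multiview_sds_loss(render, layout_grad_fn, multiview_grad_fn,
                              w_layout=1.0, w_multiview=1.0):
    """Toy sketch: combine the gradient signals from a layout-conditioned prior
    and a multiview-consistency prior into one update direction, then wrap it in
    a surrogate loss so autograd propagates it to the 3D representation."""
    g = w_layout * layout_grad_fn(render) + w_multiview * multiview_grad_fn(render)
    # (g.detach() * render).sum() has gradient g w.r.t. render -- the usual SDS trick
    return (g.detach() * render).sum()
```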
Citations: 0
FocalFormer: Leveraging focal modulation for efficient action segmentation in egocentric videos
IF 2.8 | CAS Q4, Computer Science | JCR Q2, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-12 | DOI: 10.1016/j.cag.2025.104381
Jialu Xi, Shiguang Liu
With the development of various emerging devices (e.g., AR/VR) and video dissemination technologies, egocentric video tasks have received much attention, and understanding user actions in egocentric videos is especially important. Egocentric temporal action segmentation is complicated by unique challenges such as abrupt point-of-view shifts and a limited field of view. Existing work employs Transformer-based architectures to model long-range dependencies in sequential data. However, these models often struggle to effectively accommodate the nuances of egocentric action segmentation and incur significant computational costs. Therefore, we propose a new framework that integrates focal modulation into the Transformer architecture. Unlike the traditional self-attention mechanism, which attends uniformly to all features in the entire sequence, focal modulation replaces the self-attention layer with a more focused and efficient mechanism. This design allows for selective aggregation of local features and adaptive integration of global context through content-aware gating, which is critical for capturing detailed local motion (e.g., hand-object interactions) and handling dynamic context changes in first-person video. Our model also adds a context integration module, where focal modulation ensures that only relevant global context is integrated based on the content of the current frame, ultimately decoding aggregated features efficiently to produce accurate temporal action boundaries. By using focal modulation, our model achieves a lightweight design that reduces the number of parameters typically associated with Transformer-based models. We validate the effectiveness of our approach on classical datasets for temporal segmentation tasks (50Salads, Breakfast) as well as additional datasets with a first-person perspective (GTEA, HOI4D, and FineBio).
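For readers unfamiliar with the mechanism, the sketch below shows a 1D focal modulation block in PyTorch, loosely following Yang et al.'s FocalNets formulation; layer sizes, the number of focal levels, and all names are assumptions rather than the FocalFormer architecture.

```python
import torch
import torch.nn as nn

class FocalModulation1D(nn.Module):
    """Sketch of a 1D focal modulation block: local context is gathered with
    depthwise convolutions at several focal levels, gated per level, and the
    pooled context modulates a query -- replacing pairwise self-attention."""
    def __init__(self, dim, focal_levels=3, kernel=3):
        super().__init__()
        self.focal_levels = focal_levels
        self.proj_in = nn.Linear(dim, 2 * dim + focal_levels + 1)
        self.focal_convs = nn.ModuleList([
            nn.Conv1d(dim, dim, kernel_size=kernel, dilation=l + 1,
                      padding=(kernel // 2) * (l + 1), groups=dim)  # growing receptive field
            for l in range(focal_levels)
        ])
        self.act = nn.GELU()
        self.to_modulator = nn.Conv1d(dim, dim, kernel_size=1)
        self.proj_out = nn.Linear(dim, dim)

    def forward(self, x):                                    # x: (B, T, C) frame features
        c = x.shape[-1]
        q, ctx, gates = torch.split(self.proj_in(x), [c, c, self.focal_levels + 1], dim=-1)
        ctx = ctx.transpose(1, 2)                            # (B, C, T) for Conv1d
        ctx_all = torch.zeros_like(ctx)
        for l, conv in enumerate(self.focal_convs):
            ctx = self.act(conv(ctx))                        # level-l local context
            ctx_all = ctx_all + ctx * gates[..., l].unsqueeze(1)
        ctx_all = ctx_all + ctx.mean(-1, keepdim=True) * gates[..., -1].unsqueeze(1)
        out = q * self.to_modulator(ctx_all).transpose(1, 2) # modulate the query
        return self.proj_out(out)
```

For example, `FocalModulation1D(256)(torch.randn(2, 100, 256))` returns a `(2, 100, 256)` tensor, so the block can slot in wherever a self-attention layer would.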
Citations: 0
Design, development, and evaluation of an immersive augmented virtuality training system for transcatheter aortic valve replacement
IF 2.8 | CAS Q4, Computer Science | JCR Q2, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-12 | DOI: 10.1016/j.cag.2025.104414
Jorik Jakober , Matthias Kunz , Robert Kreher , Matteo Pantano , Daniel Braß , Janine Weidling , Christian Hansen , Rüdiger Braun-Dullaeus , Bernhard Preim
Strong procedural skills are essential to perform safe and effective transcatheter aortic valve replacement (TAVR). Traditional training takes place in the operating room (OR) on real patients and requires learning new motor skills, resulting in longer procedure times, increased risk of complications, and greater radiation exposure for patients and medical personnel. Desktop-based simulators in interventional cardiology have shown some validity but lack true depth perception, whereas head-mounted display based Virtual Reality (VR) offers intuitive 3D interaction that enhances training effectiveness and spatial understanding. However, providing realistic and immersive training remains a challenging task as both lack tactile feedback. We have developed an augmented virtuality (AV) training system for transfemoral TAVR, combining a catheter tracking device (for translational input) with a simulated virtual OR. The system enables users to manually control a virtual angiography system via hand tracking and navigate a guidewire through a virtual patient up to the aortic valve using fluoroscopic-like imaging. In addition, we conducted a preliminary user study with 12 participants, assessing cybersickness, usability, workload, sense of presence, and qualitative factors. Preliminary results indicate that the system provides realistic interaction for key procedural steps, making it a suitable learning tool for novices. Limitations in angiography system operation include the lack of haptic resistance and usability limitations related to C-arm control, particularly due to hand tracking constraints and split attention between interaction and monitoring. Suggestions for improvement include catheter rotation tracking, expanded procedural coverage, and enhanced fluoroscopic image fidelity.
Citations: 0
Navigating large-pose challenge for high-fidelity face reenactment with video diffusion model
IF 2.8 | CAS Q4, Computer Science | JCR Q2, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-09 | DOI: 10.1016/j.cag.2025.104423
Mingtao Guo , Guanyu Xing , Yanci Zhang , Yanli Liu
Face reenactment aims to generate realistic talking head videos by transferring motion from a driving video to a static source image while preserving the source identity. Although existing methods based on either implicit or explicit keypoints have shown promise, they struggle with large pose variations due to warping artifacts or the limitations of coarse facial landmarks. In this paper, we present the Face Reenactment Video Diffusion model (FRVD), a novel framework for high-fidelity face reenactment under large pose changes. Our method first employs a motion extractor to extract implicit facial keypoints from the source and driving images to represent fine-grained motion and to perform motion alignment through a warping module. To address the degradation introduced by warping, we introduce a Warping Feature Mapper (WFM) that maps the warped source image into the motion-aware latent space of a pretrained image-to-video (I2V) model. This latent space encodes rich priors of facial dynamics learned from large-scale video data, enabling effective warping correction and enhancing temporal coherence. Extensive experiments show that FRVD achieves superior performance over existing methods in terms of pose accuracy, identity preservation, and visual quality, especially in challenging scenarios with extreme pose variations.
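The warping-then-correction idea hinges on a differentiable warp of source features by a dense motion field; below is a generic grid-sample sketch (names and conventions are assumed, not the FRVD warping module).

```python
import torch
import torch.nn.functional as F

def warp_by_flow(source_feat, flow):
    """Generic sketch: warp source features by a per-pixel offset field.
    source_feat: (B, C, H, W) features of the source image.
    flow: (B, 2, H, W) offsets in pixels, e.g. derived from implicit keypoints.
    Returns warped features of the same shape."""
    b, _, h, w = source_feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=flow.device),
                            torch.arange(w, device=flow.device), indexing="ij")
    # target sampling positions, normalized to [-1, 1] as grid_sample expects
    x = 2.0 * (xs.unsqueeze(0) + flow[:, 0]) / max(w - 1, 1) - 1.0
    y = 2.0 * (ys.unsqueeze(0) + flow[:, 1]) / max(h - 1, 1) - 1.0
    grid = torch.stack((x, y), dim=-1)                      # (B, H, W, 2)
    return F.grid_sample(source_feat, grid, mode="bilinear", align_corners=True)
```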
Citations: 0
Geometry-aware estimation of photovoltaic energy from aerial LiDAR point clouds
IF 2.8 | CAS Q4, Computer Science | JCR Q2, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-09 | DOI: 10.1016/j.cag.2025.104424
Chiara Romanengo , Tommaso Sorgente , Daniela Cabiddu , Matteo Ghellere , Lorenzo Belussi , Ludovico Danza , Michela Mortara
Aerial LiDAR (and photogrammetric) surveys are becoming a common practice in land and urban management, and aerial point clouds (or the reconstructed surfaces) are increasingly used as digital representations of natural and built structures for the monitoring and simulation of urban processes or the generation of what-if scenarios. The geometric analysis of a “digital twin” of the built environment can help provide quantitative evidence to support urban policies such as planning of interventions and incentives for the transition to renewable energy. In this work, we present a geometry-based approach to efficiently and accurately estimate the photovoltaic (PV) energy produced by urban roofs. The method combines a primitive fitting technique for detecting and characterizing building roof components from aerial LiDAR data with an optimization strategy to determine the maximum number and optimal placement of PV modules on each roof surface. The energy production of the PV system on each building over a specified time period (e.g., one year) is estimated from the solar radiation received by each PV module, the shadows cast by neighboring buildings or trees, and system efficiency requirements. The strength of the proposed approach is its ability to combine computational techniques, domain expertise, and heterogeneous data into a logical and automated workflow, whose effectiveness is evaluated and tested on large-scale, real-world urban areas with complex morphology in Italy.
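The final energy estimate reduces to a simple product once modules are placed and shading is known; a back-of-the-envelope sketch is given below (all numbers and names are illustrative assumptions, not the paper's parameters).

```python
def pv_energy_kwh(n_modules, module_area_m2=1.7, annual_irradiation_kwh_m2=1300.0,
                  efficiency=0.20, performance_ratio=0.8, shading_factor=0.9):
    """Toy yearly-output estimate for a placed PV array: module area times received
    irradiation times panel efficiency, discounted by system losses and shading."""
    return (n_modules * module_area_m2 * annual_irradiation_kwh_m2
            * efficiency * performance_ratio * shading_factor)

# e.g. 12 modules on a well-exposed pitch:
# 12 * 1.7 * 1300 * 0.20 * 0.8 * 0.9 ≈ 3819 kWh/year
print(round(pv_energy_kwh(12)))
```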
Citations: 0
Prompt2Color: A prompt-based framework for image-derived color generation and visualization optimization
IF 2.8 | CAS Q4, Computer Science | JCR Q2, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-08 | DOI: 10.1016/j.cag.2025.104419
Jiayun Hu , Shiqi Jiang , Haiwen Huang , Shuqi Liu , Yun Wang , Changbo Wang , Chenhui Li
Color is powerful in communicating information in visualizations. However, crafting palettes that improve readability and capture readers’ attention often demands substantial effort, even for seasoned designers. Existing text-based palette generation results in limited and predictable combinations, and finding suitable reference images to extract colors without a clear idea is both tedious and frustrating. In this work, we present Prompt2Color, a novel framework for generating color palettes using prompts. To simplify the process of finding relevant images, we first adopt a concretization approach to visualize the prompts. Furthermore, we introduce an attention-based method for color extraction, which allows for the mining of the visual representations of the prompts. Finally, we utilize a knowledge base to refine the palette and generate the background color to meet aesthetic and design requirements. Evaluations, including quantitative metrics and user experiments, demonstrate the effectiveness of our method.
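As a simplified stand-in for the extraction stage, the sketch below clusters the pixels of a generated reference image into a small palette; the paper's method is attention-based, so this k-means version, the file name, and the palette size are illustrative assumptions only.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def extract_palette(image_path, n_colors=5):
    """Cluster the pixels of a reference image and return hex codes of the
    cluster centers, ordered from most to least frequent."""
    img = Image.open(image_path).convert("RGB").resize((128, 128))
    pixels = np.asarray(img, dtype=np.float64).reshape(-1, 3)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    order = np.argsort(-np.bincount(km.labels_, minlength=n_colors))
    return ["#%02x%02x%02x" % tuple(c) for c in km.cluster_centers_[order].astype(int)]

print(extract_palette("prompt_visualization.png"))  # hypothetical generated image
```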
Citations: 0
ProbTalk3D-X: Prosody enhanced non-deterministic emotion controllable speech-driven 3D facial animation synthesis
IF 2.8 | CAS Q4, Computer Science | JCR Q2, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-08 | DOI: 10.1016/j.cag.2025.104358
Kazi Injamamul Haque, Sichun Wu, Zerrin Yumak
Audio-driven 3D facial animation synthesis has been an active field of research with attention from both academia and industry. While there are promising results in this area, recent approaches largely focus on lip-sync and identity control, neglecting the role of emotions and emotion control in the generative process. That is mainly due to the lack of emotionally rich facial animation data and of algorithms that can synthesize speech animations with emotional expressions at the same time. In addition, the majority of the models are deterministic, meaning that given the same audio input, they produce the same output motion. We argue that emotions and non-determinism are crucial to generate diverse and emotionally rich facial animations. In this paper, we present ProbTalk3D-X, which extends a prior work, ProbTalk3D, a two-stage VQ-VAE-based non-deterministic model, by additionally incorporating prosody features for improved facial accuracy, using an emotionally rich facial animation dataset, 3DMEAD. Further, we present a comprehensive comparison of non-deterministic emotion-controllable models (including new extended experimental models) leveraging VQ-VAE, VAE, and diffusion techniques. We provide an extensive comparative analysis of the experimental models against recent 3D facial animation synthesis approaches, evaluating the results objectively, qualitatively, and with a perceptual user study. We highlight several objective metrics that are more suitable for evaluating stochastic outputs and use both in-the-wild and ground-truth data for subjective evaluation. Our evaluation demonstrates that ProbTalk3D-X and the original ProbTalk3D achieve superior performance compared to state-of-the-art emotion-controlled, deterministic, and non-deterministic models. We recommend watching the supplementary video for visual quality judgment. The entire codebase, including the extended models, is publicly available.
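One family of objective metrics suited to stochastic outputs measures how much sampled animations differ when the audio input is fixed; the sketch below shows a generic pairwise-diversity score (a common definition, not necessarily the one used in the paper).

```python
import numpy as np

def sample_diversity(samples):
    """Average pairwise difference between motions generated from the same audio.
    samples: (S, T, V, 3) array of S sampled animations, T frames, V vertices."""
    flat = samples.reshape(samples.shape[0], -1)
    dists = [np.linalg.norm(flat[i] - flat[j])
             for i in range(len(flat)) for j in range(i + 1, len(flat))]
    return float(np.mean(dists))
```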
Citations: 0
Appearance as reliable evidence: Reconciling appearance and generative priors for monocular motion estimation
IF 2.8 | CAS Q4, Computer Science | JCR Q2, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-08 | DOI: 10.1016/j.cag.2025.104404
Zipei Chen , Yumeng Li , Zhong Ren, Yao-Xiang Ding, Kun Zhou
Monocular motion estimation in real scenes is challenging in the presence of noisy and possibly occluded detections. A recent method introduces a diffusion-based generative motion prior, which treats input detections as noisy partial evidence and generates motion through denoising. This improves robustness and motion quality, yet it does not guarantee that the denoised motion stays close to the visual observation, which often causes misalignment. In this work, we propose to reconcile model appearance and the motion prior, enabling appearance to play the crucial role of providing reliable, noise-free visual evidence for accurate visual alignment. Appearance is modeled by the radiance of both the scene and the human for joint differentiable rendering. To achieve this from monocular RGB input without masks or depth, we propose a semantic-perturbed mode estimation method to faithfully estimate static scene radiance from dynamic input with complex occlusion relationships, and a polyline depth calibration method that leverages knowledge from a depth estimation model to recover the missing depth information. Meanwhile, to leverage knowledge from the motion prior and reconcile it with the appearance guidance during optimization, we also propose an occlusion-aware gradient merging strategy. Experimental results demonstrate that our method achieves better-aligned tracking results while maintaining competitive motion quality. Our code is released at https://github.com/Zipei-Chen/Appearance-as-Reliable-Evidence-implementation.
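The occlusion-aware merging can be pictured as a per-parameter blend between the two gradient sources; the sketch below is a hedged guess at such a rule (the visibility threshold and names are assumptions, not the authors' strategy).

```python
import torch

def merge_gradients(grad_appearance, grad_prior, visibility, threshold=0.5):
    """Where the rendered model is visible, trust the appearance gradient;
    where it is occluded, fall back to the gradient from the generative motion prior.
    visibility: per-parameter confidence in [0, 1], same shape as the gradients."""
    w = (visibility > threshold).float()
    return w * grad_appearance + (1.0 - w) * grad_prior
```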
Citations: 0
3D reconstruction and precision evaluation of industrial components via Gaussian Splatting
IF 2.8 | CAS Q4, Computer Science | JCR Q2, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-08 | DOI: 10.1016/j.cag.2025.104422
Guodong Sun , Dingjie Liu , Zeyu Yang , Shaoran An , Yang Zhang
Traditional 3D reconstruction methods for industrial components present significant limitations. Structured light and laser scanning require costly equipment, complex procedures, and remain sensitive to scan completeness and occlusions. These constraints restrict their application in settings with budget and expertise limitations. Deep learning approaches reduce hardware requirements but fail to accurately reconstruct complex industrial surfaces with real-world data. Industrial components feature intricate geometries and surface irregularities that challenge current deep learning techniques. These methods also demand substantial computational resources, limiting industrial implementation. This paper presents a 3D reconstruction and measurement system based on Gaussian Splatting. The method incorporates adaptive modifications to address the unique surface characteristics of industrial components, ensuring both accuracy and efficiency. To resolve scale and pose discrepancies between the reconstructed Gaussian model and ground truth, a robust scaling and registration pipeline has been developed. This pipeline enables precise evaluation of reconstruction quality and measurement accuracy. Comprehensive experimental evaluations demonstrate that our approach achieves high-precision reconstruction, with an average Chamfer Distance of 2.24 and a mean F1 Score of 0.19, surpassing existing methods. Additionally, the average scale error is 2.41%. The proposed system enables reliable dimensional measurements using only consumer-grade cameras, significantly reducing equipment costs and simplifying operation, thereby improving the accessibility of 3D reconstruction in industrial applications. A publicly available industrial component dataset has been constructed to serve as a benchmark for future research. The dataset and code are available at https://github.com/ldj0o/IndustrialComponentGS.
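The reported Chamfer Distance and F1 Score follow standard point-cloud definitions; a minimal evaluation sketch is shown below (the distance threshold and units are assumptions, not the authors' evaluation script).

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_f1(pred, gt, tau=1.0):
    """Standard point-cloud metrics: symmetric Chamfer Distance and F1 at threshold tau.
    pred, gt: (N, 3) and (M, 3) arrays of reconstructed and ground-truth points."""
    d_pg = cKDTree(gt).query(pred)[0]          # pred -> nearest gt
    d_gp = cKDTree(pred).query(gt)[0]          # gt -> nearest pred
    chamfer = d_pg.mean() + d_gp.mean()
    precision = (d_pg < tau).mean()
    recall = (d_gp < tau).mean()
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return chamfer, f1
```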
Citations: 0