Dyn-E: Local appearance editing of dynamic neural radiance fields
Pub Date: 2025-02-01 | DOI: 10.1016/j.cag.2024.104140
Yinji ShenTu, Shangzhan Zhang, Mingyue Xu, Qing Shuai, Tianrun Chen, Sida Peng, Xiaowei Zhou
Recently, the editing of neural radiance fields (NeRFs) has gained considerable attention, but most prior work focuses on static scenes; research on appearance editing of dynamic scenes remains relatively scarce. In this paper, we propose a novel framework for editing the local appearance of dynamic NeRFs by manipulating pixels in a single frame of the training video. Specifically, to edit the appearance of dynamic NeRFs locally while preserving unedited regions, we introduce a local surface representation of the edited region, which can be inserted into and rendered along with the original NeRF and warped to arbitrary other frames through a learned invertible motion representation network. With our method, users without professional expertise can easily add desired content to the appearance of a dynamic scene. We extensively evaluate our approach on various scenes and show that it achieves spatially and temporally consistent editing results. Notably, our approach is versatile and applicable to different variants of dynamic NeRF representations.
Enhanced multi-scale feature adaptive fusion sparse convolutional network for large-scale scenes semantic segmentation
Pub Date: 2025-02-01 | DOI: 10.1016/j.cag.2024.104105
Lingfeng Shen, Yanlong Cao, Wenbin Zhu, Kai Ren, Yejun Shou, Haocheng Wang, Zhijie Xu
Semantic segmentation has made notable strides in analyzing homogeneous large-scale 3D scenes, yet its application to varied scenes with diverse characteristics poses considerable challenges. Traditional methods have been hampered by their dependence on resource-intensive neighborhood search algorithms, leading to elevated computational demands. To overcome these limitations, we introduce MFAF-SCNet, a novel and computationally streamlined approach for voxel-based sparse convolutional semantic segmentation. Our key innovation is the multi-scale feature adaptive fusion (MFAF) module, which applies a spectrum of convolution kernel sizes at the network's entry point to extract multi-scale features and adaptively calibrates the feature weighting to achieve an optimal scale representation for different objects. Further augmenting our methodology is LKSNet, an original sparse convolutional backbone designed to tackle the inherent inconsistencies in point cloud distribution by integrating inverted bottleneck structures with large-kernel convolutions, significantly bolstering the network's feature extraction and spatial-correlation capabilities. The efficacy of MFAF-SCNet was rigorously tested on three large-scale benchmark datasets: ScanNet and S3DIS for indoor scenes and SemanticKITTI for outdoor scenes. The experimental results underscore our method's competitive edge, achieving high-performance benchmarks while ensuring computational efficiency.
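A hedged sketch of the multi-scale idea described above: several kernel sizes applied in parallel, with a learned gate producing per-scale fusion weights. Dense 3D convolutions stand in for the sparse convolutions used in the paper, and the kernel sizes, channel counts, and gating design are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleAdaptiveFusion(nn.Module):
    def __init__(self, in_ch=16, out_ch=32, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One convolution branch per kernel size; padding keeps spatial dims.
        self.branches = nn.ModuleList(
            [nn.Conv3d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes]
        )
        # One fusion weight per branch, predicted from a global feature descriptor.
        self.gate = nn.Sequential(nn.Linear(in_ch, len(kernel_sizes)), nn.Softmax(dim=-1))

    def forward(self, x):                        # x: (B, C, D, H, W) voxel features
        feats = [branch(x) for branch in self.branches]
        weights = self.gate(x.mean(dim=(2, 3, 4)))           # (B, num_scales)
        return sum(weights[:, i, None, None, None, None] * f for i, f in enumerate(feats))

voxels = torch.randn(2, 16, 32, 32, 32)
print(MultiScaleAdaptiveFusion()(voxels).shape)  # torch.Size([2, 32, 32, 32, 32])
```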
{"title":"Enhanced multi-scale feature adaptive fusion sparse convolutional network for large-scale scenes semantic segmentation","authors":"Lingfeng Shen , Yanlong Cao , Wenbin Zhu , Kai Ren , Yejun Shou , Haocheng Wang , Zhijie Xu","doi":"10.1016/j.cag.2024.104105","DOIUrl":"10.1016/j.cag.2024.104105","url":null,"abstract":"<div><div>Semantic segmentation has made notable strides in analyzing homogeneous large-scale 3D scenes, yet its application to varied scenes with diverse characteristics poses considerable challenges. Traditional methods have been hampered by the dependence on resource-intensive neighborhood search algorithms, leading to elevated computational demands. To overcome these limitations, we introduce the MFAF-SCNet, a novel and computationally streamlined approach for voxel-based sparse convolutional. Our key innovation is the multi-scale feature adaptive fusion (MFAF) module, which intelligently applies a spectrum of convolution kernel sizes at the network’s entry point, enabling the extraction of multi-scale features. It adaptively calibrates the feature weighting to achieve optimal scale representation for different objects. Further augmenting our methodology is the LKSNet, an original sparse convolutional backbone designed to tackle the inherent inconsistencies in point cloud distribution. This is achieved by integrating inverted bottleneck structures with large kernel convolutions, significantly bolstering the network’s feature extraction and spatial correlation proficiency. The efficacy of MFAF-SCNet was rigorously tested against three large-scale benchmark datasets—ScanNet and S3DIS for indoor scenes, and SemanticKITTI for outdoor scenes. The experimental results underscore our method’s competitive edge, achieving high-performance benchmarks while ensuring computational efficiency.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104105"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An introduction to and survey of biological network visualization
Pub Date: 2025-02-01 | DOI: 10.1016/j.cag.2024.104115
Henry Ehlers, Nicolas Brich, Michael Krone, Martin Nöllenburg, Jiacheng Yu, Hiroaki Natsukawa, Xiaoru Yuan, Hsiang-Yun Wu
Biological networks describe complex relationships in biological systems, representing biological entities as vertices and their underlying connectivity as edges. Ideally, for a complete analysis of such systems, domain experts need to visually integrate multiple sources of heterogeneous data and probe them both visually and numerically in order to explore or validate (mechanistic) hypotheses. Such visual analyses require biological domain experts, bioinformaticians, and network scientists to come together to create useful visualization tools. As the underlying graph data become ever larger and more complex, the visual representation of such biological networks has become challenging in its own right. This introduction and survey describes the current state of biological network visualization in order to identify scientific gaps for visualization experts, network scientists, bioinformaticians, and domain experts such as biologists or biochemists alike. Specifically, we revisit the classic visualization pipeline, on which we base this paper's taxonomy and structure and which in turn forms the basis of our literature classification. This pipeline describes the process of visualizing data, starting from the raw data itself, through the construction of data tables, to the creation of visual structures and views as a function of task-driven user interaction. The literature was systematically surveyed using API-driven querying where possible, and the collected papers were manually read and categorized according to the identified sub-components of the pipeline's individual steps. From this survey, we highlight a number of exemplary visualization tools from multiple biological sub-domains to explore how and why they adopt the discussed techniques. Additionally, this taxonomic classification of the collected papers allows us to identify existing gaps in biological network visualization practice. We conclude with a list of open challenges and potential research directions. Examples of such gaps include (i) the overabundance of visualization tools that use schematic or straight-line node-link diagrams despite the availability of powerful alternatives, and (ii) the lack of tools that integrate network analysis techniques beyond basic graph descriptive statistics.
{"title":"An introduction to and survey of biological network visualization","authors":"Henry Ehlers , Nicolas Brich , Michael Krone , Martin Nöllenburg , Jiacheng Yu , Hiroaki Natsukawa , Xiaoru Yuan , Hsiang-Yun Wu","doi":"10.1016/j.cag.2024.104115","DOIUrl":"10.1016/j.cag.2024.104115","url":null,"abstract":"<div><div>Biological networks describe complex relationships in biological systems, which represent biological entities as vertices and their underlying connectivity as edges. Ideally, for a complete analysis of such systems, domain experts need to visually integrate multiple sources of heterogeneous data, and visually, as well as numerically, probe said data in order to explore or validate (mechanistic) hypotheses. Such visual analyses require the coming together of biological domain experts, bioinformaticians, as well as network scientists to create useful visualization tools. Owing to the underlying graph data becoming ever larger and more complex, the visual representation of such biological networks has become challenging in its own right. This introduction and survey aims to describe the current state of biological network visualization in order to identify scientific gaps for visualization experts, network scientists, bioinformaticians, and domain experts, such as biologists, or biochemists, alike. Specifically, we revisit the classic visualization pipeline, upon which we base this paper’s taxonomy and structure, which in turn forms the basis of our literature classification. This pipeline describes the process of visualizing data, starting with the raw data itself, through the construction of data tables, to the actual creation of visual structures and views, as a function of task-driven user interaction. Literature was systematically surveyed using API-driven querying where possible, and the collected papers were manually read and categorized based on the identified sub-components of this visualization pipeline’s individual steps. From this survey, we highlight a number of exemplary visualization tools from multiple biological sub-domains in order to explore how they adapt these discussed techniques and why. Additionally, this taxonomic classification of the collected set of papers allows us to identify existing gaps in biological network visualization practices. We finally conclude this report with a list of open challenges and potential research directions. Examples of such gaps include (i) the overabundance of visualization tools using schematic or straight-line node-link diagrams, despite the availability of powerful alternatives, or (ii) the lack of visualization tools that also integrate more advanced network analysis techniques beyond basic graph descriptive statistics.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104115"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GaussianAvatar: Human avatar Gaussian splatting from monocular videos
Pub Date: 2025-02-01 | DOI: 10.1016/j.cag.2024.104155
Haian Lin, Yinwei Zhan
Many application fields, including virtual reality and movie production, demand reconstructing high-quality digital human avatars from monocular videos and rendering them in real time. However, existing neural radiance field (NeRF)-based methods are costly to train and render. In this paper, we propose GaussianAvatar, a novel framework that extends 3D Gaussian splatting to dynamic human scenes, enabling fast training and real-time rendering. The human 3D Gaussians are initialized in canonical space and transformed to posed space with Linear Blend Skinning (LBS) driven by pose parameters, learning the fine details of the human body at very small computational cost. We design a pose parameter refinement module and an LBS weight optimization module to increase the accuracy of pose parameter estimation on real datasets, and introduce multi-resolution hash encoding to accelerate training. Experimental results demonstrate that our method outperforms existing methods in terms of training time, rendering speed, and reconstruction quality.
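The canonical-to-posed step mentioned above reduces to standard Linear Blend Skinning of the Gaussian centers. The sketch below shows that step in isolation; bone transforms, skinning weights, and sizes are random placeholders, and the paper's pose refinement, LBS weight optimization, and hash encoding are omitted.

```python
import torch

def lbs_transform(centers, weights, rotations, translations):
    """centers: (N, 3), weights: (N, J), rotations: (J, 3, 3), translations: (J, 3)."""
    # Transform every center by every bone: (J, N, 3).
    per_bone = torch.einsum('jab,nb->jna', rotations, centers) + translations[:, None, :]
    # Blend the per-bone results with the skinning weights: (N, 3).
    return torch.einsum('nj,jna->na', weights, per_bone)

N, J = 1000, 24                                    # Gaussians and joints (SMPL-like)
centers = torch.randn(N, 3)                        # canonical Gaussian means
weights = torch.softmax(torch.randn(N, J), dim=1)  # skinning weights, rows sum to 1
rotations = torch.eye(3).repeat(J, 1, 1)           # identity pose for the demo
translations = torch.zeros(J, 3)
posed = lbs_transform(centers, weights, rotations, translations)
assert torch.allclose(posed, centers, atol=1e-5)   # identity pose leaves centers fixed
```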
{"title":"GaussianAvatar: Human avatar Gaussian splatting from monocular videos","authors":"Haian Lin, Yinwei Zhan","doi":"10.1016/j.cag.2024.104155","DOIUrl":"10.1016/j.cag.2024.104155","url":null,"abstract":"<div><div>Many application fields including virtual reality and movie production demand reconstructing high-quality digital human avatars from monocular videos and real-time rendering. However, existing neural radiance field (NeRF)-based methods are costly to train and render. In this paper, we propose GaussianAvatar, a novel framework that extends 3D Gaussian to dynamic human scenes, enabling fast training and real-time rendering. The human 3D Gaussian in canonical space is initialized and transformed to posed space using Linear Blend Skinning (LBS), based on pose parameters, to learn the fine details of the human body at a very small computational cost. We design a pose parameter refinement module and a LBS weight optimization module to increase the accuracy of the pose parameter detection in the real dataset and introduce multi-resolution hash coding to accelerate the training speed. Experimental results demonstrate that our method outperforms existing methods in terms of training time, rendering speed, and reconstruction quality.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104155"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Foreword to the special section on graphics interface 2023
Pub Date: 2025-02-01 | DOI: 10.1016/j.cag.2025.104162
KangKang Yin, Paul Kry
{"title":"Foreword to the special section on graphics interface 2023","authors":"KangKang Yin , Paul Kry","doi":"10.1016/j.cag.2025.104162","DOIUrl":"10.1016/j.cag.2025.104162","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104162"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-view projection-based object-aware graph network for dense captioning of point clouds
Pub Date: 2025-02-01 | DOI: 10.1016/j.cag.2024.104156
Zijing Ma, Zhi Yang, Aihua Mao, Shuyi Wen, Ran Yi, Yongjin Liu
3D dense captioning has received increasing attention in the multimodal field of 3D vision and language. The task aims to generate a specific descriptive sentence for each object in a 3D scene, which helps build a semantic understanding of the scene. However, due to inevitable holes in point clouds, the generated descriptions often contain incorrect objects. Moreover, most existing models use KNN to construct relation graphs, which are not robust, adapt poorly to different scenes, and cannot represent the relationships between surrounding objects well. To address these challenges, we propose a novel multi-level mixed encoding model for accurate 3D dense captioning of objects in point clouds. To handle holes in point clouds, we extract multi-view projection image features of objects, based on our key observation that a hole in an object seldom exists in all projection images from different view angles. The image features are then fused with object detection features as the input to subsequent modules. Moreover, we combine the KNN and DBSCAN clustering algorithms to construct a relation graph and subsequently fuse their output features, which ensures the robustness of the graph structure for accurately describing the relationships between objects. Specifically, DBSCAN clusters are formed based on density, which alleviates the problem of using a fixed K value in KNN. Extensive experiments conducted on the ScanRefer and Nr3D datasets demonstrate the effectiveness of our proposed model.
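A hedged sketch of the graph-construction step described above: KNN edges and DBSCAN-induced edges are built separately and then merged. The object centers, eps, K, and the simple union rule are assumptions, and the paper's feature-fusion stage is not shown.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import DBSCAN

centers = np.random.rand(30, 3)                  # hypothetical detected object centers

# KNN edges: each object connects to its K nearest neighbors.
knn_adj = kneighbors_graph(centers, n_neighbors=5, mode='connectivity').toarray()

# DBSCAN edges: objects in the same density-based cluster are fully connected.
labels = DBSCAN(eps=0.3, min_samples=2).fit_predict(centers)
dbscan_adj = np.zeros_like(knn_adj)
for c in set(labels):
    if c == -1:                                  # -1 marks noise points; no edges
        continue
    idx = np.flatnonzero(labels == c)
    dbscan_adj[np.ix_(idx, idx)] = 1.0
np.fill_diagonal(dbscan_adj, 0.0)

# Combined relation graph: keep an edge if either construction proposes it.
adj = np.clip(knn_adj + dbscan_adj, 0.0, 1.0)
print(int(adj.sum()), "directed edges in the fused relation graph")
```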
{"title":"A multi-view projection-based object-aware graph network for dense captioning of point clouds","authors":"Zijing Ma , Zhi Yang , Aihua Mao , Shuyi Wen , Ran Yi , Yongjin Liu","doi":"10.1016/j.cag.2024.104156","DOIUrl":"10.1016/j.cag.2024.104156","url":null,"abstract":"<div><div>3D dense captioning has received increasing attention in the multimodal field of 3D vision and language. This task aims to generate a specific descriptive sentence for each object in the 3D scene, which helps build a semantic understanding of the scene. However, due to inevitable holes in point clouds, there are often incorrect objects in the generated descriptions. Moreover, most existing models use KNN to construct relation graphs, which are not robust and have poor adaptability to different scenes. They cannot represent the relationship between the surrounding objects well. To address these challenges, in this paper, we propose a novel multi-level mixed encoding model for accurate 3D dense captioning of objects in point clouds. To handle holes in point clouds, we extract multi-view projection image features of objects based on our key observation that a hole in an object seldom exists in all projection images from different view angles. Then, the image features are fused with object detection features as the input of subsequent modules. Moreover, we combine KNN and DBSCAN clustering algorithms to construct a graph G and fuse their output features subsequently, which ensures the robustness of the graph structure for accurately describing the relationships between objects. Specifically, DBSCAN clusters are formed based on density, which alleviates the problem of using a fixed K value in KNN. Extensive experiments conducted on ScanRefer and Nr3D datasets demonstrate the effectiveness of our proposed model.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104156"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weakly supervised semantic segmentation for ancient architecture based on multiscale adaptive fusion and spectral clustering
Pub Date: 2025-02-01 | DOI: 10.1016/j.cag.2025.104164
Ruifei Sun, Sulan Zhang, Meihong Su, Lihua Hu, Jifu Zhang
Existing weakly supervised semantic segmentation methods for ancient architecture have several limitations, including difficulty in capturing decorative details and in achieving precise segmentation boundaries, owing to the many fine details and complex shapes of these structures. To mitigate these issues in ancient architecture images, this paper proposes a weakly supervised semantic segmentation method based on multiscale adaptive fusion and spectral clustering. Specifically, low-level features capture localized details in an image, which helps identify small objects, whereas high-level features capture the overall shape of an object, making them more effective for recognizing large objects. We use a gating mechanism to adaptively fuse high-level and low-level features in order to retain objects of different sizes. Additionally, by employing spectral clustering, pixels in ancient architecture images can be divided into regions based on their feature similarities. These regions serve as processing units, providing precise boundaries for the class activation map (CAM) and improving segmentation accuracy. Experimental results on the Ancient Architecture, Baroque Architecture, MS COCO 2014, and PASCAL VOC 2012 datasets show that the method outperforms existing weakly supervised methods, achieving mean Intersection over Union (mIoU) scores of 46.9%, 55.8%, 69.9%, and 38.3%, respectively. The code is available at https://github.com/hao530/MASC.git
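One plausible reading of the region-refinement step, sketched under stated assumptions: pixels are grouped by feature similarity with spectral clustering, and the CAM response is averaged within each region so activation boundaries follow region boundaries. The features, image size, and cluster count below are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

H, W, n_regions = 32, 32, 8
features = np.random.rand(H * W, 3)              # e.g., per-pixel color features
cam = np.random.rand(H * W)                      # raw class activation map scores

# Group pixels into regions by feature similarity.
labels = SpectralClustering(
    n_clusters=n_regions, affinity='nearest_neighbors', n_neighbors=10, random_state=0
).fit_predict(features)

# Average the CAM inside each region so boundaries snap to region boundaries.
refined = np.zeros_like(cam)
for c in range(n_regions):
    mask = labels == c
    refined[mask] = cam[mask].mean()             # one score per region

refined = refined.reshape(H, W)                  # region-aligned activation map
```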
{"title":"Weakly supervised semantic segmentation for ancient architecture based on multiscale adaptive fusion and spectral clustering","authors":"Ruifei Sun, Sulan Zhang, Meihong Su, Lihua Hu, Jifu Zhang","doi":"10.1016/j.cag.2025.104164","DOIUrl":"10.1016/j.cag.2025.104164","url":null,"abstract":"<div><div>Existing methods of weakly supervised semantic segmentation for ancient architecture have several limitations including difficulty in capturing decorative details and achieving precise segmentation boundaries due to the many details and complex shapes of these structures. To mitigate the effect of the above issues in ancient architecture images, this paper proposes a method for weakly supervised semantic segmentation of ancient architecture based on multiscale adaptive fusion and spectral clustering. Specifically, low-level features are able to capture localized details in an image, which helps to identify small objects. In contrast, high-level features can capture the overall shape of an object, making them more effective in recognizing large objects. We use a gating mechanism to adaptively fuse high-level and low-level features in order to retain objects of different sizes. Additionally, by employing spectral clustering, pixels in ancient architectural images can be divided into different regions based on their feature similarities. These regions serve as processing units, providing precise boundaries for class activation map (CAM) and improving segmentation accuracy. Experimental results on the Ancient Architecture, Baroque Architecture, MS COCO 2014 and PASCAL VOC 2012 datasets show that the method outperforms the existing weakly supervised methods, achieving 46.9%, 55.8%, 69.9% and 38.3% in Mean Intersection Over Union (MIOU), respectively. The code is available at <span><span>https://github.com/hao530/MASC.git</span><svg><path></path></svg></span></div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104164"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CVTLayout: Automated generation of mid-scale commercial space layout via Centroidal Voronoi Tessellation
Pub Date: 2025-01-31 | DOI: 10.1016/j.cag.2025.104175
Yuntao Wang, Wenming Wu, Yue Fei, Liping Zheng
The layout of a commercial space is crucial for enhancing user experience and creating business value. However, designing the layout of a mid-scale commercial space remains challenging due to the need to balance rationality, functionality, and safety. In this paper, we propose a novel method that utilizes Centroidal Voronoi Tessellation (CVT) to generate commercial space layouts automatically. Our method is a multi-level spatial division framework: at each level, we create and optimize Voronoi diagrams to accommodate complex multi-scale boundaries, and we achieve spatial division at different levels by combining standard Voronoi diagrams with rectangular Voronoi diagrams. Our method also leverages the generation controllability and division diversity of Voronoi diagrams, offering customized control and diverse generation that previous methods struggled to provide. Extensive experiments and comparisons show that our method offers an automated and efficient solution for generating high-quality commercial space layouts.
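A minimal sketch of the underlying primitive: a Centroidal Voronoi Tessellation on the unit square computed with Lloyd's algorithm, approximated by Monte Carlo sampling. Boundary constraints, rectangular Voronoi cells, and the paper's multi-level division are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
sites = rng.random((10, 2))                      # initial generator points
samples = rng.random((20000, 2))                 # dense samples of the domain

for _ in range(50):                              # Lloyd iterations
    # Assign each sample to its nearest site (a discrete Voronoi partition).
    d = np.linalg.norm(samples[:, None, :] - sites[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    # Move each site to the centroid of its region.
    for i in range(len(sites)):
        region = samples[nearest == i]
        if len(region) > 0:
            sites[i] = region.mean(axis=0)

print(sites)                                     # near-uniformly spread generators
```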
{"title":"CVTLayout: Automated generation of mid-scale commercial space layout via Centroidal Voronoi Tessellation","authors":"Yuntao Wang, Wenming Wu, Yue Fei, Liping Zheng","doi":"10.1016/j.cag.2025.104175","DOIUrl":"10.1016/j.cag.2025.104175","url":null,"abstract":"<div><div>The layout of commercial space is crucial for enhancing user experience and creating business value. However, designing the layout of a mid-scale commercial space remains challenging due to the need to balance rationality, functionality, and safety. In this paper, we propose a novel method that utilizes the Centroidal Voronoi Tessellation (CVT) to generate commercial space layouts automatically. Our method is a multi-level spatial division framework, where at each level, we create and optimize Voronoi diagrams to accommodate complex multi-scale boundaries. We achieve spatial division at different levels by combining the standard Voronoi diagrams with the rectangular Voronoi diagrams. Our method also leverages Voronoi diagrams’ generation controllability and division diversity, offering customized control and diversity generation that previous methods struggled to provide. Extensive experiments and comparisons show that our method offers an automated and efficient solution for generating high-quality commercial space layouts.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"127 ","pages":"Article 104175"},"PeriodicalIF":2.5,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Foreword to the special section on visual computing for biology and medicine (VCBM 2023)
Pub Date: 2025-01-27 | DOI: 10.1016/j.cag.2025.104168
Renata G. Raidou, James B. Procter, Christian Hansen, Thomas Höllt, Daniel Jönsson
This special section of the Computers and Graphics Journal (C&G) features three articles within the scope of the EG Workshop on Visual Computing for Biology and Medicine, which took place for the 13th time on September 20–22, 2023 in Norrköping, Sweden.
{"title":"Foreword to the special section on visual computing for biology and medicine (VCBM 2023)","authors":"Renata G. Raidou , James B. Procter , Christian Hansen , Thomas Höllt , Daniel Jönsson","doi":"10.1016/j.cag.2025.104168","DOIUrl":"10.1016/j.cag.2025.104168","url":null,"abstract":"<div><div>This special section of the Computers and Graphics Journal (C&G) features three articles within the scope of the EG Workshop on Visual Computing for Biology and Medicine, which took place for the 13th time on September 20–22, 2023 in Norrköping, Sweden.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"127 ","pages":"Article 104168"},"PeriodicalIF":2.5,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Foreword to the Special Section on Smart Tool and Applications for Graphics (STAG 2022)
Pub Date: 2025-01-25 | DOI: 10.1016/j.cag.2025.104174
Daniela Cabiddu, Teseo Schneider, Gianmarco Cherchi
This special issue contains extended and revised versions of the best papers presented at the 9th Conference on Smart Tools and Applications in Graphics (STAG 2022), held in Cagliari on November 17–18, 2022. Three papers were selected by the appointed members of the Program Committee; the extended versions were submitted and further reviewed by external experts. The result is a collection of papers spanning a broad spectrum of topics, from shape analysis and computational geometry to rendering. These include areas such as shape matching, functional maps, and realistic appearance modeling, highlighting cutting-edge advancements and novel approaches in each domain.