Pub Date: 2025-06-01 | Epub Date: 2025-04-24 | DOI: 10.1016/j.gmod.2025.101264
Pengbo Bo, Siyu Xue, Xiwen Xu, Caiming Zhang
Tool shape selection and path planning are critical for 5-axis CNC flank milling of freeform surfaces, typically addressed using optimization algorithms where initialization plays a pivotal role. Existing approaches rely on user-specified initialization of either tool shapes or motion paths, often resulting in suboptimal outcomes. This paper introduces a fully automated method that simultaneously initializes both tool shapes and motion paths, achieving high-precision machining with efficient surface coverage. Our approach explores a solution space of potential tool axes represented by line segments near the design surface. To efficiently manage the vast number of lines, we integrate space voxelization with a discrete distance field for effective line sampling. A graph-based algorithm generates feasible line sequences for motion paths, while path optimization refines a single tool shape across multiple paths simultaneously. The method identifies optimal tool shapes of various sizes, each paired with corresponding motion paths for multi-pass machining. Experiments on industrial benchmark models and freeform surfaces validate the effectiveness and practicality of the proposed approach.
Title: "Initialization of cutting tools and milling paths for 5-axis CNC flank milling of freeform surfaces" (Graphical Models, Vol. 139, Article 101264).
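The line-sampling idea in the abstract above (voxelization plus a discrete distance field to prune candidate tool-axis segments) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, grid resolution, and distance band are all assumptions, and it requires NumPy and SciPy.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_distance_field(points, bounds_min, bounds_max, res=32):
    """Voxelize a surface point cloud and return a discrete distance field:
    each voxel holds the distance to the nearest occupied (surface) voxel."""
    occ = np.zeros((res, res, res), dtype=bool)
    idx = ((points - bounds_min) / (bounds_max - bounds_min) * (res - 1)).astype(int)
    idx = np.clip(idx, 0, res - 1)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    voxel_size = (bounds_max - bounds_min) / res
    # EDT of the empty voxels gives distance (in voxels) to the surface.
    return distance_transform_edt(~occ) * voxel_size.mean()

def line_is_near_surface(dist, bounds_min, bounds_max, p0, p1, band, n=16):
    """Keep a candidate tool-axis segment p0-p1 only if every sample along
    it stays within `band` of the voxelized surface."""
    res = dist.shape[0]
    t = np.linspace(0.0, 1.0, n)[:, None]
    samples = p0 + t * (p1 - p0)
    idx = ((samples - bounds_min) / (bounds_max - bounds_min) * (res - 1)).astype(int)
    idx = np.clip(idx, 0, res - 1)
    return bool(np.all(dist[idx[:, 0], idx[:, 1], idx[:, 2]] <= band))
```

A line hugging the sampled surface passes the test, while a distant one is rejected, which is how a vast candidate set can be filtered cheaply before any optimization.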
Pub Date: 2025-06-01 | Epub Date: 2025-03-25 | DOI: 10.1016/j.gmod.2025.101259
Wenjin Yang, Jie He, Xiaotong Zhang
Line charts, as a common data visualization tool in scientific research and business analysis, encapsulate rich experimental data. However, existing data extraction tools face challenges such as low automation levels and difficulties in handling complex charts. This paper proposes a novel method for extracting data from line charts, reformulating the extraction problem as an instance segmentation task, and introducing the Mamba-enhanced Transformer mask query method along with a curve mask-guided training approach to address challenges such as long dependencies and intersections in curve detection. Additionally, YOLOv9 is utilized for the detection and classification of chart elements, and a text recognition dataset comprising approximately 100K charts is constructed. An LSTM-based attention mechanism is employed for precise scale value recognition. Lastly, we present a method for automatically converting image data into structured JSON data, significantly enhancing the efficiency and accuracy of data extraction. Experimental results demonstrate that this method exhibits high efficiency and accuracy in handling complex charts, achieving an average extraction accuracy of 93% on public datasets, significantly surpassing the current state-of-the-art methods. This research provides an efficient foundation for large-scale scientific data analysis and machine learning model development, advancing the field of automated data extraction technology.
Title: "Efficient extraction of experimental data from line charts using advanced machine learning techniques" (Graphical Models, Vol. 139, Article 101259).
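The final stage described above, converting recognized pixel coordinates into structured JSON, can be sketched as follows. This is an illustrative sketch only, assuming linear axes calibrated from two recognized tick labels per axis; the function names and JSON schema are not from the paper.

```python
import json

def axis_mapper(pix_a, val_a, pix_b, val_b):
    """Linear map from a pixel coordinate to a data value, calibrated
    from two recognized axis ticks (pixel position, label value)."""
    scale = (val_b - val_a) / (pix_b - pix_a)
    return lambda p: val_a + (p - pix_a) * scale

def curve_to_json(name, pixel_points, x_map, y_map):
    """Convert a detected curve (list of (px, py) pixels) to JSON,
    applying the calibrated axis maps to each point."""
    data = [{"x": x_map(px), "y": y_map(py)} for px, py in pixel_points]
    return json.dumps({"series": name, "points": data})
```

Note that image y coordinates grow downward, which the two-tick calibration handles automatically because the resulting scale is simply negative.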
Pub Date: 2025-06-01 | Epub Date: 2025-04-11 | DOI: 10.1016/j.gmod.2025.101262
Xufei Guo, Xiao Dong, Juan Cao, Zhonggui Chen
The creation of computational agents capable of generating computer-aided design (CAD) models that rival those produced by professional designers is a pressing challenge in the field of computational design. The key obstacle is the need to generate a large number of realistic and diverse models while maintaining a degree of control over the output. Therefore, we propose a novel CAD model generation network called CADTrans, which is based on a code tree-guided transformer framework and autoregressively generates CAD construction sequences. Firstly, three regularized discrete codebooks are extracted through vector-quantized adversarial learning, representing the features of Loops, Profiles, and Solids, respectively. Secondly, these codebooks are used to normalize a CAD construction sequence into a structured code-tree representation, which is then used to train a standard transformer network to reconstruct the code tree. Finally, the code tree serves as global information to guide the sketch-and-extrude method in recovering the corresponding geometric information, thereby reconstructing the complete CAD model. Extensive experiments demonstrate that CADTrans achieves state-of-the-art performance, generating higher-quality, more varied, and more complex models. Its flexible control mechanism also opens new possibilities for CAD applications, letting users quickly experiment with different design schemes and generate a wide variety of models that can inspire new design ideas, thereby improving design efficiency and promoting creativity. The code is available at https://effieguoxufei.github.io/CADtrans/.
Title: "CADTrans: A code tree-guided CAD generative transformer model with regularized discrete codebooks" (Graphical Models, Vol. 139, Article 101262).
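The vector-quantization step underlying the regularized codebooks above can be sketched as a nearest-codebook lookup. This is the generic VQ quantization step, not CADTrans's trained Loop/Profile/Solid codebooks; the names here are illustrative.

```python
import numpy as np

def vq_lookup(features, codebook):
    """Map each feature vector to its nearest codebook entry. The returned
    indices are the discrete codes a code-tree representation would store;
    the quantized vectors are the codebook rows they select."""
    # Squared Euclidean distances between all features and all entries: (N, K)
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]
```

Storing only the index per feature (plus the shared codebook) is what makes the sequence discrete enough for a standard transformer to model autoregressively.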
Pub Date: 2025-03-01 | Epub Date: 2025-01-28 | DOI: 10.1016/j.gmod.2025.101254
Khang Yeu Tang, Ge Yu, Juhong Wang, Yu He, Sen-Zhe Xu, Song-Hai Zhang
As technology advances, user demand for immersive and authentic information presentation rises. Traditional 2D displays and interactions fail to meet modern standards, while virtual reality (VR) is gaining attention for its immersive experience. However, using a controller for VR locomotion can cause dizziness due to mismatched visual and vestibular cues, degrading the VR experience. This paper analyzes the main causes of VR-induced vertigo and develops improved handheld controller movement strategies. These strategies adjust the user’s pitch angle and field of view in real time, or map the user’s real-world head acceleration to the virtual character. By intelligently adjusting the controller-to-VR display mapping, these methods reduce vertigo. We also verify these designs through a series of experiments and analyze the resulting user-vertigo data in detail. The experimental results show that a specific improved handheld controller movement design can significantly improve user comfort in the VR environment, effectively reducing the occurrence of vertigo and discomfort.
Title: "Strategies for reducing motion sickness in virtual reality through improved handheld controller movements" (Graphical Models, Vol. 138, Article 101254).
Pub Date: 2025-03-01 | Epub Date: 2025-02-04 | DOI: 10.1016/j.gmod.2025.101255
Changshuang Zhou, Frederick W.B. Li, Chao Song, Dong Zheng, Bailin Yang
We propose Dual-Branch Network (DBNet), a novel deepfake detection framework that addresses key limitations of existing works by jointly modeling 3D-temporal and fine-grained texture representations. Specifically, we aim to investigate how to (1) capture dynamic properties and spatial details in a unified model and (2) identify subtle inconsistencies beyond localized artifacts through temporally consistent modeling. To this end, DBNet extracts 3D landmarks from videos to construct temporal sequences for an RNN branch, while a Vision Transformer analyzes local patches. A Temporal Consistency-aware Loss is introduced to explicitly supervise the RNN. Additionally, a 3D generative model augments training data. Extensive experiments demonstrate our method achieves state-of-the-art performance on benchmarks, and ablation studies validate its effectiveness in generalizing to unseen data under various manipulations and compression.
Title: "3D data augmentation and dual-branch model for robust face forgery detection" (Graphical Models, Vol. 138, Article 101255).
Pub Date: 2025-02-01 | Epub Date: 2024-12-19 | DOI: 10.1016/j.gmod.2024.101251
Carlotta Giannelli, Sofia Imperatore, Angelos Mantzaflaris, Dominik Mokriš
We propose a new paradigm for scattered data fitting with adaptive spline constructions, based on the key interplay between parameterization and adaptivity. Specifically, we introduce two novel adaptive fitting schemes that combine moving parameterizations with adaptive spline refinement for highly accurate CAD model reconstruction from real-world scattered point clouds. The first scheme alternates surface fitting and data parameter optimization. The second scheme jointly optimizes the parameters and the surface control points. To combine the proposed fitting methods with adaptive spline constructions, we present a key treatment of boundary points. Industrial examples show that updating the parameterization within an adaptive spline approximation framework significantly reduces the number of degrees of freedom needed for a given accuracy, especially when spline adaptivity is driven by suitably graded hierarchical meshes. The numerical experiments employ THB-splines, exploiting the existing CAD integration within the considered industrial setting; nevertheless, any adaptive spline construction can be used.
Title: "Efficient alternating and joint distance minimization methods for adaptive spline surface fitting" (Graphical Models, Vol. 137, Article 101251).
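The first scheme above (alternating surface fitting with data-parameter optimization) can be illustrated in a simplified curve setting. This is an analogue, not the paper's method: a polynomial curve stands in for the adaptive spline, and the parameter update is a brute-force foot-point projection onto a dense sampling of the current curve; all names and defaults are assumptions.

```python
import numpy as np

def fit_curve(points, t, degree=3):
    """Least-squares polynomial curve (x(t), y(t)) for fixed parameters t."""
    B = np.vander(t, degree + 1)  # basis matrix, one row per data point
    coef, *_ = np.linalg.lstsq(B, points, rcond=None)
    return coef

def update_parameters(points, coef, n_dense=2000):
    """Foot-point step: reassign each data point the parameter of the
    closest point on a dense sampling of the current curve."""
    td = np.linspace(0.0, 1.0, n_dense)
    curve = np.vander(td, coef.shape[0]) @ coef
    d = ((points[:, None, :] - curve[None, :, :]) ** 2).sum(-1)
    return td[d.argmin(axis=1)]

def alternating_fit(points, degree=3, iters=5):
    """Alternate least-squares fitting and parameter correction."""
    t = np.linspace(0.0, 1.0, len(points))  # uniform initial parameterization
    for _ in range(iters):
        coef = fit_curve(points, t, degree)
        t = update_parameters(points, coef)
    coef = fit_curve(points, t, degree)
    resid = np.linalg.norm(np.vander(t, degree + 1) @ coef - points)
    return coef, t, resid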
Pub Date: 2025-02-01 | Epub Date: 2025-01-06 | DOI: 10.1016/j.gmod.2024.101253
Manuel Prado-Velasco, Laura García-Ruesgas
Computer extended Descriptive Geometry (CeDG) is a novel approach, based on Descriptive Geometry, for building 3D models within the framework provided by Dynamic Geometry Software tools. Parametric CeDG models can be explored interactively when continuous parameters change, but this is not the case for discrete parameters. This study demonstrates the capability of the GeoGebra-CeDG approach to incorporate algorithms that build discrete-variable 3D models with dynamic parameterization. Several 3D models and their flattened patterns (neutral fiber), based on a newly developed CeDG algorithm, were compared to their LogiTRACE v.14 and Solid Edge 2024 (CAD) counterparts. The accuracy of the CeDG models surpassed that of the CAD models for nearly all dimensions defined as metrics. In addition, the CeDG approach was the only one that provided an automatic solution for any value of the number of ferrules.
Title: "Discrete variable 3D models in Computer extended Descriptive Geometry (CeDG): Building of polygonal sheet-metal elbows and comparison against CAD" (Graphical Models, Vol. 137, Article 101253).
Pub Date: 2025-02-01 | Epub Date: 2024-12-05 | DOI: 10.1016/j.gmod.2024.101239
Francesco Ballerin, Erlend Grong
Equipping the rototranslation group SE(2) with a sub-Riemannian structure inspired by the visual cortex V1, we propose algorithms for image inpainting and enhancement based on hypoelliptic diffusion. We innovate on previous implementations of the methods by Citti, Sarti, and Boscain et al., by proposing an alternative that prevents fading and is capable of producing sharper results in a procedure that we call WaxOn-WaxOff. We also exploit the sub-Riemannian structure to define a completely new unsharp filter using SE(2), analogous to the classical unsharp filter for 2D image processing. We demonstrate our method on blood vessels enhancement in retinal scans.
Title: "Geometry of the visual cortex with applications to image inpainting and enhancement" (Graphical Models, Vol. 137, Article 101239).
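The abstract above defines an SE(2) analogue of the classical unsharp filter. For reference, here is a minimal sketch of the classical 2D version it generalizes: the sharpened image adds back the high-frequency residual (image minus its Gaussian blur). The parameter values are illustrative, and SciPy supplies the blur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp(image, sigma=2.0, amount=1.0):
    """Classical 2D unsharp masking: sharpened = image + amount * (image - blurred).
    `sigma` controls the blur scale; `amount` the strength of sharpening."""
    blurred = gaussian_filter(image.astype(float), sigma)
    return image + amount * (image - blurred)
```

Across a step edge this produces the characteristic overshoot (values above 1 and below 0 on either side), which visually sharpens the edge; the SE(2) construction in the paper plays the same role along lifted orientations.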
Pub Date: 2025-02-01 | Epub Date: 2024-12-14 | DOI: 10.1016/j.gmod.2024.101250
Hailun Xu, Zepeng Wen, Hongmei Kang
Subdivision surfaces, as an extension of splines, have become a promising technique for addressing PDEs on models with complex topologies in isogeometric analysis. This has sparked interest in exploring approximation by subdivision function spaces. Quasi-interpolation serves as a significant tool in the field of approximation, offering benefits such as low computational expense and strong numerical stability. In this paper, we propose a straightforward approach for constructing quasi-interpolation projectors of subdivision function spaces that features explicit formulations and achieves a highly desirable approximation order. The local interpolation problem is constructed from the subdivision mask and the limit position mask, avoiding the cumbersome evaluation of the subdivision basis functions and the difficulty of deriving explicit solutions to the problem. Explicit quasi-interpolation formulas for the Loop, modified Loop, and Catmull–Clark subdivisions are provided. Numerical experiments demonstrate that these quasi-interpolation projectors achieve the expected approximation order and show promise for isogeometric collocation.
Title: "Quasi-interpolation projectors for subdivision function spaces" (Graphical Models, Vol. 137, Article 101250).
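The quasi-interpolation principle above (spline coefficients obtained from a local, explicit formula on samples, with full approximation order) can be illustrated by the classical quasi-interpolant for uniform cubic B-splines. This is a standard textbook construction, not the paper's subdivision formulas: the coefficient rule c_i = (-f_{i-1} + 8 f_i - f_{i+1}) / 6 reproduces all cubic polynomials exactly.

```python
import numpy as np

def qi_coeffs(f):
    """Quasi-interpolation for uniform cubic B-splines on integer knots:
    c_i = (-f_{i-1} + 8 f_i - f_{i+1}) / 6 at interior samples."""
    c = np.empty_like(f, dtype=float)
    c[1:-1] = (-f[:-2] + 8.0 * f[1:-1] - f[2:]) / 6.0
    c[0], c[-1] = f[0], f[-1]  # crude boundary treatment (no neighbor available)
    return c

def spline_at_knots(c):
    """Uniform cubic B-spline value at the interior integer knots:
    S(i) = (c_{i-1} + 4 c_i + c_{i+1}) / 6; entry j is S(j + 1)."""
    return (c[:-2] + 4.0 * c[1:-1] + c[2:]) / 6.0
```

For f(x) = x^3 one checks c_i = i^3 - i and S(i) = (c_{i-1} + 4 c_i + c_{i+1}) / 6 = i^3, so the projector reproduces cubics away from the (crudely handled) boundary, exactly the locality-plus-order behavior the paper establishes for subdivision spaces.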
Pub Date: 2025-02-01 | Epub Date: 2024-12-28 | DOI: 10.1016/j.gmod.2024.101252
Linjun Jiang, Yue Liu, Zhiyuan Dong, Yinghao Li, Yusong Lin
Point cloud registration, a fundamental task in computer science and artificial intelligence, involves rigidly transforming point clouds from different perspectives into a common coordinate system. Traditional registration methods often lack robustness and fail to achieve the desired level of accuracy. In contrast, deep learning-based registration methods have demonstrated improved accuracy and generalization. However, these methods are hindered by large parameter sizes, complex network architectures, and challenges related to efficiency, robustness, and partial overlaps. In this study, we propose a lightweight deep learning-based registration method that captures features from multiple perspectives to predict overlapping points and mitigate the interference of non-overlapping points. Specifically, our approach utilizes pruning and weight-sharing quantization techniques to reduce model size and simplify the network structure. We evaluate the proposed model on noisy and partially overlapping point clouds from the ModelNet40 dataset, comparing its performance against other existing methods. Experimental results show that the proposed method significantly reduces the model's parameter size without compromising registration accuracy.
Title: "Lightweight deep learning method for end-to-end point cloud registration" (Graphical Models, Vol. 137, Article 101252).
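The weight-sharing quantization mentioned above can be illustrated by a minimal k-means weight-sharing sketch: cluster the weights, then store one small codebook plus a low-bit cluster index per weight. This is an illustrative sketch, not the paper's implementation; the quantile initialization and codebook size are assumptions.

```python
import numpy as np

def weight_share(weights, k=4, iters=20):
    """K-means weight sharing: replace each weight by its cluster centroid,
    so a layer stores only log2(k)-bit indices plus a k-entry codebook."""
    w = weights.ravel().astype(float)
    centers = np.quantile(w, np.linspace(0.0, 1.0, k))  # spread-out init
    for _ in range(iters):
        labels = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = w[mask].mean()
    labels = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
    return centers[labels].reshape(weights.shape), centers
```

With k = 4 a float32 weight shrinks to a 2-bit index, which is the kind of parameter-size reduction the abstract describes, at the cost of small within-cluster rounding error.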