Wrinkle and curl distortion of leaves using plant dynamic
Pub Date: 2021-11-01 | DOI: 10.1016/j.gmod.2021.101118 | Graphical Models 118, Article 101118
Xiaopeng Sun, Jia Fu, Teng Chen, Yu Dong
We propose an algorithm that simulates the withering deformation of plant leaves, i.e., the wrinkling and curling caused by dehydration, based on cell dynamics and a time-varying external force. First, a leaf boundary expansion algorithm locates the feature points at the vein tips and constructs the primary vein using a discrete geodesic path. Second, a novel mass-spring system based on cell dynamics, with a non-uniform mass distribution, accelerates the movement of the boundary cells. Third, a cell swelling force is defined and adjusted to generate wrinkle deformation as the leaf dehydrates. Fourth, a time-varying external force on the feature points generates the curl deformation; its initial value and several iteration parameters control the result. The implicit midpoint method is used to solve the equations of motion. Experimental results show that our algorithm simulates the wrinkle and curl deformation of dehydrating, withering leaves with high realism.
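The paper's cell-level forces are its own contribution, but the integrator it names is standard. As a minimal sketch, here is one implicit midpoint step for a generic mass-spring system, solved by naive fixed-point iteration; the simple Hooke force model and all names are our assumptions, not the authors' formulation.

```python
import numpy as np

def spring_forces(x, springs, k, rest):
    """Hooke forces on n particles; springs is a list of (i, j) index pairs."""
    f = np.zeros_like(x)
    for (i, j), ks, r in zip(springs, k, rest):
        d = x[j] - x[i]
        length = np.linalg.norm(d) + 1e-12
        fij = ks * (length - r) * d / length   # pulls i toward j when stretched
        f[i] += fij
        f[j] -= fij
    return f

def implicit_midpoint_step(x, v, m, springs, k, rest, h, iters=10):
    """One implicit midpoint step: forces are evaluated at the midpoint state.

    Solved here by naive fixed-point iteration; a production solver would
    use Newton's method. m holds per-particle masses.
    """
    xn, vn = x.copy(), v.copy()                # initial guess: previous state
    for _ in range(iters):
        xm = 0.5 * (x + xn)
        a = spring_forces(xm, springs, k, rest) / m[:, None]
        vn = v + h * a
        xn = x + h * 0.5 * (v + vn)
    return xn, vn
```

In this setting, a non-uniform mass distribution like the paper's simply means assigning smaller entries of `m` to boundary particles, which accelerates their motion under the same forces.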
{"title":"Wrinkle and curl distortion of leaves using plant dynamic","authors":"Xiaopeng Sun , Jia Fu , Teng Chen , Yu Dong","doi":"10.1016/j.gmod.2021.101118","DOIUrl":"10.1016/j.gmod.2021.101118","url":null,"abstract":"<div><p>An algorithm was proposed to simulate the withering deformation of plant leaves by wrinkle and curl due to dehydration, based on cell dynamics and time-varying external force. First, a leaf boundary expansion algorithm<span> was proposed to locate the feature points on the tip of the vein and construct the primary vein using a discrete geodesic path. Second, a novel mass-spring system by cell dynamics and a non-uniform mass distribution was defined to accelerate the movement of the boundary cells. Third, the cell swelling force was defined and adjusted to generate wrinkle deformation along with dehydration. Fourth, the time-varying external force on the feature points was defined to generate the curl deformation by adjusting the initial value of the external force and multiple iterative parameters. The implicit midpoint method was used to solve the equation of motion. The experimental results showed that our algorithm could simulate the wrinkle and curl deformation caused by dehydration and withering of leaves with high authenticity.</span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"118 ","pages":"Article 101118"},"PeriodicalIF":1.7,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101118","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"54327030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An optimal triangle projector with prescribed area and orientation, application to position-based dynamics
Pub Date: 2021-11-01 | DOI: 10.1016/j.gmod.2021.101117 | Graphical Models 118, Article 101117
Carlos Arango Duque, Adrien Bartoli
The vast majority of mesh-based modelling applications iteratively transform the mesh vertices under prescribed geometric conditions. This occurs in particular in methods that cycle through the constraint set, such as Position-Based Dynamics (PBD). A common case is the approximate local area preservation of triangular 2D meshes under external editing constraints. At the constraint level, this yields the nonconvex problem of optimal triangle projection under prescribed area, for which there does not currently exist a direct solution method. In current PBD implementations, the area preservation constraint is linearised. The solution emerges through the iterations, without a guarantee of optimality, and the process may fail for degenerate inputs where the vertices are collinear or colocated. We propose a closed-form solution method and its numerically robust algebraic implementation. Our method handles degenerate inputs through a two-case analysis of the problem’s generic ambiguities. We show in a series of experiments in area-based 2D mesh editing that using optimal projection in place of area constraint linearisation in PBD speeds up and stabilises convergence.
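For context, the linearised projection that stock PBD applies to the area constraint looks as follows. This is the generic textbook step whose non-optimality and degeneracy failures motivate the paper, not the authors' closed-form projector; the naming is ours.

```python
import numpy as np

def perp(v):                      # 90-degree rotation of a 2D vector
    return np.array([-v[1], v[0]])

def cross2(a, b):                 # z-component of the 2D cross product
    return a[0] * b[1] - a[1] * b[0]

def project_area(p1, p2, p3, w, area0, stiffness=1.0):
    """One linearised PBD projection of C = signed_area(p1, p2, p3) - area0.

    w = (w1, w2, w3) are inverse masses. Returns corrections (dp1, dp2, dp3).
    The early-out shows the failure mode: for collinear or colocated
    vertices all gradients vanish and the linearised step is undefined,
    which is exactly the degeneracy the paper's projector handles.
    """
    C = 0.5 * cross2(p2 - p1, p3 - p1) - area0
    g1 = 0.5 * perp(p3 - p2)      # dC/dp1
    g2 = 0.5 * perp(p1 - p3)      # dC/dp2
    g3 = 0.5 * perp(p2 - p1)      # dC/dp3
    denom = w[0] * g1.dot(g1) + w[1] * g2.dot(g2) + w[2] * g3.dot(g3)
    if denom < 1e-12:
        return np.zeros(2), np.zeros(2), np.zeros(2)
    s = stiffness * C / denom
    return -s * w[0] * g1, -s * w[1] * g2, -s * w[2] * g3
```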
{"title":"An optimal triangle projector with prescribed area and orientation, application to position-based dynamics","authors":"Carlos Arango Duque, Adrien Bartoli","doi":"10.1016/j.gmod.2021.101117","DOIUrl":"10.1016/j.gmod.2021.101117","url":null,"abstract":"<div><p>The vast majority of mesh-based modelling applications iteratively transform the mesh vertices under prescribed geometric conditions. This occurs in particular in methods cycling through the constraint set such as Position-Based Dynamics (PBD). A common case is the approximate local area preservation of triangular 2D meshes under external editing constraints. At the constraint level, this yields the nonconvex optimal triangle projection under prescribed area problem, for which there does not currently exist a direct solution method. In current PBD implementations, the area preservation constraint is linearised. The solution comes out through the iterations, without a guarantee of optimality, and the process may fail for degenerate inputs where the vertices are colinear or colocated. We propose a closed-form solution method and its numerically robust algebraic implementation. Our method handles degenerate inputs through a two-case analysis of the problem’s generic ambiguities. We show in a series of experiments in area-based 2D mesh editing that using optimal projection in place of area constraint linearisation in PBD speeds up and stabilises convergence.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"118 ","pages":"Article 101117"},"PeriodicalIF":1.7,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101117","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82892493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TopoKnit: A Process-Oriented Representation for Modeling the Topology of Yarns in Weft-Knitted Textiles
Pub Date: 2021-11-01 | DOI: 10.1016/j.gmod.2021.101114 | Graphical Models 118, Article 101114
Levi Kapllani, Chelsea Amanatides, Genevieve Dion, Vadim Shapiro, David E. Breen
Machine-knitted textiles are complex multi-scale material structures of increasing importance in many industries, including consumer products, architecture, composites, medicine, and the military. Computational modeling, simulation, and design of industrial fabrics require efficient representations of the spatial, material, and physical properties of such structures. We propose a process-oriented representation, TopoKnit, that defines a foundational data structure for representing the topology of weft-knitted textiles at the yarn scale. Process space serves as an intermediary between the machine and fabric spaces, and supports a concise, computationally efficient evaluation approach based on on-demand, near constant-time queries. In this paper, we define the properties of the process space, and design a data structure to represent it and algorithms to evaluate it. We demonstrate the effectiveness of the representation scheme by evaluating the data structure in support of common topological operations in the fabric space.
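The abstract does not spell out TopoKnit's data structure, so the following is only a toy illustration of what constant-time yarn-scale topology queries over a plain weft-knit grid can look like; all names are hypothetical and the real process space encodes yarn contact points, not just stitches.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stitch:
    course: int   # row index (one machine pass)
    wale: int     # column index (one needle)

class PlainKnitTopology:
    """Toy plain-knit topology with O(1) neighbour queries (hypothetical)."""

    def __init__(self, courses, wales):
        self.courses, self.wales = courses, wales

    def _in_bounds(self, s):
        return 0 <= s.course < self.courses and 0 <= s.wale < self.wales

    def yarn_neighbours(self, s):
        """Stitches linked to s by the same yarn within its course."""
        cand = [Stitch(s.course, s.wale - 1), Stitch(s.course, s.wale + 1)]
        return [c for c in cand if self._in_bounds(c)]

    def interloop_neighbours(self, s):
        """Stitches whose loops s pulls through, or that pull through s."""
        cand = [Stitch(s.course - 1, s.wale), Stitch(s.course + 1, s.wale)]
        return [c for c in cand if self._in_bounds(c)]
```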
{"title":"TopoKnit: A Process-Oriented Representation for Modeling the Topology of Yarns in Weft-Knitted Textiles","authors":"Levi Kapllani , Chelsea Amanatides , Genevieve Dion , Vadim Shapiro , David E. Breen","doi":"10.1016/j.gmod.2021.101114","DOIUrl":"10.1016/j.gmod.2021.101114","url":null,"abstract":"<div><p>Machine knitted textiles are complex multi-scale material structures increasingly important in many industries, including consumer products, architecture, composites, medical, and military. Computational modeling<span>, simulation, and design of industrial fabrics require efficient representations of the spatial, material, and physical properties of such structures. We propose a process-oriented representation, TopoKnit, that defines a foundational data structure for representing the topology of weft-knitted textiles at the yarn scale. Process space serves as an intermediary between the machine and fabric spaces, and supports a concise, computationally efficient evaluation approach based on on-demand, near constant-time queries. In this paper, we define the properties of the process space, and design a data structure to represent it and algorithms to evaluate it. We demonstrate the effectiveness of the representation scheme by providing results of evaluations of the data structure in support of common topological operations in the fabric space.</span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"118 ","pages":"Article 101114"},"PeriodicalIF":1.7,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101114","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88984406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Single Image Tree Reconstruction via Adversarial Network
Pub Date: 2021-09-01 | DOI: 10.1016/j.gmod.2021.101115 | Graphical Models 117, Article 101115
Zhihao Liu, Kai Wu, Jianwei Guo, Yunhai Wang, Oliver Deussen, Zhanglin Cheng
Realistic 3D tree reconstruction is still a tedious and time-consuming task in the graphics community. In this paper, we propose a simple and efficient method for reconstructing high-fidelity 3D tree models from a single image. The key to single-image tree reconstruction is to recover the 3D shape information of trees via a deep neural network trained on a set of synthetic tree models. We adopt a conditional generative adversarial network (cGAN) to infer the 3D silhouette and skeleton of a tree from, respectively, edges extracted from the image and simple 2D strokes drawn by the user. Based on the predicted 3D silhouette and skeleton, a realistic tree model that inherits the tree shape in the input image can be generated using a procedural modeling technique. Experiments on a variety of tree examples demonstrate the efficiency and effectiveness of the proposed method in reconstructing realistic 3D tree models from a single image.
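The paper's network architectures are not given in the abstract. As a rough sketch of the cGAN ingredient it names, here is a pix2pix-style training step in PyTorch, with toy stand-in networks and an assumed L1 reconstruction term; nothing here is the authors' actual model.

```python
import torch
import torch.nn as nn

# Toy stand-ins: the condition is a 1-channel edge/stroke image,
# the target a 1-channel silhouette map.
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Flatten(), nn.LazyLinear(1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(cond, target):
    """One conditional-GAN step: D scores (condition, image) pairs."""
    fake = G(cond)
    # Discriminator: real pairs -> 1, fake pairs -> 0.
    opt_d.zero_grad()
    d_real = D(torch.cat([cond, target], dim=1))
    d_fake = D(torch.cat([cond, fake.detach()], dim=1))
    loss_d = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()
    # Generator: fool D, plus an L1 term pulling output toward the target.
    opt_g.zero_grad()
    d_fake = D(torch.cat([cond, fake], dim=1))
    loss_g = (bce(d_fake, torch.ones_like(d_fake))
              + 100.0 * nn.functional.l1_loss(fake, target))
    loss_g.backward()
    opt_g.step()
```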
{"title":"Single Image Tree Reconstruction via Adversarial Network","authors":"Zhihao Liu , Kai Wu , Jianwei Guo , Yunhai Wang , Oliver Deussen , Zhanglin Cheng","doi":"10.1016/j.gmod.2021.101115","DOIUrl":"10.1016/j.gmod.2021.101115","url":null,"abstract":"<div><p><span>Realistic 3D tree reconstruction is still a tedious and time-consuming task in the graphics community. In this paper, we propose a simple and efficient method for reconstructing 3D tree models with high fidelity from a single image. The key to single image-based tree reconstruction is to recover 3D shape<span> information of trees via a deep neural network learned from a set of synthetic tree models. We adopted a conditional </span></span>generative adversarial network (cGAN) to infer the 3D silhouette and skeleton of a tree respectively from edges extracted from the image and simple 2D strokes drawn by the user. Based on the predicted 3D silhouette and skeleton, a realistic tree model that inherits the tree shape in the input image can be generated using a procedural modeling technique. Experiments on varieties of tree examples demonstrate the efficiency and effectiveness of the proposed method in reconstructing realistic 3D tree models from a single image.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"117 ","pages":"Article 101115"},"PeriodicalIF":1.7,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101115","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88654312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scale-Adaptive ICP
Pub Date: 2021-07-01 | DOI: 10.1016/j.gmod.2021.101113 | Graphical Models 116, Article 101113
Yusuf Sahillioğlu, Ladislav Kavan
We present a new scale-adaptive ICP (Iterative Closest Point) method which aligns two objects that differ by rigid transformations (translations and rotations) and uniform scaling. The motivation is that input data may come in different scales (measurement units) that are not known a priori, for example when two range scans of the same object are obtained by different scanners. Classical ICP and its many variants do not handle this scale difference adequately. Our novel solution outperforms three different methods that estimate scale prior to alignment, as well as a fourth method that, like ours, jointly optimizes for scale during the alignment.
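The authors' joint optimization is their contribution and is not reproduced here. For reference, the classical closed-form similarity fit (Umeyama, 1991) shows how uniform scale can be recovered alongside rotation and translation for a fixed set of correspondences; a scale-aware ICP can alternate this fit with closest-point matching.

```python
import numpy as np

def similarity_fit(P, Q):
    """Least-squares s, R, t with Q ≈ s * R @ P + t (Umeyama, 1991).

    P, Q: (n, d) arrays of corresponding points.
    """
    mp, mq = P.mean(0), Q.mean(0)
    Pc, Qc = P - mp, Q - mq
    U, S, Vt = np.linalg.svd(Qc.T @ Pc / len(P))   # cross-covariance
    d = np.ones(P.shape[1])
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        d[-1] = -1.0                               # avoid reflections
    R = U @ np.diag(d) @ Vt
    var_p = (Pc ** 2).sum() / len(P)               # source variance
    s = (S * d).sum() / var_p                      # optimal uniform scale
    t = mq - s * R @ mp
    return s, R, t
```

An ICP loop around this would repeat: match each transformed source point to its closest target point, re-fit (s, R, t) on the matches, and apply the transform until convergence.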
{"title":"Scale-Adaptive ICP","authors":"Yusuf Sahillioğlu , Ladislav Kavan","doi":"10.1016/j.gmod.2021.101113","DOIUrl":"10.1016/j.gmod.2021.101113","url":null,"abstract":"<div><p>We present a new scale-adaptive ICP (Iterative Closest Point) method which aligns two objects that differ by rigid transformations (translations and rotations) and uniform scaling. The motivation is that input data may come in different scales (measurement units) which may not be known a priori, or when two range scans of the same object are obtained by different scanners. Classical ICP and its many variants do not handle this scale difference problem adequately. Our novel solution outperforms three different methods that estimate scale prior to alignment and a fourth method that, similar to ours, jointly optimizes for scale during the alignment.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"116 ","pages":"Article 101113"},"PeriodicalIF":1.7,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101113","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81645301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combining convex hull and directed graph for fast and accurate ellipse detection
Pub Date: 2021-07-01 | DOI: 10.1016/j.gmod.2021.101110 | Graphical Models 116, Article 101110
Zeyu Shen, Mingyang Zhao, Xiaohong Jia, Yuan Liang, Lubin Fan, Dong-Ming Yan
Detecting ellipses in images is a fundamental task in many computer vision applications. However, due to the complexity of real-world scenarios, it is still a challenge to detect ellipses accurately and efficiently. In this paper, we propose a novel method to tackle this problem based on the fast computation of a convex hull and a directed graph, which achieves promising results in both accuracy and efficiency. We use depth-first search to extract branch-free curves after adaptive edge detection. Line segments are used to represent the curvature characteristics of the curves, which are then split at sharp corners and inflection points to attain smooth arcs. The convex hull is then constructed and combined with distance, length, and direction constraints to find co-elliptic arc pairs. Arcs and their connectivity are encoded into a sparse directed graph, and ellipses are generated via fast access to the adjacency list. Finally, salient ellipses are selected subject to strict verification and weighted clustering. Extensive experiments are conducted on eight real-world datasets (six publicly available and two built by us), as well as five synthetic datasets. Our method achieves the overall highest F-measure with competitive speed compared to representative state-of-the-art methods.
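As a sketch of the first stage the abstract describes, here is a generic depth-first extraction of branch-free curves from a binary edge map. The paper's actual implementation details (adaptive edge detection, splitting criteria, arc grouping) are not reproduced.

```python
import numpy as np

NBRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def neighbours(edges, y, x):
    h, w = edges.shape
    return [(y + dy, x + dx) for dy, dx in NBRS
            if 0 <= y + dy < h and 0 <= x + dx < w and edges[y + dy, x + dx]]

def trace(edges, visited, y, x):
    """Follow unvisited, non-junction pixels from (y, x) as far as possible."""
    chain = [(y, x)]
    visited[y, x] = True
    while True:
        nxt = [(ny, nx) for ny, nx in neighbours(edges, y, x)
               if not visited[ny, nx] and len(neighbours(edges, ny, nx)) <= 2]
        if not nxt:
            return chain
        y, x = nxt[0]
        visited[y, x] = True
        chain.append((y, x))

def branch_free_curves(edges):
    """Split an 8-connected boolean edge map into branch-free pixel chains.

    Junction pixels (degree > 2) terminate chains. Endpoints (degree 1)
    are seeded first so open curves are traced end to end; remaining
    pixels (closed loops) are seeded afterwards.
    """
    visited = np.zeros_like(edges, dtype=bool)
    pixels = list(zip(*np.nonzero(edges)))
    seeds = sorted(pixels, key=lambda p: len(neighbours(edges, *p)) != 1)
    return [trace(edges, visited, y, x) for y, x in seeds
            if not visited[y, x] and len(neighbours(edges, y, x)) <= 2]
```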
{"title":"Combining convex hull and directed graph for fast and accurate ellipse detection","authors":"Zeyu Shen , Mingyang Zhao , Xiaohong Jia , Yuan Liang , Lubin Fan , Dong-Ming Yan","doi":"10.1016/j.gmod.2021.101110","DOIUrl":"10.1016/j.gmod.2021.101110","url":null,"abstract":"<div><p><span>Detecting ellipses<span> from images is a fundamental task in many computer vision applications. However, due to the complexity of real-world scenarios, it is still a challenge to detect ellipses accurately and efficiently. In this paper, we propose a novel method to tackle this problem based on the fast computation of </span></span><span><em>convex hull</em></span> and <span><em>directed graph</em></span><span>, which achieves promising results on both accuracy and efficiency. We use Depth-First-Search to extract branch-free curves after adaptive edge detection. Line segments are used to represent the curvature characteristic of the curves, followed by splitting at sharp corners and inflection points<span> to attain smooth arcs. Then the convex hull is constructed, together with the distance, length, and direction constraints, to find co-elliptic arc pairs. Arcs and their connectivity are encoded into a sparse directed graph, and then ellipses are generated via a fast access of the adjacency list<span>. Finally, salient ellipses are selected subject to strict verification and weighted clustering. Extensive experiments are conducted on eight real-world datasets (six publicly available and two built by ourselves), as well as five synthetic datasets. Our method achieves the overall highest F-measure with competitive speed compared to representative state-of-the-art methods.</span></span></span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"116 ","pages":"Article 101110"},"PeriodicalIF":1.7,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101110","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78297964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive geometric sound propagation based on A-weighting variance measure
Pub Date: 2021-07-01 | DOI: 10.1016/j.gmod.2021.101109 | Graphical Models 116, Article 101109
Hongyang Zhou, Zhong Ren, Kun Zhou
We introduce an A-weighting variance measure, an objective estimate of the quality of sound generated by geometric acoustic methods. Unlike previous measures, which apply to the impulse response, our measure establishes the relationship between the impulse response and the auralized sound that the user hears. We also develop interactive methods to evaluate the measure at run time, and an adaptive algorithm that balances quality and performance based on it. Experiments show that our method is more efficient across a wide variety of scene geometries, input sounds, reverberation conditions, and path-tracing strategies.
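The paper's variance estimator itself is not given in the abstract, but the A-weighting curve it builds on is standardized (IEC 61672); a direct transcription follows. How the weighted energies feed into the variance measure is the paper's contribution and is not reproduced here.

```python
import numpy as np

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f (Hz), per IEC 61672.

    A-weighting rescales spectral energy by perceptual loudness,
    attenuating very low and very high frequencies.
    """
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * np.log10(ra) + 2.0   # +2.0 dB normalizes to 0 dB at 1 kHz
```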
{"title":"Adaptive geometric sound propagation based on A-weighting variance measure","authors":"Hongyang Zhou, Zhong Ren, Kun Zhou","doi":"10.1016/j.gmod.2021.101109","DOIUrl":"10.1016/j.gmod.2021.101109","url":null,"abstract":"<div><p>We introduce an A-weighting variance measurement, an objective estimation of the sound quality generated by geometric acoustic methods. Unlike the previous measurement, which applies to the impulse response, our measurement establishes the relationship between the impulse response and the auralized sound that the user hears. We also develop interactive methods to evaluate the measurement at run time and an adaptive algorithm that balances quality and performance based on the measurement. Experiments show that our method is more efficient in a wide variety of scene geometry, input sound, reverberation, and path tracing strategies.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"116 ","pages":"Article 101109"},"PeriodicalIF":1.7,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101109","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91280357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VGF-Net: Visual-Geometric fusion learning for simultaneous drone navigation and height mapping
Pub Date: 2021-07-01 | DOI: 10.1016/j.gmod.2021.101108 | Graphical Models 116, Article 101108
Yilin Liu, Ke Xie, Hui Huang
Drone navigation requires a comprehensive understanding of both visual and geometric information in the 3D world. In this paper, we present a Visual-Geometric Fusion Network (VGF-Net), a deep network for the fusion analysis of visual/geometric data and the construction of 2.5D height maps for simultaneous drone navigation in novel environments. Given an initial rough height map and a sequence of RGB images, our VGF-Net extracts the visual information of the scene, along with a sparse set of 3D keypoints that capture the geometric relationship between objects in the scene. Driven by the data, VGF-Net adaptively fuses visual and geometric information, forming a unified Visual-Geometric Representation. This representation is fed to a new Directional Attention Model (DAM), which helps enhance the visual-geometric object relationship and propagates the informative data to dynamically refine the height map and the corresponding keypoints. The result is an end-to-end information fusion and mapping system that demonstrates remarkable robustness and high accuracy in autonomous drone navigation across complex indoor and large-scale outdoor scenes.
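The abstract gives VGF-Net's fusion only at a high level, so the block below is a deliberately generic, hypothetical example of gated visual-geometric feature fusion; it is not the paper's module or its Directional Attention Model.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Hypothetical channel-gated fusion of visual and geometric features.

    Concatenates the two feature maps, then lets a learned sigmoid gate
    weight the fused channels per pixel, so the network can adaptively
    lean on whichever modality is more informative at each location.
    """
    def __init__(self, c_vis, c_geo, c_out):
        super().__init__()
        self.proj = nn.Conv2d(c_vis + c_geo, c_out, kernel_size=1)
        self.gate = nn.Sequential(
            nn.Conv2d(c_vis + c_geo, c_out, kernel_size=1), nn.Sigmoid())

    def forward(self, f_vis, f_geo):
        x = torch.cat([f_vis, f_geo], dim=1)
        return self.gate(x) * self.proj(x)   # per-channel, per-pixel weighting
```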
{"title":"VGF-Net: Visual-Geometric fusion learning for simultaneous drone navigation and height mapping","authors":"Yilin Liu, Ke Xie, Hui Huang","doi":"10.1016/j.gmod.2021.101108","DOIUrl":"10.1016/j.gmod.2021.101108","url":null,"abstract":"<div><p><span>The drone navigation requires the comprehensive understanding of both visual and geometric information in the 3D world. In this paper, we present a </span><em>Visual-Geometric Fusion Network</em><span> (VGF-Net), a deep network for the fusion analysis of visual/geometric data and the construction of 2.5D height maps for simultaneous drone navigation in novel environments. Given an initial rough height map and a sequence of RGB images, our VGF-Net extracts the visual information of the scene, along with a sparse set of 3D keypoints that capture the geometric relationship between objects in the scene. Driven by the data, VGF-Net adaptively fuses visual and geometric information, forming a unified </span><em>Visual-Geometric Representation</em>. This representation is fed to a new <em>Directional Attention Model</em> (DAM), which helps enhance the visual-geometric object relationship and propagates the informative data to dynamically refine the height map and the corresponding keypoints. An entire end-to-end information fusion and mapping system is formed, demonstrating remarkable robustness and high accuracy on the autonomous drone navigation across complex indoor and large-scale outdoor scenes.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"116 ","pages":"Article 101108"},"PeriodicalIF":1.7,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101108","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81693972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Geometry-Based Layout Generation with Hyper-Relations Among Objects
Pub Date: 2021-07-01 | DOI: 10.1016/j.gmod.2021.101104 | Graphical Models 116, Article 101104
Shao-Kui Zhang, Wei-Yu Xie, Song-Hai Zhang
Recent studies show increasing demand and interest in automatic layout generation, yet there is still much room to improve plausibility and robustness. In this paper, we present a data-driven layout generation framework without model formulation and loss term optimization. We obtain and organize priors directly from dataset samples instead of sampling probabilistic distributions. Our method can therefore express relations among three or more objects that are hard to model mathematically. Subsequently, a non-learning geometric algorithm arranges objects subject to constraints such as the positions of walls and windows. Experiments show that the proposed method outperforms the state-of-the-art and that our generated layouts are competitive with those designed by professionals.
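As an illustration of priors taken directly from samples rather than fitted distributions, here is a hypothetical sketch (all names ours): observed relative placements are stored verbatim and replayed, so multi-object co-occurrences survive without a parametric model.

```python
import random
from collections import defaultdict

class SamplePriors:
    """Sample-based placement priors (hypothetical illustration).

    Observed relative placements are kept verbatim and drawn directly,
    instead of fitting and sampling a probabilistic distribution.
    """
    def __init__(self):
        # (anchor_type, obj_type) -> list of observed (dx, dy, dtheta)
        self.rel = defaultdict(list)

    def observe(self, anchor_type, obj_type, dx, dy, dtheta):
        """Record one relative placement seen in a dataset scene."""
        self.rel[(anchor_type, obj_type)].append((dx, dy, dtheta))

    def propose(self, anchor_type, obj_type):
        """Return one observed relative placement, verbatim."""
        return random.choice(self.rel[(anchor_type, obj_type)])

priors = SamplePriors()
priors.observe("bed", "nightstand", 1.1, 0.0, 0.0)   # from a dataset scene
print(priors.propose("bed", "nightstand"))
```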
{"title":"Geometry-Based Layout Generation with Hyper-Relations AMONG Objects","authors":"Shao-Kui Zhang , Wei-Yu Xie , Song-Hai Zhang","doi":"10.1016/j.gmod.2021.101104","DOIUrl":"10.1016/j.gmod.2021.101104","url":null,"abstract":"<div><p><span>Recent studies show increasing demands and interests in automatic layout generation, while there is still much room for improving the plausibility<span> and robustness. In this paper, we present a data-driven layout generation framework without model formulation and loss term optimization. We achieve and organize priors directly based on samples from datasets instead of sampling probabilistic distributions. Therefore, our method enables expressing relations among three or more objects that are hard to be mathematically modeled. Subsequently, a non-learning geometric algorithm is proposed to arrange objects considering constraints such as positions of walls and windows. Experiments show that the proposed method outperforms the state-of-the-art and our generated layouts are competitive to those designed by professionals.</span></span><span><sup>1</sup></span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"116 ","pages":"Article 101104"},"PeriodicalIF":1.7,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101104","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81194409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visually smooth multi-UAV formation transformation
Pub Date: 2021-07-01 | DOI: 10.1016/j.gmod.2021.101111 | Graphical Models 116, Article 101111
Xinyu Zheng, Chen Zong, Jingliang Cheng, Jian Xu, Shiqing Xin, Changhe Tu, Shuangmin Chen, Wenping Wang

Unmanned aerial vehicles (UAVs) are useful in both military and civilian operations. In this paper, we consider a recreational scenario, the multi-UAV formation transformation show. A visually smooth transformation must enforce three requirements at the same time: (1) visually pleasing contour morphing, where for any intermediate frame the agents form a meaningful shape and align with its contour; (2) uniform placement, where for any intermediate frame the agents are (isotropically) evenly spaced; and (3) smooth trajectories, where each agent's trajectory is as rigid/smooth as possible and completely collision-free. First, we use 2-Wasserstein-distance-based interpolation to generate a sequence of intermediate shape contours. Second, we consider the spatio-temporal motion of all the agents together, and integrate the uniformity requirement and spatial coherence into one objective function. Finally, the optimal formation transformation plan is inferred by collaborative optimization.
Extensive experimental results show that our algorithm outperforms existing algorithms in terms of visual smoothness of the transformation, boundary alignment, uniformity of agents, and rigidity of trajectories. Furthermore, our algorithm copes with challenging scenarios, including (1) source/target shapes with multiple connected components, (2) source/target shapes with different topology structures, and (3) the presence of obstacles. It therefore has great potential for real multi-UAV light shows. We created an animation demonstrating how our algorithm works; see the demo at https://1drv.ms/v/s!AheMg5fKdtdugVL0aNFfEt_deTbT?e=le5poN.
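The contour interpolation step can be sketched on equal-size point sets: solve discrete optimal transport under squared-Euclidean cost and linearly advect matched points (McCann's displacement interpolation). This assumes uniform weights and uses SciPy's Hungarian solver; the paper's contour-based construction is more elaborate.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def displacement_interpolation(A, B, t):
    """Interpolate between point sets A, B (both (n, 2)) at time t in [0, 1].

    Solves discrete optimal transport with uniform weights and
    squared-Euclidean cost, then moves each point of A straight
    toward its matched point of B.
    """
    cost = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one matching
    return (1.0 - t) * A[rows] + t * B[cols]
```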
{"title":"Visually smooth multi-UAV formation transformation","authors":"Xinyu Zheng , Chen Zong , Jingliang Cheng , Jian Xu , Shiqing Xin , Changhe Tu , Shuangmin Chen , Wenping Wang","doi":"10.1016/j.gmod.2021.101111","DOIUrl":"10.1016/j.gmod.2021.101111","url":null,"abstract":"<div><p>Unmanned airborne vehicles (UAVs) are useful in both military and civilian operations. In this paper, we consider a recreational scenario, i.e., multi-UAV formation transformation show. A visually smooth transformation needs to enforce the following three requirements at the same time: (1) visually pleasing contour morphing - for any intermediate frame, the agents form a meaningful shape and align with the contour, (2) uniform placement - for any intermediate frame, the agents are (isotropically) evenly spaced, and (3) smooth trajectories - the trajectory of each agent is as rigid/smooth as possible and completely collision free. First, we use the technique of 2-Wasserstein distance based interpolation to generate a sequence of intermediate shape contours. Second, we consider the spatio-temporal motion of all the agents altogether, and integrate the uniformity requirement and the spatial coherence into one objective function. Finally, the optimal formation transformation plan can be inferred by collaborative optimization.</p><p>Extensive experimental results show that our algorithm outperforms the existing algorithms in terms of visual smoothness of transformation, boundary alignment, uniformity of agents, and rigidity of trajectories. Furthermore, our algorithm is able to cope with some challenging scenarios including (1) source/target shapes with multiple connected components, (2) source/target shapes with different typology structures, and (3) existence of obstacles. Therefore, it has a great potential in the real multi-UAV light show. We created an animation to demonstrate how our algorithm works; See the demo at <span>https://1drv.ms/v/s!AheMg5fKdtdugVL0aNFfEt_deTbT?e=le5poN</span><svg><path></path></svg> .</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"116 ","pages":"Article 101111"},"PeriodicalIF":1.7,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101111","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76203195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}