Point-Based Registration for Multi-stained Histology Images
Jiehua Zhang, Zhang Li, Qifeng Yu
Pub Date: 2020-07-01 | DOI: 10.1109/ICIVC50857.2020.9177486 | Pages: 92-96
Image registration is a fundamental task in biological image processing. Differently stained histology images carry complementary clinical information that can assist pathologists in diagnosing disease, so improving registration accuracy is essential. In this paper, we present a robust registration method that consists of three steps: 1) extracting matched points; 2) a coarse-level pre-alignment consisting of a rigid transformation followed by an affine transformation; 3) an accurate non-rigid registration guided by the extracted points. Existing methods use the features of the image pair only for the initial alignment; we propose a new metric for the non-rigid transformation that adds a matched-point optimization term to the original similarity metric. We evaluate our method on the dataset from the ANHIR Registration Challenge, using MrTRE (median relative target registration error) to measure performance on the training data. The results show that the presented method is accurate and robust.
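For reference, ANHIR's rTRE normalizes each landmark's target registration error by the image diagonal, and MrTRE takes the median over all landmarks. A minimal sketch of the metric (function and variable names are ours):

```python
import numpy as np

def mrtre(warped_pts, target_pts, image_shape):
    """Median relative target registration error (MrTRE).

    warped_pts, target_pts: (N, 2) arrays of landmark coordinates.
    image_shape: (height, width) of the target image; errors are
    normalized by the image diagonal, as in the ANHIR evaluation.
    """
    errors = np.linalg.norm(warped_pts - target_pts, axis=1)  # TRE per landmark
    diagonal = np.hypot(*image_shape)                         # image diagonal length
    return np.median(errors / diagonal)

# Example with a small synthetic landmark set.
src = np.array([[10.0, 12.0], [40.0, 80.0], [200.0, 150.0]])
tgt = src + np.random.normal(scale=2.0, size=src.shape)       # simulated residual misalignment
print(mrtre(src, tgt, image_shape=(1024, 768)))
```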
{"title":"Point-Based Registration for Multi-stained Histology Images","authors":"Jiehua Zhang, Zhang Li, Qifeng Yu","doi":"10.1109/ICIVC50857.2020.9177486","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177486","url":null,"abstract":"Image registration is a basic task in biological image processing. Different stained histology images contain different clinical information, which could assist pathologists to diagnose a certain disease. It is necessary to improve the accuracy of image registration. In this paper, we present a robust registration method that consists of three steps: 1) extracting match points; 2) a pre-alignment consisting of a rigid transformation and an affine transformation on the coarse level; 3) an accurate non-rigid registration optimized by the extracted points. The existing methods use the features of the image pair to initial alignment. We proposed a new metric for the non-rigid transformation which adding the part of optimizing extracting points into the original metric. We evaluate our method on the dataset from the ANHIR Registration Challenge and use MrTRE (median relative target registration error) to measure the performance on the training data. The test result illustrates that the presented method is accurate and robust.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"41 1","pages":"92-96"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81318867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Td-VOS: Tracking-Driven Single-Object Video Object Segmentation
Shaopan Xiong, Shengyang Li, Longxuan Kou, Weilong Guo, Zhuang Zhou, Zifei Zhao
Pub Date: 2020-07-01 | DOI: 10.1109/ICIVC50857.2020.9177471 | Pages: 102-107
This paper presents an approach to single-object video object segmentation that is initialized with only a first-frame bounding box (no mask). The proposed method is tracking-driven: it combines an effective Box2Segmentation module with a general object-tracking module. Given only the first-frame box, the Box2Segmentation module produces segmentation results from the predicted tracking bounding box in each frame. Evaluations on the single-object video object segmentation dataset DAVIS2016 show that the proposed method achieves competitive performance, with a Region Similarity score of 75.4% and a Contour Accuracy score of 73.1%, under first-frame bounding-box initialization alone. The proposed method outperforms SiamMask, the most competitive video object segmentation method under the same settings, by 5.2% in Region Similarity and 7.8% in Contour Accuracy. It also achieves results comparable to semi-supervised VOS methods without online fine-tuning that are initialized with a first-frame mask.
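For context, DAVIS's Region Similarity (J) is the intersection-over-union between predicted and ground-truth masks; Contour Accuracy (F) is a boundary F-measure and is omitted below because it requires boundary matching. A minimal sketch of J:

```python
import numpy as np

def region_similarity(pred_mask, gt_mask):
    """DAVIS Region Similarity J: intersection-over-union of binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:                        # both masks empty: define J = 1
        return 1.0
    return np.logical_and(pred, gt).sum() / union

pred = np.zeros((100, 100), dtype=bool); pred[20:60, 20:60] = True
gt = np.zeros((100, 100), dtype=bool);   gt[25:65, 25:65] = True
print(region_similarity(pred, gt))        # ~0.62 for this overlap
```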
{"title":"Td-VOS: Tracking-Driven Single-Object Video Object Segmentation","authors":"Shaopan Xiong, Shengyang Li, Longxuan Kou, Weilong Guo, Zhuang Zhou, Zifei Zhao","doi":"10.1109/ICIVC50857.2020.9177471","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177471","url":null,"abstract":"This paper presents an approach to single-object video object segmentation, only using the first-frame bounding box (without mask) to initialize. The proposed method is a tracking-driven single-object video object segmentation, which combines an effective Box2Segmentation module with a general object tracking module. Just initialize the first frame box, the Box2Segmentation module can obtain the segmentation results based on the predicted tracking bounding box. Evaluations on the single-object video object segmentation dataset DAVIS2016 show that the proposed method achieves a competitive performance with a Region Similarity score of 75.4% and a Contour Accuracy score of 73.1%, only under the settings of first-frame bounding box initialization. The proposed method outperforms SiamMask which is the most competitive method for video object segmentation under the same settings, with Region Similarity score by 5.2% and Contour Accuracy score by 7.8%. Compared with the semi-supervised VOS methods without online fine-tuning initialized by a first frame mask, the proposed method also achieves comparable results.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"51 1","pages":"102-107"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79154877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Color Image Filtering in Bessel-Fourier Moments Domain
Tianpeng Xia, S. Liao
Pub Date: 2020-07-01 | DOI: 10.1109/ICIVC50857.2020.9177478 | Pages: 75-81
In this research, we study color image filtering in the Bessel-Fourier moments domain. Bessel-Fourier moments of two test color images are computed independently for the three color channels (RGB); lowpass and highpass filters are then applied to the data in the Bessel-Fourier moments domain. For comparison, the same filters are also applied in the Fourier frequency domain. The experimental results suggest that lower-order Bessel-Fourier moments mainly capture the smoothly varying components of images, while higher-order moments relate more to details such as sharp transitions in intensity. We also find that Gaussian filters reduce the ringing effect in the Bessel-Fourier moments domain just as they do in the Fourier frequency domain.
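The lowpass/highpass idea amounts to reconstructing each RGB channel from a subset of moment orders. A schematic sketch, assuming the common J1-kernel definition of Bessel-Fourier moments; the normalization follows Xiao et al.'s formulation rather than this paper, and the direct Riemann summation is for illustration only:

```python
import numpy as np
from scipy.special import jv, jn_zeros

def bessel_fourier_moments(channel, n_max, m_max):
    """Bessel-Fourier moments of one color channel mapped to the unit disk.

    Assumes the common J_1 radial kernel; the constant a_n = J_2(lam_n)^2 / 2
    follows Xiao et al.'s formulation and is an assumption, not from this paper.
    """
    h, w = channel.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    inside = r <= 1.0
    roots = jn_zeros(1, n_max)                    # zeros lam_n of J_1
    moments = np.zeros((n_max, 2 * m_max + 1), complex)
    area = 4.0 / (h * w)                          # pixel area in unit-square coords
    for n in range(n_max):
        a_n = jv(2, roots[n]) ** 2 / 2.0          # normalization constant
        radial = jv(1, roots[n] * r) * inside
        for m in range(-m_max, m_max + 1):
            kernel = radial * np.exp(-1j * m * theta)
            moments[n, m + m_max] = (channel * kernel).sum() * area / (2 * np.pi * a_n)
    return moments

def reconstruct_lowpass(moments, shape, keep_n):
    """Lowpass: rebuild the channel from radial orders below keep_n only."""
    h, w = shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    inside = r <= 1.0
    n_max, m_size = moments.shape
    m_max = (m_size - 1) // 2
    roots = jn_zeros(1, n_max)
    out = np.zeros(shape, complex)
    for n in range(keep_n):                       # truncate high radial orders
        radial = jv(1, roots[n] * r) * inside
        for m in range(-m_max, m_max + 1):
            out += moments[n, m + m_max] * radial * np.exp(1j * m * theta)
    return out.real
```

A highpass filter would keep the complementary orders (n >= keep_n) instead; the procedure is run once per RGB channel.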
{"title":"Color Image Filtering in Bessel-Fourier Moments Domain","authors":"Tianpeng Xia, S. Liao","doi":"10.1109/ICIVC50857.2020.9177478","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177478","url":null,"abstract":"In this research, we have conducted a study on color image filtering in Bessel-Fourier moments domain. Bessel-Fourier moments of the two testing color images are computed independently from the three color channels (RGB), then lowpass and highpass filters are applied to the data in Bessel-Fourier moments domain for our investigation. For comparison, filters are applied in Fourier Frequency domain as well. The experimental results suggest that Bessel-Fourier moments of the lower orders contain mainly information of smooth varying components of images, while those of the higher orders are more related to details such as sharp transitions in intensity. It is also found that the Gaussian filters would reduce the ringing effect in Bessel-Fourier moments domain as they do in the Fourier Frequency domain.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"112 1","pages":"75-81"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88776646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High Temperature Deformation Field Measurement Using 3D Digital Image Correlation Method
Hongtao Deng, D. Jiang, Kai Wang, Q. Fei
Pub Date: 2020-07-01 | DOI: 10.1109/ICIVC50857.2020.9177479 | Pages: 188-192
A full-field, three-dimensional, non-contact deformation-field measurement method for high-temperature environments, based on 3D digital image correlation (3D-DIC), is introduced. To reduce the impact of high-temperature radiation on image quality, a band-pass filter is placed in front of each camera lens. Two cameras simultaneously photograph the object before and after deformation, and 3D-DIC is used to measure the three-dimensional deformation field of the object's surface. A high-temperature deformation-field measurement test shows that 3D-DIC can accurately and conveniently measure the deformation field of an object in a high-temperature environment.
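At its core, DIC matches a reference subset against the deformed image by maximizing a correlation criterion that tolerates uniform intensity changes; zero-normalized cross-correlation (ZNCC) is the usual choice, which matters when residual thermal radiation shifts brightness. A minimal 2D sketch of integer-pixel subset matching; the stereo calibration and triangulation steps that make it 3D-DIC are omitted:

```python
import numpy as np

def zncc(ref_subset, cur_subset):
    """Zero-normalized cross-correlation between two image subsets."""
    f = ref_subset - ref_subset.mean()
    g = cur_subset - cur_subset.mean()
    return (f * g).sum() / np.sqrt((f ** 2).sum() * (g ** 2).sum())

def match_subset(ref, cur, center, half, search=10):
    """Integer-pixel displacement of one subset by exhaustive ZNCC search.

    Assumes `center` lies far enough from the border that all windows fit;
    subpixel refinement (e.g. Newton iteration on the criterion) is omitted.
    """
    cy, cx = center
    template = ref[cy - half:cy + half + 1, cx - half:cx + half + 1]
    best, best_uv = -1.0, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            window = cur[cy + v - half:cy + v + half + 1,
                         cx + u - half:cx + u + half + 1]
            score = zncc(template, window)
            if score > best:
                best, best_uv = score, (u, v)
    return best_uv, best
```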
{"title":"High Temperature Deformation Field Measurement Using 3D Digital Image Correlation Method","authors":"Hongtao Deng, D. Jiang, Kai Wang, Q. Fei","doi":"10.1109/ICIVC50857.2020.9177479","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177479","url":null,"abstract":"A full-field, three-dimensional and non-contact deformation field measurement method under high temperature environment based on 3D digital image correlation (3D-DIC) is introduced. In order to reduce the impact of high temperature radiation on the image quality, a band-pass filter is placed in front of the camera lens. The two cameras simultaneously take pictures of the object before and after deformation, and use 3D-DIC to measure the three-dimensional deformation field of the object surface. The high temperature deformation field measurement test shows that 3D-DIC can accurately and conveniently measure the deformation field of an object under high temperature environment.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"96 1","pages":"188-192"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88969096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clone Chaotic Niche Evolutionary Algorithm for Duty Cycle Control Optimization in Wireless Multimedia Sensor Networks
Jie Zhou, Mengying Xu, Rui Yang
Pub Date: 2020-07-01 | DOI: 10.1109/ICIVC50857.2020.9177435 | Pages: 278-282
One of the most interesting issues in wireless multimedia sensor networks (WMSNs) is maximizing network lifetime. Because sensor nodes are energy-constrained, novel duty-cycle design algorithms are important and necessary, and this design problem directly determines network lifetime in WMSNs. The contribution of this paper is a clone chaotic niche evolutionary algorithm (CCNEA) for the duty-cycle design problem in WMSNs. Novel clone and chaotic operators are designed to generate candidate solutions, and the strategy merges the merits of clone selection, chaotic generation, and a niche operator. CCNEA is a swarm-style algorithm with strong global search ability, and its chaotic generation scheme is designed to avoid local optima. Simulations verify the robustness and efficacy of CCNEA against methods based on particle swarm optimization (PSO) and the quantum genetic algorithm (QGA) under WMSN conditions. The experiments show that CCNEA outperforms PSO and QGA under different conditions, especially for WMSNs with large numbers of sensors.
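The paper's exact operators are not reproduced here; the sketch below only illustrates the general recipe of chaotic (logistic-map) mutation plus clone selection on a toy duty-cycle objective. The objective function and all parameters are placeholders, and the niche operator is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def lifetime(duty_cycles):
    """Toy stand-in objective: reward low duty cycles but penalize
    dropping below a coverage floor. The paper's real objective differs."""
    coverage_penalty = np.maximum(0.2 - duty_cycles, 0).sum()
    return 1.0 / (duty_cycles.mean() + 1e-6) - 50.0 * coverage_penalty

def logistic_map(x):
    return 4.0 * x * (1.0 - x)            # chaotic sequence in (0, 1)

n_sensors, pop_size, clones = 50, 20, 5
pop = rng.random((pop_size, n_sensors))   # each row: duty cycles of all sensors
for generation in range(100):
    fitness = np.array([lifetime(p) for p in pop])
    elite = pop[np.argsort(fitness)[-pop_size // 2:]]      # clone selection
    offspring = []
    for parent in elite:
        for _ in range(clones):
            child = parent.copy()
            idx = rng.integers(n_sensors)
            child[idx] = logistic_map(child[idx])          # chaotic mutation
            offspring.append(child)
    pool = np.vstack([elite, offspring])
    fitness = np.array([lifetime(p) for p in pool])
    pop = pool[np.argsort(fitness)[-pop_size:]]            # survivors
best = pop[np.argmax([lifetime(p) for p in pop])]
```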
{"title":"Clone Chaotic Niche Evolutionary Algorithm for Duty Cycle Control Optimization in Wireless Multimedia Sensor Networks","authors":"Jie Zhou, Mengying Xu, Rui Yang","doi":"10.1109/ICIVC50857.2020.9177435","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177435","url":null,"abstract":"One of the most interesting issue regarding to wireless multimedia sensor networks (WMSNs) is to maximizing the network lifetime. Because sensor nodes are constrained in energy, it is very important and necessary to exploit novel duty cycle design algorithms. Such a problem is important in improving network lifetime in WMSNs. The new contribution of our paper is that we propose a clone chaotic niche evolutionary algorithm (CCNEA) for duty cycle design problem in WMSNs. Novel clone operator and chaotic operator have been designed to develop solutions randomly. The strategy merges the merits of clone selection, chaotic generation, and niche operator. CCNEA is a style of swarm algorithm, which has strong global exploit ability. CCNEA utilizes chaotic generation approach which targets to avoid local optima. Then, simulations are performed to verify the robust and efficacy performance of CCNEA compared to methods according to particle swarm optimization (PSO) and quantum genetic algorithm (QGA) under an WMSNs conditions. Simulation experiments denote that the presented CCNEA outperforms PSO and QGA under different conditions, especially for WMSNs that has large number of sensors.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"15 1","pages":"278-282"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90767734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial Expression Recognition Based on Graph Neural Network
Xuchou Xu, Zhou Ruan, Lei Yang
Pub Date: 2020-07-01 | DOI: 10.1109/ICIVC50857.2020.9177430 | Pages: 211-214
Facial expressions are one of the most powerful, natural, and immediate means for human beings to convey their emotions and intentions. In this paper, we present a novel method for fully automatic facial expression recognition. Facial landmarks are detected to characterize facial expressions, and a graph convolutional neural network is proposed for feature extraction and expression classification. Experiments were performed on three facial expression databases; the results show that the proposed FER method achieves recognition accuracy of up to 95.85%.
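A graph convolution over detected landmarks typically follows Kipf and Welling's propagation rule H' = sigma(D^-1/2 (A+I) D^-1/2 H W); whether this paper uses that exact rule is not stated. A minimal numpy sketch with an illustrative landmark graph:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

n_landmarks, in_dim, out_dim = 68, 2, 16      # e.g. 68 facial landmarks with (x, y)
H = np.random.randn(n_landmarks, in_dim)      # node features: landmark coordinates
A = np.zeros((n_landmarks, n_landmarks))
for i in range(n_landmarks - 1):              # illustrative chain connectivity;
    A[i, i + 1] = A[i + 1, i] = 1             # a real model would connect facial regions
W = np.random.randn(in_dim, out_dim) * 0.1
print(gcn_layer(H, A, W).shape)               # (68, 16)
```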
{"title":"Facial Expression Recognition Based on Graph Neural Network","authors":"Xuchou Xu, Zhou Ruan, Lei Yang","doi":"10.1109/ICIVC50857.2020.9177430","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177430","url":null,"abstract":"Facial expressions are one of the most powerful, natural and immediate means for human being to present their emotions and intensions. In this paper, we present a novel method for fully automatic facial expression recognition. The facial landmarks are detected for characterizing facial expressions. A graph convolutional neural network is proposed for feature extraction and facial expression recognition classification. The experiments were performed on the three facial expression databases. The result shows that the proposed FER method can achieve good recognition accuracy up to 95.85% using the proposed method.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"80 1","pages":"211-214"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88967613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dual Stream Segmentation Network for Real-Time Semantic Segmentation
Changyuan Zhong, Zelin Hu, Miao Li, Hualong Li, Xuanjiang Yang, Fei Liu
Pub Date: 2020-07-01 | DOI: 10.1109/ICIVC50857.2020.9177439 | Pages: 144-149
Modern real-time segmentation methods employ a two-branch framework to achieve a good speed-accuracy trade-off. However, we observe that low-level features from the shallow layers undergo less processing, producing a potential semantic gap between different levels of features. Meanwhile, a rigid fusion is less effective because it ignores the characteristics of the two-branch framework. In this paper, we propose two novel modules, the Unified Interplay Module and the Separate Pyramid Pooling Module, to address these two issues respectively. Based on these modules, we present the Dual Stream Segmentation Network (DSSNet), a novel two-branch framework for real-time semantic segmentation. Compared with BiSeNet, our ResNet18-based DSSNet achieves better performance, 76.45% mIoU on the Cityscapes test set, at a similar computational cost. Furthermore, our DSSNet with a ResNet34 backbone outperforms previous real-time models, achieving 78.5% mIoU on the Cityscapes test set at 39 FPS on a GTX 1080 Ti.
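The paper's Unified Interplay Module and Separate Pyramid Pooling Module are not specified in the abstract, so the sketch below is only a generic BiSeNet-style two-branch skeleton showing where such fusion modules would sit; the fusion here is a plain 1x1-convolution placeholder:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSegNet(nn.Module):
    """Generic two-branch real-time segmentation skeleton.
    The fusion block is a placeholder, not the paper's modules."""
    def __init__(self, num_classes=19):
        super().__init__()
        # Detail branch: shallow, high resolution (output stride 8).
        self.detail = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        # Semantic branch: deeper context, low resolution (output stride 32).
        self.semantic = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=4, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=4, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(256, 128, 1)    # placeholder fusion module
        self.head = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        d = self.detail(x)                    # (B, 128, H/8, W/8)
        s = self.semantic(x)                  # (B, 128, H/32, W/32)
        s = F.interpolate(s, size=d.shape[2:], mode='bilinear', align_corners=False)
        y = self.head(self.fuse(torch.cat([d, s], dim=1)))
        return F.interpolate(y, size=x.shape[2:], mode='bilinear', align_corners=False)

logits = TwoBranchSegNet()(torch.randn(1, 3, 512, 1024))   # (1, 19, 512, 1024)
```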
{"title":"Dual Stream Segmentation Network for Real-Time Semantic Segmentation","authors":"Changyuan Zhong, Zelin Hu, Miao Li, Hualong Li, Xuanjiang Yang, Fei Liu","doi":"10.1109/ICIVC50857.2020.9177439","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177439","url":null,"abstract":"Modern real-time segmentation methods employ two-branch framework to achieve good speed and accuracy trade-off. However, we observe that low-level features coming from the shallow layers go through less processing, producing a potential semantic gap between different levels of features. Meanwhile, a rigid fusion is less effective due to the absence of consideration for two-branch framework characteristics. In this paper, we propose two novel modules: Unified Interplay Module and Separate Pyramid Pooling Module to address those two issues respectively. Based on our proposed modules, we present a novel Dual Stream Segmentation Network (DSSNet), a two-branch framework for real-time semantic segmentation. Compared with BiSeNet, our DSSNet based on ResNet18 achieves better performance 76.45% mIoU on the Cityscapes test dataset while sharing similar computation costs with BiSeNet. Furthermore, our DSSNet with ResNet34 backbone outperforms previous real-time models, achieving 78.5% mIoU on the Cityscapes test dataset with speed of 39 FPS on GTX1080Ti.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"46 1","pages":"144-149"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91101276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of Face Recognition Attendance
Hai-Wu Lee, Wen-Tan Gu, Yuan-yuan Wang
Pub Date: 2020-07-01 | DOI: 10.1109/ICIVC50857.2020.9177492 | Pages: 222-226
In recent years, face recognition technology has developed rapidly and its range of applications has grown; it is one of the most important application fields of computer vision. However, many technical factors still restrict its application and promotion: shadows, occlusions, mixed light and dark regions, low light, highlights, and other conditions can make the recognition rate drop sharply. Face recognition therefore has high research and application value. We use the Local Binary Patterns (LBP) algorithm with histogram equalization to enhance image quality and improve the recognition rate in different scenarios, and we apply face recognition to attendance tracking.
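A plausible minimal pipeline for the described feature extraction, assuming the basic 3x3 LBP variant (the abstract does not specify which variant is used, and 'face.jpg' is a hypothetical input file):

```python
import cv2
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Pattern: each pixel becomes an 8-bit code
    encoding comparisons of its neighbors against the center pixel."""
    padded = np.pad(gray.astype(np.int16), 1, mode='edge')
    center = padded[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = padded[1 + dy:padded.shape[0] - 1 + dy,
                          1 + dx:padded.shape[1] - 1 + dx]
        code |= (neighbor >= center).astype(np.uint8) << bit
    return code

gray = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE)    # hypothetical input
equalized = cv2.equalizeHist(gray)                     # normalize illumination first
codes = lbp_image(equalized)
hist = cv2.calcHist([codes], [0], None, [256], [0, 256]).ravel()
feature = hist / hist.sum()       # 256-bin LBP histogram, compared across faces
```

Equalizing before computing LBP addresses the low-light and highlight conditions mentioned above; LBP itself is invariant to monotonic intensity changes.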
{"title":"Design of Face Recognition Attendance","authors":"Hai-Wu Lee, Wen-Tan Gu, Yuan-yuan Wang","doi":"10.1109/ICIVC50857.2020.9177492","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177492","url":null,"abstract":"In recent years, face recognition technology has developed rapidly, and its application range has become more and more extensive. It is one of the most important application fields in computer vision technology. However, there are still many technical factors that restrict the application and promotion of face recognition technology. For example: shadows, occlusions, light and dark areas, dark light, highlights and other factors will make the face recognition rate drop sharply. Therefore, face recognition has extremely high research and application value. We use the Local Binary Patterns (LBP) algorithms with histogram equalization to obtain high-resolution images and improve the recognition rate in different scenarios, and try to apply face recognition to attendance.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"84 1","pages":"222-226"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83810027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI Illustrator: Art Illustration Generation Based on Generative Adversarial Network
Zihan Chen, Lianghong Chen, Zhiyuan Zhao, Yue Wang
Pub Date: 2020-07-01 | DOI: 10.1109/ICIVC50857.2020.9177494 | Pages: 155-159
In recent years, people's pursuit of art has been on the rise, and there is growing interest in computers that can create artistic paintings from descriptions. In this paper, we propose a novel project, Painting Creator, which uses deep learning to let a computer generate artistic illustrations from a short piece of text. Our scheme includes two models: an image generation model and a style transfer model. For the image generation model, inspired by the application of stacked generative adversarial networks to text-to-image generation, we propose an improved model, IStackGAN. We add a classifier to the original model, along with an image-structure loss and a feature-extraction loss, to improve the generator: the generator draws additional hidden information from the classification signal to produce better pictures, the image-structure loss forces it to restore the real image, and the feature-extraction loss verifies whether the generator has captured the features of the real image set. For the style transfer model, we improve the generator of the original cycle-consistent generative adversarial network, using residual blocks to improve the stability and performance of the U-Net generator, and we augment the cycle-consistency loss with MS-SSIM. Experimental results show that our models improve significantly on the originals: the generated pictures are more vivid in detail, and the style-transferred pictures are more artistic to view.
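CycleGAN's cycle-consistency term is ordinarily an L1 loss; blending it with MS-SSIM can be sketched as below using the third-party pytorch_msssim package. The mixing weight alpha is our placeholder (0.84 follows Zhao et al.'s mixed loss for image restoration, not this paper):

```python
import torch
from pytorch_msssim import ms_ssim   # third-party: pip install pytorch-msssim

def cycle_loss_ms_ssim(real, reconstructed, alpha=0.84):
    """Cycle-consistency loss mixing L1 with MS-SSIM.

    alpha is a placeholder weight; the paper's actual value is not stated.
    """
    l1 = torch.mean(torch.abs(real - reconstructed))
    # ms_ssim expects inputs in [0, data_range] and returns similarity in [0, 1]
    ssim_term = 1.0 - ms_ssim(real, reconstructed, data_range=1.0)
    return alpha * ssim_term + (1.0 - alpha) * l1

real = torch.rand(2, 3, 256, 256)                          # images scaled to [0, 1]
fake = (real + 0.05 * torch.randn_like(real)).clamp(0, 1)  # simulated reconstruction
print(cycle_loss_ms_ssim(real, fake).item())
```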
{"title":"AI Illustrator: Art Illustration Generation Based on Generative Adversarial Network","authors":"Zihan Chen, Lianghong Chen, Zhiyuan Zhao, Yue Wang","doi":"10.1109/ICIVC50857.2020.9177494","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177494","url":null,"abstract":"In recent years, people's pursuit of art has been on the rise. People want computers to be able to create artistic paintings based on descriptions. In this paper, we proposed a novel project, Painting Creator, which uses deep learning technology to enable the computer to generate artistic illustrations from a short piece of text. Our scheme includes two models, image generation model and style transfer model. In the real image generation model, inspired by the application of stack generative adversarial networks in text to image generation, we proposed an improved model, IStackGAN, to solve the problem of image generation. We added a classifier based on the original model and added image structure loss and feature extraction loss to improve the performance of the generator. The generator network can get additional hidden information from the classification information to produce better pictures. The loss of image structure can force the generator to restore the real image, and the loss of feature extraction can verify whether the generator network has extracted the features of the real image set. For the style transfer model, we improved the generator based on the original cycle generative adversarial networks and used the residual block to improve the stability and performance of the u-net generator. To improve the performance of the generator, we also added the cycle consistent loss with MS-SSIM. The experimental results show that our model is improved significantly based on the original paper, and the generated pictures are more vivid in detail, and pictures after the style transfer are more artistic to watch.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"69 1","pages":"155-159"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84354177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing Gully Planform Changes in GIS Based on Multi-level Topological Relations
Feng Guoqiang, Leng Liang, Ye Yinghui, Han Dong-liang
Pub Date: 2020-07-01 | DOI: 10.1109/ICIVC50857.2020.9177461 | Pages: 292-295
Topological relations describe qualitative geometric position relations between spatial objects in the geospatial world and play important roles in spatial query, spatial analysis, and spatial reasoning. They can be applied to describe the morphological changes of real objects, such as cadastral parcels, rivers, and water systems. Gully planform changes (GPCs) reflect the state of surface soil erosion, so describing GPCs in detail is important and valuable. In this paper, based on a hierarchical topological relation description method and the features of GPCs in GIS, we propose a simple hierarchical topological relation description method for GPCs. The method describes GPCs completely and is more concise and efficient at describing GPCs than the earlier hierarchical method.
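Topological relations between two polygon snapshots can be read off the DE-9IM intersection matrix, which hierarchical description methods refine. A sketch with shapely comparing a gully outline at two dates (coordinates are invented):

```python
from shapely.geometry import Polygon

# Hypothetical gully outlines digitized at two dates.
gully_t1 = Polygon([(0, 0), (4, 0), (4, 2), (0, 2)])
gully_t2 = Polygon([(1, 0), (6, 0), (6, 2), (1, 2)])   # headward extension

# DE-9IM matrix: interior/boundary/exterior intersections of the two regions.
print(gully_t1.relate(gully_t2))        # e.g. '212101212' for partial overlap
print(gully_t1.overlaps(gully_t2))      # True: the planform has changed
growth = gully_t2.difference(gully_t1)  # newly eroded area
print(growth.area)                      # quantifies the change
```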
{"title":"Analyzing Gully Planform Changes in GIS Based on Multi-level Topological Relations","authors":"Feng Guoqiang, Leng Liang, Ye Yinghui, Han Dong-liang","doi":"10.1109/ICIVC50857.2020.9177461","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177461","url":null,"abstract":"Topological relations can be used to describe qualitative geometric position relations between spatial objects in geospatial world, which plays important roles in spatial query, spatial analysis and spatial reasoning. People can apply topological relations to describe the morphological changes of real objects, such as changes of cadastral parcels, rivers, water systems, etc. Gully planform changes (GPCs) reflect the state of surface soil erosion, so it is important and valuable to describe GPCs in detail. In this paper, based on a hierarchical topological relation description method and combined with the features of GPCs in GIS, we propose a simple hierarchical topological relationship description method to describe GPCs. This method can be used to completely describe GPCs, and is more concise and efficient than the former hierarchical topological relation description method in describing GPCs.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"29 1","pages":"292-295"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82513028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}