Pub Date: 2020-07-01. DOI: 10.1109/ICIVC50857.2020.9177461
Feng Guoqiang, Leng Liang, Ye Yinghui, Han Dong-liang
Topological relations can be used to describe qualitative geometric position relations between spatial objects in the geospatial world, and they play important roles in spatial query, spatial analysis, and spatial reasoning. Topological relations can also describe the morphological changes of real objects, such as changes of cadastral parcels, rivers, and water systems. Gully planform changes (GPCs) reflect the state of surface soil erosion, so it is important and valuable to describe GPCs in detail. In this paper, building on a hierarchical topological relation description method and the features of GPCs in GIS, we propose a simple hierarchical topological relation description method for GPCs. This method describes GPCs completely, and it is more concise and efficient at describing GPCs than the earlier hierarchical topological relation description method.
Title: "Analyzing Gully Planform Changes in GIS Based on Multi-level Topological Relations". 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC), pp. 292-295.
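As a toy illustration of the kind of qualitative relation this line of work builds on, the sketch below computes a rasterized 9-intersection-style matrix for two regions given as boolean masks. This is an illustrative approximation on a grid, not the authors' hierarchical description method.

```python
import numpy as np

def nine_intersection(mask_a, mask_b):
    """Rasterized approximation of the 9-intersection matrix: rows/columns
    index (interior, boundary, exterior) of each region; an entry is 1 if
    the corresponding parts share at least one grid cell."""
    def parts(mask):
        padded = np.pad(mask, 1, constant_values=False)
        # boundary: region cells with at least one 4-neighbour outside the region
        touches_outside = (~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
                           ~padded[1:-1, :-2] | ~padded[1:-1, 2:])
        boundary = mask & touches_outside
        return mask & ~boundary, boundary, ~mask
    return np.array([[int((pa & pb).any()) for pb in parts(mask_b)]
                     for pa in parts(mask_a)])
```

For overlapping regions the interior/interior entry is 1; for disjoint regions it is 0 while exterior/exterior remains 1, which is how the classic overlap and disjoint relations are distinguished.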
Pub Date: 2020-07-01. DOI: 10.1109/ICIVC50857.2020.9177457
Zhen Yang, Yingzhe Yao, Shanshan Tu
The information explosion, in both cyberspace and the real world, has created a pressing need for comprehensive summaries of information. The challenge in constructing a quality summary lies in filtering out information of low relevance and mining highly sparse relevant topics from a vast sea of data. This is a typical imbalanced learning task: a precise summary of a temporal event requires an accurate description and definition of both the useful information and the redundant information. In response to this challenge, we introduce: (1) a uniform framework for temporal event summarization with minimal-residual-optimization matrix factorization as its key part; and (2) a novel neighborhood-preserving semantic measure (NPS) to capture the sparse candidate topics under that low-rank matrix factorization model. To evaluate the effectiveness of the proposed solution, a series of experiments is conducted on an annotated KBA corpus. The results of these experiments show that the proposed solution improves the quality of temporal summarization compared with the established baselines.
Title: "Exploiting Sparse Topics Mining for Temporal Event Summarization". 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC), pp. 322-331.
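The low-rank factorization at the core of such a framework can be illustrated with the classical Frobenius-optimal rank-k factorization via truncated SVD (Eckart-Young). This is a stand-in for intuition only; the paper's minimal-residual optimization and the NPS measure are not reproduced here.

```python
import numpy as np

def best_rank_k(X, k):
    """Rank-k factorization X ~ W @ H minimizing the Frobenius residual
    ||X - W @ H||_F (truncated SVD); a stand-in for the paper's
    minimal-residual matrix factorization model."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k]
```

For a matrix whose true rank matches k, the residual vanishes; shrinking k below the true rank leaves a strictly positive residual, which is the quantity a minimal-residual method drives down.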
Straw burning seriously pollutes the air, and the pollution can only be stopped by locating where the burning occurs. Straw burning can be detected from two cues: flame and smoke. Because straw burning is usually accompanied by heavy smoke, we decide to detect straw burning through its smoke. Existing smoke detection methods all have various shortcomings, such as ignoring the dynamic characteristics of smoke and relying on inefficient, complex processing. This paper therefore proposes a smoke detection method based on an improved frame difference method and Faster R-CNN: the improved frame difference method first extracts candidate regions, and the Faster R-CNN model then performs smoke detection on them. For the extracted candidate regions, this paper proposes several schemes for expanding the regions so that complete smoke information is retained to the greatest possible extent, and the best expansion scheme is determined through experiments. Experiments show that the improved frame difference method has a clear effect: compared with the plain Faster R-CNN model, the maximum accuracy improves by 10.6%.
Title: "Straw Burning Detection Method Based on Improved Frame Difference Method and Deep Learning". Authors: Shiwei Wang, Feng Yu, Changlong Zhou, Minghua Jiang. Pub Date: 2020-07-01. DOI: 10.1109/ICIVC50857.2020.9177456. 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC), pp. 29-33.
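One simple candidate-region expansion scheme of the kind the paper compares is to scale a box around its centre and clip to the image bounds. The scale-around-centre rule here is an assumption for illustration; the paper's exact schemes are not specified in the abstract.

```python
def expand_box(box, scale, img_w, img_h):
    """Expand a candidate region (x, y, w, h) by `scale` around its centre,
    clipped to the image bounds. Illustrative; the paper's actual expansion
    schemes are assumptions here."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2          # box centre
    nw, nh = w * scale, h * scale          # scaled size
    nx = max(0.0, cx - nw / 2)
    ny = max(0.0, cy - nh / 2)
    nx2 = min(float(img_w), cx + nw / 2)   # clip right/bottom edges
    ny2 = min(float(img_h), cy + nh / 2)
    return (nx, ny, nx2 - nx, ny2 - ny)
```

A box near the image border simply loses the part of the expansion that would fall outside the frame.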
Pub Date: 2020-07-01. DOI: 10.1109/ICIVC50857.2020.9177469
Li Shupei, Z. Hui, Zhang Zhisheng, Xia Zhijie
This paper proposes a new approach that combines the Hough Transform (HT) and corner detection to detect polygons, considering their integrated rather than individual characteristics. We establish a Polygon Parameter Space (PPS) to fit and characterize polygons, consisting of angles, coordinates, USAN values, and the intersections of every two lines. First, the Canny operator extracts the edge map, HT detects lines along the polygon edges, and the PPS is computed. Second, corner detection among the intersections is realized by comparing USAN values with intersection angles; an adaptive threshold and an adjusted nucleus brightness for USAN are introduced to obtain accurate vertices from the corners. Finally, we propose an algorithm based on Depth-First Search (DFS) that fits the vertex set according to the PPS parameters, regardless of whether the polygons are convex (CVPs) or concave (CCPs). The experimental results show that the proposed approach detects polygons effectively with less running time and higher accuracy, and it has an advantage in detecting CVP and CCP shapes with broken vertices.
Title: "A New Method for Polygon Detection Based on Hough Parameter Space and USAN Region". 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC), pp. 44-49.
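The candidate vertices come from intersecting pairs of Hough-detected lines. In the standard normal parameterization x·cos(theta) + y·sin(theta) = rho, the intersection of two lines is a 2x2 linear solve; a minimal sketch (not the authors' full PPS construction):

```python
import numpy as np

def hough_intersection(rho1, theta1, rho2, theta2):
    """Intersection of two lines given in Hough normal form
    x*cos(theta) + y*sin(theta) = rho. Returns None for (near-)parallel
    lines, whose coefficient matrix is singular."""
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([rho1, rho2], dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    return np.linalg.solve(A, b)
```

For example, the vertical line x = 2 (theta = 0, rho = 2) and the horizontal line y = 3 (theta = pi/2, rho = 3) meet at (2, 3).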
Pub Date: 2020-07-01. DOI: 10.1109/ICIVC50857.2020.9177470
Di Chen, Yi Wu
With the development of deep learning, learning-based 3D reconstruction has attracted a substantial amount of attention, and various single-image 3D reconstruction networks have been proposed. However, due to self-occlusion, the information captured in a single image is highly limited, resulting in inaccurate and unstable reconstruction results. In this paper, a feature combination module is proposed to enable existing single-image 3D reconstruction networks to perform 3D reconstruction from multiview images. In addition, we study the impact of the number of input multiview images and of network output points on reconstruction quality, in order to determine how many input images and output points a reasonable reconstruction requires. In the experiments, point clouds are generated with different numbers of input images and output points. Experimental results show that the Chamfer distance decreases by 20%-30% with an optimal number of five input multiview images and at least 1000 output points.
Title: "Exploring the Properties of Points Generation Network". 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC), pp. 272-277.
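The Chamfer distance used as the evaluation metric can be computed for small point sets directly in NumPy. This is one common symmetric variant (mean of nearest-neighbour squared distances in both directions); the paper's exact normalization is an assumption.

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (n x d) and Q (m x d):
    mean squared distance from each point to its nearest neighbour in the
    other set, summed over both directions."""
    d2 = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)  # (n, m) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Identical point clouds score 0; the metric grows as predicted points drift from the ground-truth surface samples.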
An improved three-frame-difference algorithm is proposed. The algorithm contains two sub-algorithms: a double-layer three-frame-difference algorithm, and a history-location-data statistics and analysis algorithm. The double-layer three-frame-difference algorithm fills in incomplete parts of the target, and the history-location-data statistics and analysis algorithm eliminates noise. Two examples are chosen, containing one target (about 10×40 pixels) and two targets (about 80×160 pixels), respectively. The results prove that the improved three-frame-difference algorithm resolves the problems of the traditional one-layer three-frame-difference algorithm and produces accurate results.
Title: "Improved Three-Frame-Difference Algorithm for Infrared Moving Target". Authors: X. Luo, Ke-bin Jia, Pengyu Liu, Daoquan Xiong, Xiuchen Tian. Pub Date: 2020-07-01. DOI: 10.1109/ICIVC50857.2020.9177468. 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC), pp. 108-112.
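The baseline the paper improves on, the classic one-layer three-frame difference, is easy to sketch: a pixel is flagged as moving only if it changed both between frames 0 and 1 and between frames 1 and 2. A minimal NumPy version (the paper's double-layer extension and history statistics are not reproduced here):

```python
import numpy as np

def three_frame_diff(f0, f1, f2, thresh):
    """Classic three-frame difference: the AND of two successive absolute
    frame differences isolates the moving target in the middle frame while
    suppressing the 'ghost' left at its old position."""
    d1 = np.abs(f1.astype(np.int32) - f0.astype(np.int32)) > thresh
    d2 = np.abs(f2.astype(np.int32) - f1.astype(np.int32)) > thresh
    return d1 & d2
```

With a bright block sliding across a dark background, the mask fires inside the block's middle-frame position but not at its frame-0 position, which only d1 sees.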
Pub Date: 2020-07-01. DOI: 10.1109/ICIVC50857.2020.9177450
Hongtao Wu, Ying Meng, Bingqing Niu
This paper proposes a novel 3D surface reconstruction method with posterior constraints from edge detection that is applicable to general digital cameras. The intrinsic parameters are calibrated with Zhang's calibration method. After matching the images taken at two different orientations, the fundamental matrix and the corresponding motion parameters are estimated; selecting the optical-center coordinate system of the left camera as the world coordinate system, the projection matrices for the two orientations are obtained. Finally, the 3D coordinates of the object feature points are computed and the object surface is displayed with VRML. The system is simple, and the proposed method is suitable for general digital cameras.
Title: "A Novel 3D Surface Reconstruction Method with Posterior Constraints of Edge Detection". 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC), pp. 55-58.
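The final step, recovering a 3D point from its two projections once the projection matrices are known, is standard linear (DLT) triangulation. A minimal sketch under ideal noise-free assumptions (the paper's edge-detection constraints are not modeled):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point whose images under
    the 3x4 projection matrices P1, P2 are x1, x2 (inhomogeneous pixel
    coordinates). The solution is the null vector of the stacked system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous 3D point
    return X[:3] / X[3]
```

With P1 = [I | 0] as the left (world) camera, as in the paper's choice of coordinate system, and P2 a translated camera, the original point is recovered exactly in the noise-free case.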
Pub Date: 2020-07-01. DOI: 10.1109/ICIVC50857.2020.9177493
Yingjie Gao, Yuechao Chen, Fangyong Wang, Yalong He
The scarcity of labeled underwater acoustic samples, and the difficulty of obtaining them, have created a bottleneck for introducing deep learning methods into underwater acoustic target recognition. Aiming at these problems, this paper proposes a recognition method for underwater acoustic targets based on the combination of a Deep Convolutional Generative Adversarial Network (DCGAN) and Densely Connected Convolutional Networks (DenseNet). To meet the input requirements of the deep learning models, a sample set of wavelet time-frequency graphs for underwater acoustic targets is constructed, drawing on prior knowledge from conventional sonar signal processing. A DCGAN model for generating underwater acoustic samples and a DenseNet model for recognizing underwater acoustic targets are designed, and the quality of the generated samples is optimized through three stages of iterative training, thereby expanding the training set and improving the recognition of underwater acoustic targets.
Title: "Recognition Method for Underwater Acoustic Target Based on DCGAN and DenseNet". 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC), pp. 215-221.
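The networks consume time-frequency images of the acoustic signal. As a simplified stand-in for the paper's wavelet time-frequency graph, the sketch below builds a windowed-FFT (STFT) magnitude image; the window, hop, and the use of STFT instead of a wavelet transform are all assumptions for illustration.

```python
import numpy as np

def stft_magnitude(signal, win=64, hop=32):
    """STFT magnitude image of a 1D signal: Hann-windowed frames of length
    `win` every `hop` samples, one FFT per frame. Rows are frequency bins,
    columns are time frames. A stand-in for a wavelet time-frequency graph."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T
```

A pure tone shows up as a bright horizontal line at its frequency bin, which is the kind of structure the recognition network learns to exploit.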
Pub Date: 2020-07-01. DOI: 10.1109/ICIVC50857.2020.9177473
Li Guo, Xia Wang, Mingyu Yue
Integrating ZY3 and GF3 satellite images enables long-term acquisition of DOM products. However, selecting points in SAR images is a difficult task, and edge accuracy cannot be guaranteed when the optical and SAR images are processed individually, so an improved DOM generation approach is proposed to ensure both the accuracy and the efficiency of DOM generation. Twenty-nine scenes of ZY3 imagery and twenty scenes of GF3 imagery over Hainan island were selected as experimental data. The results show that 73.47% of the orthorectification results have horizontal accuracy better than 1 pixel (10 m) and 26.53% better than 2 pixels (20 m), which meets the horizontal accuracy requirement of 1:50000-scale surveying and mapping in China. At the same time, an improved approach using dodging and mosaic-line editing is proposed to integrate the orthorectified images; the color transition in the ZY3 data region and the hue transition in the GF3 data region are more natural, and little manual editing is required. Therefore, the efficiency and accuracy of the improved DOM generation approach can be guaranteed over large areas, and it can serve as a reference for users generating DOMs from integrated ZY3 and GF3 imagery.
Title: "Improved Digital Orthorectification Map Generation Approach Using the Integrating of ZY3 and GF3 Image". 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC), pp. 82-85.
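The reported 73.47% / 26.53% split is just a bucketing of checkpoint errors by pixel thresholds. A small sketch of that accuracy bucketing (the checkpoint error values themselves are hypothetical):

```python
import numpy as np

def accuracy_fractions(errors_m, pixel_m=10.0):
    """Fraction of checkpoints with horizontal error within 1 pixel, and the
    fraction between 1 and 2 pixels, at a given ground sample distance
    (10 m/pixel here, matching the paper's imagery)."""
    e = np.asarray(errors_m, dtype=float)
    within_1px = np.mean(e <= pixel_m)                       # better than 1 pixel
    between_1_and_2px = np.mean((e > pixel_m) & (e <= 2 * pixel_m))
    return within_1px, between_1_and_2px
```

Feeding in the measured checkpoint errors from an orthorectified scene reproduces exactly the two percentages the paper reports.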