Pub Date: 2020-12-07 | DOI: 10.1109/iCAST51195.2020.9319496
Hidetoshi Ito, B. Chakraborty
Recently, Social Networking Services (SNS) have come into extensive use owing to the proliferation of the Internet and of cheap, compact, easy-to-use computing devices. Texting, especially via Twitter, is very popular among people of all ages all over the world, and enormous amounts of text data are generated regularly, containing various types of information, rumors, sentimental expressions, etc. The topics found in social media data tend to change with the passing of time and sometimes fade out completely after a certain period. Such time-varying topics may include beneficial information that could support decision making by the general public as well as governmental organizations. Especially for the recent COVID-19 pandemic, extracting and visualizing the changing needs of people might help in devising better countermeasures. In this study, COVID-19-related tweets were collected and analyzed in units of time (hour, day, and month) by means of various clustering models to visualize the dynamic changes of topics over time. Sentence-BERT was found to be the most effective among the techniques used here, though it is not yet sufficient for a clear semantic understanding of the topics.
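The time-bucketed clustering step can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: TF-IDF vectors stand in for Sentence-BERT embeddings, and scikit-learn's KMeans stands in for the unspecified clustering models.

```python
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_by_time(tweets, n_clusters=2):
    """Cluster tweets separately inside each time bucket (hour/day/month).

    `tweets` is a list of (time_bucket, text) pairs. TF-IDF vectors are a
    stand-in here for Sentence-BERT embeddings; per-bucket cluster labels
    let topic changes be compared across buckets.
    """
    buckets = defaultdict(list)
    for bucket, text in tweets:
        buckets[bucket].append(text)
    labels = {}
    for bucket, texts in buckets.items():
        vecs = TfidfVectorizer().fit_transform(texts)
        k = min(n_clusters, len(texts))  # never ask for more clusters than tweets
        labels[bucket] = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vecs)
    return labels
```

With Sentence-BERT, the `TfidfVectorizer` line would be replaced by an embedding call from the sentence-transformers library; the bucketing and clustering logic would stay the same.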
{"title":"Social Media Mining with Dynamic Clustering: A Case Study by COVID-19 Tweets","authors":"Hidetoshi Ito, B. Chakraborty","doi":"10.1109/iCAST51195.2020.9319496","journal":"2020 11th International Conference on Awareness Science and Technology (iCAST)","publicationDate":"2020-12-07"}
Pub Date: 2020-12-07 | DOI: 10.1109/iCAST51195.2020.9319478
Jian Song, Cui Xie, Junyu Dong
An ocean temperature front is a narrow transition zone where the temperature changes dramatically; it can be described by the temperature gradient. The temporal and spatial variations and patterns of ocean fronts are of great concern to researchers, who traditionally study them through tedious observation and comparison of many spatial distribution maps of ocean fronts at different moments. However, a particular set of spatial snapshots may reflect only certain spatial or temporal aspects of a front. This study designed a collaborative interactive visualization system that integrates temporal and spatial analysis of ocean fronts with experts' knowledge, achieving higher analysis efficiency and greater comprehensiveness. Interactive statistical charts facilitate focus+context selection of points of interest in time and space, while the interactive Map-View and Map-Gallery support spatial analysis from overview to details. Moreover, this paper uses an unsupervised learning model, the Self-Organizing Map (SOM), to conduct spatio-temporal cluster analysis of different ocean fronts near the China Sea. The clustering results can be customized through users' color specifications, and evaluated and interactively adjusted using researchers' knowledge. The spatio-temporal patterns in the clustering results can easily be mined through the collaborative linkage of multiple views, including the unified distance matrix (U-Matrix), component planes, a feature parallel-coordinates plot, the Map-View, and other charts. The effectiveness and usability of the proposed system are demonstrated with two case studies.
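The SOM at the core of the cluster analysis can be illustrated with a minimal NumPy implementation. The grid size, learning-rate schedule, and neighborhood decay below are illustrative choices, not the paper's settings.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Self-Organizing Map on the row vectors in `data`.

    Returns the (h, w, dim) weight grid and the best-matching-unit (BMU)
    grid coordinates of each sample. Samples mapped to nearby units form
    the clusters that a U-Matrix or component plane would then visualize.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.standard_normal((h, w, dim))
    # Grid coordinates of every unit, for the Gaussian neighborhood below.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in data[rng.permutation(len(data))]:
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Units near the BMU are pulled toward the sample.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)
    bmus = np.array([np.unravel_index(
        np.argmin(np.linalg.norm(weights - x, axis=-1)), (h, w)) for x in data])
    return weights, bmus
```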
{"title":"OFViser: An Interactive Visual System for Spatiotemporal Analysis of Ocean Front","authors":"Jian Song, Cui Xie, Junyu Dong","doi":"10.1109/iCAST51195.2020.9319478","journal":"2020 11th International Conference on Awareness Science and Technology (iCAST)","publicationDate":"2020-12-07"}
Ocean eddies are common phenomena of ocean water movement. They have a significant impact on the physical properties of marine hydrology, marine chemistry, and the marine biological environment. Ocean eddy detection has become one of the most active research areas in physical oceanography. A recent trend in eddy detection is to employ deep learning methods, but such work is still at an early stage. Accordingly, this work takes advantage of the rapid development of deep learning to improve current results in ocean eddy detection. We apply an improved, reliable high-resolution representation network to eddy detection and classification from Sea Surface Height (SSH) maps, formulated as semantic segmentation. This high-resolution network aggregates representations from all of its parallel convolutions and repeats the feature-fusion operation, so it can maintain, and eventually produce, high-resolution representations throughout the whole feature-extraction process. We then refine the segmentation result with a CascadePSP module and obtain more accurate results than those produced by existing approaches. Our work shows good performance on sea surface height data, which also verifies the value of deep learning technology in the field of ocean monitoring and data mining.
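The repeated multi-resolution fusion the abstract describes can be illustrated with a toy NumPy sketch: every parallel branch is upsampled to the finest resolution and summed. A real high-resolution network fuses into every branch with learned convolutions; this sketch shows only the aggregation idea.

```python
import numpy as np

def upsample(feat, factor):
    """Nearest-neighbor upsampling of a (C, H, W) feature map."""
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_branches(branches):
    """Fuse parallel feature maps of different spatial resolutions.

    `branches` is a list of (C, H, W) arrays where each successive branch
    has a coarser (integer-divisor) resolution; all are upsampled to the
    finest branch and summed, mimicking one fusion step that keeps a
    high-resolution representation alive through the network.
    """
    target_h = branches[0].shape[1]
    fused = np.zeros_like(branches[0])
    for feat in branches:
        factor = target_h // feat.shape[1]
        fused += upsample(feat, factor)
    return fused
```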
{"title":"Mesoscale Ocean Eddy Detection Using High-Resolution Network","authors":"Xirong Lu, Shaoxiang Guo, Meng Zhang, Junyu Dong, Xue'en Chen, Xin Sun","doi":"10.1109/iCAST51195.2020.9319490","journal":"2020 11th International Conference on Awareness Science and Technology (iCAST)","publicationDate":"2020-12-07"}
Pub Date: 2020-12-07 | DOI: 10.1109/iCAST51195.2020.9319481
Dingxin Ma, Xuguang Zhang, Hui Yu
Crowd counting is a research hotspot in video surveillance due to its great significance for public safety. The accuracy of crowd counting depends on whether the extracted features can effectively map to the number of pedestrians. This paper addresses this problem by proposing a crowd counting method based on the combined expression of image appearance and fluid forces. First, the Horn-Schunck optical flow method is used to extract the moving crowd. Second, based on the crowd's motion information, pedestrians moving in different directions are distinguished by the k-means clustering algorithm. Then, image appearance features and fluid features are extracted to describe the different moving crowds. The image appearance features are obtained by calculating the foreground area, foreground perimeter, and edge length; gravity, inertial force, pressure, and viscous force are taken as the fluid features. Finally, the two kinds of features are combined into the final descriptor, and least-squares regression is used to fit the features to the number of pedestrians. The experimental results demonstrate that the proposed crowd counting method achieves satisfactory performance and outperforms other methods in terms of mean absolute error and mean squared error.
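The final fitting step — least-squares regression from the combined descriptor to the pedestrian count — can be sketched directly with NumPy. The feature extraction itself (optical flow, appearance and fluid features) is assumed to have already produced the feature matrix.

```python
import numpy as np

def fit_count_model(features, counts):
    """Least-squares fit from combined feature vectors to pedestrian counts.

    `features` is (n_samples, n_features); a bias column is appended so the
    model is counts ≈ features @ w[:-1] + w[-1].
    """
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    w, *_ = np.linalg.lstsq(X, counts, rcond=None)
    return w

def predict_count(w, features):
    """Apply the fitted weights (including bias) to new feature vectors."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ w
```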
{"title":"Crowd counting by feature-level fusion of appearance and fluid force","authors":"Dingxin Ma, Xuguang Zhang, Hui Yu","doi":"10.1109/iCAST51195.2020.9319481","journal":"2020 11th International Conference on Awareness Science and Technology (iCAST)","publicationDate":"2020-12-07"}
Pub Date: 2020-12-07 | DOI: 10.1109/iCAST51195.2020.9319471
Mai Uchida, Yoichi Tomioka
Pattern-noise-based source camera identification is a promising technology for preventing crimes such as illegal uploading and secret photography. To identify the source camera model of an input image, highly accurate camera model classification methods based on convolutional neural networks (CNNs) have recently been proposed. However, the pattern noise in an image is typically contaminated by JPEG compression, and the degree of contamination depends on the quality factor (Q-Factor). JPEG compression at Q-Factors different from those of the training samples can therefore degrade the accuracy of CNN-based camera model classification. In this paper, we propose CNN-based camera model classification and metric learning trained with a JPEG-based noise suppression technique. In the experiments, we evaluate camera model classification accuracy and metric learning performance for various Q-Factors. We demonstrate that JPEG-based noise suppression improves camera model classification accuracy from 87.25% to 99.89% on average. We also demonstrate that JPEG-based noise suppression improves the robustness of metric learning to JPEG contamination.
{"title":"CNN-based Camera Model Classification and Metric Learning Robust to JPEG Noise Contamination","authors":"Mai Uchida, Yoichi Tomioka","doi":"10.1109/iCAST51195.2020.9319471","journal":"2020 11th International Conference on Awareness Science and Technology (iCAST)","publicationDate":"2020-12-07"}
Pub Date: 2020-12-07 | DOI: 10.1109/iCAST51195.2020.9319406
Shoki Sato, T. Hashimoto, Y. Shirota
Goal 4, Quality Education, is one of the SDGs, and one of its targets states the necessity of promoting ESD (Education for Sustainable Development). ESD should also be a key concept for Humanitarian Technology, and in universities it has become a core subject. Since its founding in 1928 as a school for accounting, the Chiba University of Commerce (CUC) has accepted many young people to study under the educational philosophy of "practical scholarship with high morality." Recently, CUC has been promoting itself as an "RE (Renewable Energy) 100" university, considering ethical consumption and generation as well; ESD therefore matches the philosophy of CUC. As part of ESD, CUC organizes a special lecture series on the SDGs to give students opportunities to think about what we can do to develop sustainable societies. In this paper, we evaluate the ESD at CUC using information technologies such as morphological analysis, Bag of Words, and the word2vec model, and show that with these technologies the evaluation of ESD can be done effectively.
{"title":"Evaluation for ESD (Education for Sustainable Development) to achieve SDGs at University","authors":"Shoki Sato, T. Hashimoto, Y. Shirota","doi":"10.1109/iCAST51195.2020.9319406","journal":"2020 11th International Conference on Awareness Science and Technology (iCAST)","publicationDate":"2020-12-07"}
Pub Date: 2020-12-07 | DOI: 10.1109/iCAST51195.2020.9319475
Vinay Kumar Reddy Chimmula, Lei Zhang, Dhanya Palliath, Abhinay Kumar
For more than a decade, Deep Learning, a subset of machine learning, has been used for many applications such as forecasting, data visualization, and classification. However, compared to the human brain, it consumes more energy and requires longer training periods, and in most cases it is difficult to reach human-level performance. With recent technological improvements in neuroscience, and thanks to neuromorphic computing, we can now achieve higher classification efficacy with considerably lower power consumption. The latest advancements in brain simulation technologies have provided a breakthrough for analysing and modelling brain functions. Despite these advancements, the area remains underexplored due to a lack of coordination between neuroscientists, electronics engineers, and computer scientists. Recent progress in Spiking Neural Networks (SNNs) has led toward integrating these different fields under one roof. Biological neurons inside the human brain communicate with each other through synapses; similarly, the bio-inspired synapses in a neuromorphic model mimic biological synapses for computing. In this research, we model a supervised Spiking Neural Network algorithm using Leaky Integrate-and-Fire (LIF), Izhikevich, and rectified linear neurons, and test its spike latency under different conditions. Furthermore, these SNN models are tested on the MNIST dataset to classify handwritten digits, and the results are compared with those of a Convolutional Neural Network (CNN).
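The LIF neuron and the spike-latency measurement can be illustrated with a few lines of Euler integration. Parameter values here are illustrative, not the paper's settings.

```python
def lif_spike_times(current, v_rest=0.0, v_th=1.0, tau=20.0, dt=1.0, steps=200):
    """Simulate a Leaky Integrate-and-Fire neuron under constant input current.

    Euler integration of tau * dV/dt = -(V - v_rest) + I; the membrane
    potential resets to v_rest after each spike. Returns the spike times,
    so the first entry is the spike latency for that input current.
    """
    v = v_rest
    spikes = []
    for step in range(steps):
        v += dt / tau * (-(v - v_rest) + current)
        if v >= v_th:
            spikes.append(step * dt)
            v = v_rest  # reset after firing
    return spikes
```

Stronger input drives the membrane to threshold sooner, so latency shrinks and the firing rate rises; a sub-threshold current never fires.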
{"title":"Improved Spiking Neural Networks with multiple neurons for digit recognition","authors":"Vinay Kumar Reddy Chimmula, Lei Zhang, Dhanya Palliath, Abhinay Kumar","doi":"10.1109/iCAST51195.2020.9319475","journal":"2020 11th International Conference on Awareness Science and Technology (iCAST)","publicationDate":"2020-12-07"}
Pub Date: 2020-12-07 | DOI: 10.1109/iCAST51195.2020.9319497
Kaisei Shimura, Yoichi Tomioka, Qiangfu Zhao
Railway crossings are among the places where mobility scooter accidents happen relatively often. To help drivers prevent such accidents, we propose a scene recognition system for railway crossing scenes. The system can detect a railway crossing scene, the objects that typically exist close to it, and the distance to the detected crossing. Within this system, we propose an efficient four-stage recognition scheme that combines scene screening based on a compact CNN, CNN-based object detection, railway crossing detection, and distance estimation based on the detected railway crossing warning sign. In the experiments, we demonstrate that, compared with an existing object detector at the same recall, our system improves per-class precision and F-score by up to 20.6% and 35.0%, respectively. Moreover, with the proposed scene screening, we achieve 1.7 to 1.9 times faster execution for scenes in which no railway crossing exists, on a desktop PC, a Raspberry Pi 3 Model B, and a Raspberry Pi Model B with a Neural Compute Stick 2.
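The control flow of such a cascade — a cheap screening stage gating the expensive stages — can be sketched as follows. The stage functions are hypothetical stand-ins for the compact screening CNN, the object detector, and the warning-sign-based distance estimator; the real stage interfaces are not specified in the abstract.

```python
def recognize(frame, screen, detect, estimate_distance):
    """Four-stage cascade: screening gates detection and distance estimation.

    `screen(frame)` is a fast binary check; `detect(frame)` returns a list
    of {"label": ..., ...} objects; `estimate_distance(sign)` converts a
    detected warning sign into a distance. When screening rejects a frame,
    the expensive stages are skipped entirely — the source of the reported
    speed-up on crossing-free scenes.
    """
    if not screen(frame):  # stage 1: fast scene screening
        return {"crossing": False, "objects": [], "distance": None}
    objects = detect(frame)  # stage 2: object detection
    signs = [o for o in objects if o["label"] == "warning_sign"]
    crossing = bool(signs)  # stage 3: crossing detection via warning sign
    distance = estimate_distance(signs[0]) if signs else None  # stage 4
    return {"crossing": crossing, "objects": objects, "distance": distance}
```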
{"title":"An Efficient Scene Recognition System of Railway Crossing","authors":"Kaisei Shimura, Yoichi Tomioka, Qiangfu Zhao","doi":"10.1109/iCAST51195.2020.9319497","journal":"2020 11th International Conference on Awareness Science and Technology (iCAST)","publicationDate":"2020-12-07"}
Pub Date: 2020-12-07 | DOI: 10.1109/iCAST51195.2020.9319479
Zhengwu Shi, Qingxuan Lyu, Shu Zhang, Lin Qi, H. Fan, Junyu Dong
Integrating a line laser scanning system with visual SLAM for 3D mapping is conceptually attractive yet faces the difficulty of processing the projected laser line, which is not only hard to extract from images captured under natural light but also disrupts the feature tracking procedure in visual SLAM. This paper proposes a method that segments the target object and extracts the laser line using semantic segmentation in order to build an accurate and realistic 3D model. First, we introduce adaptive thresholds for the recognized objects to solve the laser extraction problem. Second, we discard the image features extracted in the laser area to obtain better pose estimation in visual SLAM. Finally, we fill in the laser-covered surface using the color information of the related objects in the 3D map. Our experiments show that the proposed method can produce a dense colored 3D map and outperforms a traditional visual-SLAM-based laser scanning system.
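The per-object adaptive thresholding idea can be sketched with NumPy: the semantic segmentation assigns each pixel a class, and each class gets its own minimum intensity for counting a pixel as laser. The threshold values below are illustrative stand-ins, not the paper's calibrated values.

```python
import numpy as np

def extract_laser(red_channel, class_map, thresholds, default=200):
    """Extract laser-line pixels using per-class adaptive thresholds.

    `red_channel` holds the image's red intensity (H, W) — a red line
    laser is assumed; `class_map` holds a semantic class id per pixel;
    `thresholds` maps class id -> minimum red intensity counted as laser.
    Pixels of unlisted classes fall back to `default`.
    """
    thresh = np.full(red_channel.shape, default, dtype=float)
    for cls, t in thresholds.items():
        thresh[class_map == cls] = t  # class-specific sensitivity
    return red_channel >= thresh
```

Darker or matte objects would get a lower threshold so the dim laser reflection is still captured, while bright objects get a higher one to reject highlights.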
{"title":"A Visual-SLAM based Line Laser Scanning System using Semantically Segmented Images","authors":"Zhengwu Shi, Qingxuan Lyu, Shu Zhang, Lin Qi, H. Fan, Junyu Dong","doi":"10.1109/iCAST51195.2020.9319479","journal":"2020 11th International Conference on Awareness Science and Technology (iCAST)","publicationDate":"2020-12-07"}
Pub Date: 2020-12-07 | DOI: 10.1109/iCAST51195.2020.9319493
Wei Cai, Haojie Chen, Jian Zhang
In this research, an enhanced invasive weed optimization (EIWO) is proposed to solve the resource-constrained project scheduling problem (RCPSP) with the objective of makespan minimization. First, a hybrid population initialization method is introduced to improve the quality of the initial solutions. Second, to enhance local exploitation ability, a local search approach is embedded in the spatial dispersal process. Third, an improved competitive exclusion based on an acceptance probability is proposed. Finally, EIWO is tested and verified on standard benchmark problems from PSPLIB. Compared with existing algorithms in numerical experiments, the new EIWO algorithm is more effective and efficient in solving the RCPSP.
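The baseline IWO metaphor — fitter weeds spawn more seeds, scattered ever more tightly, with competitive exclusion capping the colony — can be sketched on a continuous toy objective. The EIWO enhancements (hybrid initialization, embedded local search, probabilistic exclusion) and the RCPSP schedule encoding are omitted; parameters are illustrative.

```python
import numpy as np

def iwo_minimize(f, dim, pop=10, max_pop=25, iters=60,
                 smin=1, smax=5, sigma0=1.0, sigma_end=0.01, seed=0):
    """Basic Invasive Weed Optimization minimizing f over [-5, 5]^dim.

    Each weed produces seeds in proportion to its relative fitness; the
    seed scatter (sigma) shrinks over iterations, shifting the search from
    exploration to exploitation; competitive exclusion keeps only the best
    `max_pop` weeds. Returns the best solution and its objective value.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        worst, best = fit.max(), fit.min()
        # Nonlinearly decreasing standard deviation of seed dispersal.
        sigma = sigma_end + (sigma0 - sigma_end) * (1 - t / iters) ** 2
        seeds = []
        for x, v in zip(X, fit):
            ratio = 1.0 if worst == best else (worst - v) / (worst - best)
            n = int(smin + ratio * (smax - smin))  # fitter weeds seed more
            seeds.extend(x + rng.normal(0, sigma, dim) for _ in range(n))
        X = np.vstack([X, seeds])
        fit = np.array([f(x) for x in X])
        X = X[np.argsort(fit)[:max_pop]]  # competitive exclusion
    return X[0], f(X[0])
```

For the RCPSP, each weed would instead encode an activity priority list decoded by a schedule-generation scheme, with the makespan as the fitness.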
{"title":"An Enhanced Invasive Weed Optimization in Resource-Constrained Project Scheduling Problem","authors":"Wei Cai, Haojie Chen, Jian Zhang","doi":"10.1109/iCAST51195.2020.9319493","journal":"2020 11th International Conference on Awareness Science and Technology (iCAST)","publicationDate":"2020-12-07"}