Latest publications: 2020 11th International Conference on Awareness Science and Technology (iCAST)

Social Media Mining with Dynamic Clustering: A Case Study by COVID-19 Tweets
Pub Date : 2020-12-07 DOI: 10.1109/iCAST51195.2020.9319496
Hidetoshi Ito, B. Chakraborty
Social Networking Services (SNS) are now used extensively thanks to the proliferation of the Internet and of cheap, compact, easy-to-use computing devices. Texting, especially via Twitter, is popular among people of all ages worldwide, and the enormous volume of text generated contains various kinds of information, rumors, sentimental expressions, and more. The topics found in social media data change over time, and some fade out completely after a certain period. Such time-varying topics may carry useful information for decision-making by the general public as well as governmental organizations. For the recent COVID-19 pandemic in particular, extracting and visualizing people's changing needs could help in devising better countermeasures. In this study, COVID-19-related tweets were collected and analyzed in units of time (hour, day, and month) using several clustering models to visualize how topics change over time. Sentence-BERT proved the most effective of the techniques tested, although it is not yet sufficient for a clear semantic understanding of the topics.
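The pipeline the abstract describes — bucket tweets by time unit, then cluster their embeddings per bucket — can be sketched as follows. This is a toy, not the authors' code: the embedding step is stubbed out with plain 2-D vectors (a real run would use Sentence-BERT vectors), and the k-means here is a minimal pure-Python version.

```python
from collections import defaultdict
from datetime import datetime
import random

def bucket_by_hour(tweets):
    """Group (timestamp, embedding) pairs into hourly buckets."""
    buckets = defaultdict(list)
    for ts, vec in tweets:
        buckets[ts.replace(minute=0, second=0, microsecond=0)].append(vec)
    return buckets

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means on tuples; a real pipeline would cluster Sentence-BERT embeddings."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers
```

Running `kmeans` on each hourly bucket and inspecting how the centroids drift from hour to hour is one simple way to surface the "dynamic changes of topics" the paper visualizes.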
Citations: 6
OFViser: An Interactive Visual System for Spatiotemporal Analysis of Ocean Front
Pub Date : 2020-12-07 DOI: 10.1109/iCAST51195.2020.9319478
Jian Song, Cui Xie, Junyu Dong
The ocean temperature front is a narrow transition zone where temperature changes dramatically; it can be described by the temperature gradient. Researchers have studied the temporal and spatial variations and patterns of ocean fronts through tedious observation and comparison of many spatial distribution maps taken at different moments. However, a limited number of spatial snapshots may reflect only certain spatial or temporal aspects of an ocean front. This study designed a collaborative interactive visualization system that integrates temporal and spatial analysis of ocean fronts with experts' knowledge, achieving higher analysis efficiency and greater comprehensiveness. Interactive statistical charts support focus+context selection of points of interest in time and space, while the interactive Map-View and Map-Gallery support spatial analysis from overview to detail. Moreover, this paper uses an unsupervised learning model, the Self-Organizing Map (SOM), to conduct spatio-temporal cluster analysis of different ocean fronts near the China Sea. The clustering results can be customized through user-specified colors, and evaluated and interactively adjusted based on researchers' knowledge. The spatio-temporal patterns in the clustering results can easily be mined through the coordinated linkage of multiple views, including the unified distance matrix (U-Matrix), component planes, a feature parallel-coordinates plot, and the Map-View. The effectiveness and usability of the proposed system are demonstrated with two case studies.
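For readers unfamiliar with SOMs, a minimal NumPy implementation of the training loop is sketched below. It is a generic textbook SOM, not OFViser's code: each step picks a sample, finds the best-matching unit (BMU), and pulls the BMU and its grid neighbours toward the sample with decaying learning rate and neighbourhood radius. Grid size, iteration count, and decay schedule are illustrative choices.

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map: returns a (h, w, d) weight grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit: grid cell whose weight vector is nearest to x
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        frac = t / iters
        lr = lr0 * (1 - frac)                 # linearly decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.1     # shrinking neighbourhood radius
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        nb = np.exp(-dist2 / (2 * sigma ** 2))[..., None]  # Gaussian neighbourhood
        weights += lr * nb * (x - weights)
    return weights
```

The trained grid is exactly what a U-Matrix and component planes visualize: inter-unit weight distances and per-feature slices of `weights`, respectively.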
Citations: 0
Mesoscale Ocean Eddy Detection Using High-Resolution Network
Pub Date : 2020-12-07 DOI: 10.1109/iCAST51195.2020.9319490
Xirong Lu, Shaoxiang Guo, Meng Zhang, Junyu Dong, Xue'en Chen, Xin Sun
Ocean eddies are a common phenomenon of ocean water movement. They have a significant impact on marine hydrology, marine chemistry, and the marine biological environment, and eddy detection has become one of the most active research areas in physical oceanography. A recent trend is to apply deep learning to eddy detection, but such work is still at an early stage. Accordingly, this work leverages the rapid development of deep learning to improve current results in ocean eddy detection. We apply an improved, reliable high-resolution representation network to eddy detection and classification from Sea Surface Height (SSH) maps, formulated as semantic segmentation. This high-resolution network aggregates representations from all parallel convolution branches and repeatedly fuses their features, so it maintains, and ultimately produces, high-resolution representations throughout the feature-extraction process. We then refine the segmentation result with a CascadePSP module and obtain more accurate results than existing approaches. Our method performs well on sea surface height data, which also confirms the application value of deep learning in ocean monitoring and data mining.
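The core idea of the high-resolution network — keep a high-resolution branch alive and repeatedly fuse in the lower-resolution branches — can be illustrated shape-wise with a toy fusion step. This is only a sketch of the data flow: real HRNet-style fusion uses learned convolutions rather than the nearest-neighbour upsample-and-add below.

```python
import numpy as np

def fuse_parallel_branches(high_res, low_res):
    """Toy HRNet-style fusion: nearest-neighbour upsample the low-resolution
    feature map to the high-resolution grid, then add the two maps."""
    fh = high_res.shape[0] // low_res.shape[0]
    fw = high_res.shape[1] // low_res.shape[1]
    up = np.repeat(np.repeat(low_res, fh, axis=0), fw, axis=1)
    return high_res + up
```

Repeating this exchange at every stage is what lets the network "maintain and eventually produce high-resolution representations", since the fine-grained branch is never downsampled away.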
Citations: 5
Crowd counting by feature-level fusion of appearance and fluid force
Pub Date : 2020-12-07 DOI: 10.1109/iCAST51195.2020.9319481
Dingxin Ma, Xuguang Zhang, Hui Yu
Crowd counting is a research hotspot in video surveillance because of its significance for public safety. Counting accuracy depends on whether the extracted features can effectively map to the number of pedestrians. This paper addresses the problem with a crowd-counting method based on image appearance and fluid-force features. First, the Horn-Schunck optical flow method is used to extract the moving crowd. Second, based on the crowd's motion information, pedestrians moving in different directions are separated with the k-means clustering algorithm. Then, image appearance features and fluid features are extracted to describe each motion group: the appearance features are the foreground area, foreground perimeter, and edge length, while gravity, inertial force, pressure, and viscous force serve as the fluid features. Finally, the two kinds of features are concatenated into the final descriptor, and least-squares regression fits the mapping from features to the number of pedestrians. Experimental results demonstrate that the proposed method achieves satisfactory performance and outperforms other methods in terms of mean absolute error and mean squared error.
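The final regression step — fit a linear map from the concatenated feature descriptor to the pedestrian count — is ordinary least squares and can be sketched in a few lines. The function names and the bias-term handling are illustrative, not taken from the paper.

```python
import numpy as np

def fit_count_model(features, counts):
    """Least-squares fit from hand-crafted features (appearance + fluid, per the
    paper's descriptor) to pedestrian counts; a bias column is appended."""
    X = np.hstack([features, np.ones((len(features), 1))])
    w, *_ = np.linalg.lstsq(X, counts, rcond=None)
    return w

def predict_count(w, feat):
    """Apply the fitted linear model to one feature vector."""
    return float(np.dot(np.append(feat, 1.0), w))
```

With real data, `features` would hold one row per frame (area, perimeter, edge length, gravity, inertia, pressure, viscous force) and `counts` the ground-truth pedestrian numbers.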
Citations: 0
CNN-based Camera Model Classification and Metric Learning Robust to JPEG Noise Contamination
Pub Date : 2020-12-07 DOI: 10.1109/iCAST51195.2020.9319471
Mai Uchida, Yoichi Tomioka
Pattern-noise-based source camera identification is a promising technology for preventing crimes such as illegal uploading and covert photography. To identify the source camera model of an input image, highly accurate camera-model classification methods based on convolutional neural networks (CNNs) have recently been proposed. However, the pattern noise in an image is typically contaminated by JPEG compression, and the degree of contamination depends on the quality factor (Q-factor). Consequently, JPEG compression at Q-factors different from those of the training samples can degrade the accuracy of CNN-based camera-model classification. In this paper, we propose CNN-based camera-model classification and metric learning trained with a JPEG-based noise-suppression technique. In the experiments, we evaluate classification accuracy and metric-learning performance across various Q-factors. We demonstrate that JPEG-based noise suppression improves camera-model classification accuracy from 87.25% to 99.89% on average, and that it also improves the robustness of metric learning to JPEG contamination.
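The "pattern noise" the abstract refers to is the residual left after removing scene content from an image. A common stand-in for a denoiser in illustrations of this idea is a simple box blur; the sketch below uses that and is not the paper's (unspecified) extraction method.

```python
import numpy as np

def noise_residual(img):
    """Rough pattern-noise residual: image minus a 3x3 box-blurred copy.
    Real PRNU pipelines use stronger denoisers (e.g. wavelet-based)."""
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return img - blur
```

JPEG compression perturbs exactly this residual, which is why classification accuracy drops when train and test Q-factors differ, and why the paper suppresses JPEG noise before training.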
Citations: 0
Evaluation for ESD (Education for Sustainable Development) to achieve SDGs at University
Pub Date : 2020-12-07 DOI: 10.1109/iCAST51195.2020.9319406
Shoki Sato, T. Hashimoto, Y. Shirota
Goal 4, Quality Education, is one of the SDGs, and it names the promotion of ESD (Education for Sustainable Development) as one of its targets. For humanitarian technology, ESD should also be a key concept, and in universities it has become a core subject. Since its founding in 1928 as a school for accounting, the Chiba University of Commerce (CUC) has welcomed many young people to study under the educational philosophy of "practical scholarship with high morality." Recently, CUC has also been promoting itself as an "RE (Renewable Energy) 100" university, taking ethical consumption and generation into account; ESD therefore matches CUC's philosophy. As part of ESD, CUC organizes a special lecture series on the SDGs to give students opportunities to think about what can be done to develop sustainable societies. In this paper, we evaluate ESD at CUC using information technologies such as morphological analysis, Bag of Words, and the word2vec model, and show that the evaluation can be carried out effectively with these techniques.
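Of the text-analysis tools named, Bag of Words is the simplest to make concrete: each text becomes a term-frequency vector, and texts can then be compared by cosine similarity. The snippet below is a generic stdlib illustration (whitespace tokenization stands in for the morphological analysis a Japanese-language pipeline would need), not the authors' implementation.

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-Words term-frequency vector (naive whitespace tokenizer)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Applied to student feedback from the SDG lectures, similarities like these could indicate how closely responses track the sustainability vocabulary of the course material.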
Citations: 2
Improved Spiking Neural Networks with multiple neurons for digit recognition
Pub Date : 2020-12-07 DOI: 10.1109/iCAST51195.2020.9319475
Vinay Kumar Reddy Chimmula, Lei Zhang, Dhanya Palliath, Abhinay Kumar
For more than a decade, deep learning, a subset of machine learning, has been used in many applications such as forecasting, data visualization, and classification. Compared with the human brain, however, it consumes more energy and requires longer training, and in most cases it is difficult to reach human-level performance. With recent advances in neuroscience and thanks to neuromorphic computing, we can now achieve high classification efficacy at considerably lower power consumption. The latest brain-simulation technologies have enabled breakthroughs in analysing and modelling brain functions, yet this research area remains under-explored owing to a lack of coordination between neuroscientists, electronics engineers, and computer scientists. Recent progress in Spiking Neural Networks (SNNs) has brought these different fields together. Biological neurons in the human brain communicate through synapses, and the bio-inspired synapses of a neuromorphic model mimic them for computation. In this research, we model a supervised Spiking Neural Network algorithm using Leaky Integrate-and-Fire (LIF), Izhikevich, and rectified linear neurons, and test its spike latency under different conditions. These SNN models are then evaluated on the MNIST dataset for handwritten-digit classification, and the results are compared with those of a Convolutional Neural Network (CNN).
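The LIF neuron named above has a standard textbook form that makes the "spike latency" measurement concrete: the membrane potential leaks toward rest, integrates input current, and emits a spike (then resets) on crossing a threshold. The constants below are illustrative defaults, not the paper's settings.

```python
def lif_spike_times(current, steps=100, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0):
    """Leaky Integrate-and-Fire neuron under constant input current.
    dv/dt = (-(v - v_rest) + I) / tau; spike and reset when v >= v_thresh.
    Returns the time steps at which spikes occur."""
    v, spikes = v_rest, []
    for t in range(steps):
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_rest  # reset after spiking
    return spikes
```

Spike latency is simply the index of the first spike; as expected for an LIF neuron, stronger input currents drive the potential to threshold sooner, and sub-threshold currents never produce a spike.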
Citations: 0
An Efficient Scene Recognition System of Railway Crossing
Pub Date : 2020-12-07 DOI: 10.1109/iCAST51195.2020.9319497
Kaisei Shimura, Yoichi Tomioka, Qiangfu Zhao
Railway crossings are among the places where mobility-scooter accidents happen relatively often. To help drivers prevent such accidents, we propose a scene recognition system for railway crossings. The system detects the railway crossing scene, the objects that typically appear near it, and the distance to the detected crossing. We propose an efficient four-stage recognition scheme that combines scene screening based on a compact CNN, CNN-based object detection, railway crossing detection, and distance estimation based on the detected railway crossing warning sign. In the experiments, we show that, at the same recall, our system improves precision and F-score for each class by up to 20.6% and 35.0%, respectively, compared with existing object detection. Moreover, with the proposed scene screening, we achieved 1.7 to 1.9 times faster execution for scenes without a railway crossing on a desktop PC, a Raspberry Pi 3 Model B, and a Raspberry Pi Model B with a Neural Compute Stick 2.
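The four-stage scheme can be sketched as a short-circuiting pipeline: a cheap screening stage rejects crossing-free frames early, which is where the reported 1.7-1.9x speedup comes from. The stage functions and the pinhole-model distance formula below are illustrative stand-ins; the paper does not publish its exact formula or interfaces.

```python
def distance_from_sign(focal_px, sign_height_m, sign_height_px):
    """Pinhole-camera distance estimate from the warning sign's apparent size
    (assumed approach: distance = focal_length * real_height / pixel_height)."""
    return focal_px * sign_height_m / sign_height_px

def cascade(frame, screen, detect, find_crossing, to_distance):
    """Four-stage scheme: (1) compact-CNN screening, (2) object detection,
    (3) railway crossing detection, (4) distance estimation.
    Returns None when screening rejects the frame."""
    if not screen(frame):                  # stage 1: cheap early exit
        return None
    objects = detect(frame)                # stage 2
    crossing = find_crossing(objects)      # stage 3
    if crossing is None:
        return objects, None
    return objects, to_distance(crossing)  # stage 4
```

With stubbed stages, a frame that fails screening never reaches the (expensive) detector, matching the speedup mechanism described for crossing-free scenes.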
Citations: 1
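The efficiency gain reported for the railway-crossing system comes from its cascade design: a compact screening network rejects most frames cheaply, so the heavier detection stages run only when a crossing may be present. A minimal sketch of such a screen-then-detect cascade (both model functions are placeholders, not the authors' networks):

```python
import numpy as np

def compact_screen(frame: np.ndarray) -> float:
    """Return a score in [0, 1] for 'a railway crossing may be present'.
    Placeholder feature: mean brightness of the upper half of the frame."""
    return float(frame[: frame.shape[0] // 2].mean() / 255.0)

def heavy_detect(frame: np.ndarray) -> list:
    """Stand-in for the expensive detector, run only when screening passes."""
    return [("warning_sign", (10, 20, 40, 60))]  # dummy bounding box

def recognize(frame: np.ndarray, threshold: float = 0.5) -> list:
    # Stage 1: cheap screening rejects frames without a crossing, which is
    # where a speed-up on crossing-free scenes would come from.
    if compact_screen(frame) < threshold:
        return []
    # Stages 2-4 (object detection, crossing confirmation, distance
    # estimation from the warning sign) would follow; only the detection
    # call is sketched here.
    return heavy_detect(frame)
```

The screening threshold trades recall against speed: lowering it sends more frames to the detector.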
A Visual-SLAM based Line Laser Scanning System using Semantically Segmented Images
Pub Date : 2020-12-07 DOI: 10.1109/iCAST51195.2020.9319479
Zhengwu Shi, Qingxuan Lyu, Shu Zhang, Lin Qi, H. Fan, Junyu Dong
Integrating a line laser scanning system with visual SLAM for 3D mapping is conceptually attractive, yet it faces the difficulty of processing the projected laser line, which is not only hard to extract from images captured under natural light but also disrupts the feature-tracking procedure in visual SLAM. This paper proposes a method that segments the target object and extracts the laser line to build an accurate and realistic 3D model using semantic segmentation. First, we introduce adaptive thresholds for the recognized objects to solve the laser extraction problem. Second, we discard the extracted image features in the laser area for better pose estimation in visual SLAM. Finally, we complement the laser-scanned surfaces with the color information of the corresponding objects in the 3D map. In our experiments, we show that the proposed method produces a dense colored 3D map and outperforms a traditional visual-SLAM based laser scanning system.
{"title":"A Visual-SLAM based Line Laser Scanning System using Semantically Segmented Images","authors":"Zhengwu Shi, Qingxuan Lyu, Shu Zhang, Lin Qi, H. Fan, Junyu Dong","doi":"10.1109/iCAST51195.2020.9319479","DOIUrl":"https://doi.org/10.1109/iCAST51195.2020.9319479","url":null,"abstract":"Integration of the line laser scanning system with visual SLAM for 3D mapping is conceptually attractive yet facing the difficulty with processing projected line laser, which is not only hard to be extracted from images captured under natural light, but also disrupts the feature tracking procedure in visual SLAM. This paper proposes a method of segmenting the target object and extracting the laser line to build an accurate and realistic 3D model by using a semantic segmentation method. First, we introduce adaptive thresholds for the recognized objects to solve the laser extraction problem. Second, we discard the extracted image features in the laser area for better pose estimation of visual SLAM. Finally, we complement the surface of lasers with the color information in the related objects of 3D mapping. In our experiments, we show that the proposed method can produce a dense colored 3D mapping and has higher performance than the traditional visual SLAM based laser scanning system.","PeriodicalId":212570,"journal":{"name":"2020 11th International Conference on Awareness Science and Technology (iCAST)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116193910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
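The per-object adaptive thresholding idea in this abstract can be illustrated with a toy sketch: choose the laser/background threshold separately for each semantic class, since different surfaces reflect the projected laser differently. The mean-plus-k-sigma rule and the red-channel input below are assumptions for illustration, not the paper's formula:

```python
import numpy as np

def extract_laser(red: np.ndarray, labels: np.ndarray, k: float = 2.0):
    """red: HxW red-channel intensities; labels: HxW semantic class ids.
    Returns an array of (row, col) pixel coordinates classified as laser."""
    laser = np.zeros_like(red, dtype=bool)
    for cls in np.unique(labels):
        mask = labels == cls
        mu, sigma = red[mask].mean(), red[mask].std()
        # Adaptive threshold per class: pixels far above this class's
        # typical intensity are attributed to the projected laser.
        laser |= mask & (red > mu + k * sigma)
    return np.argwhere(laser)
```

In the SLAM pipeline sketched by the abstract, the returned coordinates would also define the mask inside which image features are discarded before pose estimation.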
An Enhanced Invasive Weed Optimization in Resource-Constrained Project Scheduling Problem
Pub Date : 2020-12-07 DOI: 10.1109/iCAST51195.2020.9319493
Wei Cai, Haojie Chen, Jian Zhang
In this research, an enhanced invasive weed optimization (EIWO) is proposed to solve the resource-constrained project scheduling problem (RCPSP) with the objective of makespan minimization. First, a hybrid population initialization method is introduced to improve the quality of the initial solutions. Second, a local search approach is embedded in the spatial dispersal process to enhance local exploitation. Third, an improved competitive exclusion based on acceptance probability is proposed. Finally, EIWO is tested and verified on standard benchmark problems from PSPLIB. Numerical experiments show that the new EIWO algorithm is more effective and efficient than existing algorithms in solving RCPSP.
{"title":"An Enhanced Invasive Weed Optimization in Resource-Constrained Project Scheduling Problem","authors":"Wei Cai, Haojie Chen, Jian Zhang","doi":"10.1109/iCAST51195.2020.9319493","DOIUrl":"https://doi.org/10.1109/iCAST51195.2020.9319493","url":null,"abstract":"In this research, an enhanced invasive weed optimization (EIWO) has been proposed to solve resource-constrained project scheduling problem (RCPSP) which subjects to the makespan minimization. Firstly, a hybrid population initialization method is illustrated to improve the quality of initial solutions. Secondly, to enhance the local exploitation ability, a local search approach is embedded in the spatial dispersal process. Thirdly, an improved competitive exclusion based on acceptance probability is proposed. At the end of this article, EIWO is tested and verified by standard benchmark problems from PSPLIB. Compared with the existing algorithms through computer numerical experiments, the new EIWO algorithm is more effective and efficient in solving RCPSP.","PeriodicalId":212570,"journal":{"name":"2020 11th International Conference on Awareness Science and Technology (iCAST)","volume":"463 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121616869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
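For context, a minimal sketch of the basic invasive weed optimization loop that EIWO builds on: each weed produces seeds in proportion to its fitness, seeds are dispersed with a standard deviation that shrinks over iterations, and competitive exclusion keeps only the fittest weeds. All constants here are illustrative defaults, and the paper's hybrid initialization, embedded local search, and acceptance-probability exclusion are omitted:

```python
import numpy as np

def iwo(objective, dim, bounds=(-5.0, 5.0), pop=10, max_pop=25,
        iters=50, smin=1, smax=5, sigma0=1.0, sigmaf=0.01, seed=0):
    """Minimize `objective` over a box; returns the best solution found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    weeds = rng.uniform(lo, hi, size=(pop, dim))
    for t in range(iters):
        fit = np.array([objective(w) for w in weeds])
        # Nonlinearly decreasing dispersal radius over iterations.
        sigma = sigmaf + (sigma0 - sigmaf) * (1 - t / iters) ** 2
        best, worst = fit.min(), fit.max()
        children = []
        for w, f in zip(weeds, fit):
            # Better (lower) fitness -> more seeds.
            ratio = (worst - f) / (worst - best) if worst > best else 1.0
            n_seeds = int(smin + ratio * (smax - smin))
            children.append(w + rng.normal(0.0, sigma, size=(n_seeds, dim)))
        weeds = np.vstack([weeds] + children).clip(lo, hi)
        # Competitive exclusion: keep only the fittest max_pop weeds.
        fit = np.array([objective(w) for w in weeds])
        weeds = weeds[np.argsort(fit)[:max_pop]]
    return weeds[0]
```

Because the population is re-sorted each iteration, the scheme is elitist: the best weed always survives exclusion, so the incumbent solution never degrades.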
Journal: 2020 11th International Conference on Awareness Science and Technology (iCAST)