Pub Date : 2021-10-27 DOI: 10.1109/iemcon53756.2021.9623217
Cameron Lane, Calvin Jarrod Smith, Nan Wang
Battery-powered computing solutions have grown in importance and utility across a wide range of applications in the technology industry, spanning both consumer and industrial uses. Devices that are not attached to a stable, constant power source must minimize power consumption while still performing necessary computation and communication. WiFi networking is ubiquitous in modern devices, so the power consumed to transmit data is of utmost concern for battery-powered devices. The Ad hoc On-Demand Distance Vector (AODV) routing algorithm is a widely adopted and adapted path-finding system for wireless networks. AODV's original implementation did not consider power consumption when determining routes. The Energy Aware AODV (EA-AODV) algorithm attempted to conserve energy by varying broadcast power and factoring the distance between nodes into route selection. Lightning Strike AODV (LS-AODV), described in this paper, is a proposed routing algorithm that further accounts for energy consumption in wireless networking by balancing energy across the network. Networks using the LS-AODV algorithm maintain quality of service while sustaining higher energy levels.
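The abstract does not reproduce LS-AODV's exact route metric, but the general energy-balancing idea it describes can be sketched as follows: among candidate routes, prefer the one whose weakest node has the most residual energy, so no single node is drained. The node names and energy values below are hypothetical.

```python
# Hypothetical sketch of energy-balancing route selection (not the
# paper's exact LS-AODV metric): score each route by its "bottleneck"
# node, i.e. the minimum residual energy along the path.

def route_score(route, energy):
    """Score a route by its weakest node's remaining energy."""
    return min(energy[node] for node in route)

def select_route(routes, energy):
    """Pick the route whose minimum residual energy is highest,
    breaking ties by hop count (shorter is better)."""
    return max(routes, key=lambda r: (route_score(r, energy), -len(r)))

energy = {"A": 90, "B": 20, "C": 70, "D": 60, "E": 80}
routes = [["A", "B", "E"], ["A", "C", "D", "E"]]
print(select_route(routes, energy))  # ['A', 'C', 'D', 'E'] avoids drained node B
```

The longer route wins here because the shorter one passes through a nearly depleted node, which is exactly the trade-off an energy-balancing protocol makes.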
Title: LS-AODV: An Energy Balancing Routing Algorithm For Mobile Ad Hoc Networks
Pub Date : 2021-10-27 DOI: 10.1109/iemcon53756.2021.9623226
I. Gueye, A. Kebe, Moustapha Diop
This paper proposes a solution to facilitate the maintenance of autonomous solar photovoltaic (PV) installations. With the growth of autonomous PV installations in developing countries, it is now essential to focus on maintenance. Autonomous PV installations meet electricity needs in remote areas while avoiding the constraints of connecting to the electrical grid. However, an efficient and reliable PV system requires safe and proper maintenance. This work focuses on capitalizing on the knowledge produced by maintenance activity. The objective is to propose a model that assists maintenance technicians during their interventions by providing them with elements of knowledge drawn from a knowledge base, which is built from the knowledge collected during previous maintenance activities on a given PV installation.
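The retrieval step of a case-based reasoning (CBR) cycle like the one described can be sketched by matching a new fault description against stored maintenance cases. This is an illustrative sketch, not the paper's model; the symptom keywords and fixes are invented.

```python
# Illustrative CBR retrieval sketch (not the paper's model): past PV
# maintenance interventions are stored as cases, and the closest case
# to the observed symptoms is retrieved by Jaccard similarity.

def jaccard(a, b):
    """Similarity of two symptom sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(case_base, symptoms):
    """Return the stored case most similar to the observed symptoms."""
    return max(case_base, key=lambda c: jaccard(c["symptoms"], symptoms))

case_base = [
    {"symptoms": {"low_output", "shading"}, "fix": "clear panel obstruction"},
    {"symptoms": {"low_output", "corroded_terminal"}, "fix": "replace connector"},
    {"symptoms": {"no_output", "blown_fuse"}, "fix": "replace fuse"},
]
case = retrieve(case_base, {"no_output", "blown_fuse", "burnt_smell"})
print(case["fix"])  # replace fuse
```

A full CBR cycle would then reuse, revise, and retain the adapted case, which is how the knowledge base grows with each intervention.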
Title: Creation of a knowledge management model based on CBR: Application to the maintenance of autonomous solar photovoltaic installations
Pub Date : 2021-10-27 DOI: 10.1109/iemcon53756.2021.9623254
Arnab Paul Joy, Mosarrat Jahan, U. Kabir, S. Mahato
A Bayesian Attack Graph (BAG) is an essential model for red teams in cyber security to identify the most vulnerable components of a system. It is a probabilistic graphical model in which each node is initially assigned a probability value called its local probability. Realistic analysis of a BAG requires that local probabilities be estimated precisely. To that end, this paper uses the Common Vulnerability Scoring System (CVSS) to estimate temporal and environmental scores, and further considers factors reflecting attackers' characteristics in BAG analysis. In particular, we introduce a new environmental variable, "host type," that captures an attacker's motivation and eliminates the need for prior knowledge of the network architecture when determining factor values.
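A common convention in BAG work, and an assumption here rather than the paper's exact formula, is to derive a node's local probability from the CVSS exploitability sub-score. The metric weights below are the published CVSS v3 values; the scaling into [0, 1] is the assumed step.

```python
# Assumed mapping (not the paper's formula): local probability from the
# CVSS v3 exploitability sub-score, 8.22 * AV * AC * PR * UI, scaled by
# its maximum (~3.89). Weights are the published CVSS v3 metric values.

AV = {"network": 0.85, "adjacent": 0.62, "local": 0.55, "physical": 0.2}
AC = {"low": 0.77, "high": 0.44}
PR = {"none": 0.85, "low": 0.62, "high": 0.27}   # scope-unchanged values
UI = {"none": 0.85, "required": 0.62}

def local_probability(av, ac, pr, ui):
    """Scale the exploitability sub-score into [0, 1] as a probability."""
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return round(exploitability / 3.89, 3)

# A remotely exploitable, low-complexity flaw needing no privileges:
print(local_probability("network", "low", "none", "none"))  # 0.999
```

The temporal and environmental adjustments the paper estimates would then multiply or re-weight this base value per node.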
Title: Precise Estimation of Local Probabilities for Bayesian Attack Graph Analysis
Pub Date : 2021-10-27 DOI: 10.1109/iemcon53756.2021.9623120
Md. Hafizur Rahman, Sayeda Islam Nahid, Ibna Huda Al Fahad, Faysal Mahmud Nahid, Mohammad Monirujjaman Khan
The estimation of possible fluctuations in stock prices has been the focus of much research. Price prediction attempts to forecast a stock's potential future price from its history. This study shows how machine learning models based on Long Short-Term Memory (LSTM) networks can forecast the price of a stock. It has been suggested that stock prices can be anticipated with a high degree of accuracy if correctly modeled, and there is also a large literature on fundamental analysis of stock prices, which focuses on detecting and learning from trends in price movements. The focus of this research is stock market forecasting using LSTM models. We used historical data for the top 10 companies of the DSE30 and built two LSTM models to predict prices and compare their results. The models were trained on these companies' stock records from January 2019 to January 2021. Our goal was to determine which version of the LSTM architecture gives the best prediction.
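The abstract does not detail the preprocessing, but LSTM forecasters are typically fed min-max-scaled price windows paired with the next value. A minimal sketch of that data preparation, with made-up prices:

```python
# Typical preprocessing for LSTM price forecasting (assumed, not taken
# from the paper): min-max scale a price series, then slice it into
# (window, next-value) supervised training pairs.

def make_windows(prices, window=3):
    lo, hi = min(prices), max(prices)
    scaled = [(p - lo) / (hi - lo) for p in prices]          # scale to [0, 1]
    X = [scaled[i:i + window] for i in range(len(scaled) - window)]
    y = [scaled[i + window] for i in range(len(scaled) - window)]
    return X, y

prices = [100.0, 102.0, 101.0, 105.0, 107.0, 106.0, 110.0]   # toy data
X, y = make_windows(prices)
print(len(X), len(X[0]))  # 4 windows of length 3
```

Each `X[i]` would become one LSTM input sequence and `y[i]` its regression target; predictions are inverse-scaled back to prices for evaluation.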
Title: Price Prediction Using LSTM Based Machine Learning Models
Pub Date : 2021-10-27 DOI: 10.1109/iemcon53756.2021.9623236
Fardeen Mahbub, R. Islam, Shouherdho Banerjee Akash, M. T. Ali, Saiful Islam
Given the numerous benefits of Microwave Imaging (MI) in the biomedical sector, this paper presents the simulation, in CST Studio Suite 2019, of a microstrip patch antenna capable of MI for detecting cancers and tumors in skeletal muscle. The antenna operates at 2.45 GHz (ISM band), over a frequency range from a minimum of 1.6 GHz to a maximum of 3.2 GHz. A three-layer human body phantom consisting of skin, fat, and muscle was created, and a small (5 mm) tumor was placed in the muscle portion of the phantom. The antenna was placed at distances of 5 mm, 10 mm, and 15 mm from the phantom to evaluate its performance. At the 2.45 GHz resonant frequency, SAR values of 0.000287 W/kg, 0.000229 W/kg, and 0.0000346 W/kg were obtained for the cancer-affected phantom at antenna-to-phantom distances of 5 mm, 10 mm, and 15 mm, respectively, all well below the 1.6 W/kg SAR limit set by the Federal Communications Commission (FCC). Other output parameters obtained include return loss (S11), VSWR, polar radiation pattern, and 3D directivity. These results indicate that the simulated antenna is a good candidate for diagnosing early-stage cancers and tumors in skeletal muscle.
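The standard design equations for a rectangular microstrip patch give a feel for the antenna's size at 2.45 GHz. The substrate is not stated in the abstract, so FR-4 (relative permittivity 4.4, height 1.6 mm) is assumed here purely for illustration.

```python
# Worked example with an ASSUMED substrate (FR-4, eps_r = 4.4,
# h = 1.6 mm; the paper's substrate is not given in the abstract):
# classic transmission-line design equations for a rectangular patch.

import math

def patch_dimensions(f_hz, eps_r, h_m):
    c = 3e8
    W = c / (2 * f_hz) * math.sqrt(2 / (eps_r + 1))           # patch width
    # effective permittivity accounting for fringing fields
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / W) ** -0.5
    # length extension due to fringing
    dL = 0.412 * h_m * ((eps_eff + 0.3) * (W / h_m + 0.264)) / \
         ((eps_eff - 0.258) * (W / h_m + 0.8))
    L = c / (2 * f_hz * math.sqrt(eps_eff)) - 2 * dL          # patch length
    return W, L

W, L = patch_dimensions(2.45e9, 4.4, 1.6e-3)
print(round(W * 1e3, 1), "mm x", round(L * 1e3, 1), "mm")  # ~37.3 mm x ~28.8 mm
```

A few-centimeter patch like this is what makes a wearable 2.45 GHz diagnostic antenna practical.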
Title: Design and Implementation of a Microstrip Patch Antenna for the Detection of Cancers and Tumors in Skeletal Muscle of the Human Body Using ISM Band
Pub Date : 2021-10-27 DOI: 10.1109/iemcon53756.2021.9623222
Hardik Sharma, Harshini Sewani, Rajat Garg, R. Kashef
The accelerated spread of the COVID-19 (coronavirus) disease has put stress on healthcare systems. Safety measures such as maintaining social distance and wearing a mask can help curb transmission and save lives. This paper aims to detect whether a person is wearing a mask using video surveillance, in order to enforce health and safety regulations in real time. We propose a face mask detection solution using two deep learning models: MobileNetV2 and a Modified Convolutional Neural Network (MCNN). The trained models are converted to TensorFlow Lite and deployed in an Android application. Our models achieve up to 99% accuracy. We also analyze the number of individuals not wearing masks by capturing faces and storing them on a mobile backend-as-a-service. Our application can be adopted to strengthen health measures in real time and help control the spread of COVID-19.
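Real-time video classifiers are usually stabilized with temporal smoothing so a single misclassified frame does not trigger a false alert. The sketch below is illustrative only (the paper does not describe its post-processing); the window size and threshold are assumptions.

```python
# Illustrative post-processing sketch (not from the paper): smooth noisy
# per-frame "no mask" scores with a sliding majority vote before
# flagging a violation in the video stream.

from collections import deque

class MaskMonitor:
    def __init__(self, window=5, threshold=0.5):
        self.scores = deque(maxlen=window)   # model's P(no mask) per frame
        self.threshold = threshold

    def update(self, p_no_mask):
        """Feed one frame's score; return True once a violation is flagged."""
        self.scores.append(p_no_mask)
        votes = sum(s > self.threshold for s in self.scores)
        full = len(self.scores) == self.scores.maxlen
        return full and votes > len(self.scores) // 2

monitor = MaskMonitor()
stream = [0.1, 0.9, 0.2, 0.8, 0.9, 0.85, 0.7]   # toy frame scores
print([monitor.update(s) for s in stream])
# [False, False, False, False, True, True, True]
```

Only once a majority of recent frames agree is the face captured and sent to the backend, which keeps the stored count of violations meaningful.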
Title: Face Mask Detection: A Real-Time Android Application Based on Deep Learning Modeling
Because underwater scenes contain many object species and small objects, traditional object detection models struggle to achieve both accuracy and real-time performance underwater. This paper proposes a lightweight detection model, FL-YOLOV3-TINY, which improves detection accuracy and real-time performance while shrinking the model size. First, FL-YOLOV3-TINY reduces the number of parameters by replacing the traditional convolutional feature-extraction modules with depthwise separable convolution modules. Second, to improve detection of small objects and obtain finer image features, FL-YOLOV3-TINY extends feature extraction to three scales. Finally, the CIoU regression loss function is introduced to bring the predicted box closer to the ground-truth box. Experiments show that, compared with the lightweight models YOLOV3-MobilenetV1 and YOLOV3-Tiny, FL-YOLOV3-TINY achieves better mAP (increases of 13.7% and 10.9%, respectively) and better real-time performance (FPS increases of 6% and 29%, respectively). Meanwhile, the model size is reduced by 43% compared to YOLOV3-Tiny.
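The CIoU regression term mentioned above augments plain IoU with a center-distance penalty and an aspect-ratio consistency penalty. A minimal sketch of the standard definition, for axis-aligned boxes given as (x1, y1, x2, y2):

```python
# Minimal sketch of the standard CIoU loss:
#   L = 1 - IoU + rho^2 / c^2 + alpha * v
# where rho is the center distance, c the diagonal of the smallest
# enclosing box, and v penalizes aspect-ratio mismatch.

import math

def ciou_loss(b1, b2):
    # intersection over union
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    iou = inter / (a1 + a2 - inter)

    # squared distance between box centers
    rho2 = ((b1[0] + b1[2]) / 2 - (b2[0] + b2[2]) / 2) ** 2 + \
           ((b1[1] + b1[3]) / 2 - (b2[1] + b2[3]) / 2) ** 2
    # squared diagonal of the smallest enclosing box
    cx1, cy1 = min(b1[0], b2[0]), min(b1[1], b2[1])
    cx2, cy2 = max(b1[2], b2[2]), max(b1[3], b2[3])
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2

    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((b1[2] - b1[0]) / (b1[3] - b1[1]))
                              - math.atan((b2[2] - b2[0]) / (b2[3] - b2[1]))) ** 2
    alpha = v / (1 - iou + v) if iou < 1 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v

print(round(ciou_loss((0, 0, 2, 2), (1, 1, 3, 3)), 3))  # 0.968
```

Unlike plain IoU loss, the distance term still provides a gradient when boxes do not overlap at all, which is what pulls predictions toward the ground truth faster.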
Pub Date : 2021-10-27 DOI: 10.1109/iemcon53756.2021.9623066
Cong Tan, Dandan Chen, Haijie Huang, Qiuling Yang, Xiangdang Huang
Title: A Lightweight Underwater Object Detection Model: FL-YOLOV3-TINY
Pub Date : 2021-10-27 DOI: 10.1109/iemcon53756.2021.9623152
Thomas G. Paveglio, Damian Valles
A system that can quickly recognize the presence of a firearm would help others respond to and become aware of an active shooter. Machine learning and computer vision can provide this additional insight and spatial awareness of potentially dangerous activity while navigating an environment. Convolutional Neural Network (CNN) architectures achieve high accuracy in object recognition when trained on a specific subject domain through feature extraction. However, CNNs can be computationally expensive and demand substantial hardware to support real-time vision, which is a problem for wearable computer vision equipment such as body cameras. MobileNet is a computer vision architecture designed specifically to run on compact computational devices while retaining object recognition performance. This paper presents the analysis and implementation of MobileNet on low-capacity devices such as body cameras, retaining high object recognition performance in dynamic and time-critical scenarios to augment an officer's awareness of potentially dangerous situations. A custom firearm dataset of over 10,000 images was produced, and the model achieved over 85% weapon validation accuracy when trained on a variety of dynamic, active environments containing firearms.
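The reason MobileNet fits body-camera-class hardware is its depthwise separable convolution, which replaces one dense KxK convolution with a per-channel KxK pass plus a 1x1 pointwise mix, cutting parameters by roughly a factor of 1/N + 1/K^2. A quick count (bias terms ignored):

```python
# Parameter count of a standard 3x3 convolution vs MobileNet's
# depthwise separable equivalent, for C_in input and C_out output
# channels (bias terms ignored).

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out   # depthwise pass + 1x1 pointwise mix

std = standard_conv_params(3, 128, 256)   # 294912
sep = separable_conv_params(3, 128, 256)  # 33920
print(std, sep, round(std / sep, 1))      # ~8.7x fewer parameters
```

That near-9x reduction per layer, compounded across the network, is what makes real-time inference on low-capacity devices feasible.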
Title: Second Sight: MobileNet v1 Integration in Dynamic and Time Critical Scenarios
Pub Date : 2021-10-27 DOI: 10.1109/iemcon53756.2021.9623077
Sinead V. Fernandes, M. S. Ullah
Analysis of stress in speech signals offers a noninvasive alternative for detecting deception. The aim of this study is to investigate spectral energy features, namely delta energy and difference energy, for deception detection from speech signals, and to compare the results with novel delta and time-difference cepstrum features. Extraction of the proposed speech features is based on the perception of human speech and the psychoacoustic masking property. To classify deceptive and truthful speech, spectral energy in twenty-one (21) frequency bands on a Bark scale is used. The reference speech signals for the database were gathered from the recorded statements of a culpable person during a police investigation. Two neural networks, (1) the Levenberg-Marquardt (LM) algorithm and (2) the Long Short-Term Memory (LSTM) algorithm, are used in the MATLAB environment to evaluate performance and test the results. Simulation results indicate that the delta and time-difference energies are critical features for distinguishing deceptive speech from truthful speech.
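The Bark-scale banding and the delta-energy feature can both be sketched briefly. The frequency-to-Bark mapping below is the standard Zwicker critical-band formula; the per-band energies are toy values, since the paper's actual feature extraction is not reproduced here.

```python
# Sketch of the two ingredients named above: the Zwicker formula mapping
# frequency (Hz) to a Bark critical-band index, and the first-order
# delta of per-band energies between consecutive frames.

import math

def hz_to_bark(f):
    return 13 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500) ** 2)

def delta_energy(band_energy_frames):
    """Frame-to-frame difference of per-band energies."""
    return [[b - a for a, b in zip(prev, cur)]
            for prev, cur in zip(band_energy_frames, band_energy_frames[1:])]

print(round(hz_to_bark(1000), 2))        # ~8.5 Bark around 1 kHz
frames = [[1.0, 2.0, 3.0], [1.5, 1.0, 3.5]]   # toy 3-band energies
print(delta_energy(frames))              # [[0.5, -1.0, 0.5]]
```

In the full pipeline these deltas, computed over all 21 Bark bands, would form the feature vectors fed to the LM and LSTM classifiers.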
Title: Development of Spectral Speech Features for Deception Detection Using Neural Networks
Pub Date : 2021-10-27 DOI: 10.1109/iemcon53756.2021.9623141
Sina Kiaei, S. Honarparvar, S. Saeedi, S. Liang
There are many ongoing efforts to combat the COVID-19 pandemic using combinations of low-cost sensing, information and communication technologies, and smart computation. To provide COVID-19 situational awareness and early warnings, a scalable, real-time sensing solution is needed to recognize behaviors, such as coughing and sneezing, that risk spreading the virus. Various coughing and sneezing recognition methods use audio-only or video-only sensors with Deep Learning (DL) algorithms for smart event recognition. However, each of these recognition processes suffers several failure modes due to false detections. Sensor integration is a way to overcome such failures, and it also improves event recognition precision. Given the wide availability of low-cost audio and video sensors, we propose a real-time integrated Internet of Things (IoT) architecture to improve coughing and sneezing recognition. The implemented architecture combines edge and cloud computing. At the edge, a microphone and camera are connected to the internet and embedded with a DL engine; their audio and video streams are processed to detect coughing and sneezing actions in real time. The cloud side, built on Amazon Web Services (AWS), combines the results of the audio and video processing. A scenario of a person coughing and sneezing was developed to demonstrate the capabilities of the proposed architecture. The experimental results show that the proposed architecture improved the reliability of coughing and sneezing recognition in the integrated cloud system compared to audio-only and video-only detectors. Three metrics were used for comparison: F-score, precision, and recall. The precision and recall of the cloud detector improved on average by 43% and 15%, respectively, over the audio-only and video-only detectors, and the F-score improved on average by a factor of 1.24.
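The paper does not spell out the cloud-side fusion rule, but one common way to combine two independent detectors is a noisy-OR over their event probabilities; the F-score metric used for evaluation is standard. Both are sketched below with made-up scores.

```python
# Assumed fusion rule (the paper's AWS-side logic is not given here):
# noisy-OR combination of independent audio and video detector
# probabilities, plus the F-score used to compare detectors.

def fuse(p_audio, p_video):
    """Noisy-OR: the event fires if either detector would fire."""
    return 1 - (1 - p_audio) * (1 - p_video)

def f_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(fuse(0.6, 0.7), 2))     # 0.88, higher than either detector alone
print(round(f_score(0.9, 0.8), 3))  # 0.847
```

Noisy-OR raises sensitivity (either sensor can trigger), so in practice a decision threshold on the fused score is tuned to keep precision from degrading.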
Title: Design and Development of an Integrated Internet of Audio and Video Sensors for COVID-19 Coughing and Sneezing Recognition