Pub Date: 2021-09-13 | DOI: 10.1109/IICAIET51634.2021.9573815
U. A. Saleh, S. A. Jumaat, M. A. Johar, W. Jamaludin
This paper presents the design and development of a data acquisition system (DAQ) for a hybrid photovoltaic-thermoelectric generator (PV-TEG) to monitor system parameters collected from the PV-TEG source and store them in large memory storage. A DAQ is an electronic device that collects and records data through a time-based microcontroller (DS1307 Real-Time Clock (RTC) chip). It utilizes the Arduino Mega 328P board in conjunction with the ATmega 328P chip to monitor system parameters such as voltage, current, and power at every second of the logging process. The hybrid system comprises 200 W panels and a combination of 192 TEGs connected in series and parallel for higher output power. The system converts the original data into digital input for data acquisition and stores it on a secure digital (SD) card. The hybrid system's performance was examined. The results from the DAQ show that data were stored on the SD card at a 1-second update cycle. The DAQ recorded maximum system parameter values of 39 V and 4.9 A for the hybrid system, 33 V and 4.7 A for the PV subsystem, and 6 V and 0.2 A for the TEG subsystem. A maximum power of 191.1 W was computed across an 8 Ω, 300 W resistive load. This represents an increase of 15.38% over the PV subsystem.
{"title":"Photovoltaic-Thermoelectric Generator Monitoring System using Arduino Based Data Acquisition system Technique","authors":"U. A. Saleh, S. A. Jumaat, M. A. Johar, W. Jamaludin","doi":"10.1109/IICAIET51634.2021.9573815","DOIUrl":"https://doi.org/10.1109/IICAIET51634.2021.9573815","url":null,"abstract":"This paper presents the design and development of a data acquisition system (DAQ) for a hybrid photovoltaic-thermoelectric generator (PV-TEG) to monitor and store system parameters collected from the PV-TEG source in large memory storage. A DAQ is an electronic device that collects and records data through a time-based microcontroller (DS1307 Real-Time Clock (RTC) chip). It utilizes the Arduino Mega 328P board in conjunction with the ATmega 328P chip for monitoring the system parameters such as voltage, current and power every second of the logging process. The hybrid system comprises 200 W panels and a combination of 192 TEG connected in series and parallel for higher output power. The system converts the original data into digital input for data acquisition and stores it on a secure digital card (SD card). The hybrid system performance was examined. The results from the DAQ shows that data were stored on the SD Card at a 1-second update cycle. The DAQ recorded maximum system parameter values as 39 V, 4.9 A for the hybrid system, 33 V, 4.7 A for the PV subsystem, and 6 V, 0.2 A for the TEG subsystem. The maximum power of 191.1 W was computed across a resistive load of $8 Omega$, 300 W. This shows an increase of 15.38 % than the PV subsystem.","PeriodicalId":234229,"journal":{"name":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127122858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-13 | DOI: 10.1109/IICAIET51634.2021.9573691
Nouar Aldahoul, H. A. Karim, Rishav Datta, Shreyash Gupta, Kashish Agrawal, Ahmad Albunni
Violence detection has been investigated extensively in the literature. Recently, IoT-based violence video surveillance has become an intelligent component integrated into the security systems of smart buildings. A violence video detector is a specific kind of detection model that should be highly accurate in order to increase sensitivity and reduce the false alarm rate. This paper proposes a novel end-to-end CNN-LSTM (Convolutional Neural Network - Long Short-Term Memory) architecture that can run on a low-cost Internet of Things (IoT) device such as a Raspberry Pi board. A CNN is utilized to learn spatial features from video frames, which are then passed to an LSTM for video classification into violence/non-violence classes. A combined dataset comprising two public datasets, RWF-2000 and RLVS-2000, was used for model training and evaluation. The challenging video content includes crowds and chaos, small objects at far distances, low resolution, and transient actions. Additionally, the videos were captured in various environments such as streets, prisons, and schools, with several human actions such as eating, playing basketball, football, and tennis, and swimming. The experimental results show good performance of the proposed violence detection model in terms of average metrics, with an accuracy of 73.35%, recall of 76.90%, precision of 72.53%, F1 score of 74.01%, false negative rate of 23.10%, false positive rate of 30.20%, and AUC of 82.0%. The proposed CNN-LSTM balances good performance with a low number of parameters and can thus be implemented on a low-cost IoT node.
{"title":"Convolutional Neural Network - Long Short Term Memory based IOT Node for Violence Detection","authors":"Nouar Aldahoul, H. A. Karim, Rishav Datta, Shreyash Gupta, Kashish Agrawal, Ahmad Albunni","doi":"10.1109/IICAIET51634.2021.9573691","DOIUrl":"https://doi.org/10.1109/IICAIET51634.2021.9573691","url":null,"abstract":"Violence detection has been investigated extensively in the literature. Recently, IOT based violence video surveillance is an intelligent component integrated in security system of smart buildings. Violence video detector is a specific kind of detection models that should be highly accurate to increase the model's sensitivity and reduce the false alarm rate. This paper proposes a novel architecture of end-to-end CNN-LSTM (Convolutional Neural Network - Long Short-Term Memory) model that can run on low-cost Internet of Things (IOT) device such as raspberry pi board. The paper utilized CNN to learn spatial features from video's frames that were applied to LSTM for video classification into violence/non-violence classes. A complex dataset including two public datasets: RWF-2000 and RLVS-2000 was used for model training and evaluation. The challenging video content includes crowds and chaos, small object at far distance, low resolution, and transient action. Additionally, the videos were captured in various environments such as street, prison, and schools with several human actions such as eating, playing basketball, football, tennis, and swimming. The experimental results show good performance of the proposed violence detection model in terms of average metrics having an accuracy of 73.35 %, recall of 76.90 %, precision of 72.53 %, F1 score of 74.01 %, false negative rate of 23.10 %, false positive rate of 30.20 %, and AUC of 82.0 %. The proposed CNN-LSTM can balance good performance with low number of parameters and thus can be implemented on low-cost IOT node.","PeriodicalId":234229,"journal":{"name":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116366140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-13 | DOI: 10.1109/IICAIET51634.2021.9574005
Yi Zhen Quak, Yi Xin Loke, Zhi Yuan Chan, Sze Qi Chew, P. Ooi
Coronavirus disease 2019 (COVID-19) is highly contagious and has swept the globe. Countries worldwide are in urgent need of efficient technological solutions to control the transmission of COVID-19. The objective of this project is to develop an artificial intelligence-driven contact tracing and communication platform that provides an integrated solution to block the transmission chain of the disease. The platform comprises three elements: a behavioral recognition system, a mobile application, and a smart wristband. The mobile application, developed with the Android Studio SDK, has multiple functions: a Quick Response (QR) code scanner for location tracking, close contact identification, COVID-19 case updates, a district color alert system, and exposure notification. The behavioral recognition system, developed on a Raspberry Pi 4, adopts Faster Region-Based Convolutional Neural Network Version 2 (RCNN_v2) and Single Shot Multibox Detection MobileNet Version 2 (SSD MobileNet_v2) as its machine learning algorithms to carry out close-proximity detection, people counting, and face mask detection. The smart wristband, built with an Arduino MKR GSM 1400 microcontroller and various sensors and developed through the Arduino Integrated Development Environment (IDE), keeps track of the location and vital signs of quarantined people and is designed with an emergency button to allow them to get help immediately if they are not feeling well. The data obtained from the three elements are uploaded to a centralized database, Firestore, together with an accurate timestamp and location. Integrated with various preventive and control measures, this system can mitigate and manage the COVID-19 pandemic effectively and efficiently.
{"title":"IoT-Based Tracing and Communication Platform for Disease Control","authors":"Yi Zhen Quak, Yi Xin Loke, Zhi Yuan Chan, Sze Qi Chew, P. Ooi","doi":"10.1109/IICAIET51634.2021.9574005","DOIUrl":"https://doi.org/10.1109/IICAIET51634.2021.9574005","url":null,"abstract":"Coronavirus disease 2019 (COVID-19) is highly contagious and has swept the globe. Countries worldwide is in urgent need of efficient technological solutions to control the transmission of COVID-19 disease. The objective of this project is to develop an artificial intelligence-driven contact tracing platform and communication to come up with an integrated solution to block the transmission chain of the disease. Three elements are included in this platform, which are behavioral recognition system, mobile application and smart wristband. Mobile application developed through Android Studio SDK, has multiple functions, which are Quick Response (QR) code scanner for location tracking, close contact identification, COVID-19 cases update, district color alert system and exposure notification. Behavioral recognition system developed on Raspberry Pi v4 and Faster Region Based Convolutional Neural Network Version 2 (RCNN_v2) and Single Shot Multibox Detection MobileNet Version 2 (SSD MobileNet_v2) are adopted as machine learning algorithm can carry out close-proximity detection, people counting, and face mask detection. Smart wristband built with Arduino MKR GSM1400 microcontroller and various sensors are developed through Arduino Integrated Development Environment (IDE) to keep track on the location and vital signs of the quarantined people and is designed with an emergency button to allow the quarantined people to get help immediately if they are not feeling well. The data obtained from the three elements is uploaded to a centralized database, Firestore associating with accurate timestamp and location. This system integrated with various preventive measure and control measure can mitigate and manage COVID-19 pandemic effectively and efficiently.","PeriodicalId":234229,"journal":{"name":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128466560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-13 | DOI: 10.1109/IICAIET51634.2021.9573652
C. Fernandez, R. R. Porle
Automated Number Plate Recognition (ANPR) refers to a system that acquires an image of a vehicle and recognises the characters on its number plate. The purpose of this paper is to investigate how a Raspberry Pi-based ANPR system for smart access can replace the traditional access system for high-rise residents. Number plate recognition was chosen over other systems due to its high level of security. The process of recognising number plates is divided into four stages: image acquisition and preprocessing, extraction, segmentation, and character recognition. Preprocessing involves converting RGB to grayscale, filtering out noise with a Gaussian filter, and enhancing the image with adaptive thresholding. The number plate extraction step includes morphological operations, image binarization, and contour extraction. The techniques used in segmentation are Connected Component Analysis (CCA) and Boundary Box Analysis (BBA). Character recognition using the KNN method is the final stage. The primary hardware consists of a Raspberry Pi 4, a Raspberry Pi camera, and servo motors. A total of 120 number plates from 24 different cars were used in the experiments. The number plates are divided into two categories, training and testing: approximately 83 percent (about 100 plates from four different cars) are used for training, and 17 percent (approximately 20 number plates from four different cars) are used for testing. The experiment establishes the optimal distance, angle, and height from which to capture the licence plate; at two metres, the system recognises the number plate. Overall, the system's design is 85 percent accurate.
{"title":"Raspberry Pi based ANPR for Smart Access","authors":"C. Fernandez, R. R. Porle","doi":"10.1109/IICAIET51634.2021.9573652","DOIUrl":"https://doi.org/10.1109/IICAIET51634.2021.9573652","url":null,"abstract":"Automated Number Plate Recognition (ANPR) is a term that refers to a system that acquire image of a vehicle and recognises the characters on the number plate. The purpose of this paper is to investigate how a Raspberry Pi-based ANPR system for smart access can be used to replace the traditional access system for high-rise residents. Number plate recognition was chosen over other systems due to its high level of security. The process of recognising number plates is divided into four stages: image acquisition and preprocessing, extraction, segmentation, and character recognition. Preprocessing involves converting RGB to Grayscale, filtering out noise with a Gaussian Filter, and enhancing the image with Adaptive Thresholding. The number plate extraction step includes morphological operations, image binarization, and contour extraction. The techniques used in segmentation are Connected Component Analysis (CCA) and Boundary Box Analysis (BBA). Character recognition using the KNN method is the final stage. The primary hardware consists of a Raspberry Pi model 4, a Raspberry Pi camera, and servo motors. A total of 120 number plates from 24 different cars were used in the experiments. The number plates are divided into two categories: training and testing, with approximately 83 percent being used for training, which includes approximately 100 plates from four different cars. 17 percent, or approximately 20 number plates from four different cars, are used for testing purposes. The experiment establishes the optimal distance, angle, and height from which to capture the licence plate. At two metres, the system recognises the number plate. The system's design is 85 percent accurate.","PeriodicalId":234229,"journal":{"name":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130542227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-13 | DOI: 10.1109/IICAIET51634.2021.9573811
Jonel R. Macalisang, A. Alon, Moises F. Jardiniano, Deanne Cameren P. Evangelista, Julius C. Castro, Meriam L. Tria
Nowadays, road accidents have become a major concern. Driver drowsiness owing to overfatigue or tiredness, driving while intoxicated, and driving too quickly are some of the primary causes. Drowsy driving contributes to or increases the number of traffic accidents each year. In response to this issue, the study presents a technique for detecting driver drowsiness. The sleep states of drivers in the driving environment were detected using a deep learning approach. A convolutional neural network (CNN) model was developed to assess whether the eyes in static face images of drivers are closed. The suggested model has a wide range of possible applications, including human-computer interface design, facial expression detection, and determining driver tiredness and drowsiness. The YOLOv3 algorithm, together with additional tools such as Pascal VOC and LabelImg, was used to build this approach, which collects and trains on a dataset of drowsy drivers. The study's total detection accuracy was 100%, with per-frame detection accuracy ranging from 49% to 89%.
{"title":"Drive-Awake: A YOLOv3 Machine Vision Inference Approach of Eyes Closure for Drowsy Driving Detection","authors":"Jonel R. Macalisang, A. Alon, Moises F. Jardiniano, Deanne Cameren P. Evangelista, Julius C. Castro, Meriam L. Tria","doi":"10.1109/IICAIET51634.2021.9573811","DOIUrl":"https://doi.org/10.1109/IICAIET51634.2021.9573811","url":null,"abstract":"Nowadays, road accidents have become a major concern. The drowsiness of drivers owing to overfatigue or tiredness, driving while intoxicated, or driving too quickly is some of the primary causes of this. Drowsy driving contributes to or increases the number of traffic accidents each year. The study presented a technique for detecting driver drowsiness in response to this issue. The sleep states of the drivers in the driving environment were detected using a deep learning approach. To assess if the eyes of particular constant face images of drivers are closed, a convolutional neural network (CNN) model has been developed. The suggested model has a wide range of possible applications, including human-computer interface design, facial expression detection, and determining driver tiredness and drowsiness. The YOLOv3 algorithm, as well as additional tools like Pascal VOC and LabelImg, were used to build this approach, which collects and trains a driver dataset that feels drowsy. The study's total detection accuracy was 100%, with detection per frame accuracy ranging from 49% to 89%.","PeriodicalId":234229,"journal":{"name":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125631436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-13 | DOI: 10.1109/IICAIET51634.2021.9573756
M. Basir, M. S. Hussin
Selecting optimal features is both crucial and very difficult, particularly for the classification task. Because the conventional approach of identifying features independently tends to select unrelated features, the consistency of classification accuracy is degraded. The objective of this article is to optimize meta-heuristic algorithms, particularly Tabu Search (TS) and Harmony Search (HS), using the capabilities of bio-inspired search algorithms in conjunction with a wrapper. The essential stages are to idealize the combination of TS and HS with appropriate bio-search methods and to incorporate the creation of various feature subsets. The following step is subset evaluation to confirm the optimum feature set. The evaluation criteria are based on the number of features utilized and the classification accuracy. Eight (8) comparison datasets of different sizes were carefully chosen for testing. Extensive testing indicates that the ideal combination of the chosen bio-search algorithms and the meta-heuristic algorithms, especially TS and HS, promises a better optimum solution (i.e., fewer features with greater classification accuracy) for the selected datasets. As a consequence of this research, the ability of bio-inspired algorithms with wrapper/filter-based selection to select and identify characteristics would enhance the efficiency of TS and HS.
{"title":"Exploitation of Meta-Heuristic Search Methods with Bio-Inspired Algorithms for Optimal Feature Selection","authors":"M. Basir, M. S. Hussin","doi":"10.1109/IICAIET51634.2021.9573756","DOIUrl":"https://doi.org/10.1109/IICAIET51634.2021.9573756","url":null,"abstract":"It is very difficult and crucial to achieve the selection of optimal features, particularly for the classification task. Because the conventional method of identifying features that function independently has resulted in the selection of unrelated features, the consistency of the classification's accuracy has been degraded. The objective of this article is to optimize Meta-heuristic algorithms, particularly Tabu Search (TS) and Harmony Search (HS), using the capabilities of bioinspired search algorithms in conjunction with the wrapper. The essential stages are to idealize the TS and HS combination with appropriate bio-search methods, and to incorporate the creation of various feature subsets. The following step is to do a subset evaluation to confirm the optimum feature set. The evaluation criteria are based on the number of features utilized and the classification accuracy. To be tested, eight (8) comparison datasets of different sizes were carefully chosen. Extensive testing has indicated that the ideal combination of the chosen bio-search algorithm and meta-heuristics algorithms, especially TS and HS, promises to offer a better optimum solution (i.e. fewer features with greater classification accuracy) for the selected datasets. As a consequence of this research, the ability of bio-inspired algorithms with wrapper/filtered to select and identify characteristics would enhance the efficiency of TS and HS.","PeriodicalId":234229,"journal":{"name":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128920881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-13 | DOI: 10.1109/IICAIET51634.2021.9573979
Jia Xin Goh, K. Lim, C. Lee
Human activity recognition aims to determine the actions or behavior of a person based on time series data. In recent years, more large human activity recognition datasets have become available, as such data can be collected in easier and cheaper ways. In this work, a 1D Convolutional Neural Network with a Long Short-Term Memory network for human activity recognition is proposed. The 1D Convolutional Neural Network is employed to learn high-level representative features from the accelerometer and gyroscope signal data. The Long Short-Term Memory network is then used to encode the temporal dependencies of the features. The final classification is performed with a softmax classifier. The proposed model is evaluated on the MotionSense, UCI-HAR, and USC-HAD datasets. The class distributions of these datasets are imbalanced; in view of this, adjusted class weights are proposed to mitigate the class imbalance issue. Furthermore, early stopping is utilized to reduce overfitting during training. The proposed method achieved promising performance on the MotionSense, UCI-HAR, and USC-HAD datasets, with F1-scores of 98.14%, 91.04%, and 76.42%, respectively.
{"title":"1D Convolutional Neural Network with Long Short-Term Memory for Human Activity Recognition","authors":"Jia Xin Goh, K. Lim, C. Lee","doi":"10.1109/IICAIET51634.2021.9573979","DOIUrl":"https://doi.org/10.1109/IICAIET51634.2021.9573979","url":null,"abstract":"Human activity recognition aims to determine the actions or behavior of a person based on the time series data. In recent year, more large human activity recognition datasets are available as it can be collected in easier and cheaper ways. In this work, a 1D Convolutional Neural Network with Long Short-Term Memory Network for human activity recognition is proposed. The 1D Convolutional Neural Network is employed to learn high-level representative features from the accelerometer and gyroscope signal data. The Long Short-Term Memory network is then used to encode the temporal dependencies of the features. The final classification is performed with a softmax classifier. The proposed 1D Convolutional Neural Network with Long Short-Term Memory Network is evaluated on MotionSense, UCI-HAR, and USC-HAD datasets. The class distributions of these datasets are imbalanced. In view of this, adjusted class weight is proposed to mitigate the imbalanced class issue. Furthermore, early stopping is utilized to reduce the overfitting in the training. The proposed method achieved promising performance on MotionSense, UCI-HAR, and USC-HAD datasets, with F1-score of 98.14%, 91.04%, and 76.42%, respectively.","PeriodicalId":234229,"journal":{"name":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131332927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-13 | DOI: 10.1109/IICAIET51634.2021.9573899
Niazi Mahrab, S. Salim, Abdullah Ibne Ali, Israt Jahan Mim, R. Khan
The restaurant business adds a large amount of money to a country's economy every year. Nowadays, the restaurant business in Bangladesh has become very popular because of the increasing number of customers and high profit margins. Different people prefer various types of food in restaurants; moreover, they order food without knowing its quality and taste. There are only a few restaurant review systems for customers in Bangladesh, and they are mostly mobile-application based. As a result, the customer does not have any appropriate knowledge about the restaurant and the food. In this work, we apply deep learning techniques to a restaurant and food review system by recognizing facial expressions with the help of a convolutional neural network and FER-2013, an open-source dataset. The experimental results show that the proposed technique performs satisfactorily with an accuracy of 81%. Finally, the efficiency of the system has been tested using real-time images.
{"title":"Facial Expression Based Automated Restaurant Food Review System using CNN","authors":"Niazi Mahrab, S. Salim, Abdullah Ibne Ali, Israt Jahan Mim, R. Khan","doi":"10.1109/IICAIET51634.2021.9573899","DOIUrl":"https://doi.org/10.1109/IICAIET51634.2021.9573899","url":null,"abstract":"A large amount of money is added every year to the economy through the restaurant business in a country. Nowadays, the restaurant business in Bangladesh has become very popular because of the increasing number of customers and high profit margins. Different people prefer various types of foods in the restaurant; moreover, they order food without knowing the quality and the taste of the food. There are a few restaurant review systems for customers in Bangladesh, they are mostly mobile application-based. As a result, the customer does not have any appropriate knowledge about the restaurant and the food. In this work, we tried to apply deep learning techniques for the restaurant and food review system by recognizing facial expressions with the help of convolutional neural network and the FER-2013 dataset, which is an open-source dataset. The experiment results show that the proposed technique performs satisfactorily with an accuracy of 81%. Finally, the efficiency of the system has been tested by using realtime images.","PeriodicalId":234229,"journal":{"name":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"439 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120879133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-13 | DOI: 10.1109/IICAIET51634.2021.9573945
Hiroki Takeuchi, M. Ohsuga, Y. Kamakura
Remote photoplethysmography (rPPG) is a technique for measuring pulse waves with a remotely installed camera, without burdening the person being measured. The pulse waves are estimated by capturing minute color changes in the skin area. From the pulse rate and pulse rate variability metrics estimated from the pulse wave, it is possible to estimate a person's arousal state and emotional response. In this study, the most suitable skin area for accurately detecting the pulse using rPPG is verified. The authors also propose a method to automatically remove and correct incorrect pulse detections by introducing machine learning based on features of the pulse waveform obtained from rPPG, and demonstrate its effectiveness.
{"title":"A Study on Region of Interest in Remote PPG and an Attempt to Eliminate False Positive Results Using SVM Classification","authors":"Hiroki Takeuchi, M. Ohsuga, Y. Kamakura","doi":"10.1109/IICAIET51634.2021.9573945","DOIUrl":"https://doi.org/10.1109/IICAIET51634.2021.9573945","url":null,"abstract":"Remote-photoplethysmography (rPPG) is a technique for measuring pulse waves without burdening the person using a remotely installed camera. The pulse waves are estimated by capturing minute color changes in the skin area. From the pulse rate and pulse rate variability metrics estimated from the pulse wave, it is possible to estimate a person's arousal state and emotional response. In this study, the most suitable skin area to accurately detect the pulse using rPPG is verified. The authors also propose a method to automatically remove and correct incorrect pulse detection by introducing machine learning based on the features of the pulse wave waveform obtained from rPPG and demonstrated its effectiveness.","PeriodicalId":234229,"journal":{"name":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127527537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-13 | DOI: 10.1109/IICAIET51634.2021.9573842
Kit Guan Lim, Ke Wen Teh, M. K. Tan, H. Lago, Soo Siang Yang, K. Teo
As the number of on-road vehicles increases every year, safety on the road has become one of the major concerns. The Vehicular Ad Hoc Network (VANET) therefore plays an important role on the road. In a VANET, communication occurs between vehicles and the infrastructure. During broadcasting, an emergency message is transferred to the surrounding vehicles to alert other vehicles in the area. However, blind flooding in a wireless network may result in redundant rebroadcasts, contention, and collisions with neighbouring nodes, a situation known as a broadcast storm. A broadcast storm may lead to the loss of information or to wrong information being transmitted to neighbouring nodes. This paper aims to design a broadcast control system that is able to optimize the broadcast process in a VANET. The vehicular network is modelled in Simulation of Urban Mobility (SUMO) and the algorithm is formulated in MATLAB; data are extracted from SUMO through the Traffic Control Interface for MATLAB (TraCI4Matlab). The broadcast protocol and the Particle Swarm Optimization (PSO) algorithm are formulated in this paper, and PSO is modified for broadcast enhancement. Results show that, after parameter tuning, the modified PSO is able to broadcast over a larger coverage area at a faster rate.
{"title":"Enhanced Multi-Hop Mechanism in Vehicular Communication System using Swarm Algorithm","authors":"Kit Guan Lim, Ke Wen Teh, M. K. Tan, H. Lago, Soo Siang Yang, K. Teo","doi":"10.1109/IICAIET51634.2021.9573842","DOIUrl":"https://doi.org/10.1109/IICAIET51634.2021.9573842","url":null,"abstract":"As on-road vehicles are increasing every year, safety on the road has become one of the major concerns. Therefore, Vehicular Ad Hoc Network (VANET) becomes as an important role on the road. In VANET, communication occurs between vehicles and the infrastructures. During broadcasting, an emergency message is transferred to the surrounding vehicles to alert other vehicles in the area. However, blind flooding in wireless network might result in redundant rebroadcast, contentions and collision with the neighbouring nodes. This situation is named as broadcast storm. Broadcast storm might lead to the losses of the information or lead to the wrong information being transmitted to the neighbouring nodes. The paper aims to design a broadcast control system which is able to optimize the broadcast process in VANET. Vehicular network is modelled in Simulation of Urban Mobility (SUMO) and the algorithm is formulated in MATLAB. Data is extracted from SUMO through Traffic Control Interface for MATLAB (TraCI4Matlab). The broadcast protocol and Particle Swarm Optimization (PSO) algorithm are formulated in this paper. At the same time PSO is modified for the broadcast enhancement. Results showed that after parameters tuning the modified PSO is able to broadcast into a larger coverage area at a faster rate.","PeriodicalId":234229,"journal":{"name":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130455685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}