Ahmad Danish Suffian Ahmad Taufik, R. Abdullah, A. Jaafar, Nik Nur Shaadah Nik Dzulkefli, S. I. Ismail
With the emergence of smart home appliances, traditional power sockets are becoming less compatible with modern living styles. Furthermore, commercialized smart sockets are expensive and unaffordable for many households. This project presents the development of a low-cost Wi-Fi smart home socket using internet of things (IoT) technology that lets smartphone users control home appliances conveniently. Smart socket devices can turn power outlets on and off automatically from any location as long as they are connected to the internet, providing the user with more convenience and energy savings. This project uses a node microcontroller unit (NodeMCU) Wi-Fi module (ESP8266) as the main microcontroller to connect to a cloud platform, together with a mobile phone application that sends instructions to the microcontroller for turning household appliances on and off remotely through the smart socket. The switching mechanism is monitored and controlled through the Blynk platform. A 4-channel relay module, driven by DC control signals, switches the AC loads to carry out the switching process. According to the study’s findings, the Wi-Fi smart home socket system reduces excessive usage of electrical appliances while also increasing electrical appliance safety.
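The control path described above (app command, cloud relay, GPIO switching) can be sketched as a small state model. This is a hypothetical `SmartSocket` class for illustration only; the channel count and the virtual-pin-to-relay mapping are assumptions, not details from the paper:

```python
class SmartSocket:
    """Conceptual model of the 4-channel smart socket: each Blynk-style
    virtual pin maps to one relay that switches an AC outlet."""

    def __init__(self, channels=4):
        # All relays start open (outlet off).
        self.relays = [False] * channels

    def handle_command(self, virtual_pin, value):
        """Apply an on/off command (1/0) received from the phone app."""
        if not 0 <= virtual_pin < len(self.relays):
            raise ValueError("unknown channel")
        self.relays[virtual_pin] = bool(value)
        return self.relays[virtual_pin]


socket = SmartSocket()
socket.handle_command(0, 1)   # app turns outlet 0 on
socket.handle_command(2, 1)   # app turns outlet 2 on
socket.handle_command(0, 0)   # app turns outlet 0 off again
print(socket.relays)          # [False, False, True, False]
```

On the real device, setting `self.relays[pin]` would correspond to driving the relay module's input pin from the ESP8266.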
{"title":"A low-cost Wi-Fi smart home socket using internet of things","authors":"Ahmad Danish Suffian Ahmad Taufik, R. Abdullah, A. Jaafar, Nik Nur Shaadah Nik Dzulkefli, S. I. Ismail","doi":"10.11591/eei.v13i2.6521","DOIUrl":"https://doi.org/10.11591/eei.v13i2.6521","url":null,"abstract":"With the emergence of smart home appliances, traditional power sockets are becoming less compatible with modern living styles. Furthermore, modern commercialized sockets are expensive and unaffordable. This project presents the development of a low-cost Wi-Fi smart home socket using internet of things (IoT) technology that is user-friendly for smartphone users to control home appliances. Smart home socket devices can turn on and off power outlets automatically from any location if they are linked to the internet and providing the user with more convenience and energy savings. This project uses a node microcontroller unit (NodeMCU) Wi-Fi module (ESP8266) as the main microcontroller unit to connect to a cloud platform. It also uses a mobile phone application to send instructions to the microcontroller for turning on and off household appliances remotely through a smart socket. The switching mechanism is monitored and controlled through the Blynk platform. A 4-channel relay module is used to transition DC current loads to AC current loads in order to activate switching processes. 
According to the study’s findings, the Wi-Fi smart home socket system is able to save on excessive usage of electrical appliances while also increasing electrical appliance safety.","PeriodicalId":37619,"journal":{"name":"Bulletin of Electrical Engineering and Informatics","volume":"58 26","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140356747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. A. Taha, Hussein I. Zaynal, A. T. T. Hussain, H. Desa, Faris Hassan Taha
This paper investigates the application of definite time over-current (DTOC) protection, which acts to protect the breaker from damage when over-current occurs on the transmission lines. This kind of over-current relay is utilized as backup protection behind a distance relay. The over-current relay issues a signal after a predetermined time delay, and the breaker trips if the distance relay does not detect the line failure. As a result, this over-current relay operates with a time delay slightly longer than the combined operating times of the distance relay and the breaker. The DTOC is tested for various types of faults: a 3-phase fault at load 1, a 3-phase fault at load 2, a 3-phase fault occurring before the primary protection, and the behaviour of voltage and current when the primary protection fails. All results are obtained using the MATLAB/Simulink software package.
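The definite-time trip rule described above (trip only when the over-current persists for the full preset delay) can be sketched in a few lines. This is an illustrative, sample-based sketch; the pickup current and delay values are assumptions, not the paper's settings:

```python
def dtoc_trip(current_samples, pickup, delay_samples):
    """Definite-time over-current logic: trip only if the current stays
    above the pickup threshold for the full time delay (in samples)."""
    over = 0
    for i, amps in enumerate(current_samples):
        over = over + 1 if amps > pickup else 0   # reset on any dip below pickup
        if over >= delay_samples:
            return i                               # sample index of the trip
    return None                                    # no trip issued


# Sustained fault: trips after the delay elapses.
print(dtoc_trip([100, 900, 900, 900], pickup=400, delay_samples=3))   # 3
# Transient over-current: clears before the delay, so no trip.
print(dtoc_trip([100, 900, 900, 100, 900], pickup=400, delay_samples=3))  # None
```

The reset-on-dip behaviour is what distinguishes definite-time from instantaneous over-current protection: brief transients never accumulate enough delay to trip.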
{"title":"Definite time over-current protection on transmission line using MATLAB/Simulink","authors":"T. A. Taha, Hussein I. Zaynal, A. T. T. Hussain, H. Desa, Faris Hassan Taha","doi":"10.11591/eei.v13i2.5301","DOIUrl":"https://doi.org/10.11591/eei.v13i2.5301","url":null,"abstract":"This paper has investigated the application of the definite time over-current (DTOC) which reacts to protect the breaker from damage during the occurrence of over-current in the transmission lines. After a distance relay, this kind of over-current relay is utilized as backup protection. The overcurrent relay will provide a signal after a predetermined amount of time delay, and the breaker will trip if the distance relay does not detect a line failure. As a result, this over-current relay functions with a time delay that is just slightly longer than the combined working times of the distance relay and the breaker. This DTOC is tested for various types of faults which are 3- phase fault occurring at load 1, 3-phase fault occurring at load 2, a 3-phase fault occurring before primary protection, and the behaviour of voltage and current with a failed primary protection. All the results will be obtained using the MATLAB/Simulink software package.","PeriodicalId":37619,"journal":{"name":"Bulletin of Electrical Engineering and Informatics","volume":"6 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140357055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this research manuscript, a new protocol is proposed for predicting the available space in the cloud and verifying the security of stored data. The protocol learns from the available data and, based on this learning, identifies the available storage space, after which the cloud service providers allow data storage. Integrity verification separates private and public data, which avoids privacy issues. Private data are integrated with the help of cloud service providers in cooperation with third-party auditing (TPA). Earlier, researchers combined public key cryptography with bilinear map technologies, but the computation time and costs were high, since the client had to execute several computations to secure the integrity of the data storage. Therefore, this research suggests a reliable and effective method called the position-aware Merkle tree (PMT) for ensuring data integrity. The proposed system uses a PMT that enables the TPA to perform multiple auditing tasks with high efficiency and low computational cost and time. Simulation results show that the developed PMT method consumed 0.00459 milliseconds of computation time, which is low compared to existing models.
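One way to realize the position-aware idea — binding each data block to its index so that even reordering intact blocks is detectable — is sketched below. This is a minimal illustration; the paper's actual PMT construction and TPA auditing protocol may differ:

```python
import hashlib


def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def pmt_root(blocks):
    """Position-aware Merkle root: each leaf hash includes the block's
    index, so swapping two unmodified blocks still changes the root."""
    level = [_h(str(i).encode() + b"|" + blk) for i, blk in enumerate(blocks)]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])    # hash sibling pairs upward
                 for i in range(0, len(level), 2)]
    return level[0]


blocks = [b"a", b"b", b"c", b"d"]
root = pmt_root(blocks)
assert pmt_root([b"a", b"b", b"c", b"d"]) == root   # intact data verifies
assert pmt_root([b"b", b"a", b"c", b"d"]) != root   # reordering is detected
```

A plain Merkle tree would also detect the reordering here (the leaves differ), but position-aware leaves additionally defeat attacks that substitute a valid block-and-proof pair at the wrong position.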
{"title":"Effective privacy preserving in cloud computing using position aware Merkle tree model","authors":"Shruthi Gangadharaiah, Purohit Shrinivasacharya","doi":"10.11591/eei.v13i2.6636","DOIUrl":"https://doi.org/10.11591/eei.v13i2.6636","url":null,"abstract":"In this research manuscript, a new protocol is proposed for predicting the available space in the cloud and verifying the security of stored data. The protocol is utilized for learning the available data, and based on this learning, the available storage space is identified, after which the cloud service providers allow for data storage. The Integrity verification separates the private and the public data, which avoids privacy issues. The integration of the private data is done with the help of cloud service providers with respect to the third-party auditing (TPA). Earlier, public key cryptography and bilinear map technologies have been combined by the researchers, but the computation time and costs were high. To secure the integrity of the data storage, the client should execute several computations. Therefore, this research suggests a reliable and effective method called position-aware Merkle tree (PMT), which is implemented for ensuring data integrity. The proposed system uses a PMT that enables the TPA to perform multiple auditing tasks with high efficiency, less computational cost and computation time. 
Simulation results clearly shows that the developed PMT method consumed 0.00459 milliseconds of computation time, which is limited when compared to the existing models.","PeriodicalId":37619,"journal":{"name":"Bulletin of Electrical Engineering and Informatics","volume":"26 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140353832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shovan Bhowmik, Rifat Sadik, Wahiduzzaman Akanda, Juboraj Roy Pavel
Recent research has focused on opinion mining from public sentiments using natural language processing (NLP) and machine learning (ML) techniques. Transformer-based models, such as bidirectional encoder representations from transformers (BERT), excel at extracting semantic information but are resource-intensive. Google's FNet ("mixing tokens with Fourier transforms") replaces BERT's attention mechanism with a non-parameterized Fourier transform, aiming to reduce training time without compromising performance. This study fine-tuned the FNet model on a publicly available Kaggle hotel review dataset and compared its performance against BERT as well as conventional machine learning models such as long short-term memory (LSTM) and support vector machine (SVM). Results revealed that FNet significantly reduces training time, by almost 20%, and memory utilization, by nearly 60%, compared to BERT. The highest test accuracy achieved by FNet in this experiment was 80.27%, which is nearly 97.85% of BERT's accuracy with identical parameters.
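The core FNet idea — replacing self-attention with an unparameterized Fourier mixing sublayer — reduces to very few lines. This is a NumPy sketch of the mixing step only, not the fine-tuned model from the study:

```python
import numpy as np


def fnet_mixing(x):
    """FNet token-mixing sublayer: a 2D DFT over the sequence and
    hidden dimensions, keeping only the real part. There are no
    learned weights, which is where the speed/memory savings come from."""
    return np.fft.fft2(x).real


seq = np.random.randn(128, 64)      # (tokens, hidden) activations
mixed = fnet_mixing(seq)
assert mixed.shape == seq.shape     # shape-preserving, parameter-free
```

In the full architecture this sublayer is wrapped in the usual residual connection and layer norm, and the feed-forward sublayers (which do carry parameters) are kept as in BERT.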
{"title":"Sentiment analysis with hotel customer reviews using FNet","authors":"Shovan Bhowmik, Rifat Sadik, Wahiduzzaman Akanda, Juboraj Roy Pavel","doi":"10.11591/eei.v13i2.6301","DOIUrl":"https://doi.org/10.11591/eei.v13i2.6301","url":null,"abstract":"Recent research has focused on opinion mining from public sentiments using natural language processing (NLP) and machine learning (ML) techniques. Transformer-based models, such as bidirectional encoder representations from transformers (BERT), excel in extracting semantic information but are resourceintensive. Google’s new research, mixing tokens with fourier transform, also known as FNet, replaced BERT’s attention mechanism with a non-parameterized fourier transform, aiming to reduce training time without compromising performance. This study fine-tuned the FNet model with a publicly available Kaggle hotel review dataset and investigated the performance of this dataset in both FNet and BERT architectures along with conventional machine learning models such as long short-term memory (LSTM) and support vector machine (SVM). Results revealed that FNet significantly reduces the training time by almost 20% and memory utilization by nearly 60% compared to BERT. The highest test accuracy observed in this experiment by FNet was 80.27% which is nearly 97.85% of BERT’s performance with identical parameters.","PeriodicalId":37619,"journal":{"name":"Bulletin of Electrical Engineering and Informatics","volume":"46 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140357696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The transfer of patients’ information and images among medical institutes is a major tool for delivering better healthcare services. However, privacy and security of healthcare information are big challenges in telemedicine; even a small change in a patient’s information might lead to a wrong diagnosis. This paper suggests a new model for hiding patient information inside a magnetic resonance imaging (MRI) cover image based on a Euclidean distribution. Both least significant bit (LSB) and most significant bit (MSB) techniques are implemented for the physical hiding. The proposed method achieves a very high level of information security by distributing the secret text randomly over the cover image. Experimentally, the proposed method has a high peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM) and a reduced mean square error (MSE). Finally, the obtained results are compared with approaches from the last five years and found to be better, increasing the security of patient information in telemedicine.
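The LSB embedding step can be illustrated as follows. This is a simplified, sequential variant: the paper scatters the embedding positions over the cover image via a Euclidean random distribution, which is omitted here for brevity:

```python
import numpy as np


def embed_lsb(cover, bits):
    """Hide a bit string in the least significant bits of the first
    len(bits) pixels of the cover image (sequential toy variant)."""
    stego = cover.copy()
    flat = stego.reshape(-1)            # view into the copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear the LSB, then set it to b
    return stego


def extract_lsb(stego, n):
    """Recover the first n hidden bits."""
    return [int(p) & 1 for p in stego.reshape(-1)[:n]]


cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret
# No pixel changes by more than 1 intensity level, which is why the
# distortion metrics (PSNR, SSIM, MSE) stay favourable.
assert int(np.max(np.abs(stego.astype(int) - cover.astype(int)))) <= 1
```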
{"title":"Secure Euclidean random distribution for patients’ magnetic resonance imaging privacy protection","authors":"Ali Jaber Tayh Albderi, Lamjed Ben Said","doi":"10.11591/eei.v13i2.5989","DOIUrl":"https://doi.org/10.11591/eei.v13i2.5989","url":null,"abstract":"Patients’ information and images transfer among medical institutes represent a major tool for delivering better healthcare services. However, privacy and security for healthcare information are big challenges in telemedicine. Evidently, even a small change in patients’ information might lead to wrong diagnosis. This paper suggests a new model for hiding patient information inside magnetic resonance imaging (MRI) cover image based on Euclidean distribution. Both least signification bit (LSB) and most signification bit (MSB) techniques are implemented for the physical hiding. A new method is proposed with a very high level of security information based on distributing the secret text in a random way on the cover image. Experimentally, the proposed method has high peak signal to noise ratio (PSNR), structural similarity index metric (SSIM) and reduced mean square error (MSE). Finally, the obtained results are compared with approaches in the last five years and found to be better by increasing the security for patient information for telemedicine.","PeriodicalId":37619,"journal":{"name":"Bulletin of Electrical Engineering and Informatics","volume":"2 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140353045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complex system science has recently shifted its focus to include modeling, simulation, and behavior control. Effective simulation software built on the robot operating system (ROS) is used in robotics development to facilitate a smooth transition between the simulation environment and hardware testing of control behavior. In this paper, we demonstrate how the simultaneous localization and mapping (SLAM) algorithm can be used to allow a robot to navigate autonomously. Gazebo is used to simulate the robot, and Rviz is used to visualize the simulated data. The gmapping package creates maps using data collected from a variety of sensors, including laser and odometry. To test and implement autonomous navigation, a Turtlebot was used in a Gazebo-generated simulated environment. In our opinion, further study of ROS using these important tools could lead to greater adoption of robotics testing, further automation of evaluation, and more efficient robotic systems.
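The map-building idea behind gmapping — laser rays mark the cells they traverse as free and their endpoints as occupied — can be caricatured in a few lines. This is a toy, axis-aligned sketch, not the particle-filter SLAM that gmapping actually implements:

```python
def update_ray(grid, y, x0, x1):
    """Toy occupancy-grid update: cells a horizontal laser ray passes
    through become free (0) and the cell where the beam ends becomes
    occupied (1). Unknown cells hold -1."""
    for x in range(x0, x1):
        grid[y][x] = 0      # beam travelled through: free space
    grid[y][x1] = 1         # beam returned here: obstacle


# 10x10 map, everything unknown until a scan arrives.
grid = [[-1] * 10 for _ in range(10)]
update_ray(grid, 5, 0, 7)   # robot at (0, 5) sees a wall at (7, 5)
assert grid[5][3] == 0      # traversed cell is free
assert grid[5][7] == 1      # endpoint is occupied
assert grid[6][7] == -1     # untouched row stays unknown
```

Real gmapping additionally fuses odometry, handles arbitrary ray angles (Bresenham tracing), and maintains a particle set over robot poses; this sketch only shows the per-ray grid update.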
{"title":"Simulation of autonomous navigation of turtlebot robot system based on robot operating system","authors":"M. Ghazal, Murtadha Al-Ghadhanfari, N. Waisi","doi":"10.11591/eei.v13i2.6419","DOIUrl":"https://doi.org/10.11591/eei.v13i2.6419","url":null,"abstract":"Complex system science has recently shifted its focus to include modeling, simulation, and behavior control. An effective simulation software built on robot operating system (ROS) is used in robotics development to facilitate the smooth transition between the simulation environment and the hardware testing of control behavior. In this paper, we demonstrate how the simultaneous localization and mapping (SLAM) algorithm can be used to allow a robot to navigate autonomously. The Gazebo is used to simulate the robot, and Rviz is used to visualize the simulated data. The G-mapping package is used to create maps using collected data from a variety of sensors, including laser and odometry. To test and implement autonomous navigation, a Turtlebot was used in a Gazebo-generated simulated environment. In our opinion, additional study on ROS using these important tools might lead to a greater adoption of robotics tests performed, further evaluation automation, and efficient robotic systems.","PeriodicalId":37619,"journal":{"name":"Bulletin of Electrical Engineering and Informatics","volume":"3 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140353336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There are several methods to predict a woman's ovulation time, including the calendar system, basal body temperature, ovulation prediction kits, and OvuScope. This is the first study to predict the time of ovulation in women by detecting the fractal shape of the full ferning (FF) line pattern in saliva using pixel counting, box counting, and deep learning computer vision methods. The peak of a woman's ovulation in each menstrual cycle occurs when the ferning lines are most numerous or dense, a condition called FF. In this study, the computational results based on the visualization of the fractal shape of the salivary ferning line pattern using the pixel-counting method have an accuracy of 80%, while the fractal dimension achieved by box counting is 1.474. Using deep learning image classification, we obtain the highest accuracy of 100%, with a precision of 1.00, recall of 1.00, and F1-score of 1.00, on the pre-trained ResNet-18 model. Furthermore, visualization with the ResNet-34 model yields the highest number of patches, i.e., 586 patches (equal to 36,352 pixels), by applying fern-like line pattern detection with a window size of 8×8 pixels.
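The box-counting estimate of fractal dimension works by counting occupied boxes at several scales and fitting the slope of the log-log relation. A minimal sketch (the scales and test image are illustrative, not the study's configuration):

```python
import numpy as np


def box_counting_dimension(image, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting (fractal) dimension of a binary image:
    count boxes containing at least one foreground pixel at each scale,
    then fit the slope of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        h, w = image.shape
        # Tile the image into s x s boxes (cropping any remainder).
        boxed = image[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(int((boxed.sum(axis=(1, 3)) > 0).sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope


# Sanity check: a completely filled square is 2-dimensional.
filled = np.ones((64, 64), dtype=int)
assert abs(box_counting_dimension(filled) - 2.0) < 1e-6
```

A ferning pattern binarized from a saliva image would plug into the same function; a dimension around 1.47, as reported above, sits between a line (1.0) and a filled plane (2.0), consistent with a branching line pattern.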
{"title":"A novel women's ovulation prediction through salivary ferning using the box counting and deep learning","authors":"Heri Pratikno, Mohd Zamri Ibrahim, J. Jusak","doi":"10.11591/eei.v13i2.5847","DOIUrl":"https://doi.org/10.11591/eei.v13i2.5847","url":null,"abstract":"There are several methods to predict a woman's ovulation time, including using a calendar system, basal body temperature, ovulation prediction kit, and OvuScope. This is the first study to predict the time of ovulation in women by calculating the results of detecting the fractal shape of the full ferning (FF) line pattern in salivary using pixel counting, box counting, and deep learning for computer vision methods. The peak of a woman's ovulation every month in her menstrual cycle occurs when the number of ferning lines is the most numerous or dense, and this condition is called FF. In this study, the computational results based on the visualization of the fractal shape of the salivary ferning line pattern from the pixel-counting method have an accuracy of 80%, while the fractal dimensions achieved by the box-counting are 1.474. On the other hand, using the deep learning image classification, we obtain the highest accuracy of 100% with a precision value of 1.00, recall of 1.00, and F1-score 1.00 on the pre-trained network model ResNet-18. 
Furthermore, visualization of the ResNet-34 model results in the highest number of patches, i.e., 586 patches (equal to 36,352 pixels), by applying fern-like lines pattern detection with windows size 8x8 pixels.","PeriodicalId":37619,"journal":{"name":"Bulletin of Electrical Engineering and Informatics","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140354527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Lahame, H. Outzguinrimt, R. Oumghar, B. Bahani, M. Chraygane
This paper presents a new design of a magnetic shunt transformer for use in industrial microwave generators. The proposed transformer has a triangular shape and offers several advantages over existing transformer designs, including reduced volume and maintenance costs. We provide a detailed analysis of the transformer's dimensions and an equivalent model of the three-phase high-voltage power supply system. The results of this study have significant implications for industrial microwave generator design and could lead to more efficient and cost-effective systems. The resulting model comprises saturable inductors capable of accounting for the non-linear phenomena of saturation. The power supply is simulated in MATLAB/Simulink with a neuro-fuzzy ANFIS approach, and the results are compared against experimental validations of a single-phase reference power supply for a magnetron, validating the proposed power supply. Additionally, the simulation results demonstrate the effectiveness of the proposed design, which outperforms existing transformers in terms of volume, energy efficiency, and maintenance costs.
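The saturable inductors in the equivalent model can be illustrated with a simple current-dependent inductance. The functional form and parameters below are hypothetical, chosen only to show the qualitative behaviour (inductance collapsing as the shunt core saturates), not the paper's fitted characteristic:

```python
def saturable_inductance(i, L0=1.0, i_sat=2.0):
    """Illustrative saturable-inductor characteristic:
    L(i) = L0 / (1 + (i / i_sat)**2).
    L0 is the unsaturated inductance and i_sat the current at which
    the differential inductance has dropped to half (both made up)."""
    return L0 / (1.0 + (i / i_sat) ** 2)


assert saturable_inductance(0.0) == 1.0     # unsaturated: full L0
assert saturable_inductance(2.0) == 0.5     # at i_sat: half L0
assert saturable_inductance(20.0) < 0.01    # deep saturation: L collapses
```

In the actual equivalent circuit, each such non-linear inductance would be fitted to the measured magnetization curve of the shunt material before simulation.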
{"title":"A novel cost-effective power supply model for industrial appliances based on triangular magnetic shunt transformer design","authors":"M. Lahame, H. Outzguinrimt, R. Oumghar, B. Bahani, M. Chraygane","doi":"10.11591/eei.v13i2.6459","DOIUrl":"https://doi.org/10.11591/eei.v13i2.6459","url":null,"abstract":"This paper presents a new design of a magnetic shunt transformer for use in industrial microwave generators. The proposed transformer has a triangular shape and offers several advantages over existing transformer designs, including reduced volume and maintenance costs. We provide a detailed analysis of the transformer's dimensions and an equivalent model of the three-phase high voltage power supply system. The results of this study have significant implications for the field of industrial microwave generator design and could lead to the development of more efficient and costeffective systems. The resulting model is comprised of saturable inductors capable of accounting for the non-linear phenomena of saturation. The power supply is simulated using MATLAB/Simulink with a neuro-fuzzy ANFIS approach. The results are compared to experimental validations of a single-phase reference power supply for a magnetron, validating the proposed power supply. 
Additionally, the simulation results demonstrate the effectiveness of the proposed design, which outperforms existing transformers in terms of volume, energy efficiency and maintenance costs.","PeriodicalId":37619,"journal":{"name":"Bulletin of Electrical Engineering and Informatics","volume":"40 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140356597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The masked face recognition-based attendance management system is an important biometric attendance-tracking solution, especially in light of the COVID-19 pandemic. Despite the variety of methods and techniques for face detection and recognition, there is still a need for a system that can accurately recognize individuals while they are wearing a mask. This system has been designed to overcome the challenges of widespread mask use, which impacts the effectiveness of traditional face recognition-based attendance systems. The proposed system recognizes individuals even while they are wearing a mask, without requiring its removal. With its high compatibility and real-time operation, it can be easily integrated into schools and workplaces through an embedded system like the Jetson Nano or through conventional computers executing attendance applications. This innovative approach and its compatibility make it a desirable solution for organizations looking to improve their attendance-tracking process. Experimental results indicate that, using the maximum resources possible, the execution time needed on the Jetson Nano is 15 to 22 seconds and 14 to 18 seconds, respectively, and the average frame capture when at least one face is detected on the Jetson Nano is 3-4 frames.
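A common way to implement the recognition step on an embedded device is cosine-similarity matching of face embeddings against an enrolled gallery. This is an assumed design sketch: the paper does not specify this exact matcher, and the gallery, threshold, and 3-dimensional embeddings here are illustrative:

```python
import numpy as np


def identify(embedding, gallery, threshold=0.6):
    """Match a face embedding against enrolled embeddings by cosine
    similarity; return the best-matching name, or None if no enrolled
    face exceeds the threshold (unknown person)."""
    best_name, best_sim = None, threshold
    for name, ref in gallery.items():
        sim = float(np.dot(embedding, ref) /
                    (np.linalg.norm(embedding) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name


gallery = {"alice": np.array([1.0, 0.0, 0.0]),
           "bob":   np.array([0.0, 1.0, 0.0])}
assert identify(np.array([0.9, 0.1, 0.0]), gallery) == "alice"
assert identify(np.array([0.5, 0.5, 0.7]), gallery) is None   # unknown face
```

For masked faces, the embedding network would be trained (or fine-tuned) on masked examples so that the upper-face features dominate the embedding; the matching logic itself stays the same.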
{"title":"Portable smart attendance system on Jetson Nano","authors":"Edward Yose, Victor Victor, Nico Surantha","doi":"10.11591/eei.v13i2.6061","DOIUrl":"https://doi.org/10.11591/eei.v13i2.6061","url":null,"abstract":"The masked face recognition-based attendance management system is an important biometric-based attendance tracking solution, especially in light of the COVID-19 pandemic. Despite the use of various methods and techniques for face detection and recognition, there currently needs to be a system that can accurately recognize individuals while they are wearing a mask. This system has been designed to overcome the challenges of widespread mask use, impacting the effectiveness of traditional face recognition-based attendance systems. The proposed system uses an innovative method that recognizes individuals even while wearing a mask without the need for removal. With its high compatibility and real-time operation, it can be easily integrated into schools and workplaces through an embedded system like the Jetson Nano or conventional computers executing attendance applications. This innovative approach and its compatibility make it a desirable solution for organizations looking to improve their attendance-tracking process. 
The Experimental results indicates using maximum resources possible the execution time needed on Jetson Nano is 15 to 22 seconds and 14 to 18 seconds respectively and the average frame capture if there are at least one face detected on Jetson Nano is 3-4 frames.","PeriodicalId":37619,"journal":{"name":"Bulletin of Electrical Engineering and Informatics","volume":"14 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140356698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The recognition of human faces poses a complex challenge within the domains of computer vision and artificial intelligence. Emotions play a pivotal role in human interaction, serving as a primary means of communication. This manuscript aims to develop a robust recommendation system capable of identifying individual faces from rasterized images, encompassing features such as the eyes, nose, cheeks, lips, forehead, and chin. Human faces exhibit a wide array of emotions, some of which, including anger, sadness, happiness, surprise, fear, disgust, and neutrality, are universally recognizable. To achieve this objective, deep learning techniques are leveraged to detect objects containing human faces. Every human face exhibits common characteristics known as Haar features, which are used to extract feature values from images containing multiple elements. The process is executed in three distinct stages, starting from the initial image and its associated calculations. Real-time images from popular social media platforms such as Facebook are used as the dataset. Deep learning techniques offer superior results, at the cost of greater computational demands and more intricate design compared to classical computer vision methods using OpenCV. The implementation of deep learning is carried out using PyTorch, further enhancing the precision and efficiency of face recognition.
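Haar-like feature values are typically computed from an integral image, so that any rectangle sum costs O(1) regardless of its size. A minimal sketch of that computation (the detection cascade built on top of these features is omitted):

```python
import numpy as np


def integral_image(img):
    """Summed-area table: after this, any rectangle sum is O(1)."""
    return img.cumsum(axis=0).cumsum(axis=1)


def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] recovered from the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return int(total)


def haar_two_rect(ii, r0, c0, h, w):
    """Two-rectangle Haar feature: left-half sum minus right-half sum,
    large in magnitude where a vertical edge crosses the window."""
    half = w // 2
    return (rect_sum(ii, r0, c0, r0 + h, c0 + half)
            - rect_sum(ii, r0, c0 + half, r0 + h, c0 + w))


img = np.zeros((4, 4), dtype=int)
img[:, :2] = 5                               # bright left half, dark right half
ii = integral_image(img)
assert rect_sum(ii, 0, 0, 4, 4) == 40        # 8 pixels of value 5
assert haar_two_rect(ii, 0, 0, 4, 4) == 40   # strong vertical-edge response
```

Features such as the eye region being darker than the cheeks below it are exactly what these rectangle differences capture during face detection.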
{"title":"Mathematics for 2D face recognition from real time image data set using deep learning techniques","authors":"Ambika G. N., Yeresime Suresh","doi":"10.11591/eei.v13i2.5424","DOIUrl":"https://doi.org/10.11591/eei.v13i2.5424","url":null,"abstract":"The recognition of human faces poses a complex challenge within the domains of computer vision and artificial intelligence. Emotions play a pivotal role in human interaction, serving as a primary means of communication. This manuscript aims to develop a robust recommendation system capable of identifying individual faces from rasterized images, encompassing features such as eyes, nose, cheeks, lips, forehead, and chin. Human faces exhibit a wide array of emotions, with some emotions, including anger, sadness, happiness, surprise, fear, disgust, and neutrality, being universally recognizable. To achieve this objective, deep learning techniques are leveraged to detect objects containing human faces. Every human face exhibits common characteristics known as Haar features, which are employed to extract feature values from images containing multiple elements. The process is executed through three distinct stages, starting with the initial image and involving calculations. Real-time images from popular social media platforms like Facebook are employed as the dataset for this endeavor. The utilization of deep learning techniques offers superior results, owing to their computational demands and intricate design when compared to classical computer vision methods using OpenCV. 
The implementation of deep learning is carried out using PyTorch, further enhancing the precision and efficiency of face recognition.","PeriodicalId":37619,"journal":{"name":"Bulletin of Electrical Engineering and Informatics","volume":"15 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140352591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}