Energy Efficiency Parameters Evaluation for 5G Application
Pub Date: 2022-09-21 | DOI: 10.1109/ICOASE56293.2022.10075603
Fatimah H. Mohialdeen, Y. E. Mohammed Ali, F. Mahmood
The deployment of mobile telecommunication networks has increased dramatically in recent decades. This increase in the number of mobile devices and towers leads to an increase in consumed energy. Hence, the need for energy efficiency (EE) has increased to reduce cost and pollution. In this paper, the following parameters are studied to enhance EE: increasing the number of base station antennas, increasing the number of user equipments (UEs), and other parameters such as channel state information (CSI). The purpose of this study is to look into how this improvement might be achieved. Using MATLAB, this article analyzes and enhances EE using a mathematical model of fifth-generation (5G) massive multiple-input multiple-output (Massive-MIMO) wireless communication. The effectiveness of the EE analysis is demonstrated through simulation results, which show how different parameter selections affect the fundamental balance between EE and spectral efficiency (SE), or affect the EE alone. The results show that several parameters influence the EE-SE curve, such as the number of base station antennas, transmit bandwidth, circuit power, number of users, and the availability of CSI. Increasing the number of base station antennas is considered a simple way to increase the EE before the increase in circuit power becomes dominant. Increasing the number of antennas also reduces the impact of having imperfect CSI. The results show that increasing the number of antennas relative to the number of users from 4 to 10 does not increase EE, yet increases the SE by around 55%.
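To make the EE-SE relationship concrete, the following Python sketch evaluates a simple energy-efficiency model as the number of base station antennas grows. It is only an illustration: the bandwidth, circuit power figures, SNR, and the maximum-ratio-combining rate expression are assumptions, not the exact mathematical model or parameter values used in the paper.

```python
import numpy as np

# Minimal sketch (not the paper's exact model): evaluate the EE-SE trade-off for a
# massive-MIMO uplink cell as the number of base-station antennas M grows.
# All parameter values below are illustrative assumptions.
B = 20e6          # transmit bandwidth [Hz]
K = 10            # number of user equipments (UEs)
p_tx = 0.1        # transmit power per UE [W]
P_fix = 10.0      # fixed circuit power [W]
P_ant = 0.5       # circuit power per BS antenna [W]
snr = 10.0        # average per-UE SNR (linear)

for M in (16, 32, 64, 128, 256):
    # Sum spectral efficiency with maximum-ratio combining (Rayleigh fading,
    # perfect-CSI approximation): each UE sees an array gain of roughly (M - K).
    se = K * np.log2(1 + (M - K) * snr / K)          # [bit/s/Hz]
    throughput = B * se                               # [bit/s]
    power = K * p_tx + P_fix + M * P_ant              # total consumed power [W]
    ee = throughput / power                           # [bit/Joule]
    print(f"M={M:4d}  SE={se:6.1f} bit/s/Hz  EE={ee/1e6:6.2f} Mbit/J")
```

Running the sketch shows the qualitative behavior described above: SE keeps growing with M, while EE saturates and then falls once the per-antenna circuit power dominates.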
{"title":"Energy Efficiency Parameters Evaluation for 5G Application","authors":"Fatimah H. Mohialdeen, Y. E. Mohammed Ali, F. Mahmood","doi":"10.1109/ICOASE56293.2022.10075603","DOIUrl":"https://doi.org/10.1109/ICOASE56293.2022.10075603","url":null,"abstract":"The deployment of mobile telecommunication networks has increased dramatically in recent decades. This increase in the number of mobile devices, and towers yields to increase in consumed energy. Hence, the need for energy efficiency (EE) has increased to reduce cost and pollution. In this paper, the following parameters are studied to enhance EE: increasing the number of base station antennas, increasing the number of user equipment (UEs), and other parameters such as channel state information (CSI). The purpose of this study is to look into how improvement might be achieved. Using the MATLAB program, this article analyzes and enhances EE using a mathematical model in the fifth generation of wireless communication (5G) massive multiple-input multiple-output (Massive-MIMO). The EE effectiveness is demonstrated through simulation results and shows how different parameter selections affect the fundamental balance between EE and spectral efficiency (SE) or only on the EE. The results show that a couple of parameters enhance the EE-SE curve, such as the number of base station antenna, transmit bandwidth, circuit power, number of users, and the availability of CSI. The increase in the number of base station antennas is considered to be a simple solution to increase the EE before the increase in circuit power. Increasing the number of antennas, also, reduces the impact of having imperfect CSI. The results show an increasing number of antennas with respect to the number of users from 4 to 10 do not increase EE, yet increase the SE by around %55.","PeriodicalId":297211,"journal":{"name":"2022 4th International Conference on Advanced Science and Engineering (ICOASE)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133004950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicted of Software Fault Based on Random Forest and K-Nearest Neighbor
Pub Date: 2022-09-21 | DOI: 10.1109/ICOASE56293.2022.10075596
Mustafa Zaki Mohammed, I. Saleh
Software systems have become increasingly complex and adaptable in today's computing world. As a result, it is critical to track down and fix software design flaws on a regular basis. Software fault prediction in the early phases is useful for enhancing software quality and for reducing software testing time and expense; it is a technique for predicting problems using historical data. To anticipate software flaws from historical databases, several machine learning approaches are applied. This paper focuses on creating a predictor of software defects based on previous data. For this purpose, supervised machine learning techniques were utilized to forecast future software failures: K-Nearest Neighbor (KNN) and Random Forest (RF) were applied to defect data sets belonging to NASA's PROMISE repository. A set of performance measures, including accuracy, precision, recall, and F1 measure, was used to evaluate the performance of the models. The paper showed good performance of the RF model compared to the KNN model, with maximum and minimum accuracies of 99% and 88% on the MC1 and KC1 data sets, respectively. In general, the study's findings suggest that software defect metrics may be used to identify the problematic module, and that the RF model can be used to anticipate software errors.
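As a rough illustration of the evaluation pipeline described above, the following Python sketch trains KNN and RF classifiers and reports accuracy, precision, recall, and F1. The file name kc1.csv and the label column 'defective' are assumptions; the paper works with NASA PROMISE data sets such as MC1 and KC1, whose exact preprocessing is not given in the abstract.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Load a PROMISE-style defect data set (file name and label column are assumptions).
data = pd.read_csv("kc1.csv")
X, y = data.drop(columns=["defective"]), data["defective"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                    ("RF", RandomForestClassifier(n_estimators=100, random_state=42))]:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name,
          "acc=%.3f" % accuracy_score(y_test, pred),
          "prec=%.3f" % precision_score(y_test, pred),
          "rec=%.3f" % recall_score(y_test, pred),
          "f1=%.3f" % f1_score(y_test, pred))
```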
{"title":"Predicted of Software Fault Based on Random Forest and K-Nearest Neighbor","authors":"Mustafa Zaki Mohammed, I. Saleh","doi":"10.1109/ICOASE56293.2022.10075596","DOIUrl":"https://doi.org/10.1109/ICOASE56293.2022.10075596","url":null,"abstract":"Software systems have gotten increasingly complicated and adaptable in today's computer world. As a result, it's critical to track down and fix software design flaws on a regular basis. Software fault prediction in early phase is useful for enhancing software quality and for reducing software testing time and expense; it's a technique for predicting problems using historical data. To anticipate software flaws from historical databases, several machine learning approaches are applied. This paper focuses on creating a predictor to predict software defects, Based on previous data. For this purpose, a supervised machine learning techniques was utilized to forecast future software failures, K-Nearest Neighbor (KNN) and Random Forest (RF) applied technique applied to the defective data set belonging to the NASA's PROMISE repository. Also, a set of performance measures such as accuracy, precision, recall and f1 measure were used to evaluate the performance of the models. This paper showed a good performance of the RF model compared to the KNN model resulting in a maximum and minimum accuracy are 99%,88% on the MC1 and KC1 responsibly. In general, the study's findings suggest that software defect metrics may be used to determine the problematic module, and that the RF model can be used to anticipate software errors.","PeriodicalId":297211,"journal":{"name":"2022 4th International Conference on Advanced Science and Engineering (ICOASE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115689399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PSO Algorithm for Three Phase Induction Motor with V/F Speed Control
Pub Date: 2022-09-21 | DOI: 10.1109/ICOASE56293.2022.10075610
Qusay Hussein Mirdas, N. Yasin, N. Alshamaa
Because induction motors (IMs) are used in most industries, IM control is essential, and optimization approaches are becoming more common for improving the three-phase induction motor (TIM). In addition, Volt/Hz (V/f) control is utilized to minimize the harmonics level compared with other control and modulation approaches. This study addresses tuning the PI controller parameters for use with the TIM. To optimize the speed response of the TIM, the Particle Swarm Optimization (PSO) algorithm is used to adjust each parameter of the PI speed controller. The Kp and Ki parameters of the PI speed controller are optimized for TIM operation with V/f control by designing an appropriate PSO algorithm. The PI speed controller's performance on the TIM is assessed by measuring changes in speed and torque during speed-response events. With PSO tuning, the PI controller performs well in terms of overshoot, settling time, and steady-state error.
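The following Python sketch illustrates how PSO can tune the PI gains Kp and Ki. It stands in for, rather than reproduces, the paper's setup: the first-order plant used as the speed loop, the ITAE-style cost, and the PSO coefficients are all illustrative assumptions, whereas the paper tunes the controller for a three-phase induction motor under V/f control.

```python
import numpy as np

def cost(gains, t_end=2.0, dt=1e-3, tau=0.1, k_plant=1.0, ref=1.0):
    """ITAE-style cost of a PI loop around a simple first-order plant (assumed stand-in)."""
    kp, ki = gains
    y, integ, j = 0.0, 0.0, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        e = ref - y
        integ += e * dt
        u = kp * e + ki * integ                 # PI control law
        y += dt * (-y + k_plant * u) / tau      # first-order plant response
        j += t * abs(e) * dt                    # time-weighted absolute error
    return j

rng = np.random.default_rng(0)
n, dims = 20, 2
pos = rng.uniform(0.1, 20.0, (n, dims))         # particles = candidate (Kp, Ki) pairs
vel = np.zeros((n, dims))
pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(50):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, 50.0)
    vals = np.array([cost(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("Tuned gains: Kp=%.2f, Ki=%.2f" % tuple(gbest))
```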
{"title":"PSO Algorithm for Three Phase Induction Motor with V/F Speed Control","authors":"Qusay Hussein Mirdas, N. Yasin, N. Alshamaa","doi":"10.1109/ICOASE56293.2022.10075610","DOIUrl":"https://doi.org/10.1109/ICOASE56293.2022.10075610","url":null,"abstract":"Because induction motors are used in most industries, IM control is more essential, Optimization is used approaches are becoming more common for improving Three - Phase induction motor (TIM). In addition, the Volt/Hz (V/f) control is utilized to minimize the harmonics level of other control and modulation approaches. This study is about tuning the PI controller parameters for utilization in TIM. To optimize the speed response performance of the TIM, the Particle Swarm Optimization (PSO) algorithm is used to adjust each parameter of the PI speed controller. Kp and Ki of the PI speed controller parameters are optimized for TIM operation with V/ f Control by designing an appropriate PSO algorithm. The PI speed controller's performance on the TIM is measured by measuring changes in speed and torque under-speed response events. In PSO, the PI controller performs well in terms of overshoot, settling time, and steady-state error.","PeriodicalId":297211,"journal":{"name":"2022 4th International Conference on Advanced Science and Engineering (ICOASE)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123485695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DHFogSim: Smart Real-Time Traffic Management Framework for Fog Computing Systems
Pub Date: 2022-09-21 | DOI: 10.1109/ICOASE56293.2022.10075605
D. Abdullah, H. Mohammed
Clouds are the most powerful computation architecture; nevertheless, some applications are delay-sensitive and need real-time responses. Offloading tasks from the user device to the cloud takes a relatively long time and consumes network bandwidth. This motivated the appearance of fog computing. In fog computing, an additional layer falls between the user device layer and the cloud. Offloading tasks to the fog layer is faster and saves network bandwidth. Fog computing has spread widely, but it is difficult to build and test such systems in the real world. This led developers to use fog simulation frameworks to simulate and test their own systems. In this paper, we adopt a fog simulation framework that adds a smart agent layer between the user device and fog layers. The framework uses multilevel queues instead of a single queue at the Ethernet layer; these queues are scheduled according to weighted round robin, and tasks are dispatched to these queues according to the value of the Type of Service (ToS) bits, which fall in the second byte of the IP header. The value of the ToS bits is assigned by the smart agent layer according to task constraints. The framework's behavior was compared with the mFogSim framework, and the results show that the proposed framework significantly decreases the delay on both brokers and fog nodes. Furthermore, the packet drop count and packet error rate are slightly improved.
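The queueing idea is sketched below in Python: tasks are dispatched to one of several priority queues according to their ToS value and served with weighted round robin. The number of queues, the weights, and the ToS-to-queue mapping are assumptions for illustration; in the proposed framework the ToS values are assigned by the smart agent layer according to task constraints.

```python
from collections import deque

NUM_QUEUES = 4
WEIGHTS = [4, 3, 2, 1]                     # packets served per WRR cycle, per queue (assumed)
queues = [deque() for _ in range(NUM_QUEUES)]

def dispatch(task_id, tos):
    """Map the 8-bit ToS value onto one of the multilevel queues (index 0 = highest priority)."""
    level = min(tos // (256 // NUM_QUEUES), NUM_QUEUES - 1)
    queues[NUM_QUEUES - 1 - level].append(task_id)   # higher ToS -> higher-priority queue

def wrr_serve():
    """One weighted-round-robin cycle: pop up to WEIGHTS[i] tasks from each queue."""
    served = []
    for i, q in enumerate(queues):
        for _ in range(WEIGHTS[i]):
            if q:
                served.append(q.popleft())
    return served

# Example: a few tasks with different (hypothetical) ToS values
for tid, tos in [("t1", 224), ("t2", 32), ("t3", 160), ("t4", 0), ("t5", 224)]:
    dispatch(tid, tos)
print(wrr_serve())   # high-ToS tasks are served first within the cycle
```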
{"title":"DHFogSim: Smart Real-Time Traffic Management Framework for Fog Computing Systems","authors":"D. Abdullah, H. Mohammed","doi":"10.1109/ICOASE56293.2022.10075605","DOIUrl":"https://doi.org/10.1109/ICOASE56293.2022.10075605","url":null,"abstract":"Clouds are the most powerful computation architecture; nevertheless, some applications are delay sensitive and need real time responses. Offloading tasks from user device to the cloud will take relatively long time and consumes network bandwidth. This motivates the appearance of fog computing. In fog, computing additional layer falls between user device layer and the cloud. Offloading tasks to fog layer will be faster and save network bandwidth. Fog computing has spread widely, but it is difficult to build and test such systems in real word. This led the developers to use fog simulation frameworks to simulate and test their own systems. In this paper, we adopt fog simulation formwork, which adds smart agent layer between user device and fog layer. The framework uses multilevel queue instead of single queue at the Ethernet layer, these queues are scheduled according to weighted round robin and tasks dispatched to theses queues according to the value of Type of Service (ToS) bits which falls at the second byte inside the IP header. The value of ToS bits given by the smart agent layer according to take constraints. Framework behavior compared with mFogSim framework and the results shows that the proposed framework has significantly decrease the delay on both brokers and fog nodes. furthermore, packet drop count and packet error rate are slightly improved","PeriodicalId":297211,"journal":{"name":"2022 4th International Conference on Advanced Science and Engineering (ICOASE)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125467940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the Clustering Performance of the K-Means Algorithm for Non-linear Clusters
Pub Date: 2022-09-21 | DOI: 10.1109/ICOASE56293.2022.10075614
Naaman Omar, Adel Al-zebari, A. Şengur
K-means clustering is known to be the most traditional approach in machine learning and has been put to many different uses. However, it has difficulty with initialization and performs poorly for non-linear clusters. Several approaches have been offered in the literature to circumvent these restrictions. Kernel K-means (KK-M) is a variant of K-means that falls under this group. In this paper, a two-step approach is developed to increase the clustering performance of the K-means algorithm. A transformation procedure is applied in the first step, where the low-dimensional input space is transformed into a high-dimensional feature space. To this end, the hidden layer of a Radial Basis Function (RBF) network is used. The standard K-means method is used in the second step of our approach. We present experimental results comparing the proposed approach with KK-M on simulated data sets to assess its correctness. The results of the experiments show the efficiency of the proposed method: the clustering accuracy attained is higher than that of the KK-M algorithm. We also applied the proposed clustering algorithm to an image segmentation application, and a series of segmentation results is given accordingly.
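A minimal Python sketch of this two-step idea is given below: the data are mapped through an RBF hidden layer (Gaussian units centered at a random subset of the points) and standard K-means is then run in the resulting feature space. The number of centers and the gamma value are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel

# Non-linear clusters that plain K-means handles poorly.
X, y = make_moons(n_samples=400, noise=0.05, random_state=0)

# Step 1: RBF hidden-layer transform (Gaussian units at a random subset of the data).
rng = np.random.default_rng(0)
centers = X[rng.choice(len(X), size=50, replace=False)]
H = rbf_kernel(X, centers, gamma=10.0)          # hidden-layer activations, shape (400, 50)

# Step 2: standard K-means in the transformed feature space.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(H)
print("cluster sizes:", np.bincount(labels))
```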
{"title":"Improving the Clustering Performance of the K-Means Algorithm for Non-linear Clusters","authors":"Naaman Omar, Adel Al-zebari, A. Şengur","doi":"10.1109/ICOASE56293.2022.10075614","DOIUrl":"https://doi.org/10.1109/ICOASE56293.2022.10075614","url":null,"abstract":"K-means clustering is known to be the most traditional approach in machine learning. It's been put to a lot of different uses. However, it has difficulty with initialization and performs poorly for non-linear clusters. Several approaches have been offered in the literature to circumvent these restrictions. Kernel K-means (KK-M) is a type of K-means that falls under this group. In this paper, a two-stepped approach is developed to increase the clustering performance of the K-means algorithm. A transformation procedure is applied in the first step where the low-dimensional input space is transferred to a high-dimensional feature space. To this end, the hidden layer of a Radial basis function (RBF) network is used. The typical K-means method is used in the second part of our approach. We offer experimental results comparing the KK-M on simulated data sets to assess the correctness of the suggested approach. The results of the experiments show the efficiency of the proposed method. The clustering accuracy attained is higher than that of the KK-M algorithm. We also applied the proposed clustering algorithm on image segmentation application. A series of segmentation results were given accordingly.","PeriodicalId":297211,"journal":{"name":"2022 4th International Conference on Advanced Science and Engineering (ICOASE)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121100457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Review on Image Segmentation Methods Using Deep Learning
Pub Date: 2022-09-21 | DOI: 10.1109/ICOASE56293.2022.10075607
Nabeel N. Ali, N. Kako, A. Abdi
In recent years, the machine learning field has been inundated with a variety of deep learning methods. Different types of deep learning models, including recurrent neural networks (RNNs), convolutional neural networks (CNNs), adversarial neural networks (ANNs), and autoencoders, are successfully tackling challenging computer vision problems, including image detection and segmentation in unconstrained environments. Although image segmentation has received a lot of interest, several new deep learning methods have also emerged for object detection and recognition. An academic review of deep learning image segmentation methods is presented in this article. The major goal of this study is to offer a sound understanding of the basic approaches that have already made a substantial contribution to the domain of image segmentation over the years. The article describes the existing state of image segmentation and argues that deep learning has revolutionized this field. Afterwards, segmentation algorithms are scientifically classified and optimized, each with its own special contribution. With a variety of informative narratives, the reader may be able to understand the internal workings of these methods more quickly.
{"title":"Review on Image Segmentation Methods Using Deep Learning","authors":"Nabeel N. Ali, N. Kako, A. Abdi","doi":"10.1109/ICOASE56293.2022.10075607","DOIUrl":"https://doi.org/10.1109/ICOASE56293.2022.10075607","url":null,"abstract":"In recent years, the machine learning field has been inundated with a variety of deep learning methods. Different deep learning model types, including recurrent neural networks (RNNs), convolutional neural networks (CNNs), adversarial neural networks (ANNs), and autoencoders, are successfully tackling challenging computer vision problems including image detection and segmentation in an unconstrained environment. Although image segmentation has received a lot of interest, there have been several new deep learning methods discovered with regard to object detection and recognition. An academic review of deep learning image segmentation methods is presented in this article. In this study, the major goal is to offer a sensible comprehension of the basic approaches that have already made a substantial contribution to the domain of image segmentation throughout the years. The article describes the existing state of image segmentation, and goes on to make the argument that deep learning has revolutionized this field. Afterwards, segmentation algorithms have been scientifically classified and optimized, each with their own special contribution. With a variety of informative narratives, the reader may be able to understand the internal workings of these processes more quickly.","PeriodicalId":297211,"journal":{"name":"2022 4th International Conference on Advanced Science and Engineering (ICOASE)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134147489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Circular Dipole Nanoantenna with Improved Performance
Pub Date: 2022-09-21 | DOI: 10.1109/ICOASE56293.2022.10075600
A. A. Rasheed, Khalil H. Sayidmarie
Nanoantennas have attracted much attention because of their unique ability to concentrate light into subwavelength dimensions while enhancing the electric field via localized surface plasmon resonance. Engineering the shape and size of the nanoantenna mostly focuses on improving the confined field or altering the resonance wavelength. This study focuses on improving the absorption and scattering properties of a circular-dipole nanoantenna by inserting circular holes in the two arms of the dipole. The influence of the dipole parameters on its properties, such as resonance wavelength, reflection, and absorption, as well as on the electric field in the gap, was investigated. The proposed ring geometry can significantly increase the absorption while also inhibiting scattering, thus achieving an optimal operating state. The scattered power of a solid circular-dipole nanoantenna can be up to 85%, while the remaining 15% of the incident power is absorbed. It is shown that the absorbed coupled power in the hollow circular dipole can be increased to 55%. This property results in optimal plasmonic localization of the field in the gap of the dipole nanoantenna. This finding can be deployed in photovoltaics, thermoplastics, fluorescence microscopy, and biosensing applications.
{"title":"A Circular Dipole Nanoantenna with Improved Performance","authors":"A. A. Rasheed, Khalil H. Sayidmarie","doi":"10.1109/ICOASE56293.2022.10075600","DOIUrl":"https://doi.org/10.1109/ICOASE56293.2022.10075600","url":null,"abstract":"Nanoantennas have attracted much attention because of their unique ability to collect light into subwavelength dimensions while enhancing a high electric field via localized surface plasmon resonance. Engineering the shape and size of the nanoantenna mostly focuses on improving the confined field or altering the resonance wavelength. This study focuses on improving the absorption and scattering properties of a circular-dipole nanoantenna by inserting circular holes in the two arms of the dipole. The influence of the dipole parameters on its properties such as resonance wavelength, reflection, and absorption, as well as the electric field in the gap was investigated. The proposed ring geometry can significantly increase the absorption while also inhibiting scattering, thus achieving an optimal operating state. The scattered power of a solid circular dipole nanoantenna can be up to 85%, while the remaining 15% of the incident power is absorbed. It is shown that the absorbed coupled power in the hollow circular dipole can be increased to 55%. This property results in optimal plasmonic localization of the field in the gap of the dipole nanoantenna. This finding can be deployed in photovoltaics, thermoplastics, fluorescence microscopy, and biosensing applications.","PeriodicalId":297211,"journal":{"name":"2022 4th International Conference on Advanced Science and Engineering (ICOASE)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114257499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Use of Toulmin's Argumentation Model in Solving The Drug Conflict Problems
Pub Date: 2022-09-21 | DOI: 10.1109/ICOASE56293.2022.10075599
Hamzah Noori Fejer, Ali Hadi Hasan
The field of argumentation in Artificial Intelligence (AI) has witnessed great growth as an important cognitive approach for dealing with uncertain information and conflicting opinions. This has led to a number of interesting lines of research in this field and related fields, giving rise to computational models of argument as a promising research area. The remedy conflict problem is considered one of the challenges in the field of medicine worldwide. This paper makes use of Toulmin's argumentation model to deal with conflicting problems within the medical field. In addition, inference rules were used for associating a patient's symptoms and patient history (premises) with remedy use, eventually leading to a medication decision for the patient (claims). After that, several remedy features are used to form the support and the attack (pros and cons) for each remedy item. A decision is made during the qualifier phase in Toulmin's model about whether or not the drug should be used, based on the higher value of support or attack. The dataset consists of 200 patient samples for two heart diseases (hypertension and angina pectoris). It was collected from Iraqi educational hospitals and annotated by a team of experts working in the medical field. The performance achieved by the proposed model for hypertension and angina pectoris was 78% and 83%, respectively, using the confusion matrix method.
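A tiny Python sketch of the qualifier step is given below: each remedy accumulates support and attack scores from its features, and the remedy is recommended only if support outweighs attack. The feature names, weights, and patient facts are hypothetical illustrations, not the rules or data used in the paper.

```python
# Hypothetical remedy features with signed weights: positive = support (pro), negative = attack (con).
remedy_features = {
    "remedy_A": [("lowers_blood_pressure", +2), ("kidney_contraindication", -3)],
    "remedy_B": [("lowers_blood_pressure", +2), ("mild_side_effects", -1)],
}

def qualify(remedy, patient_facts):
    """Qualifier step: sum support and attack over the features present in the patient facts."""
    support = sum(w for f, w in remedy_features[remedy] if w > 0 and f in patient_facts)
    attack = sum(-w for f, w in remedy_features[remedy] if w < 0 and f in patient_facts)
    return ("use" if support > attack else "do not use"), support, attack

patient = {"lowers_blood_pressure", "kidney_contraindication"}   # hypothetical patient facts
for r in remedy_features:
    print(r, qualify(r, patient))
```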
{"title":"The Use of Toulmin's Argumentation Model in Solving The Drug Conflict Problems","authors":"Hamzah Noori Fejer, Ali Hadi Hasan","doi":"10.1109/ICOASE56293.2022.10075599","DOIUrl":"https://doi.org/10.1109/ICOASE56293.2022.10075599","url":null,"abstract":"The field of argumentation in Artificial Intelligence (AI) has witnessed a great increase an important cognitive to deal with uncertain information and conflicting opinions. This has led to a number of interesting lines of research in this field and related fields, giving rise to computational models of the argument as a promising research field. The remedies conflict problem is considered one of the challenges in the field of medicine the world. This paper makes use of Toulmin's argumentation model to deal with conflicting problems within the medicine field. In addition, inference rules were used for associating a patient's symptoms and patient history(premises) with remedies use, eventually leading to medications diagnosis for patient (claims). After that, several remedy features are used to compete for the support and the attack (pros and cons) for each remedy item. A decision is made during the qualifier phase in Toulmin's model about whether or not the drug should be used based on the highest value of support or attack. The dataset consists of 200 patients as samples for two heart diseases (hypertension, angina pectoris). It is collected from the Iraqi educational hospitals, annotated by a team of experts working in the medical field. The performance achieved in the proposed model in hypertension and angina pectoris diseases were 78% and 83%, respectively, using the confusion matrix method.","PeriodicalId":297211,"journal":{"name":"2022 4th International Conference on Advanced Science and Engineering (ICOASE)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114334090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating Masked Facial Datasets Using Dlib-Machine Learning Library
Pub Date: 2022-09-21 | DOI: 10.1109/ICOASE56293.2022.10075601
Waleed Ayad Mahdi, S. Q. Mahdi, Ali Al-Naji
In 2020, the COVID-19 pandemic spread globally, leading countries to impose health restrictions on people, including wearing masks, to prevent the spread of the disease. Wearing a mask significantly decreases the ability to distinguish faces because it conceals the main facial features. After the outbreak of the pandemic, the existing datasets became unsuitable because they did not contain images of people wearing masks. To address the shortage of large-scale masked face datasets, a method was developed to generate artificial masks and place them on the faces in an unmasked face dataset, producing a masked face dataset. Following the proposed method, masked faces are generated in two steps. First, the face is detected in the unmasked image, and the detected face image is aligned. Second, the mask is overlaid on the cropped face image using the dlib-ml library. Based on the proposed method, two masked face datasets, called masked-dataset-1 and masked-dataset-2, were created. Promising results were obtained when they were evaluated against the Labeled Faces in the Wild (LFW) dataset using two state-of-the-art facial recognition systems, FaceNet and ArcFace: the accuracies of the two systems were 96.1% and 97%, respectively, with masked-dataset-1, and 87.6% and 88.9%, respectively, with masked-dataset-2.
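The two-step generation procedure might look roughly like the following Python sketch, which detects and landmarks a face with dlib and overlays a mask template on the lower face region. The file names, the 68-landmark predictor, and the landmark indices used to place the mask are assumptions for illustration, not the paper's exact procedure.

```python
import cv2
import dlib
import numpy as np

# Assumed inputs: an unmasked face image, an RGBA mask template, and the standard
# dlib 68-landmark predictor file. None of these names come from the paper.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):
    shape = predictor(gray, rect)
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])

    # Lower-face region: jaw line (points 2-14) plus the nose bridge (point 28).
    x1, y1 = int(pts[2:15, 0].min()), int(pts[28, 1])
    x2, y2 = int(pts[2:15, 0].max()), int(pts[2:15, 1].max())

    mask = cv2.imread("mask.png", cv2.IMREAD_UNCHANGED)          # RGBA mask template
    mask = cv2.resize(mask, (x2 - x1, y2 - y1))
    alpha = mask[:, :, 3:4] / 255.0                               # per-pixel blend weight
    roi = img[y1:y2, x1:x2]
    img[y1:y2, x1:x2] = (alpha * mask[:, :, :3] + (1 - alpha) * roi).astype(np.uint8)

cv2.imwrite("face_masked.jpg", img)
```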
{"title":"Generating Masked Facial Datasets Using Dlib-Machine Learning Library","authors":"Waleed Ayad Mahdi, S. Q. Mahdi, Ali Al-Naji","doi":"10.1109/ICOASE56293.2022.10075601","DOIUrl":"https://doi.org/10.1109/ICOASE56293.2022.10075601","url":null,"abstract":"In 2020, the COVID-19 pandemic spread globally, leading to countries imposing health restrictions on people, including wearing masks, to prevent the spread of the disease. Wearing a mask significantly decreases distinguishing ability due to its concealment of the main facial features. After the outbreak of the pandemic, the existing datasets became unsuitable because they did not contain images of people wearing masks. To address the shortage of large-scale masked faces datasets, a developed method was proposed to generate artificial masks and place them on the faces in the unmasked faces dataset to generate the masked faces dataset. Following the proposed method, masked faces are generated in two steps. First, the face is detected in the unmasked image, and then the detected face image is aligned. The second step is to overlay the mask on the cropped face images using the dlib-ml library. Depending on the proposed method, two datasets of masked faces called masked-dataset-1 and masked-dataset-2 were created. Promising results were obtained when they were evaluated using the Labeled Faces in the Wild (LFW) dataset, and two of the state-of-the-art facial recognition systems for evaluation are FaceNet and ArcFace, where the accuracy of using the two systems was 96.1 and 97, respectively with masked-dataset-1 and 87.6 and 88.9, respectively with masked-dataset-2.","PeriodicalId":297211,"journal":{"name":"2022 4th International Conference on Advanced Science and Engineering (ICOASE)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114779694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fog Computing in 5G Mobile Networks: A Review
Pub Date: 2022-09-21 | DOI: 10.1109/ICOASE56293.2022.10075567
F. E. Samann, S. Ameen, Shavan K. Askar
The existing Internet infrastructure cannot meet the demands of the exponential growth in the data that users need to access. Therefore, Fog Computing (FC), the Internet of Things (IoT), and 5G are upgrading conventional data transfer with innovative solutions and intelligent data processing to improve performance. Fog computing is considered a central component in the growth of the new 5G networks and the Internet of Things. These advanced technologies allow the Internet to provide enhanced services through sensors that continually monitor a wide range of information. This paper reviews the most recent studies that implemented fog computing in a 5G environment by defining the essential services and network-oriented functionality. Moreover, the surveyed studies are discussed and assessed through summary tables with general remarks about the observed trends. The reviewed studies presented legitimate solutions for issues in vehicular networks and improved the current network architecture.
{"title":"Fog Computing in 5G Mobile Networks: A Review","authors":"F. E. Samann, S. Ameen, Shavan K. Askar","doi":"10.1109/ICOASE56293.2022.10075567","DOIUrl":"https://doi.org/10.1109/ICOASE56293.2022.10075567","url":null,"abstract":"The existing Internet infrastructure cannot meet the demands of the exponential growth in data users need to access. Therefore, Fog Computing (FC), Internet of Things (IoT), and 5G are upgrading conventional data transfer with innovative solutions and intelligently processing data to improve performance. Fog computing is considered a central component in the growth of the new 5G networks and the Internet of Things. These advanced technologies allow the Internet to provide enhanced services through sensors, continually monitoring a wide range of information. The paper reviews the most recent studies that implemented fog computing in a 5G environment by defining the essential services and network-oriented functionality. Moreover, the surveyed study is also discussed and assessed through sum-up tables with general remarks about the followed trends. The mentioned studies presented legitimate solutions for issues in the Vehicular Network and improved the current network architecture.","PeriodicalId":297211,"journal":{"name":"2022 4th International Conference on Advanced Science and Engineering (ICOASE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130071602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}