A Model Combining BDI Logic and Temporal Logics for Decision-Making in Emergency
Ferdaous Benrouba, R. Boudour
Pub Date: 2022-11-28  DOI: 10.15849/ijasca.221128.03
Abstract: Nowadays we face panic-inducing, unpleasant situations in which we are constrained to make crucial decisions within a limited time, and mixed emotions may affect those decisions, especially FEAR, which arises when unwanted or uncontrollable events are present in the environment. In recent years, fear modelling has been well researched, since this emotion is usually associated with one or more fundamental desires being at stake. Unfortunately, most existing models overlook the fact that FEAR does not arise in the same way in all agents. This paper proposes a new conceptual architecture with a new component that extends BDI logic with the emotion of FEAR, so that the resulting Emotional-BDI agents can better cope with extremely dynamic, unpleasant situations in their surroundings. We also describe how the emotional properties are verified using the NuSMV model checker. The proposed architecture confirms that NuSMV can be applied to verify emotional specifications and that agents capable of reasoning over emotions can be programmed; our experimental results indicate the viability and efficiency of the model. Keywords: Emotional-BDI, Model checking, NuSMV, CUDD, Unpleasant situations.
{"title":"A Model Combining BDI Logic and Temporal Logics for Decision-Making in Emergency","authors":"Ferdaous Benrouba, R. Boudour","doi":"10.15849/ijasca.221128.03","DOIUrl":"https://doi.org/10.15849/ijasca.221128.03","url":null,"abstract":"Abstract Nowadays, we are dealing with panic and unpleasant situations in which, we are constrained to make crucial decisions in a limited delay, due to the mixed emotions that may affect our decision, especially FEAR, this kind of emotion occurs when unwanted or uncontrollable events are present in the environment. These recent years, fear modelling has been well researched and since this emotion is usually associated with the fact that one or more fundamental desires are at stake Unluckily, most of these models miss that FEAR does not always occur similarly in all agents. This paper proposes a new conceptual architecture with a new component by extending BDI logic with the emotion of FEAR, so that the new Emotional-BDI agents may better cope with extremely dynamic unpleasant situations in their surroundings. We also address how we verify the emotional properties by employing a model checker NuSMV. The proposed architecture confirms that NuSMV can be applied to verify the emotional specifications we can program agents that are capable of reasoning over emotions, our experimental results indicate the viability and efficiency of our model. Keywords: Emotional-BDI, Model checking, NuSMV, CUDD, Unpleasant situations.","PeriodicalId":38638,"journal":{"name":"International Journal of Advances in Soft Computing and its Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47290101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applying Machine Learning- Supervised Learning Techniques for Tennis Players Dataset Analysis
M. Khder, S. Fujo
Pub Date: 2022-11-28  DOI: 10.15849/ijasca.221128.13
Abstract: ATP stands for the Association of Tennis Professionals, the primary governing body for male tennis players, formed in September 1972. A study was carried out on tennis players' datasets to apply supervised machine learning techniques that illustrate match data and make predictions. An appropriate dataset was chosen, data cleaning was performed to remove anomalies, the data was visualized via plotting methods in the R language, and supervised machine learning models were applied. The main models applied are linear regression and a decision tree, from which results and predictions were extracted. In the linear regression model, the correlation is calculated to find the relation between the dependent and independent variables, and results and predictions are extracted from the fitted model; in addition, three hypotheses are tested for the multiple linear regression model. The decision tree modelled best-of-3 and best-of-5 matches and predicted which match format would be considered best. Keywords: Machine Learning, supervised learning, linear regression, decision tree, R language, Tennis, ATP.
{"title":"Applying Machine Learning- Supervised Learning Techniques for Tennis Players Dataset Analysis","authors":"M. Khder, S. Fujo","doi":"10.15849/ijasca.221128.13","DOIUrl":"https://doi.org/10.15849/ijasca.221128.13","url":null,"abstract":"Abstract ATP Tennis stands for the “The Association of Tennis Professionals” which is the primary governing body for male tennis players. ATP was formed in Sep 1972 for professional tennis players. A study has been done on tennis players’ datasets to implement supervised machine learning techniques to illustrate match data and make predictions. An appropriate dataset has been chosen, data cleaning has been implemented to extract anomalies, data is visualized via plotting methods in R language and supervised machine learning models applied. The main models applied are linear regression and decision tree. Results and predictions have been extracted from the applied models. In the linear regression model, the correlation is calculated to find the relation between dependent and independent variables, furthermore the results and prediction are extracted from the linear regression model. Also, three hypotheses are applied for multiple linear regression model. The decision tree modeled the best of 3 or best of 5 sets of matches and predicted which set of matches would be considered best. Keywords: Machine Learning, supervised learning, linear regression, decision tree, R language, Tennis, ATP.","PeriodicalId":38638,"journal":{"name":"International Journal of Advances in Soft Computing and its Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44606114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Study and Comparison of Different 3D Reconstruction Methods Following Quality Criteria
A. Akdim, A. Mahdaoui, H. Roukhe, A. M. Hseini, A. Bouazi
Pub Date: 2022-11-28  DOI: 10.15849/ijasca.221128.09
Abstract: Producing a 3D image of a real object is a process that passes through two stages. The first is scanning the real object with a 3D scanner, which yields a 3D point cloud of the object. The second is the reconstruction step, in which the mesh representing the real object is built by means of an existing surface reconstruction method. Mesh reconstruction techniques can be grouped into two categories: combinatorial approaches and approaches that fit a predefined model. Many combinatorial methods rest on establishing relations between the points of a sample, while the second family approximates the sampled surface using predefined models built on global or local assumptions about the shape to be reconstructed. In this paper, a literature review and experimental study of existing 3D reconstruction methods are carried out, followed by a comparison of these methods based on the Frey criterion, which measures the quality of the produced surface, and on execution time. The experimental results show that, in terms of surface quality, the Ball Pivoting technique gives good results, whereas the Alpha Shapes method is the most relevant in terms of execution time. Keywords: 3D reconstruction, Delaunay triangulation, Alpha Shapes, Ball Pivoting Algorithm, Poisson Method, Frey Quality, RBF, MLS
{"title":"A Study and Comparison of Different 3D Reconstruction Methods Following Quality Criteria","authors":"A. Akdim, A. Mahdaoui, H. Roukhe, A. M. Hseini, A. Bouazi","doi":"10.15849/ijasca.221128.09","DOIUrl":"https://doi.org/10.15849/ijasca.221128.09","url":null,"abstract":"Abstract 3D image of a real object is a process that must be passed through two stages. The first is scanning real object by using 3D scanner, this step allows the acquisition of 3D point cloud of the object. The second is the reconstruction step, where the construction of the mesh that represents the real object is done. The surface reconstruction is carried out by means of an existing surface reconstruction method. Mesh reconstruction techniques can be grouped into two categories: the combinatorial approach and the approach by adjusting a predefined model. A large number of combinatorial methods have the principle of establishing relations between the points of a sample. The second approach is based on the idea of approximating the sampled surface using predefined models, built on global or local assumptions concerning the shape to be reconstructed. In this paper, a review of literature and experimental studies of 3d reconstruction methods, that exist in the literature, are realized then a comparison, between these methods based on Frey criterion that represents the quality of the produced surface and execution time. The experimental results show that in terms of surface quality, Ball Pivoting technique, presents a good result. However, alpha shapes method gives relevant results in execution time. Keywords: 3D reconstruction, Delaunay triangulation, Alpha Shapes, Ball Pivoting Algorithm, Poisson Method, Frey Quality, RBF, MLS","PeriodicalId":38638,"journal":{"name":"International Journal of Advances in Soft Computing and its Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48754111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Covid-19 and Tuberculosis Detection in X-Ray of Lung Images with Deep Convolutional Neural Network
Firda Ummah, D. Utari
Pub Date: 2022-11-28  DOI: 10.15849/ijasca.221128.01
Abstract: Tuberculosis is an infectious disease with symptoms similar to those of Covid-19, such as fever, cough, and shortness of breath. Based on existing cases, both diseases attack the lungs and can affect their shape, and both can be detected through a chest X-ray. In X-ray images of Covid-19 and Tuberculosis, both show ground-glass opacity and consolidation, so classifying the two diseases manually is tricky. One method that can be used for classification is the Convolutional Neural Network (CNN). The result of this research is an implementation of the CNN algorithm with four convolution–pooling blocks applied in sequence. The best architecture uses 50 epochs with the Adam optimizer, an image size of 100x100 pixels, a 3x3 kernel, and an 80%:20% data split. The classification accuracy on the test data is 85.4%. In addition, the Covid-19 label is predicted correctly with a probability of 95.85%, while the probability for the Tuberculosis label is 98%. Keywords: Covid-19, Tuberculosis, Image, CNN
{"title":"Covid-19 and Tuberculosis Detection in X-Ray of Lung Images with Deep Convolutional Neural Network","authors":"Firda Ummah, D. Utari","doi":"10.15849/ijasca.221128.01","DOIUrl":"https://doi.org/10.15849/ijasca.221128.01","url":null,"abstract":"Abstract Tuberculosis is an infectious disease with symptoms similar to those of Covid-19, such as fever, cough, and shortness of breath. Based on the existing cases, these two diseases attack the lungs and can affect their shape. Detection of this disease can be done through a chest X-ray. In the X-ray images of Covid-19 and Tuberculosis, both have ground-glass opacity and consolidation, thus classifying the two diseases is tricky if done manually. One method that can be used for classification is Convolutional Neural Network (CNN). The results obtained from this research are the implementation of the CNN algorithm with four convolutions which are convolution - pooling and repeated four times. The best architecture for parameters epoch 50 with the optimizer ADAM, image size 100x100 pixels, kernel size 3x3, and in the data scenario 80%:20%. The results of the level of accuracy of the classification process in the test data are 85.4%. In addition, the labeling prediction obtained is that the Covid-19 label is predicted to be correct with a probability percentage of 95.85%, while the probability percentage for the Tuberculosis label is 98%. Keywords: Covid-19, Tuberculosis, Image, CNN","PeriodicalId":38638,"journal":{"name":"International Journal of Advances in Soft Computing and its Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46386193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Comparison of Xen AND Hyper-V in Cloud Computing While Using Cryptosystems
Waleed K AbdulRaheem
Pub Date: 2022-11-28  DOI: 10.15849/ijasca.221128.02
Abstract: Cloud computing is an Internet-distributed computing model that transfers processing from personal computers or servers to cloud servers. Nowadays, the security and performance of cloud computing are considered challenging for both users and cloud service providers. Securing data on cloud servers ensures privacy, confidentiality, integrity, and availability, and cryptographic techniques are among the main methods of protecting data during storage and transmission. A hypervisor, also called a virtual machine monitor, is the software in a cloud that provides the abstraction layer; Hyper-V and Xen are two different hypervisors. In this paper, eight cryptographic algorithms are deployed on instances of the two hypervisors to measure hypervisor performance during encryption and decryption. CPU utilization and response time are measured while encrypting and decrypting different data types and sizes. Results show that Xen outperforms Hyper-V in most cases, on average by 15% in time duration and 6.1% in CPU utilization. Keywords: Cloud Computing, Virtualization, Hypervisors, Xen, Hyper-V, Cryptographic Algorithm
{"title":"Performance Comparison of Xen AND Hyper-V in Cloud Computing While Using Cryptosystems","authors":"Waleed K AbdulRaheem","doi":"10.15849/ijasca.221128.02","DOIUrl":"https://doi.org/10.15849/ijasca.221128.02","url":null,"abstract":"Abstract Cloud computing is internet-distributed computing model transferring processes from personal computers or servers to cloud servers. Nowadays, security and performance of cloud computing is considered challenging for both users and cloud service providers. Securing data on cloud computing servers will ensures privacy, confidentiality, integrity, and availability. Using cryptographic techniques is one of the major methods to ensure the data security while storing and transmission. Hypervisor in a cloud is a software that provides abstraction and called virtual machine monitor. Hyper-V and Xen are two different types of hypervisors. In this paper, eight different types of cryptographic algorithms are deployed by using the two hypervisors with instances, to measure the hypervisors performance while encryption and decryption. CPU utilization and response time are measured while encryption and decryption are having different data types and sizes. Results show that Xen is better than Hyper-V in most results on average at 15% and 6.1% for time duration and CPU utilization respectively. Keywords: Cloud Computing, Virtualization, Hypervisors, Xen, Hyper-V, Cryptographic Algorithm","PeriodicalId":38638,"journal":{"name":"International Journal of Advances in Soft Computing and its Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45286791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Variable Selection in High Dimensional Data with Interactions
Zuharah Jaafar, N. Ismail
Pub Date: 2022-07-20  DOI: 10.15849/ijasca.220720.11
Variable selection in high-dimensional settings has been a common research area in statistical machine learning, and in recent years numerous effective approaches have been created to deal with its challenges. In order to improve the prediction accuracy of the model for a given dataset, this study presents a two-step variable selection method for the case where pairwise interactions between the explanatory variables exist, choosing the smallest explanatory variable set (considering the interactions among them). A double-step method combining Random Forest and the Adaptive Elastic Net was examined on simulated data mimicking potential health effects of environmental contamination. Whether interactions were present in the data or not, the double-step approach was compared to the single-step adaptive elastic net method and to a two-step CART paired with the adaptive elastic net. The success of the strategies was measured using RMSE, R², and the number of variables chosen for the final model. The double-step RF+AENET approach produces a simple, constrained model and, despite the complex association between exposure variables, has the lowest false detection rate for null interactions. The screening and variable-reduction processes in the RF step effectively retain the set of variables correlated with the outcome. The double-step RF+AENET predicts better than a single technique and chooses a sparse model that is close to the true model. Thus, when there are pairwise interactions between variables in the simulated biological dataset, the double-step technique is the better method for model prediction and parameter estimation. Keywords: Adaptive Elastic Net, Random Forest, Variable Selection, CART.
{"title":"Variable Selection in High Dimensional Data with Interactions","authors":"Zuharah Jaafar, N. Ismail","doi":"10.15849/ijasca.220720.11","DOIUrl":"https://doi.org/10.15849/ijasca.220720.11","url":null,"abstract":"A common research area in statistical machine learning has been variable selection in high dimensional settings. In recent years, numerous effective approaches have been created to deal with these challenges. In order to improve the prediction accuracy of the model for the given dataset, this study sought to present a double approach variable selection method when pairwise interactions between the explanatory variables exist and to choose the smallest explanatory variable set (considering interactions among them). In this study, a double step method consolidating Random Forest and Adaptive Elastic Net was further examined to mimic potential health effects of environmental contamination. When there were existing interactions in the data or none at all, the double step approach was compared to the single-step adaptive elastic net method and two-step CART paired with the adaptive elastic net method. Using significant statistical tests like RMSE, R2 , and the quantity of the variable chosen for the final model, the success of the strategies was measured. The double step RF+AENET approach produces a simple, constrained model. Despite the complex association between exposure variables, it has the lowest false detection rate for null interactions. A set of variables that have correlation with the result are effectively retained by the screening and variable reduction processes in the RF step of the RF+AENET approach. The double step RF+AENET performs prediction better than a single technique and chooses a sparse model that is close to the true model. Thus, it can be said that when there are pairwise interactions between variables in the simulated biological dataset, the double step technique is a better method for model prediction and parameter estimation. Keywords: Adaptive Elastic Net, Random Forest, Variable Selection, CART.","PeriodicalId":38638,"journal":{"name":"International Journal of Advances in Soft Computing and its Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47920848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the Prediction of Heart Disease Using Ensemble Learning and Feature Selection
Priyanka Gupta, Seth D.D.
Pub Date: 2022-07-20  DOI: 10.15849/ijasca.220720.03
Heart (cardiovascular) disease is a main cause of mortality. The main objective of the proposed model is to increase the accuracy and reliability of predicting coronary heart disease. This paper attempts to predict the risk of heart disease more accurately using ensemble learning techniques. Moreover, feature selection and hyperparameter tuning have been implemented in this work, leading to a further increase in accuracy. Among the three ensemble techniques used in this work, stacking, majority voting and bagging, the improvements achieved in prediction accuracy are 2.11%, 7.42% and 0.14% respectively. Majority voting shows the best results in terms of increased prediction accuracy, reaching 98.38%. Keywords: Heart Disease, Ensemble Learning, Feature selection, Machine Learning
{"title":"Improving the Prediction of Heart Disease Using Ensemble Learning and Feature Selection","authors":"Priyanka Gupta, Seth D.D.","doi":"10.15849/ijasca.220720.03","DOIUrl":"https://doi.org/10.15849/ijasca.220720.03","url":null,"abstract":"Heart or cardiovascular disease is main cause of mortality. The main objective of developing the proposed model is to increase the accuracy and reliability of predicting the coronary heart disease. This paper attempts in predicting the risk of heart disease more accurately using the techniques of ensemble learning. Moreover, the techniques of feature selection and hyper parameter tuning has been implemented in this work leading to further increase in accuracy. Among the three ensemble techniques, stacking, majority voting and bagging used in this work, the improvement achieved in prediction accuracies is 2.11%, 7.42% and 0.14% respectively. Majority voting has shown the best results in terms of increase in prediction accuracies with an accuracy of 98.38%. Keywords: Heart Disease, Ensemble Learning, Feature selection, Machine Learning","PeriodicalId":38638,"journal":{"name":"International Journal of Advances in Soft Computing and its Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47009721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing ANOVA F-test and Sequential Feature Selection for Intrusion Detection Systems
Muhammad Siraj
Pub Date: 2022-07-20  DOI: 10.15849/ijasca.220720.13
An Intrusion Detection System (IDS) helps a computer system notify an administrator when an attack is targeting the network. However, some problems may delay this process, such as the long time needed to classify captured data that contains many features. One optimization approach is to select only the critical features, with the intention of increasing performance and reducing computation time. This research evaluates feature selection using the ANOVA F-test and Sequential Feature Selection (SFS), with performance measured by accuracy, specificity, and sensitivity over the NSLKDD, Kyoto2006, and UNSW_NB15 datasets. Using this approach, performance increases on average by more than 10% for the multiclass case and by about 5% for the binary case. It can be inferred that an optimal number of features can be obtained, where the best features are selected by SFS. Nevertheless, the method still needs to be improved before being implemented in a real system. Keywords: Network security, Network infrastructure, Intrusion Detection System, Data Security, Information Security.
{"title":"Analyzing ANOVA F-test and Sequential Feature Selection for Intrusion Detection Systems","authors":"Muhammad Siraj","doi":"10.15849/ijasca.220720.13","DOIUrl":"https://doi.org/10.15849/ijasca.220720.13","url":null,"abstract":"An Intrusion Detection System (IDS) helps the computer system notify an admin when an attack is coming to a network. However, some problems may delay this process, such as a long time caused by several features in the captured data to classify. One of the optimization approaches is to select those critical features. It is intended to increase performance and reduce computational time. This research evaluates feature selection methods using the ANOVA F-test and Sequential Feature Selection (SFS), whose performance is measured using some metrics: accuracy, specificity, and sensitivity over NSLKDD, Kyoto2006, and UNSW_NB15 datasets. Using that approach, the performance increases, on average, by more than 10% for multiclass; and about 5% for binary class. It can be inferred that an optimal number of features can be obtained, where the best features are selected by SFS. Nevertheless, this method still needs to be improved before being implemented in a real system. Keywords: Network security, Network infrastructure, Intrusion Detection System, Data Security, Information Security.","PeriodicalId":38638,"journal":{"name":"International Journal of Advances in Soft Computing and its Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42031562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New approach to Recognize Human Face Under Unconstrained Environment
M. Rifaee, Mohammad Al Rawajbeh, Basem AlOkosh, Farhan AbdelFattah
Pub Date: 2022-07-20  DOI: 10.15849/ijasca.220720.01
The human face is considered one of the most useful traits in biometrics, and it has been widely used in education, security, military and many other applications. However, most currently deployed face recognition systems assume ideal imaging conditions in order to capture fully featured images of sufficient quality for the recognition process. As unmasked faces have a considerable impact on the number of new infections in the era of the COVID-19 pandemic, a new unconstrained partial facial recognition method must be developed. In this research we propose a mask detection method based on the HOG (Histogram of Oriented Gradients) feature descriptor and an SVM (Support Vector Machine) to determine whether a face is masked or not. The proposed method was tested on 10,000 randomly selected images from the Masked Face-Net database and correctly classified 98.73% of the tested images. Moreover, to extract enough features from partially occluded face images, a new geometrical feature extraction algorithm based on the Contourlet transform is proposed. The method achieved 97.86% recognition accuracy when tested on 4,784 correctly masked face images from the Masked Face-Net database. Keywords: Facial Recognition, Unconstrained conditions, masked faces, HOG, Support Vector Machine.
{"title":"A New approach to Recognize Human Face Under Unconstrained Environment","authors":"M. Rifaee, Mohammad Al Rawajbeh, Basem AlOkosh, Farhan AbdelFattah","doi":"10.15849/ijasca.220720.01","DOIUrl":"https://doi.org/10.15849/ijasca.220720.01","url":null,"abstract":"Human face is considered as one of the most useful traits in biometrics, and it has been widely used in education, security, military and many other applications. However, in most of currently deployed face recognition systems ideal imaging conditions are assumed; to capture a fully featured images with enough quality to perform the recognition process. As the unmasked face will have a considerable impact on the numbers of new infections in the era of COVID-19 pandemic, a new unconstrained partial facial recognition method must be developed. In this research we proposed a mask detection method based on HOG (Histogram of Gradient) features descriptor and SVM (Support Vector Machine) to determine whether the face is masked or not, the proposed method was tested over 10000 randomly selected images from Masked Face-Net database and was able to correctly classify 98.73% of the tested images. Moreover, and to extract enough features from partially occluded face images, a new geometrical features extraction algorithm based on Contourlet transform was proposed. The method achieved 97.86% recognition accuracy when tested over 4784 correctly masked face images from Masked Face-Net database. Keywords: Facial Recognition, Unconstraint conditions, masked faces, HOG, Support Vector Machine.","PeriodicalId":38638,"journal":{"name":"International Journal of Advances in Soft Computing and its Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44037438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design Fractional-order PID Controllers for Single-Joint Robot Arm Model
Iqbal M. Batiha, Suhaib A. Njadat, Radwan M. Batyha, A. Zraiqat, Amer Dababneh, S. Momani
Pub Date: 2022-07-20  DOI: 10.15849/ijasca.220720.07
The major goal of this work is to present an optimal design of the Fractional-order Proportional-Integral-Derivative (FoPID) controller for single-joint arm dynamics. To meet this aim, the Particle Swarm Optimization (PSO) algorithm will be implemented to tune the parameters of the controller. Six FoPID controllers will be generated according to two approximation approaches for the Laplacian operators (the Continued Fraction Expansion (CFE) and Oustaloup approaches), coupled with three fitness functions (IAE, ITAE, ITSE). These controllers will be compared against each other to determine which one provides the closed-loop system of the single-joint robot arm model with a good rise time, a short settling time, and an excellent overshoot. Keywords: Fractional-order model, Oustaloup approximation, Continued Fraction Expansion.
{"title":"Design Fractional-order PID Controllers for Single-Joint Robot Arm Model","authors":"Iqbal M. Batiha, Suhaib A. Njadat, Radwan M. Batyha, A. Zraiqat, Amer Dababneh, S. Momani","doi":"10.15849/ijasca.220720.07","DOIUrl":"https://doi.org/10.15849/ijasca.220720.07","url":null,"abstract":"he major goal of the this work is to present an optimal design of the Fractional-order Proportional-Derivative-Integral (FoPID) controller for the single-joint arm dynamics. For meeting this aim, the Particle Swarm Optimization (PSO) algorithm will be implement to tune the parameters of such controller. Six FoPID-controllers will be generated in accordance with two kinds of approaches (Continued Fraction Expansion (CFE) and Outstaloup’s approaches) for Laplacian operators, coupled with three fitness functions (IAE, ITAE, ITSE). These controllers will be competed to each other to determine which one can provide to the closed-loop system of the single-joint robot arm model a good rise time, short settling time, and an excellent overshoot. Keywords: Fractional-order model; Oustaloup approximation, Continued","PeriodicalId":38638,"journal":{"name":"International Journal of Advances in Soft Computing and its Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47313298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}