Pub Date: 2023-06-01 | DOI: 10.11591/ijai.v12.i2.pp974-983
O. Alsaif, S. Hasan, A. H. Maray
Biometrics has become an important means of identifying people by their individual features. In this paper, gait recognition is based on a skeleton model, an important indicator in everyday activities, using the reliable Chinese Academy of Sciences (CASIA) class C silhouette database. Each video was divided into 75 frames for each of 20 persons (10 males and 10 females), giving 1,500 frames in total. After pre-processing, many features are extracted from the human silhouette images. For gender classification, this study uses the human walking skeleton. The proposed model is based on morphological operations on the silhouette images, and the common angle between the two legs is computed. Principal component analysis (PCA) is then applied for feature selection, reducing the data to the most useful information for gait analysis. Two classifiers, an artificial neural network (ANN) and Gaussian Bayes, are applied to distinguish male from female. The experimental results for the suggested method give an accuracy of about 95.5% for the ANN and 75% for Gaussian Bayes. Gender classification using the ANN is therefore about 20 percentage points more accurate than with the Gaussian Bayes technique, giving the ANN superior recognition performance.
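The pipeline above (feature vectors → PCA reduction → Gaussian Bayes or ANN) can be sketched in a few lines. This is a minimal illustration on synthetic stand-in data, not the CASIA leg-angle features; the PCA and classifier are hand-rolled for self-containment.

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Project rows of X onto the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

class GaussianBayes:
    """Gaussian naive Bayes: per-class mean/variance, log-likelihood scoring."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(x|c) + log p(c), assuming independent Gaussian features
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

# Synthetic gait-style features: two slightly shifted Gaussian clusters
rng = np.random.default_rng(0)
males = rng.normal(0.0, 1.0, size=(100, 8))
females = rng.normal(1.0, 1.0, size=(100, 8))
X = np.vstack([males, females])
y = np.array([0] * 100 + [1] * 100)

Z = pca_fit_transform(X, n_components=3)   # dimensionality reduction
clf = GaussianBayes().fit(Z, y)
acc = (clf.predict(Z) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In the paper the Gaussian Bayes classifier is compared against an ANN on the same reduced features; here only the Bayes branch is shown.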
Title: Using skeleton model to recognize human gait gender
Pub Date: 2023-06-01 | DOI: 10.11591/ijai.v12.i2.pp543-551
Nurfarawahida Ramly, Mohd Saifullah Rusiman, Muhammad Ammar Shafi, S. S., F. Mohamad Hamzah, Ozlem Gurunlu Alma
Regression analysis is a popular tool in data analysis, whereas fuzzy regression is typically used to analyze uncertain and imprecise data. In industry, companies often struggle to predict future manufacturing income, so a new modeling approach is needed. This article analyzed manufacturing income using a multiple linear regression (MLR) model and the fuzzy linear regression (FLR) models proposed by Tanaka and by Zolfaghari, involving 9 explanatory variables. To find the optimal FLR model, the degree of fitting (H) was adjusted between 0 and 1. The performance of the three methods was measured using the mean square error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The analysis showed that FLR with Zolfaghari's model and a degree of fitting of 0.025 outperformed both the MLR model and FLR with Tanaka's model, yielding the smallest error values. In conclusion, manufacturing income is directly proportional to 6 of the independent variables and inversely proportional to the other 3. The model is suitable for predicting future manufacturing income.
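The three error measures used to compare the models are standard; for reference, a minimal implementation (the income figures below are made up for illustration):

```python
def mse(actual, pred):
    """Mean square error."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean absolute percentage error; assumes no actual value is zero."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

income = [120.0, 135.0, 150.0]     # hypothetical observed incomes
predicted = [118.0, 140.0, 147.0]  # hypothetical model predictions
print(mse(income, predicted), mae(income, predicted), mape(income, predicted))
```

The model with the smallest values across all three measures (here, Zolfaghari's FLR at H = 0.025) is preferred.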
Title: An adjustment degree of fitting on fuzzy linear regression model toward manufacturing income
Pub Date: 2023-06-01 | DOI: 10.11591/ijai.v12.i2.pp755-764
Sarah Dahir, A. El Qadi
The enormous size of the web and the vagueness of the terms used to formulate queries still pose a huge problem for user satisfaction. To address it, queries need to be disambiguated based on their context. One well-known technique for enhancing the effectiveness of information retrieval (IR) is query expansion (QE), which reformulates the initial query by adding similar terms that help retrieve more relevant results. In this paper, we propose a new semantic QE approach based on a modified Concept2vec model using linked data. The novelty of our work is the use of query-dependent linked data from DBpedia as training data for the Concept2vec skip-gram model. We considered only the top feedback documents, and rather than using them directly to generate embeddings, we used their interlinked data instead. We also used linked-data attributes with long values, e.g., "dbo:abstract", as training data for the neural network models, and extracted from them the concepts valuable for QE. Our experiments on the Associated Press collection showed that retrieval effectiveness can be much improved when a skip-gram model is used along with a DBpedia feature, and we demonstrated significant improvements over other approaches.
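Once concept embeddings are trained, expansion reduces to a nearest-neighbour lookup in vector space. A toy sketch (the vectors below are invented stand-ins, not real Concept2vec output from DBpedia):

```python
import numpy as np

# Toy concept embeddings standing in for vectors trained on "dbo:abstract" text
embeddings = {
    "car":     np.array([0.9, 0.1, 0.0]),
    "vehicle": np.array([0.8, 0.2, 0.1]),
    "engine":  np.array([0.7, 0.0, 0.3]),
    "banana":  np.array([0.0, 0.9, 0.1]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def expand_query(term, k=2):
    """Return the k concepts most similar to the query term."""
    scores = {w: cosine(embeddings[term], v)
              for w, v in embeddings.items() if w != term}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(expand_query("car"))  # nearest concepts get appended to the query
```

In the paper's setting the vocabulary comes from the interlinked DBpedia data of the top feedback documents rather than a fixed toy dictionary.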
Title: Query expansion based on modified Concept2vec model using resource description framework knowledge graphs
Pub Date: 2023-06-01 | DOI: 10.11591/ijai.v12.i2.pp505-513
T. Zaman, Elaf Khalid Alharbi, Aeshah Salem Bawazeer, Ghala Abdullah Algethami, Leen Abdullah Almehmadi, Taif Muhammed Alshareef, Y. Alotaibi, Hosham Mohammed Osman Karar
The sudden arrival of COVID-19 called for new technologies to manage the healthcare system and reduce the burden of patients on hospitals. Artificial intelligence (AI), which uses computers to model intelligent behavior, became an important choice. Various AI applications contributed substantially to healthcare management, delivering quick medical consultations and services to a wide variety of patients. These technological developments played significant roles in detecting COVID-19 cases, monitoring them, and forecasting the future course of the pandemic. AI is applied to mimic the functioning of human intelligence, and AI techniques were applied to examination, prediction, analysis, and the tracking of patients and projected outcomes. AI also played a significant role in the development of vaccines to prevent COVID-19. This study is therefore an attempt to understand the major role of AI in healthcare institutions through urgent decision-making techniques that greatly helped to manage and control the spread of COVID-19.
Title: Artificial intelligence: the major role it played in the management of healthcare during COVID-19 pandemic
Pub Date: 2023-06-01 | DOI: 10.11591/ijai.v12.i2.pp656-666
Salah Mortada, Y. Yusof
The vehicle routing problem with time windows (VRPTW) is a combinatorial problem that focuses on choosing routes for a limited number of vehicles to serve a group of customers within a restricted period. Meta-heuristic algorithms are successful techniques for the VRPTW, and in this study the existing modified artificial bee colony (MABC) algorithm is revised to provide an improved solution. One drawback of the MABC algorithm is its inability to perform wide exploration: a new solution that is produced randomly and swapped with the best solution when the previous solution can no longer be improved is prone to being trapped in local optima. Hence, this study proposes a perturbed MABC (P-MABC) that addresses the problem of local optima. P-MABC deploys five types of perturbation operators that improve abandoned solutions by changing the customers in the solution. Experimental results show that the proposed P-MABC algorithm requires fewer vehicles and a shorter travelled distance than MABC. The P-MABC algorithm can be used to improve the search process of other population-based algorithms and can be applied to the VRPTW in domains such as food distribution.
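To make the idea of "changing customers in the solution" concrete, here is one plausible perturbation operator, a random inter-route customer swap. The paper uses five operators; this sketches only the swap idea, on a hand-made solution.

```python
import random

def swap_between_routes(routes, rng=random):
    """Perturb a VRPTW solution by swapping one random customer between
    two randomly chosen routes. Returns a new solution; input is untouched."""
    new = [r[:] for r in routes]
    r1, r2 = rng.sample(range(len(new)), 2)
    if new[r1] and new[r2]:
        i = rng.randrange(len(new[r1]))
        j = rng.randrange(len(new[r2]))
        new[r1][i], new[r2][j] = new[r2][j], new[r1][i]
    return new

random.seed(1)
routes = [[1, 2, 3], [4, 5], [6, 7, 8]]   # three vehicle routes, customers 1-8
perturbed = swap_between_routes(routes)
print(perturbed)
```

A full implementation would re-check time-window and capacity feasibility after each perturbation before accepting the new solution.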
Title: An improved artificial bee colony with perturbation operators in scout bees' phase for solving vehicle routing problem with time windows
Pub Date: 2023-06-01 | DOI: 10.11591/ijai.v12.i2.pp667-677
S. Rashid, Mustafa Maad Hamdi, L. Audah, M. A. Jubair, M. H. Hassan, M. Abood, S. Mostafa
A vehicular ad-hoc network (VANET) is dynamic and supports various noteworthy applications in intelligent transportation systems (ITS). In general, routing overhead is high in VANETs because of their properties, so this issue needs to be handled to improve VANET performance; collisions also occur owing to the network's dynamic nature. Developing a multi-constrained network with a high quality of forwarding (QoF) remains immensely complex. To solve these difficulties, and especially to control congestion, this paper introduces an enhanced genetic algorithm-based lion optimization for a QoF-based routing protocol (EGA-LOQRP) in the VANET. The lion optimization routing protocol (LORP) is an optimization-based routing protocol that can manage a network with a huge number of vehicles. An enhanced genetic algorithm (EGA) is employed to find the best possible path for data transmission that meets the QoF, resulting in low packet loss, delay, and energy consumption. Exhaustive simulation tests demonstrate that the EGA-LOQRP routing protocol performs effectively under congestion and QoS stress compared with previous routing protocols such as ad hoc on-demand distance vector (AODV), ant colony optimization AODV (ACO-AODV), and traffic-aware segment AODV (TAS-AODV).
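A genetic or lion-optimization search needs a fitness function that scores candidate paths on the QoF criteria named above. A minimal sketch, with weights and metric names chosen purely for illustration (the paper does not publish this exact form):

```python
def qof_fitness(path_metrics, weights=(0.4, 0.3, 0.3)):
    """Score a candidate path by weighted packet loss, delay, and energy use.
    Lower is better. Weights are illustrative, not taken from the paper."""
    w_loss, w_delay, w_energy = weights
    return (w_loss * path_metrics["loss"]
            + w_delay * path_metrics["delay"]
            + w_energy * path_metrics["energy"])

# Two hypothetical candidate paths with normalized metrics
candidates = [
    {"loss": 0.10, "delay": 0.30, "energy": 0.20},
    {"loss": 0.05, "delay": 0.40, "energy": 0.25},
]
best = min(candidates, key=qof_fitness)
print(best)
```

The optimizer's selection, crossover, and mutation steps would then evolve the population of paths toward lower fitness values.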
Title: A collaborated genetic with lion optimization algorithms for improving the quality of forwarding in a vehicular ad-hoc network
Pub Date: 2023-06-01 | DOI: 10.11591/ijai.v12.i2.pp912-920
Meryem Chaabi, Mohamed Hamlich, Moncef Garouani
To meet customer expectations and remain competitive, manufacturers constantly try to improve their quality-control systems, so there is increasing demand for automatic defect-detection solutions. The biggest issue in building such systems, however, is the imbalance of industrial datasets: defect-free samples far outnumber defective ones, owing to the continuous-improvement approaches adopted by manufacturing companies. We therefore propose an automatic defect-detection system based on one-class classification (OCC), which involves only normal samples during training. It consists of three sub-models: first, a convolutional autoencoder serves as a latent-feature extractor; the extracted feature vectors are then reduced in dimensionality with principal component analysis (PCA); finally, the reduced-dimensional data are used to train a one-class classifier, support vector data description (SVDD). During the test phase, both normal and defective images are used: the first two stages of the trained model generate a low-dimensional feature vector, and the SVDD classifies the new input as defect-free or defective. The approach is evaluated on the carpet images of the industrial inspection dataset MVTec anomaly detection (MVTec AD), training on normal images only. The results show that the proposed method outperforms state-of-the-art methods.
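The latent features → PCA → one-class decision chain can be sketched compactly. Below, random vectors stand in for autoencoder latents, and a centroid-plus-radius threshold stands in for SVDD (SVDD likewise fits a minimal enclosing hypersphere around the normal class, though via a kernelized optimization); everything here is an illustrative assumption, not the paper's trained model.

```python
import numpy as np

def fit_pca(X, k):
    """Fit PCA: return the data mean and the top-k principal directions."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def transform(X, mean, comps):
    return (X - mean) @ comps.T

# Train on NORMAL samples only (stand-ins for autoencoder latent vectors)
rng = np.random.default_rng(42)
normal = rng.normal(0, 1, size=(200, 16))
mean, comps = fit_pca(normal, k=4)
Z = transform(normal, mean, comps)
center = Z.mean(axis=0)
radius = np.quantile(np.linalg.norm(Z - center, axis=1), 0.95)

def is_defect(x):
    """Flag a sample as defective if it falls outside the learned sphere."""
    z = transform(x[None, :], mean, comps)[0]
    return bool(np.linalg.norm(z - center) > radius)

defect = rng.normal(10, 1, size=16)   # a far-off sample plays the defect
print(is_defect(normal[0]), is_defect(defect))
```

Only normal data fixes the sphere, mirroring the OCC training regime; test-time inputs of either class are then scored against it.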
Title: Product defect detection based on convolutional autoencoder and one-class classification
Glaucoma is a disease that affects the optic nerve and, over time, can lead to loss of vision; it is known as the 'silent thief of sight'. There are several ways the disease can be treated if it is detected at an early stage. No technology, including artificial intelligence, can replace a doctor. However, it is possible to develop a model based on several classical image-processing algorithms, combined with artificial intelligence, that detects the onset of glaucoma from certain parameters of the retinal fundus. Such a model would play an important role in early detection of the disease and would assist the doctor. Traditional methods of detecting glaucoma, efficient as they may be, are usually expensive; a machine-learning approach that diagnoses from fundus images and accurately classifies severity can be considered efficient. Here we propose a support vector machine (SVM) method to segregate the data, train the models on a high-end graphics processing unit (GPU), and augment the convex hull approach to boost the accuracy of the image-processing mechanisms while distinguishing the different stages of glaucoma. A web application for the screening process has also been adopted.
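The convex hull step mentioned above is a classical geometric computation: given boundary points of a segmented region (e.g., the optic disc), the hull gives a clean outline whose area can feed downstream measurements. A self-contained sketch with hypothetical pixel coordinates (the segmentation itself is out of scope here):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(hull):
    """Polygon area via the shoelace formula."""
    return abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                   - hull[(i + 1) % len(hull)][0] * hull[i][1]
                   for i in range(len(hull)))) / 2

# Hypothetical boundary pixels of a segmented optic-disc region
disc = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
hull = convex_hull(disc)
print(hull, hull_area(hull))  # interior points do not enlarge the hull
```

Region properties derived from such hulls (area, extent) are the kind of feature an SVM stage could then classify.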
Title: Machine learning classifiers for detection of glaucoma
Authors: Reshma Verma, Lakshmi Shrinivasan, Basvaraj Hiremath
Pub Date: 2023-06-01 | DOI: 10.11591/ijai.v12.i2.pp806-814
Pub Date: 2023-06-01 | DOI: 10.11591/ijai.v12.i2.pp776-784
Rico Kurniawan, B. Utomo, K. Siregar, K. Ramli, B. Besral, Ruddy J. Suhatril, Okky Assetya Pratiwi
Early risk prediction and appropriate treatment are believed to be able to delay the onset of hypertension and its attendant conditions. Many hypertension prediction models have been developed around the world, but they cannot be generalized directly to all populations, including the Indonesian population. This study aimed to develop and validate a hypertension risk-prediction model using machine learning (ML). Modifiable risk factors are used as predictors, while the target variable is hypertension status. The study compared several machine-learning algorithms, namely decision tree, random forest, gradient boosting, and logistic regression, to build the prediction model. Several parameters, including the area under the receiver operating characteristic curve (AUC), classification accuracy (CA), F1 score, precision, and recall, were used to evaluate the models. Most of the predictors used in this study were significantly correlated with hypertension. The logistic regression algorithm showed the best parameter values, with an AUC of 0.829, CA of 89.6%, recall of 0.896, precision of 0.878, and F1 score of 0.877. ML offers the ability to build a quick hypertension-screening model from non-invasive factors.
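Of the algorithms compared, logistic regression performed best; its core is simple enough to sketch from scratch. This is an illustrative gradient-descent fit on synthetic risk-factor data, not the authors' pipeline (real work would use a tuned library implementation and proper train/test splits):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lr=0.1, epochs=500):
    """Plain batch gradient descent on the logistic loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad = p - y                      # dLoss/dlogit
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Synthetic "modifiable risk factors" with a binary hypertension label
rng = np.random.default_rng(7)
X = rng.normal(0, 1, size=(300, 4))
signal = X @ np.array([1.5, -1.0, 0.8, 0.0])
y = (signal + rng.normal(0, 0.5, 300) > 0).astype(float)

w, b = fit_logreg(X, y)
acc = ((sigmoid(X @ w + b) > 0.5) == y.astype(bool)).mean()
print(f"accuracy: {acc:.2f}")
```

The fitted probabilities, rather than the hard 0/1 labels, are what a screening tool would report, alongside AUC-style evaluation.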
From this study, we estimate that 89.6% of people with elevated blood pressure obtained on home blood-pressure measurement will show clinical hypertension.
Title: Hypertension prediction using machine learning algorithm among Indonesian adults
Pub Date : 2023-06-01 DOI: 10.11591/ijai.v12.i2.pp627-640
Hicham Benradi, A. Chater, A. Lasfar
Facial recognition technology has been used in many fields, such as security, biometric identification, robotics, video surveillance, health, and commerce, due to its ease of implementation and minimal data-processing time. However, this technology is affected by variations such as pose, lighting, and occlusion. In this paper, we propose a new approach to improve the accuracy rate of face recognition in the presence of variation or occlusion by combining feature extraction using histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), Gabor filters, and the Canny contour detector with a convolutional neural network (CNN) architecture, tested with several combinations of activation function (Softmax and Sigmoid) and training optimizer (Adam, Adamax, RMSprop, and stochastic gradient descent (SGD)). To this end, preprocessing was performed on two databases, the ORL database of faces and the Sheffield face database; features were then extracted with the techniques mentioned above and passed to our CNN architecture. Our simulations show that the SIFT+CNN combination performs best, reaching an accuracy rate of up to 100% in the presence of variations.
{"title":"A hybrid approach for face recognition using a convolutional neural network combined with feature extraction techniques","authors":"Hicham Benradi, A. Chater, A. Lasfar","doi":"10.11591/ijai.v12.i2.pp627-640","DOIUrl":"https://doi.org/10.11591/ijai.v12.i2.pp627-640","url":null,"abstract":"Facial recognition technology has been used in many fields, such as security, biometric identification, robotics, video surveillance, health, and commerce, due to its ease of implementation and minimal data-processing time. However, this technology is affected by variations such as pose, lighting, and occlusion. In this paper, we propose a new approach to improve the accuracy rate of face recognition in the presence of variation or occlusion by combining feature extraction using histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), Gabor filters, and the Canny contour detector with a convolutional neural network (CNN) architecture, tested with several combinations of activation function (Softmax and Sigmoid) and training optimizer (Adam, Adamax, RMSprop, and stochastic gradient descent (SGD)). To this end, preprocessing was performed on two databases, the ORL database of faces and the Sheffield face database; features were then extracted with the techniques mentioned above and passed to our CNN architecture. 
Our simulations show that the SIFT+CNN combination performs best, reaching an accuracy rate of up to 100% in the presence of variations.","PeriodicalId":52221,"journal":{"name":"IAES International Journal of Artificial Intelligence","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41565333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
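The HOG descriptor named in this abstract summarizes an image region by a histogram of its gradient orientations, which then serves as a feature vector for a classifier (a CNN in the paper). A minimal single-cell sketch of that idea, stdlib-only; the tiny 4×4 "image" and the 9-bin choice are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the HOG idea: bin unsigned gradient orientations (0-180 degrees),
# weighted by gradient magnitude, into a normalized histogram for one cell.
import math

def hog_histogram(image, n_bins=9):
    """L1-normalized orientation histogram over the interior of a 2D grid."""
    h, w = len(image), len(image[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]   # central differences
            gy = image[y + 1][x] - image[y - 1][x]
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            hist[int(angle / 180.0 * n_bins) % n_bins] += magnitude
    total = sum(hist) or 1.0
    return [v / total for v in hist]

if __name__ == "__main__":
    # A vertical edge: intensity jumps left-to-right, so all gradient energy
    # lands in the 0-degree (horizontal-gradient) bin.
    img = [[0, 0, 10, 10]] * 4
    print(hog_histogram(img))
```

A full HOG pipeline tiles the image into many such cells and concatenates their block-normalized histograms; the resulting vector is what a downstream classifier, such as the CNN described here, consumes.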