M. Arhami, Anita Desiani, S. Yahdin, Ajeng Islamia Putri, Rifkie Primartha, Husaini Husaini
Diabetic retinopathy is a complication of diabetes that causes abnormalities in the retinal blood vessels; these abnormalities can lead to blurry vision and blindness. Automatic segmentation of blood vessels in retinal images can detect such abnormalities, yielding faster and more accurate results. This paper proposes an automatic blood-vessel segmentation method that combines Otsu thresholding with image-enhancement techniques: CLAHE is combined with the top-hat transformation to improve image quality. The study used the DRIVE dataset, whose retinal images were captured with a fundus camera. CLAHE and the top-hat transformation were applied to raise contrast and reduce noise, so that the segmentation process could locate blood vessels in the retinal images correctly; this improved the performance of the segmentation method. Otsu thresholding was then used to separate blood-vessel pixels from background pixels using a local threshold. To evaluate the proposed method, the study measured accuracy, sensitivity, and specificity. On the DRIVE dataset, the average accuracy, sensitivity, and specificity were 94.7%, 72.28%, and 96.87%, respectively, indicating that the proposed method performs well on retinal blood-vessel segmentation, especially for thick vessels.
"Contrast enhancement for improved blood vessels retinal segmentation using top-hat transformation and otsu thresholding." International Journal of Advances in Intelligent Informatics, 2022-07-31. doi:10.26555/ijain.v8i2.779
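The thresholding stage named in the abstract can be illustrated in isolation. The sketch below is a pure-Python version of Otsu's method over an 8-bit histogram, not the authors' code; in their pipeline it would run after the CLAHE and top-hat enhancement, and they apply it locally rather than to the whole image.

```python
def otsu_threshold(pixels):
    """Return the Otsu threshold for an iterable of 8-bit grayscale values.

    Otsu's method picks the threshold t that maximizes the between-class
    variance w_bg * w_fg * (mean_bg - mean_fg)^2.
    """
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))

    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]                      # background weight up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg                  # foreground weight above t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned threshold would be labeled as vessel, the rest as background.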
E. Kurniawan, H. Adinanta, S. Suryadi, B. Sirenden, R. K. Ula, Hari Pratomo, Purwowibowo Purwowibowo, J. Prakosa
During the COVID-19 pandemic, physical distancing (PD) is highly recommended to stop transmission of the virus. PD is difficult to practice because humans are social creatures and because estimating one's distance from other people is hard. Technological support is therefore needed to monitor PD, and one option is a computer-vision approach: deep-learning-based computer vision can automatically detect people in surveillance video. This work studies the performance of a deep-learning object detector with TensorRT optimization for a physical-distancing monitoring system. Object detection locates people in the crowd; once objects are detected, the distances between them are computed to determine whether physical distancing is violated. The optimization is based on TensorRT, executed on a Graphics Processing Unit (GPU) with the Compute Unified Device Architecture (CUDA) platform. The research evaluates the inference speed of the well-known You-Only-Look-Once (YOLO) object-detection model on two different Artificial Intelligence (AI) machines: two Jetson-based systems developed as portable PD monitoring stations. The results show that inference speed in frames per second (FPS) increases by up to 9 times over the non-optimized models while maintaining detection accuracy.
"Deep neural network-based physical distancing monitoring system with tensorRT optimization." International Journal of Advances in Intelligent Informatics, 2022-07-31. doi:10.26555/ijain.v8i2.824
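The violation check the abstract describes — computing distances between detected people — reduces to a pairwise comparison. The sketch below assumes the detector has already produced ground-plane (x, y) centroids for each person; the function name and the threshold are illustrative, not from the paper.

```python
import math
from itertools import combinations

def distancing_violations(centroids, min_dist):
    """Return index pairs of detections closer than min_dist.

    centroids: list of (x, y) points, one per detected person,
    e.g. projected bounding-box centers from a YOLO detector.
    """
    violations = []
    for (i, a), (j, b) in combinations(enumerate(centroids), 2):
        if math.dist(a, b) < min_dist:
            violations.append((i, j))
    return violations
```

In a full system the pixel coordinates would first be mapped to real-world distances via a camera calibration or homography.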
Over the last decade, numerous methods have been developed to detect influential actors in the spread of hate speech on social networks, one of which is the Collective Influence (CI) method. However, CI assumes unweighted networks, which makes it a poor fit for social media data, where connections are naturally weighted. This study proposes a new CI variant called the Weighted Collective Influence Graph (WCIG), which uses edge weights and neighbor values to detect the influence of hate speech. A total of 49,992 Indonesian tweets were collected from Indonesian Twitter accounts between January 01 and January 22, 2021. The collected data are also used to compare the results of the proposed WCIG method in determining influential actors in the dissemination of information. The experiment was carried out twice, with parameters ∂=2 and ∂=4. The results show that the usernames bernacleboy and zack_rockstar are influential actors in the dataset. The WCIG computations took 34-75 hours on an HPC system: the larger the parameter, the longer the processing time.
"An extended approach of weight collective influence graph for detection influence actor." Galih Hendro Martono, A. Azhari, K. Mustofa. International Journal of Advances in Intelligent Informatics, 2022-03-31. doi:10.26555/ijain.v8i1.800
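The abstract does not give the exact WCIG formula. As a rough illustration only, the sketch below applies the standard Collective Influence idea to a weighted graph by replacing node degree with weighted strength; the `adj` layout, the `radius` parameter (playing the role of the abstract's ∂), and the formula details are all assumptions, and the paper's definition may differ.

```python
def weighted_ci(adj, node, radius):
    """Collective-influence-style score on a weighted graph.

    adj: {node: {neighbor: weight}} adjacency map.
    Standard CI is (k_i - 1) * sum over the frontier at distance
    `radius` of (k_j - 1); here degree k is replaced by weighted
    strength (sum of incident edge weights).
    """
    def strength(u):
        return sum(adj[u].values())

    # Breadth-first search to the frontier at exactly `radius` hops.
    seen = {node}
    frontier = {node}
    for _ in range(radius):
        nxt = set()
        for u in frontier:
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    nxt.add(v)
        frontier = nxt
    return (strength(node) - 1) * sum(strength(v) - 1 for v in frontier)
```

Ranking all nodes by this score and repeatedly removing the top node is the usual CI procedure for identifying influential actors.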
Ranking algorithms are widely used in several informatics fields; one of them is the PageRank algorithm, which underpins the world's most popular search engine. Many researchers have improved ranking algorithms in order to get better results. Recent research used the Rayleigh quotient to speed up PageRank, guaranteeing convergence of the dominant eigenvalue as the key stopping value for the computation. Bolzano's (bisection) method converges on a linear function by repeatedly dividing an interval in two. This research aims to embed the Bolzano algorithm into the Rayleigh approach for faster computation. The resulting algorithm, tested and validated by mathematicians, shows a speedup of up to 7.08% compared with the sole Rayleigh approach. Analysis of the computation results with statistics software shows that the slope of the new algorithm's curve, Rayleigh with Bolzano booster (RB), is positive and steeper than the original method's. In other words, the linear function will always be faster in subsequent computations than the previous method.
"Rayleigh quotient with bolzano booster for faster convergence of dominant eigenvalues." M. Arifin, A. N. Che Pee, S. S. Rahim, A. Wibawa. International Journal of Advances in Intelligent Informatics, 2022-03-31. doi:10.26555/ijain.v8i1.718
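The Rayleigh-quotient stopping rule mentioned above can be sketched with plain power iteration; the Bolzano (bisection) booster is the paper's contribution and is not reproduced here, so this is only the baseline the paper accelerates.

```python
def dominant_eigenvalue(matrix, iters=200, tol=1e-10):
    """Estimate the dominant eigenvalue of a square matrix.

    Power iteration, using the Rayleigh quotient v^T A v / v^T v
    as the convergence value: iteration stops when the quotient
    changes by less than tol, echoing the stopping criterion the
    abstract describes.
    """
    n = len(matrix)
    v = [1.0] * n
    lam, prev = 0.0, 0.0
    for _ in range(iters):
        # One power step: w = A v, then normalize.
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
        # Rayleigh quotient of the current iterate.
        av = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(vi * ai for vi, ai in zip(v, av)) / sum(vi * vi for vi in v)
        if abs(lam - prev) < tol:
            break
        prev = lam
    return lam
```

For PageRank the matrix would be the (stochastic) Google matrix, whose dominant eigenvalue is 1.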
Imam Tahyudin, R. Wahyudi, Wiga Maulana, Hidetaka Nambo
The COVID-19 pandemic, ongoing since 2019, has yielded insights into many fields, including fundamental computer-science research. This research aimed to construct the best model of COVID-19 patient mortality with the smallest prediction error. We combined a time-series method, SARIMA, with an evolutionary algorithm, PARCD, to predict male patients who died of COVID-19 in the USA, using 1,008 records. The proposed SARIMA-PARCD combination proved powerful for addressing complex patterns in the dataset. Its prediction error was compared with other methods, i.e., SARIMA, LSTM, and a SARIMA-LSTM combination. The results showed that SARIMA-PARCD has the smallest MSE, 0.0049. The proposed method is therefore competitive for other cases with similar characteristics, and the combination is robust for solving both linear and non-linear problems.
"The mortality modeling of covid-19 patients using a combined time series model and evolutionary algorithm." International Journal of Advances in Intelligent Informatics, 2022-03-31. doi:10.26555/ijain.v8i1.669
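The model comparison in the abstract comes down to the mean-squared-error criterion. A minimal sketch of that criterion and the selection step (function and key names are illustrative, not the authors' code):

```python
def mse(actual, predicted):
    """Mean squared error, the metric the abstract uses to rank models."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def best_model(forecasts, actual):
    """Name of the candidate forecast with the lowest MSE.

    forecasts: {model_name: list of predictions} over the same test window.
    """
    return min(forecasts, key=lambda name: mse(actual, forecasts[name]))
```

Each candidate (SARIMA, LSTM, SARIMA-LSTM, SARIMA-PARCD) would supply one prediction series, and the winner is the one minimizing MSE.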
Jayson P. Rogelio, E. Dadios, A. Bandala, R. R. Vicerra, E. Sybingco
Visual servoing is highly critical for robotic technologies that rely on visual feedback. Robot systems that depend on pre-programmed trajectories and paths tend to be unresponsive when the environment changes or an object is absent. This review aims to provide a comprehensive survey of recent applications of visual servoing and deep neural networks (DNNs). PBVS and MobileNet-SSD were the algorithms chosen for alignment control of the film-handler mechanism of a portable X-ray system. The review also discusses the theoretical framework of feature extraction and description, visual servoing, and MobileNet-SSD, and summarizes the latest applications of visual servoing and DNNs, including a comparison of MobileNet-SSD with other sophisticated models. The studies reviewed show that visual servoing and MobileNet-SSD provide reliable tools and models for manipulating robotic systems, including where occlusion is present. Effective alignment control relies significantly on the reliability of the visual servoing and the deep network, which is shaped by parameters such as the type of visual servoing, the feature extraction and description, and the DNNs used to construct a robust state estimator. Visual servoing and MobileNet-SSD are therefore parameterized concepts that require careful optimization to achieve a specific purpose with distinct tools.
"Alignment control using visual servoing and mobilenet single-shot multi-box detection (SSD): a review." International Journal of Advances in Intelligent Informatics, 2022-03-31. doi:10.26555/ijain.v8i1.819
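At its core, the PBVS control step the review covers is a proportional law v = -λe on the pose error between the current and desired camera pose. A minimal sketch, with the gain λ and the 6-vector error layout assumed rather than taken from any reviewed paper:

```python
def pbvs_velocity(pose_error, gain=0.5):
    """Classic position-based visual servoing law v = -lambda * e.

    pose_error: 6-vector [tx, ty, tz, rx, ry, rz] from the current
    camera pose to the desired one (translation + rotation parts).
    Returns the commanded camera velocity screw.
    """
    return [-gain * e for e in pose_error]
```

Driving the robot with this velocity makes the pose error decay exponentially toward zero when the error estimate (here, from the detector and state estimator) is accurate.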
Samar Bashath, Amelia Ritahani Ismail, A. Alwan, A. Hussin
Particle swarm optimization (PSO) is a metaheuristic that is simple to implement and robust in performance, and it is among the most well-studied optimization algorithms. However, two of its most fundamental problems remain unresolved: PSO converges to local optima on high-dimensional optimization problems, and its convergence is slow. This paper introduces a new PSO variant utilizing Lévy flight-McCulloch and fast simulated annealing (PSOLFS). The proposed algorithm uses two strategies to address high-dimensional problems: hybrid PSO to define the global search area and fast simulated annealing to refine the visited search region. PSOLFS is designed around a balance between exploration and exploitation. We evaluated the algorithm on 16 benchmark functions in 500- and 1,000-dimension experiments. On 500 dimensions, it obtains the optimal value on 14 of 16 functions; on 1,000 dimensions, it obtains the optimal value on eight benchmark functions and is close to optimal on four others. We also compared PSOLFS with five other PSO variants on convergence accuracy and speed, and it achieved higher accuracy and faster convergence. Moreover, a Wilcoxon test shows a significant difference between PSOLFS and the other PSO variants. These findings show that the proposed method enhances standard PSO by avoiding local optima and improving convergence speed.
"An Improved particle swarm optimization based on lévy flight and simulated annealing for high dimensional optimization problem." International Journal of Advances in Intelligent Informatics, 2022-03-31. doi:10.26555/ijain.v8i1.818
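As a rough sketch of the idea — standard PSO plus an occasional Lévy-flight kick to escape local optima — the code below uses a Mantegna-style Lévy step. The paper's Lévy flight-McCulloch variant and its fast-simulated-annealing refinement stage are not reproduced, so this is an illustration of the mechanism, not the authors' PSOLFS algorithm; all coefficients are conventional defaults.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Mantegna-style Lévy-flight step (heavy-tailed random jump)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    """Minimize objective over R^dim with PSO plus a Lévy kick on the best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
        # Lévy-flight kick: perturb the global best to escape local optima.
        trial = [x + 0.01 * levy_step(rng=rng) for x in gbest]
        tf = objective(trial)
        if tf < gbest_f:
            gbest, gbest_f = trial, tf
    return gbest, gbest_f
```

On a smooth unimodal function such as the sphere, this sketch converges close to the optimum within a few dozen iterations.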
Anh Thi Phuong Le, Hoai Nhan Tran, Thi Uyen Thi Nguyen, Dinh-Khang Tran
There are various types of multi-attribute decision-making (MADM) problems in our daily lives, many of them set in uncertain environments with vague and imprecise information. Linguistic multi-attribute decision-making problems are therefore an important, extensively studied type, and it is easier for decision-makers to evaluate and choose among alternatives in real life using linguistic terms. Based on the theoretical foundations of hedge algebra and linguistic many-valued logic, this study addresses multi-attribute decision-making problems through a linguistic-valued qualitative aggregation and reasoning method. We construct a finite monotonous hedge algebra for modeling the linguistic information in MADM problems and use linguistic many-valued logic to deduce the decision outcome. Our method computes directly on linguistic terms without numerical approximation, taking advantage of linguistic information processing and showing the benefit of hedge algebra.
"An approach for linguistic multi-attribute decision making based on linguistic many-valued logic." International Journal of Advances in Intelligent Informatics, 2022-03-31. doi:10.26555/ijain.v8i1.820
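Computing "directly on linguistic terms without numerical approximation" can be illustrated with aggregation over a totally ordered term set: inputs and outputs are labels, and ordering is the only structure used. The labels and aggregators below are placeholders for illustration, not the paper's hedge-algebra construction.

```python
# A totally ordered linguistic term set (worst to best); illustrative labels.
TERMS = ["very bad", "bad", "medium", "good", "very good"]

def linguistic_min(labels):
    """Pessimistic aggregation: the worst label among the attribute ratings.

    Works on the labels themselves; the order on TERMS is the only
    structure used, so no numeric scores leak into the result.
    """
    return min(labels, key=TERMS.index)

def linguistic_median(labels):
    """Median label: a compromise aggregation over an odd-sized rating set."""
    ranked = sorted(labels, key=TERMS.index)
    return ranked[len(ranked) // 2]
```

An alternative rated ["good", "bad", "very good"] on three attributes would aggregate to "bad" pessimistically and "good" by median, both still linguistic terms.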
Nur Hazwani Jasni, Aida Mustapha, Siti Solehah Tenah, S. Mostafa, Nazim Razali
Among the challenges of Industry 4.0 is managing organizations' talent, especially ensuring that the right person is selected for each position. This study introduces a predictive approach to talent identification in the sport of netball using individual player qualities in terms of physical fitness, mental capacity, and technical skill. A data-mining approach is proposed using three algorithms: Decision Tree (DT), Neural Network (NN), and Linear Regression (LR). The models are compared on Relative Absolute Error (RAE), Mean Absolute Error (MAE), Relative Square Error (RSE), Root Mean Square Error (RMSE), and the Coefficient of Determination (R2). The findings are presented and discussed in light of early talent spotting and selection. Overall, LR performs best in terms of MAE and RMSE, with the lowest values among the three models.
"Prediction of player position for talent identification in association netball: a regression-based approach." International Journal of Advances in Intelligent Informatics, 2022-03-31. doi:10.26555/ijain.v8i1.707
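The error metrics used to compare DT, NN, and LR above are standard. A minimal sketch of two of them and the ranking step (model names in the example are placeholders, not the study's actual predictions):

```python
import math

def mae(y, yhat):
    """Mean Absolute Error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root Mean Square Error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def rank_models(predictions, y):
    """Order candidate models by RMSE, lowest (best) first.

    predictions: {model_name: list of predicted values} on the same test set.
    """
    return sorted(predictions, key=lambda m: rmse(y, predictions[m]))
```

Ranking by MAE instead would use the same pattern with `mae` as the key; the study's conclusion is that LR minimizes both.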
Quantitative structure-activity relationships (QSAR) are relevant techniques that help biologists and chemists accelerate the drug-design process and understand many biological and chemical mechanisms. Using classical statistical methods may limit the accuracy and reliability of the resulting QSAR models. This work uses a machine-learning approach to establish a QSAR model for predicting phenol cytotoxicity, an issue that concerns many chemists and biologists. In this investigation the dataset is diverse and the cytotoxicity data are sparse, so a multi-component description of the compounds was considered: a set of molecular descriptors fed, and served to train, a deep neural network (DNN). The established DNN model predicted the cytotoxicity of the phenols with high precision. The correlation coefficient at the fitting stage, 0.943, was higher than for other statistical methods reported in the literature or developed in the present work, specifically multiple linear regression (MLR) and shallow artificial neural networks (ANN). The predictive capability of the model, estimated by the coefficient of determination on an external prediction dataset, was also high, at about 0.739. This finding could help in using many molecular descriptors relevant to describing the compounds and the effects governing the phenols' cytotoxicity toward Tetrahymena pyriformis, while avoiding overfitting and outlier exclusion.
"Machine learning for the prediction of phenols cytotoxicity." L. Douali. International Journal of Advances in Intelligent Informatics, 2022-03-31. doi:10.26555/ijain.v8i1.748
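The fitting-stage statistic reported above (0.943) is the Pearson correlation between predicted and observed activities. A minimal sketch of that statistic, independent of any particular model:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. a model's predicted cytotoxicities vs. the observed values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Its square is the coefficient of determination used for the external-set figure (about 0.739) in the abstract.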