Non-negative matrix factorization (NMF) is an unsupervised clustering algorithm in which a non-negative data matrix is factorized into (usually) two matrices, neither of which contains negative elements. This factorization raises a problem of instability: whenever we run NMF on the same dataset, we obtain a different factorization. To address this non-uniqueness and obtain a more stable solution, we propose a new approach that consists in making different NMF models collaborate, followed by a consensus step. The proposed approach was validated on several datasets, and the experimental results showed its effectiveness, which rests on reducing the standard reconstruction error of the NMF model.
{"title":"Collaborative Learning to Improve the Non-uniqueness of NMF","authors":"Kaoutar Benlamine, Younès Bennani, Basarab Matei, Nistor Grozavu, Issam Falih","doi":"10.1142/s1469026822500018","DOIUrl":"https://doi.org/10.1142/s1469026822500018","url":null,"abstract":"Non-negative matrix factorization (NMF) is an unsupervised algorithm for clustering where a non-negative data matrix is factorized into (usually) two matrices with the property that all the matrices have no negative elements. This factorization raises the problem of instability, which means whenever we run NMF for the same dataset, we get different factorization. In order to solve the problem of non-uniqueness and to have a more stable solution, we propose a new approach that consists on collaborating different NMF models followed by a consensus. The proposed approach was validated on several datasets and the experimental results showed the effectiveness of our approach which is based on the reducing of standard reconstruction error in NMF model.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129374097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Padeep: A Patched Deep Learning Based Model for Plants Recognition on Small Size Dataset: Chenopodiaceae Case Study
Pub Date: 2022-03-01 | DOI: 10.1142/s1469026822500055
Ahmad Heidary-Sharifabad, M. S. Zarchi, G. Zarei
A large training set is a prerequisite for successfully training any deep learning model for image classification. Collecting a large dataset is time-consuming and costly, especially for plants. When a large dataset is not available, the challenge is how to use a small or medium-sized dataset to train a deep model optimally. To overcome this challenge, a novel model is proposed to use the available small plant dataset efficiently. This model focuses on data augmentation and aims to improve learning accuracy by oversampling the dataset through representative image patches. To extract the relevant patches, ORB key points are detected in the training images, and image patches are then extracted around them using an innovative algorithm. The extracted ORB image patches are used for dataset augmentation to avoid overfitting during the training phase. The proposed model is implemented using convolutional neural layers, with a structure based on the ResNet architecture. It is evaluated on the challenging ACHENY dataset, a Chenopodiaceae plant dataset comprising 27,030 images from 30 classes. The experimental results show that the patch-based strategy improves classification accuracy over traditional deep models by 9%.
{"title":"Padeep: A Patched Deep Learning Based Model for Plants Recognition on Small Size Dataset: Chenopodiaceae Case Study","authors":"Ahmad Heidary-Sharifabad, M. S. Zarchi, G. Zarei","doi":"10.1142/s1469026822500055","DOIUrl":"https://doi.org/10.1142/s1469026822500055","url":null,"abstract":"A large training sample is prerequisite for the successful training of each deep learning model for image classification. Collecting a large dataset is time-consuming and costly, especially for plants. When a large dataset is not available, the challenge is how to use a small or medium size dataset to train a deep model optimally. To overcome this challenge, a novel model is proposed to use the available small size plant dataset efficiently. This model focuses on data augmentation and aims to improve the learning accuracy by oversampling the dataset through representative image patches. To extract the relevant patches, ORB key points are detected in the training images and then image patches are extracted using an innovative algorithm. The extracted ORB image patches are used for dataset augmentation to avoid overfitting during the training phase. The proposed model is implemented using convolutional neural layers, where its structure is based on ResNet architecture. The proposed model is evaluated on a challenging ACHENY dataset. ACHENY is a Chenopodiaceae plant dataset, comprising 27030 images from 30 classes. The experimental results show that the patch-based strategy outperforms the classification accuracy achieved by traditional deep models by 9%.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134245911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimized Feature Selection in Software Product Lines using Discrete Bat Algorithm
Pub Date: 2022-03-01 | DOI: 10.1142/s1469026822500031
Hajar Sadeghi, Shohreh Ajoudanian
Software Product Lines (SPLs) are one way to develop software products while increasing productivity and reducing cost and time in the software development process. In SPLs, each product has many features, and the optimal, custom set of features for each product must be considered. Selecting key features in SPLs is therefore a challenging process: feature selection in SPLs is an optimization problem and is NP-hard. One way to select features is to use nature-inspired meta-heuristic algorithms such as the Bat Algorithm, a meta-heuristic obtained by modeling bat behavior during prey hunting. This algorithm has important advantages that make it more accurate than conventional methods such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). In this paper, a discrete (binary) Bat Algorithm and an artificial neural network are used to identify the important features of software products, reducing production cost and time. Experiments in MATLAB on datasets related to software product lines show that, as the population grows, the objective-function error (the feature-selection cost) of the proposed method decreases by 64.17%.
{"title":"Optimized Feature Selection in Software Product Lines using Discrete Bat Algorithm","authors":"Hajar Sadeghi, Shohreh Ajoudanian","doi":"10.1142/s1469026822500031","DOIUrl":"https://doi.org/10.1142/s1469026822500031","url":null,"abstract":"Software Product Lines (SPLs) are one of the ways to develop software products by increasing productivity and reducing cost and time in the software development process. In SPLs, each product has many features and it is necessary to consider the optimal and custom features of the products. In fact, selecting key features in SPLs is a challenging process. Feature selection in SPLs is an optimization problem and an NP-Hard problem. One way to select a feature is to use meta-heuristic algorithms modeled on nature, i.e., Bat Algorithm. By modeling bat behavior in prey hunting, a suitable meta-innovative algorithm is considered. This algorithm has important advantages that make it more accurate than conventional methods such as Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) algorithm. In this paper, to select software product features, idol binary algorithm and artificial neural network are used to identify important features of software products that reduce production costs and time. The experiments in MATLAB software and datasets related to software production lines show that the rate of reduction of target performance error or feature selection cost in software production lines in the proposed method has decreased by 64.17% with increasing population.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132053069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Fuzzy Strategy to Eliminate Uncertainty in Grading Positive Tuberculosis
Pub Date: 2022-03-01 | DOI: 10.1142/s1469026822500067
R. Samuel, B. R. Kanna
Sputum smear microscopic examination is an effective, fast, and low-cost technique that is highly specific in areas with a high prevalence of pulmonary tuberculosis. Since manual screening requires trained pathologists in high-prevalence zones, deploying enough technicians during epidemic periods is impractical. This can overburden and fatigue the working technicians, which tends to reduce the efficiency of tuberculosis (TB) diagnosis. Hence, automating sputum inspection is the most appropriate option in TB outbreak zones to maximize detection ability. Sputum collection, smear preparation, staining, smear interpretation, and reporting of TB severity are all part of the diagnosis of tuberculosis. This study analyzes the risk of automating TB severity grading. According to the findings, numerous TB-positive cases do not fit into the standard TB severity grades when a direct rule-driven strategy is applied; manual investigation, on the other hand, arbitrarily labels the TB grade in those cases. To counter this risk, a fuzzy-based Tuberculosis Severity Level Categorizing Algorithm (TSLCA) is introduced to eliminate uncertainty in classifying the level of TB infection. TSLCA introduces weight factors that depend on the maximum number of Acid-Fast Bacilli (AFB) present per microscopic Field of View (FOV). The fuzzification and defuzzification operations are carried out using the triangular membership function, and the α-cut approach is used to eliminate ambiguity in TB severity grading. Several uncertain TB microscopy screening reports are tested using the proposed TSLCA. The experimental results show that TB grading by TSLCA is consistent, error-free, significant, and fits exactly into the standard criterion. As a result, the uncertainty of grading is eliminated and the reliability of tuberculosis diagnosis is ensured when adopting automatic diagnosis.
{"title":"A Fuzzy Strategy to Eliminate Uncertainty in Grading Positive Tuberculosis","authors":"R. Samuel, B. R. Kanna","doi":"10.1142/s1469026822500067","DOIUrl":"https://doi.org/10.1142/s1469026822500067","url":null,"abstract":"Sputum smear microscopic examination is an effective, fast, and low-cost technique that is highly specific in areas with a high prevalence of pulmonary tuberculosis. Since manual screening needs trained pathologist in high prevalence zones, the possibility of deploying adequate technicians during the epidemic sessions would be impractical. This condition can cause overburdening and fatigue of working technicians which may tend to reduce the potential efficiency of Tuberculosis (TB) diagnosis. Hence, automation of sputum inspection is the most appropriate aspect in TB outbreak zones to maximize the detection ability. Sputum collection, smear preparing, staining, interpreting smears, and reporting of TB severity are all part of the diagnosis of tuberculosis. This study has analyzed the risk of automating TB severity grading. According to the findings of the analysis, numerous TB-positive cases do not fit into the standard TB severity grade while applying direct rule-driven strategy. The manual investigation, on the other hand, arbitrarily labels the TB grade on those cases. To counter the risk of automation, a fuzzy-based Tuberculosis Severity Level Categorizing Algorithm (TSLCA) is introduced to eliminate uncertainty in classifying the level of TB infection. TSLCA introduces the weight factors, which are dependent on the existence of maximum number of Acid-Fast Bacilli (AFB) per microscopic Field of View (FOV). The fuzzification and defuzzification operations are carried out using the triangular membership function. In addition, the [Formula: see text]-cut approach is used to eliminate the ambiguity in TB severity grading. Several uncertain TB microscopy screening reports are tested using the proposed TSLCA. Based on the experimental results, it is observed that the TB grading by TSLCA is consistent, error-free, significant and fits exactly into the standard criterion. As a result of the proposed TSLCA, the uncertainty of grading is eliminated and the reliability of tuberculosis diagnosis is ensured when adapting automatic diagnosis.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129225226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Augmentation Data of Retina Image for Blood Vessel Segmentation Using U-Net Convolutional Neural Network Method
Pub Date: 2022-03-01 | DOI: 10.1142/s1469026822500043
Asri Safmi, Anita Desiani, B. Suprihatin
The retina is the most important part of the eye, and with proper feature extraction it can provide the first step in detecting a disease. The morphology of retinal blood vessels can be used to identify and classify a disease, and steps such as segmentation and analysis of retinal blood vessels can assist medical personnel in assessing disease severity. In this paper, vessel segmentation using the U-Net architecture within the Convolutional Neural Network (CNN) framework is proposed to train a semantic segmentation model for retinal blood vessels. In addition, Contrast Limited Adaptive Histogram Equalization (CLAHE) is used to increase grayscale contrast, and a median filter is used to obtain better image quality. Data augmentation is also used to enlarge the available dataset. The proposed method allows for easier implementation. In this study, the dataset used was STARE, with accuracy, sensitivity, specificity, precision, and F1-score reaching 97.64%, 78.18%, 99.20%, 88.77%, and 82.91%, respectively.
{"title":"The Augmentation Data of Retina Image for Blood Vessel Segmentation Using U-Net Convolutional Neural Network Method","authors":"Asri Safmi, Anita Desiani, B. Suprihatin","doi":"10.1142/s1469026822500043","DOIUrl":"https://doi.org/10.1142/s1469026822500043","url":null,"abstract":"The retina is the most important part of the eye. By proper feature extraction, it can be the first step to detect a disease. Morphology of retina blood vessels can be used to identify and classify a disease. A step, such as segmentation and analysis of retinal blood vessels, can assist medical personnel in detecting the severity of a disease. In this paper, vascular segmentation using U-net architecture in the Convolutional Neural Network (CNN) method is proposed to train a sematic segmentation model in retinal blood vessel. In addition, the Contrast Limited Adaptive Histogram Equalization (CLAHE) method is used to increase the contrast of the grayscale and Median Filter is used to obtain better image quality. Data augmentation is also used to maximize the number of datasets owned to make more. The proposed method allows for easier implementation. In this study, the dataset used was STARE with the result of accuracy, sensitivity, specificity, precision, and F1-score that reached 97.64%, 78.18%, 99.20%, 88.77%, and 82.91%.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128760795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Optimization-Enabled Neural Networks to Handle the Imbalance Churn Data in Churn Prediction
Pub Date: 2021-12-15 | DOI: 10.1142/s1469026821500255
Bharathi Garimella, G. Prasad, M. K. Prasad
Churn prediction based on telecom data has received great attention because of the increasing number of telecom providers, but inconsistent, sparse, and huge data make churn prediction complicated and challenging. Hence, an effective and optimal churn prediction mechanism, named the adaptive firefly-spider optimization (adaptive FSO) algorithm, is proposed in this research to predict churn using telecom data. The proposed method uses telecom data, the trending research domain for churn prediction; hence, classification accuracy is increased. The proposed adaptive FSO algorithm is designed by integrating spider monkey optimization (SMO), the firefly optimization algorithm (FA), and an adaptive concept. The input data are initially given to the master node of the Spark framework. Feature selection is carried out using Kendall's correlation to select the appropriate features for further processing. The selected unique features are then given to the master node to perform churn prediction. Here, churn prediction is made using a deep convolutional neural network (DCNN), which is trained by the proposed adaptive FSO algorithm. The developed model obtained better performance on metrics such as the Dice coefficient, accuracy, and Jaccard coefficient while varying the training data percentage and the selected features. The proposed adaptive FSO-based DCNN showed improved results, with a Dice coefficient of 99.76%, accuracy of 98.65%, and Jaccard coefficient of 99.52%.
{"title":"Adaptive Optimization-Enabled Neural Networks to Handle the Imbalance Churn Data in Churn Prediction","authors":"Bharathi Garimella, G. Prasad, M. K. Prasad","doi":"10.1142/s1469026821500255","DOIUrl":"https://doi.org/10.1142/s1469026821500255","url":null,"abstract":"The churn prediction based on telecom data has been paid great attention because of the increasing the number telecom providers, but due to inconsistent data, sparsity, and hugeness, the churn prediction becomes complicated and challenging. Hence, an effective and optimal prediction of churns mechanism, named adaptive firefly-spider optimization (adaptive FSO) algorithm, is proposed in this research to predict the churns using the telecom data. The proposed churn prediction method uses telecom data, which is the trending domain of research in predicting the churns; hence, the classification accuracy is increased. However, the proposed adaptive FSO algorithm is designed by integrating the spider monkey optimization (SMO), firefly optimization algorithm (FA), and the adaptive concept. The input data is initially given to the master node of the spark framework. The feature selection is carried out using Kendall’s correlation to select the appropriate features for further processing. Then, the selected unique features are given to the master node to perform churn prediction. Here, the churn prediction is made using a deep convolutional neural network (DCNN), which is trained by the proposed adaptive FSO algorithm. Moreover, the developed model obtained better performance using the metrics, like dice coefficient, accuracy, and Jaccard coefficient by varying the training data percentage and selected features. Thus, the proposed adaptive FSO-based DCNN showed improved results with a dice coefficient of 99.76%, accuracy of 98.65%, Jaccard coefficient of 99.52%.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128208863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-Time Human Action Recognition Using Deep Learning Architecture
Pub Date: 2021-11-17 | DOI: 10.1142/s1469026821500267
S. Kahlouche, M. Belhocine, Abdallah Menouar
In this work, an efficient human activity recognition (HAR) algorithm based on a deep learning architecture is proposed to classify activities into seven different classes. To learn spatial and temporal features from only the 3D skeleton data captured by a Microsoft Kinect camera, the proposed algorithm combines convolutional neural network (CNN) and long short-term memory (LSTM) architectures. This combination takes advantage of the LSTM for modeling temporal data and of the CNN for modeling spatial data. The captured skeleton sequences are used to create a specific dataset of interactive activities; these data are then transformed according to a view-invariance and a symmetry criterion. To demonstrate the effectiveness of the developed algorithm, it has been tested on several public datasets, where it achieved, and in some cases surpassed, state-of-the-art performance. To verify the uncertainty of the proposed algorithm, some tools are provided and discussed to ensure its efficiency for continuous human action recognition in real time.
{"title":"Real-Time Human Action Recognition Using Deep Learning Architecture","authors":"S. Kahlouche, M. Belhocine, Abdallah Menouar","doi":"10.1142/s1469026821500267","DOIUrl":"https://doi.org/10.1142/s1469026821500267","url":null,"abstract":"In this work, efficient human activity recognition (HAR) algorithm based on deep learning architecture is proposed to classify activities into seven different classes. In order to learn spatial and temporal features from only 3D skeleton data captured from a “Microsoft Kinect” camera, the proposed algorithm combines both convolution neural network (CNN) and long short-term memory (LSTM) architectures. This combination allows taking advantage of LSTM in modeling temporal data and of CNN in modeling spatial data. The captured skeleton sequences are used to create a specific dataset of interactive activities; these data are then transformed according to a view invariant and a symmetry criterion. To demonstrate the effectiveness of the developed algorithm, it has been tested on several public datasets and it has achieved and sometimes has overcome state-of-the-art performance. In order to verify the uncertainty of the proposed algorithm, some tools are provided and discussed to ensure its efficiency for continuous human action recognition in real time.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125416613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quadratic Convex Reformulation for Solving Task Assignment Problem with Continuous Hopfield Network
Pub Date: 2021-11-11 | DOI: 10.1142/s1469026821500243
Youssef Hami, Chakir Loqman
This research addresses the optimal allocation of tasks to processors in order to minimize the total costs of execution and communication, a problem known as the Task Assignment Problem (TAP) with non-uniform communication costs. To solve it, the first step formulates the problem as an equivalent zero-one quadratic program with a convex objective function, using a convexification technique based on the smallest eigenvalue. The second step applies the Continuous Hopfield Network (CHN) to solve the resulting problem. Computational results are presented for instances from the literature and compared with solutions obtained by both the CPLEX solver and a heuristic genetic algorithm; they show an improvement when applying only the CHN algorithm. The proposed approach confirms the efficiency of the theoretical results and reaches optimal solutions in a short computation time.
{"title":"Quadratic Convex Reformulation for Solving Task Assignment Problem with Continuous Hopfield Network","authors":"Youssef Hami, Chakir Loqman","doi":"10.1142/s1469026821500243","DOIUrl":"https://doi.org/10.1142/s1469026821500243","url":null,"abstract":"This research is an optimal allocation of tasks to processors in order to minimize the total costs of execution and communication. This problem is called the Task Assignment Problem (TAP) with nonuniform communication costs. To solve the latter, the first step concerns the formulation of the problem by an equivalent zero-one quadratic program with a convex objective function using a convexification technique, based on the smallest eigenvalue. The second step concerns the application of the Continuous Hopfield Network (CHN) to solve the obtained problem. The calculation results are presented for the instances from the literature, compared to solutions obtained both the CPLEX solver and by the heuristic genetic algorithm, and show an improvement in the results obtained by applying only the CHN algorithm. We can see that the proposed approach evaluates the efficiency of the theoretical results and achieves the optimal solutions in a short calculation time.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124543079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Sizing and Location of Distributed Generators for Power Flow Analysis in Smart Grid Using IAS-MVPA Strategy
Pub Date: 2021-11-06 | DOI: 10.1142/s1469026821500279
Kumar Cherukupalli, N. VijayaAnand
In this paper, a hybrid method is proposed to determine the optimal distributed generation (DG) size and location for power flow analysis in the smart grid. The hybrid method combines the Interactive Autodidactic School (IAS) and the Most Valuable Player Algorithm (MVPA) and is referred to as the IAS-MVPA method. The main aim of this work is to reduce line losses and total harmonic distortion (THD) and to improve the voltage profile of the system through the optimal location and size of the distributed generators and the optimal reconfiguration of the network. Here, the IAS-MVPA method is used as a corrective tool to obtain the best DG size and network reconfiguration under environmental load variation. In case of a fault, the IAS method is used to optimize the DG location: based on the objective function, the IAS chooses the line with the maximum power loss as the optimal location to place the DG. The fault violates the equality and inequality restrictions of the safe-limit system. From the control parameters, the low-voltage deviation is improved using the MVPA method, and this deviation is exploited to obtain the maximum capacity of the DG. The maximum capacity is then applied at the chosen location, which improves the power flow of the system. The proposed system is implemented on the MATLAB/Simulink platform, and its effectiveness is assessed by comparison with various existing methods such as the genetic algorithm (GA), Cuttlefish Algorithm (CFA), adaptive grasshopper optimization algorithm (AGOA), and artificial neural network (ANN).
{"title":"Optimal Sizing and Location of Distributed Generators for Power Flow Analysis in Smart Grid Using IAS-MVPA Strategy","authors":"Kumar Cherukupalli, N. VijayaAnand","doi":"10.1142/s1469026821500279","DOIUrl":"https://doi.org/10.1142/s1469026821500279","url":null,"abstract":"In this paper, the optimal distribution generation (DG) size and location for power flow analysis at the smart grid by hybrid method are proposed. The proposed hybrid method is the Interactive Autodidactic School (IAS) and the Most Valuable Player Algorithm (MVPA) and commonly named as IAS-MVPA method. The main aim of this work is to reduce line loss and total harmonic distortion (THD), similarly, to recover the voltage profile of system through the optimal location and size of the distributed generators and optimal rearrangement of network. Here, IAS-MVPA method is utilized as a rectification tool to get the maximum DG size and the maximal reconfiguration of network at environmental load variation. In case of failure, the IAS method is utilized for maximizing the DG location. The IAS chooses the line of maximal power loss as optimal location to place the DG based on the objective function. The fault violates the equality and inequality restrictions of the safe limit system. From the control parameters, the low voltage drift is improved using the MVPA method. The low-voltage deviation has been exploited for obtaining the maximum capacity of the DG. After that, the maximum capacity is used at maximum location that improves the power flow of the system. The proposed system is performed on MATLAB/Simulink platform, and the effectiveness is assessed by comparing it with various existing processes such as generic algorithm (GA), Cuttle fish algorithm (CFA), adaptive grasshopper optimization algorithm (AGOA) and artificial neural network (ANN).","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127534615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of Interval Type-2 Intuitionistic Fuzzy Logic System for Prediction Problems
Pub Date: 2021-11-03 | DOI: 10.1142/s146902682150022x
Imo J. Eyoh, J. Eyoh, U. Umoh, R. Kalawsky
Derivative-based algorithms have been adopted in the literature to optimize the membership and non-membership function parameters of interval type-2 (T2) intuitionistic fuzzy logic systems (FLSs). In this study, a non-derivative-based algorithm, the sliding mode control learning algorithm, is proposed to tune the parameters of an interval T2 intuitionistic FLS for the first time. The proposed rule-based learning system employs Takagi–Sugeno–Kang inference with an artificial neural network to pilot the learning process. The new learning system is evaluated on several nonlinear prediction problems. Analyses of the results reveal that the proposed learning scheme outperforms its type-1 version and many existing solutions in the literature, and competes favorably with others on the investigated problem instances at low running-time cost.
{"title":"Optimization of Interval Type-2 Intuitionistic Fuzzy Logic System for Prediction Problems","authors":"Imo J. Eyoh, J. Eyoh, U. Umoh, R. Kalawsky","doi":"10.1142/s146902682150022x","DOIUrl":"https://doi.org/10.1142/s146902682150022x","url":null,"abstract":"Derivative-based algorithms have been adopted in the literature for the optimization of membership and non-membership function parameters of interval type-2 (T2) intuitionistic fuzzy logic systems (FLSs). In this study, a non-derivative-based algorithm called sliding mode control learning algorithm is proposed to tune the parameters of interval T2 intuitionistic FLS for the first time. The proposed rule-based learning system employs the Takagi–Sugeno–Kang inference with artificial neural network to pilot the learning process. The new learning system is evaluated using some nonlinear prediction problems. Analyses of results reveal that the proposed learning apparatus outperforms its type-1 version and many existing solutions in the literature and competes favorably with others on the investigated problem instances with low cost in terms of running time.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129161744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}