
Int. J. Comput. Intell. Appl. Latest Publications

Collaborative Learning to Improve the Non-uniqueness of NMF
Pub Date : 2022-03-01 DOI: 10.1142/s1469026822500018
Kaoutar Benlamine, Younès Bennani, Basarab Matei, Nistor Grozavu, Issam Falih
Non-negative matrix factorization (NMF) is an unsupervised clustering algorithm in which a non-negative data matrix is factorized into (usually) two matrices, with the property that none of the factor matrices has negative elements. This factorization raises the problem of instability: whenever we run NMF on the same dataset, we obtain a different factorization. In order to solve the problem of non-uniqueness and obtain a more stable solution, we propose a new approach that consists of collaborating different NMF models followed by a consensus. The proposed approach was validated on several datasets, and the experimental results showed the effectiveness of our approach, which is based on reducing the standard reconstruction error of the NMF model.
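The instability described above is easy to reproduce with off-the-shelf NMF. The sketch below is a minimal illustration, not the authors' collaborative scheme; the synthetic data, the rank of 5, and the averaging consensus are all assumptions. It runs several randomly initialized NMF models on the same matrix and forms a naive consensus by averaging their reconstructions.

```python
# Minimal sketch: several randomly initialized NMF runs on the same data give
# different factorizations; a naive consensus averages their reconstructions.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((100, 40))              # synthetic non-negative data matrix

reconstructions = []
for seed in range(5):                  # five "collaborating" NMF models
    model = NMF(n_components=5, init="random", random_state=seed, max_iter=500)
    W = model.fit_transform(X)         # basis matrix
    H = model.components_              # coefficient matrix
    reconstructions.append(W @ H)
    print(f"seed={seed}  reconstruction error={np.linalg.norm(X - W @ H):.4f}")

consensus = np.mean(reconstructions, axis=0)
print("consensus reconstruction error:", np.linalg.norm(X - consensus))
```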
Citations: 2
Padeep: A Patched Deep Learning Based Model for Plants Recognition on Small Size Dataset: Chenopodiaceae Case Study
Pub Date : 2022-03-01 DOI: 10.1142/s1469026822500055
Ahmad Heidary-Sharifabad, M. S. Zarchi, G. Zarei
A large training sample is a prerequisite for successfully training any deep learning model for image classification. Collecting a large dataset is time-consuming and costly, especially for plants. When a large dataset is not available, the challenge is how to use a small or medium-sized dataset to train a deep model optimally. To overcome this challenge, a novel model is proposed to use the available small plant dataset efficiently. This model focuses on data augmentation and aims to improve learning accuracy by oversampling the dataset through representative image patches. To extract the relevant patches, ORB key points are detected in the training images, and image patches are then extracted using an innovative algorithm. The extracted ORB image patches are used for dataset augmentation to avoid overfitting during the training phase. The proposed model is implemented using convolutional neural layers, and its structure is based on the ResNet architecture. The proposed model is evaluated on the challenging ACHENY dataset. ACHENY is a Chenopodiaceae plant dataset comprising 27030 images from 30 classes. The experimental results show that the patch-based strategy outperforms the classification accuracy achieved by traditional deep models by 9%.
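As a rough illustration of the patching idea (not the paper's extraction algorithm; the patch size and keypoint budget are assumptions), ORB key points can be detected with OpenCV and fixed-size patches cropped around the strongest responses:

```python
# Minimal sketch: detect ORB key points and crop fixed-size patches around the
# strongest ones to enlarge a small training set.
import cv2

def orb_patches(image, patch_size=64, max_patches=10):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints = orb.detect(gray, None)
    # Keep the strongest key points first.
    keypoints = sorted(keypoints, key=lambda k: k.response, reverse=True)
    half = patch_size // 2
    patches = []
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        if half <= x < image.shape[1] - half and half <= y < image.shape[0] - half:
            patches.append(image[y - half:y + half, x - half:x + half])
        if len(patches) == max_patches:
            break
    return patches

# Usage (hypothetical file name): patches = orb_patches(cv2.imread("plant.jpg"))
```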
Citations: 0
Optimized Feature Selection in Software Product Lines using Discrete Bat Algorithm
Pub Date : 2022-03-01 DOI: 10.1142/s1469026822500031
Hajar Sadeghi, Shohreh Ajoudanian
Software Product Lines (SPLs) are one way to develop software products while increasing productivity and reducing cost and time in the software development process. In SPLs, each product has many features, and it is necessary to consider the optimal and custom features of the products. In fact, selecting key features in SPLs is a challenging process. Feature selection in SPLs is an optimization problem and an NP-hard problem. One way to select features is to use meta-heuristic algorithms modeled on nature, such as the Bat Algorithm. By modeling bat behavior in prey hunting, a suitable meta-heuristic algorithm is obtained. This algorithm has important advantages that make it more accurate than conventional methods such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). In this paper, to select software product features, a discrete (binary) bat algorithm and an artificial neural network are used to identify important features of software products that reduce production cost and time. Experiments in MATLAB on datasets related to software product lines show that the target performance error, i.e., the feature selection cost, is reduced by 64.17% in the proposed method as the population size increases.
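A toy binary bat-style selector conveys the general mechanism (an illustrative sketch, not the paper's algorithm; the sigmoid transfer function, the toy cost function, and all parameter values are assumptions): each bat is a 0/1 mask over features, velocities are pulled toward the best mask found so far, and a sigmoid maps velocities to bit probabilities.

```python
# Minimal sketch of binary feature selection with a bat-style metaheuristic.
import numpy as np

def binary_bat_select(cost, n_features, n_bats=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, size=(n_bats, n_features))   # 0/1 feature masks
    vel = np.zeros((n_bats, n_features))
    fitness = np.array([cost(p) for p in pos])
    best = pos[fitness.argmin()].copy()
    for _ in range(n_iter):
        freq = rng.random(n_bats)[:, None]                 # random frequencies
        vel += (pos - best) * freq                         # standard bat velocity update
        prob = 1.0 / (1.0 + np.exp(-vel))                  # sigmoid transfer function
        pos = (rng.random(pos.shape) < prob).astype(int)   # re-sample bits
        fitness = np.array([cost(p) for p in pos])
        if fitness.min() < cost(best):
            best = pos[fitness.argmin()].copy()
    return best

# Toy cost: prefer masks close to a hidden "ideal" selection.
ideal = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(binary_bat_select(lambda p: np.abs(p - ideal).sum(), n_features=8))
```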
Citations: 1
A Fuzzy Strategy to Eliminate Uncertainty in Grading Positive Tuberculosis
Pub Date : 2022-03-01 DOI: 10.1142/s1469026822500067
R. Samuel, B. R. Kanna
Sputum smear microscopic examination is an effective, fast, and low-cost technique that is highly specific in areas with a high prevalence of pulmonary tuberculosis. Since manual screening requires trained pathologists in high-prevalence zones, deploying adequate technicians during epidemic periods would be impractical. This condition can cause overburdening and fatigue of working technicians, which may reduce the potential efficiency of Tuberculosis (TB) diagnosis. Hence, automation of sputum inspection is the most appropriate approach in TB outbreak zones to maximize detection ability. Sputum collection, smear preparation, staining, interpreting smears, and reporting of TB severity are all part of the diagnosis of tuberculosis. This study has analyzed the risk of automating TB severity grading. According to the findings of the analysis, numerous TB-positive cases do not fit into the standard TB severity grades when a direct rule-driven strategy is applied. Manual investigation, on the other hand, arbitrarily labels the TB grade in those cases. To counter the risk of automation, a fuzzy-based Tuberculosis Severity Level Categorizing Algorithm (TSLCA) is introduced to eliminate uncertainty in classifying the level of TB infection. TSLCA introduces weight factors that depend on the maximum number of Acid-Fast Bacilli (AFB) per microscopic Field of View (FOV). The fuzzification and defuzzification operations are carried out using the triangular membership function. In addition, the α-cut approach is used to eliminate ambiguity in TB severity grading. Several uncertain TB microscopy screening reports are tested using the proposed TSLCA. Based on the experimental results, it is observed that TB grading by TSLCA is consistent, error-free, significant, and fits exactly into the standard criterion. As a result of the proposed TSLCA, the uncertainty of grading is eliminated and the reliability of tuberculosis diagnosis is ensured when adopting automatic diagnosis.
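A minimal sketch of the triangular-membership step (illustrative only; the grade labels and breakpoints are assumptions, not the TSLCA rule base) grades an AFB count per field of view by evaluating its membership in a few fuzzy severity classes and picking the strongest one:

```python
# Minimal sketch: triangular membership functions over the AFB count per FOV.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

afb_per_fov = 4.0                       # observed AFB count per field of view
grades = {                              # hypothetical grade breakpoints
    "scanty": tri(afb_per_fov, 0, 1, 3),
    "1+":     tri(afb_per_fov, 1, 5, 10),
    "2+":     tri(afb_per_fov, 5, 12, 25),
}
print(grades)
# Defuzzify by taking the grade with the highest membership.
print("grade:", max(grades, key=grades.get))
```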
Citations: 0
The Augmentation Data of Retina Image for Blood Vessel Segmentation Using U-Net Convolutional Neural Network Method
Pub Date : 2022-03-01 DOI: 10.1142/s1469026822500043
Asri Safmi, Anita Desiani, B. Suprihatin
The retina is the most important part of the eye. With proper feature extraction, it can be the first step toward detecting a disease. The morphology of retinal blood vessels can be used to identify and classify a disease. Steps such as segmentation and analysis of retinal blood vessels can assist medical personnel in detecting the severity of a disease. In this paper, vessel segmentation using the U-Net architecture of the Convolutional Neural Network (CNN) method is proposed to train a semantic segmentation model for retinal blood vessels. In addition, the Contrast Limited Adaptive Histogram Equalization (CLAHE) method is used to increase the contrast of the grayscale image, and a Median Filter is used to obtain better image quality. Data augmentation is also used to maximize the number of available samples. The proposed method allows for easier implementation. In this study, the dataset used was STARE, with accuracy, sensitivity, specificity, precision, and F1-score reaching 97.64%, 78.18%, 99.20%, 88.77%, and 82.91%, respectively.
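The preprocessing stage can be sketched with OpenCV as follows (the clip limit, tile size, and median kernel size are assumed values, not necessarily those used in the paper):

```python
# Minimal sketch: grayscale conversion, CLAHE contrast enhancement, median filter.
import cv2

def preprocess_retina(path):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    return cv2.medianBlur(enhanced, 3)   # 3x3 median filter

# Usage (hypothetical file name): img = preprocess_retina("stare_im0001.ppm")
```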
Citations: 4
Adaptive Optimization-Enabled Neural Networks to Handle the Imbalance Churn Data in Churn Prediction
Pub Date : 2021-12-15 DOI: 10.1142/s1469026821500255
Bharathi Garimella, G. Prasad, M. K. Prasad
Churn prediction based on telecom data has received great attention because of the increasing number of telecom providers, but due to inconsistent data, sparsity, and hugeness, churn prediction becomes complicated and challenging. Hence, an effective and optimal churn prediction mechanism, named the adaptive firefly-spider optimization (adaptive FSO) algorithm, is proposed in this research to predict churn using telecom data. The proposed churn prediction method uses telecom data, which is the trending domain of research in predicting churn; hence, the classification accuracy is increased. The proposed adaptive FSO algorithm is designed by integrating spider monkey optimization (SMO), the firefly optimization algorithm (FA), and the adaptive concept. The input data are initially given to the master node of the Spark framework. Feature selection is carried out using Kendall's correlation to select the appropriate features for further processing. Then, the selected unique features are given to the master node to perform churn prediction. Here, the churn prediction is made using a deep convolutional neural network (DCNN), which is trained by the proposed adaptive FSO algorithm. Moreover, the developed model obtained better performance on metrics such as the Dice coefficient, accuracy, and Jaccard coefficient when varying the training data percentage and the selected features. Thus, the proposed adaptive FSO-based DCNN showed improved results, with a Dice coefficient of 99.76%, an accuracy of 98.65%, and a Jaccard coefficient of 99.52%.
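A minimal sketch of the Kendall-correlation feature-selection step (the column names and data are invented for illustration, not the telecom dataset used in the paper): rank features by the absolute Kendall tau with the churn label and keep the top-k.

```python
# Minimal sketch: Kendall-correlation ranking of features against a churn label.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({                       # hypothetical telecom-style features
    "calls": rng.integers(0, 100, 200),
    "minutes": rng.random(200) * 500,
    "complaints": rng.integers(0, 5, 200),
    "churn": rng.integers(0, 2, 200),
})

tau = df.drop(columns="churn").corrwith(df["churn"], method="kendall").abs()
selected = tau.sort_values(ascending=False).head(2).index.tolist()
print("selected features:", selected)
```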
Citations: 0
Real-Time Human Action Recognition Using Deep Learning Architecture
Pub Date : 2021-11-17 DOI: 10.1142/s1469026821500267
S. Kahlouche, M. Belhocine, Abdallah Menouar
In this work, an efficient human activity recognition (HAR) algorithm based on a deep learning architecture is proposed to classify activities into seven different classes. In order to learn spatial and temporal features from only the 3D skeleton data captured by a Microsoft Kinect camera, the proposed algorithm combines both convolutional neural network (CNN) and long short-term memory (LSTM) architectures. This combination takes advantage of the LSTM in modeling temporal data and of the CNN in modeling spatial data. The captured skeleton sequences are used to create a specific dataset of interactive activities; these data are then transformed according to a view-invariance and a symmetry criterion. To demonstrate the effectiveness of the developed algorithm, it has been tested on several public datasets, where it has matched and sometimes surpassed state-of-the-art performance. In order to verify the uncertainty of the proposed algorithm, some tools are provided and discussed to ensure its efficiency for continuous human action recognition in real time.
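A minimal Keras sketch of a CNN + LSTM classifier over skeleton sequences (the sequence length, joint count, and layer sizes are assumptions that follow the description loosely; only the seven-class output is taken from the abstract):

```python
# Minimal sketch: 1D convolutions over per-frame joint coordinates followed by an
# LSTM for temporal modeling and a 7-way softmax classifier.
import tensorflow as tf

n_frames, n_joints = 60, 25            # assumed sequence length and Kinect joints
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu", padding="same",
                           input_shape=(n_frames, n_joints * 3)),  # x, y, z per joint
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu", padding="same"),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(7, activation="softmax"),                # seven activity classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```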
Citations: 1
Quadratic Convex Reformulation for Solving Task Assignment Problem with Continuous Hopfield Network
Pub Date : 2021-11-11 DOI: 10.1142/s1469026821500243
Youssef Hami, Chakir Loqman
This research addresses the optimal allocation of tasks to processors in order to minimize the total costs of execution and communication. This problem is called the Task Assignment Problem (TAP) with nonuniform communication costs. To solve it, the first step concerns the formulation of the problem as an equivalent zero-one quadratic program with a convex objective function, using a convexification technique based on the smallest eigenvalue. The second step concerns the application of the Continuous Hopfield Network (CHN) to solve the resulting problem. Calculation results are presented for instances from the literature and compared to solutions obtained by both the CPLEX solver and a heuristic genetic algorithm; they show an improvement in the results obtained by applying only the CHN algorithm. The proposed approach confirms the efficiency of the theoretical results and achieves optimal solutions in a short calculation time.
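The eigenvalue-based convexification can be illustrated in a few lines (a sketch of the standard trick on random data, not the full reformulation used in the paper): for binary x, x_i^2 = x_i, so subtracting the smallest eigenvalue from the diagonal of Q and compensating in the linear term leaves the objective unchanged on binary points while making the quadratic form convex.

```python
# Minimal sketch: make a 0-1 quadratic objective convex via the smallest eigenvalue.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
Q = (A + A.T) / 2                       # symmetric, generally indefinite
c = rng.standard_normal(5)

mu = min(np.linalg.eigvalsh(Q).min(), 0.0)
Q_convex = Q - mu * np.eye(5)           # eigenvalues shifted to be non-negative
c_adjusted = c + mu * np.ones(5)        # compensation, valid because x_i^2 = x_i

x = rng.integers(0, 2, 5)               # any binary point gives the same objective value
print(x @ Q @ x + c @ x, x @ Q_convex @ x + c_adjusted @ x)
```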
Citations: 0
Optimal Sizing and Location of Distributed Generators for Power Flow Analysis in Smart Grid Using IAS-MVPA Strategy
Pub Date : 2021-11-06 DOI: 10.1142/s1469026821500279
Kumar Cherukupalli, N. VijayaAnand
In this paper, the optimal distributed generation (DG) size and location for power flow analysis in the smart grid using a hybrid method are proposed. The proposed hybrid method combines the Interactive Autodidactic School (IAS) and the Most Valuable Player Algorithm (MVPA) and is named the IAS-MVPA method. The main aim of this work is to reduce line losses and total harmonic distortion (THD) and, likewise, to recover the voltage profile of the system through the optimal location and size of the distributed generators and the optimal rearrangement of the network. Here, the IAS-MVPA method is utilized as a rectification tool to obtain the maximum DG size and the maximal reconfiguration of the network under environmental load variation. In case of failure, the IAS method is utilized to optimize the DG location. The IAS chooses the line with the maximal power loss as the optimal location to place the DG, based on the objective function. The fault violates the equality and inequality restrictions of the safe-limit system. From the control parameters, the low-voltage drift is improved using the MVPA method. The low-voltage deviation is exploited to obtain the maximum capacity of the DG. After that, the maximum capacity is used at the optimal location, which improves the power flow of the system. The proposed system is implemented on the MATLAB/Simulink platform, and its effectiveness is assessed by comparing it with various existing methods such as the genetic algorithm (GA), the cuttlefish algorithm (CFA), the adaptive grasshopper optimization algorithm (AGOA), and the artificial neural network (ANN).
Citations: 0
Optimization of Interval Type-2 Intuitionistic Fuzzy Logic System for Prediction Problems
Pub Date : 2021-11-03 DOI: 10.1142/s146902682150022x
Imo J. Eyoh, J. Eyoh, U. Umoh, R. Kalawsky
Derivative-based algorithms have been adopted in the literature to optimize the membership and non-membership function parameters of interval type-2 (T2) intuitionistic fuzzy logic systems (FLSs). In this study, a non-derivative-based algorithm called the sliding mode control learning algorithm is proposed, for the first time, to tune the parameters of an interval T2 intuitionistic FLS. The proposed rule-based learning system employs Takagi–Sugeno–Kang inference with an artificial neural network to pilot the learning process. The new learning system is evaluated on several nonlinear prediction problems. Analyses of the results reveal that the proposed learning apparatus outperforms its type-1 version and many existing solutions in the literature, and competes favorably with others on the investigated problem instances at a low cost in terms of running time.
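A minimal sketch of interval type-2 intuitionistic membership evaluation (illustrative only; the Gaussian form, the uncertain standard deviation, and the hesitation margin are assumptions, not the system optimized in the paper): the uncertain spread yields a lower and an upper membership, and a non-membership is taken as the complement of the upper bound minus a hesitation margin.

```python
# Minimal sketch: interval type-2 membership with an intuitionistic non-membership.
import numpy as np

def it2_membership(x, center=0.0, sigma_lower=0.8, sigma_upper=1.2, hesitation=0.05):
    lower = np.exp(-0.5 * ((x - center) / sigma_lower) ** 2)   # narrower Gaussian
    upper = np.exp(-0.5 * ((x - center) / sigma_upper) ** 2)   # wider Gaussian
    non_membership = np.clip(1.0 - upper - hesitation, 0.0, 1.0)
    return lower, upper, non_membership

print(it2_membership(0.5))
```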
Citations: 1