Diabetic retinopathy (DR) is one of the worst complications caused by diabetes mellitus (DM). DR can leave the patient completely blind because it may show no symptoms in its initial stages. Expert physicians have been developing technologies for early detection and classification of DR to curb the growing number of affected patients. Some authors have used convolutional neural networks (CNNs) for this purpose. Pre-processing the database is important for increasing the detection accuracy of a CNN, and an optimization algorithm can increase that accuracy further. In this work, four pre-processing methods are compared in order to select the best one. A hierarchical genetic algorithm (HGA) is then combined with the selected pre-processing method with the intention of increasing the classification accuracy of a new CNN model. Using the HGA improves on the accuracies obtained with pre-processing alone and outperforms the results obtained by other authors. In the binary study case (detection of DR), the method achieved a highest accuracy of 0.9781, a mean accuracy of 0.9650, and a standard deviation of 0.007665. In the multi-class study case (classification of DR), it achieved a highest accuracy of 0.7762, a mean accuracy of 0.7596, and a standard deviation of 0.009948.
{"title":"Hierarchical genetic optimization of convolutional neural models for diabetic retinopathy classification","authors":"Rodrigo Cordero-Martínez, D. Sánchez, P. Melin","doi":"10.3233/his-220004","DOIUrl":"https://doi.org/10.3233/his-220004","url":null,"abstract":"Diabetic retinopathy (DR) is one of the worse conditions caused by diabetes mellitus (DM). DR can leave the patient completely blind because it may have no symptoms in its initial stages. Expert physicians have been developing technologies for early detection and classification of DR to prevent the increasing number of patients. Some authors have used convolutional neural networks for this purpose. Pre-processing methods for database are important to increase the accuracy detection of CNN, and the use for an optimization algorithm can further increase that accuracy. In this work, four pre-processing methods are presented to compare them and select the best one. Then the use of a hierarchical genetic algorithm (HGA) with the pre-processing method is done with the intention of increasing the classification accuracy of a new CNN model. Using the HGA increases the accuracies obtained by the pre-processing methods and outperforms the results obtained by other authors. In the binary study case (detection of DR) a 0.9781 in the highest accuracy was achieved, a 0.9650 in mean accuracy and 0.007665 in standard deviation. In the multi-class study case (classification of DR) a 0.7762 in the highest accuracy, 0.7596 in mean accuracy and 0.009948 in standard deviation.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"38 1","pages":"97-109"},"PeriodicalIF":0.0,"publicationDate":"2022-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76055440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Authorship attribution is the process of determining and/or identifying the author of a given text document. The relevance of this research area comes to the fore when two or more writers claim to be the prospective authors of an unidentified or anonymous text document, or are unwilling to accept any authorship. This research work aims to utilize various machine learning techniques to solve the problem of author identification. In the proposed approach, a number of textual features such as token n-grams, stylometric features, bag-of-words and TF-IDF have been extracted. Experimentation has been performed on three datasets, viz. the Spooky Author Identification dataset, the Reuter_50_50 dataset and a Manual dataset, with three different train-test split ratios, viz. 80-20, 70-30 and 66.67-33.33. Models have been built and tested with supervised learning algorithms such as Naïve Bayes, Support Vector Machine, K-Nearest Neighbor, Decision Tree and Random Forest. The proposed system yields promising results. For the Spooky dataset, the best accuracy score obtained is 84.14%, with bag-of-words using the Naïve Bayes classifier. The best accuracy score of 86.2% is computed for the Reuter_50_50 dataset with the 2100 most frequent words when the classifier used is Support Vector Machine. For the Manual dataset, the best score of 96.67% is obtained using the Naïve Bayes classification model with both 5-fold and 10-fold cross-validation, when syntactic features and the 600 most frequent unigrams are used in combination.
{"title":"Machine learning-based authorship attribution using token n-grams and other time tested features","authors":"S. Gupta, Swarupa Das, Jyotish Ranjan Mallik","doi":"10.3233/his-220005","DOIUrl":"https://doi.org/10.3233/his-220005","url":null,"abstract":"Authorship Attribution is a process to determine and/or identify the author of a given text document. The relevance of this research area comes to the fore when two or more writers claim to be the prospective authors of an unidentified or anonymous text document or are unwilling to accept any authorship. This research work aims to utilize various Machine Learning techniques in order to solve the problem of author identification. In the proposed approach, a number of textual features such as Token n-grams, Stylometric features, bag-of-words and TF-IDF have been extracted. Experimentation has been performed on three datasets viz. Spooky Author Identification dataset, Reuter_50_50 dataset and Manual dataset with 3 different train-test split ratios viz. 80-20, 70-30 and 66.67-33.33. Models have been built and tested with supervised learning algorithms such as Naive Bayes, Support Vector Machine, K-Nearest Neighbor, Decision Tree and Random Forest. The proposed system yields promising results. For the Spooky dataset, the best accuracy score obtained is 84.14% with bag-of-words using Naïve Bayes classifier. The best accuracy score of 86.2% is computed for the Reuter_50_50 dataset with 2100 most frequent words when the classifier used is Support Vector Machine. For the Manual dataset, the best score of 96.67% is obtained using the Naïve Bayes Classification Model with both 5-fold and 10-fold cross validation when both syntactic features and 600 most frequent unigrams are used in combination.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"53 1","pages":"37-51"},"PeriodicalIF":0.0,"publicationDate":"2022-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75998567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hub Location Problems (HLP) have gathered great interest due to their complexity and their many applications in industry, such as aviation, public transportation and telecommunications, among others. The HLP has many variants regarding allocation (single or multiple) and capacity (uncapacitated or capacitated). This paper addresses a variant of the HLP encompassing single allocation with capacity constraints. The objective of the Capacitated Single Allocation p-Hub Location Problem (CSApHLP) consists of determining the set of p hubs in a network that minimizes the total cost of allocating all the non-hub nodes to the p hubs. In this work, a sophisticated RAMP (Relaxation Adaptive Memory Programming) approach (PD-RAMP) is proposed to improve on the results previously obtained by the simple version (Dual-RAMP). In addition, a parallel implementation is conducted to assess the effectiveness of a parallel RAMP model applied to the CSApHLP. The first algorithm, the sequential PD-RAMP, incorporates Dual-RAMP with a Scatter Search procedure to create a primal-dual RAMP approach. The second algorithm, the parallel PD-RAMP, also takes advantage of the dual and primal sides, parallelizing the primal side of the problem and interconnecting both sides as in the sequential RAMP algorithm. The quality of the results obtained on a standard testbed shows that the PD-RAMP approach managed to improve on the state-of-the-art algorithms for the CSApHLP.
{"title":"Primal-dual algorithms for the Capacitated Single Allocation p-Hub Location Problem","authors":"Telmo Matos","doi":"10.3233/his-220003","DOIUrl":"https://doi.org/10.3233/his-220003","url":null,"abstract":"The Hub Location Problems (HLP) have gathered great interest due to the complexity and to the many applications in industry such as aviation, public transportation, telecommunications, among others. The HLP have many variants regarding allocation (single or multiple) and capacity (uncapacitated or capacitated). This paper presents a variant of the HLP, encompassing single allocation with capacity constraints. The Capacitated Single Allocation p-Hub Location Problem (CSApHLP) objective consists on determine the set of p hubs in a network that minimizes the total cost of allocating all the non-hub nodes to the p hubs. In this work, it is proposed a sophisticated RAMP approach (PD-RAMP) to improve the results obtained previously by the simple version (Dual-RAMP). Thus, a parallel implementation is conducted to assess the effectiveness of a parallel RAMP model applied to the CSApHLP. The first algorithm, the sequential PD-RAMP, incorporates Dual-RAMP with a Scatter Search procedure to create a Primal-Dual RAMP approach. The second algorithm, the parallel PD-RAMP, also take advantage of the dual and primal, parallelizing the primal side of the problem and interconnecting both sides as it is expected in the RAMP sequential algorithm. The quality of the results carried out on a standard testbed shows that the PD-RAMP approach managed to improve the state-of-the-art algorithms for the CSApHLP.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"122 1","pages":"1-17"},"PeriodicalIF":0.0,"publicationDate":"2022-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87322565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this research, we assess the impact of collisions produced by simultaneous transmissions using the same Spreading Factor (SF) over the same channel in LoRa networks, demonstrating that such collisions significantly impair LoRa network performance. We quantify the network performance gains obtained by combining the primary characteristics of the Capture Effect (CE) and Signature Code (SC) approaches. The system is analyzed using a Markov chain model, which allows us to construct the mathematical formulation of the performance measures. Our numerical findings reveal that the proposed approach surpasses standard LoRa in terms of network throughput and transmitted packet latency.
{"title":"Analysis and performance optimization of LoRa network using the CE & SC hybrid approach","authors":"Abdellah Amzil, Abdessamad Bellouch, Ahmed Boujnoui, Mohamed Hanini, Abdellah Zaaloul","doi":"10.3233/his-220007","DOIUrl":"https://doi.org/10.3233/his-220007","url":null,"abstract":"In this research, we assess the impact of collisions produced by simultaneous transmission using the same Spreading Factor (SF) and over the same channel in LoRa networks, demonstrating that such collisions significantly impair LoRa network performance. We quantify the network performance advantages by combining the primary characteristics of the Capture Effect (CE) and Signature Code (SC) approaches. The system is analyzed using a Markov chain model, which allows us to construct the mathematical formulation for the performance measures. Our numerical findings reveal that the proposed approach surpasses the standard LoRa in terms of network throughput and transmitted packet latency.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"1 1","pages":"53-68"},"PeriodicalIF":0.0,"publicationDate":"2022-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72767777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient and accurate early prediction of Alzheimer's disease (AD) based on neuroimaging data has attracted the interest of many researchers seeking to prevent its progression. Deep learning networks have demonstrated an optimal ability to analyse large-scale multimodal neuroimaging for AD classification. The most widely used deep learning architecture is the convolutional neural network (CNN), which has shown great potential in AD detection. However, CNNs do not capture long-range dependencies within the input image and do not ensure good global feature extraction. Furthermore, increasing the receptive field of a CNN by increasing the kernel sizes can cause a loss of feature granularity. Another limitation is that CNNs lack a weighting mechanism for image features; the network does not focus on the relevant features within the image. Recently, vision transformers have shown outstanding performance over CNNs and overcome their main limitations. The vision transformer relies on self-attention layers. The main drawback of this new technique is that it requires a huge amount of training data. In this paper, we combine the main strengths of these two architectures for AD classification. We propose a new method based on the combination of CrossViT and a Wide Residual Squeeze-and-Excitation Network. We acquired MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies (OASIS). We also propose a new data augmentation based on a self-attention progressive generative adversarial network (ProGAN) to overcome the limitation of the data. Our proposed method achieved 99% classification accuracy and outperforms CNN models.
{"title":"CrossViT Wide Residual Squeeze-and-Excitation Network for Alzheimer's disease classification with self attention ProGAN data augmentation","authors":"Rahma Kadri, Bassem Bouaziz, M. Tmar, F. Gargouri","doi":"10.3233/his-220002","DOIUrl":"https://doi.org/10.3233/his-220002","url":null,"abstract":"Efficient and accurate early prediction of Alzheimer's disease (AD) based on the neuroimaging data has attracted interest from many researchers to prevent its progression. Deep learning networks have demonstrated an optimal ability to analyse large-scale multimodal neuroimaging for AD classification. The most widely used architecture of deep learning is the Convolution neural networks (CNN) that have shown great potential in AD detection. However CNN does not capture long range dependencies within the input image and does not ensure a good global feature extraction. Furthermore, increasing the receptive field of CNN by increasing the kernels sizes can cause a feature granularity loss. Another limitation is that CNN lacks a weighing mechanism of image features; the network doesn’t focus on the relevant features within the image. Recently,vision transformer have shown an outstanding performance over the CNN and overcomes its main limitations. The vision transformer relies on the self-attention layers. The main drawbacks of this new technique is that it requires a huge amount of training data. In this paper, we combined the main strengths of these two architectures for AD classification. We proposed a new method based on the combination of the Cross ViT and Wide Residual Squeeze-and-Excitation Network. We acquired MRI data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies (OASIS). We also proposed a new data augmentation based on the self attention progressive generative adversarial neural network to overcome the limitation of the data. Our proposed method achieved 99% classification accuracy and outperforms CNN models.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"24 1","pages":"163-177"},"PeriodicalIF":0.0,"publicationDate":"2022-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84410445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D augmented reality (AR) has a photometric aspect, 3D rendering, and a geometric aspect, camera tracking. In this paper, we discuss the second aspect, which involves feature matching for stable 3D object insertion. We present the different types of image matching approaches, starting from handcrafted feature algorithms and machine learning methods, through recent deep learning approaches using various types of CNN architectures, to more modern end-to-end models. A comparison of these methods is performed according to criteria of real-time performance and accuracy, to allow the choice of the most relevant methods for a 3D AR system.
{"title":"Feature matching for 3D AR: Review from handcrafted methods to deep learning","authors":"Houssam Halmaoui, A. Haqiq","doi":"10.3233/his-220001","DOIUrl":"https://doi.org/10.3233/his-220001","url":null,"abstract":"3D augmented reality (AR) has a photometric aspect of 3D rendering and a geometric aspect of camera tracking. In this paper, we will discuss the second aspect, which involves feature matching for stable 3D object insertion. We present the different types of image matching approaches, starting from handcrafted feature algorithms and machine learning methods, to recent deep learning approaches using various types of CNN architectures, and more modern end-to-end models. A comparison of these methods is performed according to criteria of real time and accuracy, to allow the choice of the most relevant methods for a 3D AR system.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"21 1","pages":"143-162"},"PeriodicalIF":0.0,"publicationDate":"2022-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83232206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of optimization algorithms based on swarm intelligence applied to convolutional neural networks for face recognition","authors":"P. Melin, D. Sánchez, O. Castillo","doi":"10.3233/HIS-220010","DOIUrl":"https://doi.org/10.3233/HIS-220010","url":null,"abstract":"","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"78 1","pages":"161-171"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83885678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Since December 2019, we have witnessed the appearance of a new virus, known as COVID-19, which has spread throughout the world. Today, everyone attaches major importance to this new virus. Although we have little knowledge of the disease, doctors and specialists make decisions every day that have a significant impact on public health. There are many and varied open data in this context, which are scattered and distributed. We therefore need to consolidate all this information in a data warehouse. To that end, in this paper we propose an approach to create a data warehouse from open data, specifically from COVID-19 data. We start with the identification of the relevant sources among the various open data. Then, we collect the pertinent data. After that, we identify the multidimensional concepts used to design the data warehouse schema related to COVID-19 data. Finally, we transform our data warehouse into a logical model and create our NoSQL data warehouse with Talend Open Studio for Big Data (TOS_BD).
{"title":"Towards data warehouse from open data: Case of COVID-19","authors":"Senda Bouaziz, Ahlem Nabli, F. Gargouri","doi":"10.3233/his-210010","DOIUrl":"https://doi.org/10.3233/his-210010","url":null,"abstract":"Since December 2019, we have detected the appearance of a new virus called COVID-19, which has spread, throughout the world. Everyone today, has given major importance to this new virus. Although we have little knowledge of the disease, doctors and specialists make decisions every day that have a significant impact on public health. There are many and various open data in this context, which are scattered and distributed. For this, we need to capitalize all the information in a data warehouse. For that, in this paper, we propose an approach to create a data warehouse from open data specifically from COVID-19 data. We start with the identification of the relevant sources from the various open data. Then, we collect the pertinent data. After that, we identify the multidimensional concepts used to design the data warehouse schema related to COVID-19 data. Finally, we transform our data warehouse to logical model and create our NoSQL data warehouse with Talend Open Studio for Big Data (TOS_BD).","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"15 1","pages":"129-142"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86136032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper discusses an anisotropic interpolation model that fills in depth data in largely empty regions of a depth map. We consider an image with an anisotropic metric g_ij that incorporates spatial and photometric data. We propose a numerical implementation of our model based on the "eikonal" operator, which computes the solution of a degenerate partial differential equation (the biased Infinity Laplacian, or biased Absolutely Minimizing Lipschitz Extension). This equation's solution creates exponential cones based on the available data, extending the available depth data and completing the depth map image. Because of this, the operator is better suited to interpolating smooth surfaces. To perform this task, we assume we have at our disposal a reference color image and a depth map. We carried out an experimental comparison of the AMLE and bAMLE using various metrics with square root, absolute value, and quadratic terms. In these experiments, the color spaces considered were sRGB, XYZ, CIE-L*a*b*, and CMY. In this document, we also present a proposal to extend the AMLE and bAMLE to the time domain. Finally, for the parameter estimation of the model, we compared EHO and PSO. The combination of sRGB and the square root metric produces the best results, demonstrating that our bAMLE model outperforms the AMLE model and other contemporary models on the KITTI depth completion suite dataset. Models of this type, such as AMLE and bAMLE, are simple to implement and represent a low-cost option for similar applications.
{"title":"Comparing different metrics on an anisotropic depth completion model","authors":"V. Lazcano, F. Calderero, C. Ballester","doi":"10.3233/his-210006","DOIUrl":"https://doi.org/10.3233/his-210006","url":null,"abstract":"This paper discussed an anisotropic interpolation model that filling in-depth data in a largely empty region of a depth map. We consider an image with an anisotropic metric gij that incorporates spatial and photometric data. We propose a numerical implementation of our model based on the “eikonal” operator, which compute the solution of a degenerated partial differential equation (the biased Infinity Laplacian or biased Absolutely Minimizing Lipschitz Extension). This equation’s solution creates exponential cones based on the available data, extending the available depth data and completing the depth map image. Because of this, this operator is better suited to interpolating smooth surfaces. To perform this task, we assume we have at our disposal a reference color image and a depth map. We carried out an experimental comparison of the AMLE and bAMLE using various metrics with square root, absolute value, and quadratic terms. In these experiments, considered color spaces were sRGB, XYZ, CIE-L*a*b*, and CMY. In this document, we also presented a proposal to extend the AMLE and bAMLE to the time domain. Finally, in the parameter estimation of the model, we compared EHO and PSO. The combination of sRGB and square root metric produces the best results, demonstrating that our bAMLE model outperforms the AMLE model and other contemporary models in the KITTI depth completion suite dataset. This type of model, such as AMLE and bAMLE, is simple to implement and represents a low-cost implementation option for similar applications.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"27 1","pages":"87-99"},"PeriodicalIF":0.0,"publicationDate":"2021-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76612895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article investigates the enhancement of a vector-evaluated-based adaptive metaheuristic for solving two multiobjective problems, environmental-economic dispatch and portfolio optimization. The idea is to evolve two populations independently and exchange information between them, i.e., the first population evolves according to the best individual of the second population and vice versa. The choice of which algorithm will be executed in each generation is carried out stochastically among three evolutionary algorithms well known in the literature: PSO, DE, and ABC. To assess the results, we used an established metric for multiobjective evolutionary algorithms called hypervolume. Tests on these problems have shown that the new approach reaches the best hypervolumes on power systems comprising six and forty generators and on five different portfolio optimization datasets. The experiments were performed 31 times, using 250, 500, and 1000 iterations for both problems. Results have also shown that our proposal tends to outperform a variation of a hybrid SPEA2, in both its cooperative and competitive versions.
{"title":"An enhanced auto adaptive vector evaluated-based metaheuristic for solving real-world problems","authors":"L. F. Costa, O. Cortes, João Pedro Augusto Costa","doi":"10.3233/his-210007","DOIUrl":"https://doi.org/10.3233/his-210007","url":null,"abstract":"This article investigates the enhancement of a vector evaluat-ed-based adaptive metaheuristics for solving two multiobjective problems called environmental-economic dispatch and portfolio optimization. The idea is to evolve two populations independently, and exchange information between them, i.e., the first population evolves according to the best individual of the second population and vice-versa. The choice of which algorithm will be executed on each generation is carried out stochastically among three evolutionary algorithms well-known in the literature: PSO, DE, ABC. To assess the results, we used an established metric in multiobjective evolutionary algorithms called hypervolume. Tests solving the referred problem have shown that the new approach reaches the best hypervolumes in power systems comprised of six and forty generators and five different datasets of portfolio optimization. The experiments were performed 31 times, using 250, 500, and 1000 iterations in both problems. Results have also shown that our proposal tends to overcome a variation of a hybrid SPEA2 compared to their cooperative and competitive approaches.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"9 1","pages":"101-112"},"PeriodicalIF":0.0,"publicationDate":"2021-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81084229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}