
Latest articles from International journal of hybrid intelligent systems

Hierarchical genetic optimization of convolutional neural models for diabetic retinopathy classification
Pub Date : 2022-04-21 DOI: 10.3233/his-220004
Rodrigo Cordero-Martínez, D. Sánchez, P. Melin
Diabetic retinopathy (DR) is one of the worst conditions caused by diabetes mellitus (DM). DR can leave a patient completely blind because it may show no symptoms in its initial stages. Physicians have been developing technologies for early detection and classification of DR to curb the growing number of affected patients, and several authors have used convolutional neural networks (CNNs) for this purpose. Pre-processing of the image database is important for increasing the detection accuracy of a CNN, and an optimization algorithm can raise that accuracy further. In this work, four pre-processing methods are compared and the best one is selected. A hierarchical genetic algorithm (HGA) is then combined with that pre-processing method to increase the classification accuracy of a new CNN model. Using the HGA improves the accuracies obtained with pre-processing alone and outperforms results reported by other authors. In the binary case (detection of DR), the method achieved a highest accuracy of 0.9781, a mean accuracy of 0.9650 and a standard deviation of 0.007665; in the multi-class case (classification of DR), a highest accuracy of 0.7762, a mean accuracy of 0.7596 and a standard deviation of 0.009948.
Pages: 97-109
Citations: 4
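The genetic search the abstract describes can be pictured with a toy sketch. Below, a plain (non-hierarchical) generational GA evolves hypothetical CNN hyperparameters; the gene space and the surrogate fitness are illustrative assumptions, since in the paper the fitness would be the validation accuracy of a trained CNN and the encoding is hierarchical rather than flat.

```python
import random

# Hypothetical chromosome: CNN depth, filters per layer, dense units.
GENE_SPACE = {
    "conv_layers": [1, 2, 3, 4],
    "filters": [8, 16, 32, 64],
    "dense_units": [32, 64, 128, 256],
}

def surrogate_fitness(ind):
    # Stand-in for train-and-validate: rewards mid-sized architectures.
    return (0.9
            - 0.01 * abs(ind["conv_layers"] - 3)
            - 0.0005 * abs(ind["filters"] - 32)
            - 0.0001 * abs(ind["dense_units"] - 128))

def evolve(generations=20, pop_size=10, seed=42):
    rng = random.Random(seed)
    pop = [{k: rng.choice(v) for k, v in GENE_SPACE.items()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate_fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # Uniform crossover over the gene dictionary.
            child = {k: rng.choice([a[k], b[k]]) for k in GENE_SPACE}
            if rng.random() < 0.2:              # point mutation
                gene = rng.choice(list(GENE_SPACE))
                child[gene] = rng.choice(GENE_SPACE[gene])
            children.append(child)
        pop = parents + children
    return max(pop, key=surrogate_fitness)

best = evolve()
```

In the hierarchical variant, additional control genes would switch whole groups of these parametric genes on or off; this flat sketch omits that layer.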
Machine learning-based authorship attribution using token n-grams and other time tested features
Pub Date : 2022-04-21 DOI: 10.3233/his-220005
S. Gupta, Swarupa Das, Jyotish Ranjan Mallik
Authorship attribution is the process of determining and/or identifying the author of a given text document. The relevance of this research area comes to the fore when two or more writers claim to be the prospective authors of an unidentified or anonymous text document, or when none is willing to accept authorship. This work aims to apply various machine learning techniques to the problem of author identification. In the proposed approach, a number of textual features are extracted, such as token n-grams, stylometric features, bag-of-words and TF-IDF. Experiments were performed on three datasets, viz. the Spooky Author Identification dataset, the Reuter_50_50 dataset and a Manual dataset, with three train-test split ratios: 80-20, 70-30 and 66.67-33.33. Models were built and tested with supervised learning algorithms such as Naive Bayes, Support Vector Machine, K-Nearest Neighbor, Decision Tree and Random Forest. The proposed system yields promising results. For the Spooky dataset, the best accuracy obtained is 84.14%, with bag-of-words features and the Naïve Bayes classifier. The best accuracy for the Reuter_50_50 dataset is 86.2%, with the 2100 most frequent words and a Support Vector Machine classifier. For the Manual dataset, the best score of 96.67% is obtained with the Naïve Bayes model under both 5-fold and 10-fold cross validation, when syntactic features and the 600 most frequent unigrams are used in combination.
Pages: 37-51
Citations: 0
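As a flavor of the token n-gram approach, here is a minimal multinomial Naive Bayes author classifier over token bigrams, built with the standard library only; the tiny corpus, add-one smoothing and feature set are illustrative assumptions, not the paper's exact configuration.

```python
from collections import Counter
import math

def token_ngrams(text, n=2):
    """Contiguous token n-grams of a whitespace-tokenized, lowercased text."""
    toks = text.lower().split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

class NaiveBayesAuthor:
    """Multinomial Naive Bayes over token bigram counts, add-one smoothing."""

    def fit(self, docs, authors):
        self.doc_counts = Counter(authors)          # documents per author
        self.n_docs = len(docs)
        self.counts = {a: Counter() for a in self.doc_counts}
        for doc, author in zip(docs, authors):
            self.counts[author].update(token_ngrams(doc))
        self.vocab = set().union(*self.counts.values())
        return self

    def predict(self, doc):
        grams = token_ngrams(doc)
        best_author, best_lp = None, -math.inf
        for author, counts in self.counts.items():
            lp = math.log(self.doc_counts[author] / self.n_docs)  # log prior
            total = sum(counts.values())
            for g in grams:   # smoothed log likelihood of each bigram
                lp += math.log((counts[g] + 1) / (total + len(self.vocab)))
            if lp > best_lp:
                best_author, best_lp = author, lp
        return best_author

# Illustrative two-author corpus.
docs = [
    "the old sea calls to me at night",
    "the old sea sings beneath the cliffs",
    "gradient descent updates the model weights",
    "the model weights converge after training",
]
authors = ["poet", "poet", "engineer", "engineer"]
clf = NaiveBayesAuthor().fit(docs, authors)
```

The stylometric and TF-IDF features of the paper would slot in as additional feature extractors feeding the same classifier interface.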
Primal-dual algorithms for the Capacitated Single Allocation p-Hub Location Problem
Pub Date : 2022-04-21 DOI: 10.3233/his-220003
Telmo Matos
Hub Location Problems (HLP) have attracted great interest due to their complexity and their many industrial applications, such as aviation, public transportation and telecommunications. The HLP has many variants regarding allocation (single or multiple) and capacity (uncapacitated or capacitated). This paper addresses a variant with single allocation and capacity constraints. The objective of the Capacitated Single Allocation p-Hub Location Problem (CSApHLP) is to determine the set of p hubs in a network that minimizes the total cost of allocating all non-hub nodes to those p hubs. In this work, a sophisticated RAMP approach (PD-RAMP) is proposed to improve the results previously obtained by the simpler Dual-RAMP version, and a parallel implementation is developed to assess the effectiveness of a parallel RAMP model applied to the CSApHLP. The first algorithm, the sequential PD-RAMP, combines Dual-RAMP with a Scatter Search procedure to create a primal-dual RAMP approach. The second, the parallel PD-RAMP, also exploits both the dual and the primal sides, parallelizing the primal side of the problem and interconnecting the two sides as in the sequential RAMP algorithm. Results on a standard testbed show that the PD-RAMP approach improves on the state-of-the-art algorithms for the CSApHLP.
Pages: 1-17
Citations: 1
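The allocation side of the problem can be illustrated with a small greedy feasibility heuristic: assign each non-hub node to its cheapest hub that still has spare capacity. This only sketches the constraint structure, not the RAMP/Scatter Search method of the paper.

```python
def allocate(nodes, hubs, demand, capacity, cost):
    """Greedily assign each non-hub node to the cheapest hub with spare capacity.

    nodes/hubs: lists of ids; demand: node -> flow; capacity: hub -> limit;
    cost: (node, hub) -> allocation cost. A feasibility heuristic only; it may
    fail on instances a full search would solve.
    """
    load = {h: 0 for h in hubs}
    assignment = {}
    # Place hardest-to-fit nodes (largest demand) first.
    for v in sorted(nodes, key=lambda v: -demand[v]):
        feasible = [h for h in hubs if load[h] + demand[v] <= capacity[h]]
        if not feasible:
            raise ValueError(f"no hub can take node {v}")
        h = min(feasible, key=lambda hub: cost[(v, hub)])
        assignment[v] = h
        load[h] += demand[v]
    return assignment, load
```

A metaheuristic such as PD-RAMP would explore many hub sets, using a routine like this (or an exact assignment step) to evaluate each candidate set of p hubs.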
Analysis and performance optimization of LoRa network using the CE & SC hybrid approach
Pub Date : 2022-04-21 DOI: 10.3233/his-220007
Abdellah Amzil, Abdessamad Bellouch, Ahmed Boujnoui, Mohamed Hanini, Abdellah Zaaloul
In this research, we assess the impact of collisions produced by simultaneous transmissions using the same Spreading Factor (SF) over the same channel in LoRa networks, demonstrating that such collisions significantly impair LoRa network performance. We quantify the network performance gains obtained by combining the primary characteristics of the Capture Effect (CE) and Signature Code (SC) approaches. The system is analyzed using a Markov chain model, which allows us to construct a mathematical formulation of the performance measures. Our numerical findings reveal that the proposed approach surpasses standard LoRa in terms of network throughput and transmitted packet latency.
Pages: 53-68
Citations: 0
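The collision setting can be sketched with a slotted-ALOHA-style calculation: a frame survives a slot if exactly one node transmits on the same SF and channel, and a simplified capture-effect term lets one frame survive a collision with some probability. These closed forms are textbook approximations, not the paper's Markov-chain model.

```python
from math import comb

def p_success_no_capture(n, p):
    """P(exactly one of n nodes transmits in a slot): n * p * (1-p)**(n-1)."""
    return n * p * (1 - p) ** (n - 1)

def p_success_capture(n, p, p_cap):
    """Same slot model plus a simplified capture effect: when k >= 2 nodes
    collide, one frame is still decoded with probability p_cap (an assumed
    constant; in reality it depends on received power differences)."""
    s = p_success_no_capture(n, p)
    for k in range(2, n + 1):
        s += comb(n, k) * p ** k * (1 - p) ** (n - k) * p_cap
    return s
```

Setting `p_cap = 0` recovers the pure-collision model, so the capture term can only increase the per-slot success probability, in line with the abstract's claim.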
CrossViT Wide Residual Squeeze-and-Excitation Network for Alzheimer's disease classification with self attention ProGAN data augmentation
Pub Date : 2022-04-11 DOI: 10.3233/his-220002
Rahma Kadri, Bassem Bouaziz, M. Tmar, F. Gargouri
Efficient and accurate early prediction of Alzheimer's disease (AD) from neuroimaging data has attracted the interest of many researchers seeking to prevent its progression. Deep learning networks have demonstrated a strong ability to analyse large-scale multimodal neuroimaging for AD classification. The most widely used deep learning architecture is the convolutional neural network (CNN), which has shown great potential in AD detection. However, a CNN does not capture long-range dependencies within the input image and does not ensure good global feature extraction. Furthermore, increasing the receptive field of a CNN by enlarging kernel sizes can cause a loss of feature granularity. Another limitation is that a CNN lacks a weighting mechanism over image features: the network does not focus on the most relevant features within the image. Recently, vision transformers, which rely on self-attention layers, have shown outstanding performance over CNNs and overcome these main limitations. Their main drawback is that they require a huge amount of training data. In this paper, we combine the main strengths of these two architectures for AD classification, proposing a new method based on the combination of CrossViT and a Wide Residual Squeeze-and-Excitation Network. We acquired MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies (OASIS). We also propose a new data augmentation based on a self-attention progressive generative adversarial network (ProGAN) to overcome the limited amount of data. Our proposed method achieved 99% classification accuracy and outperforms CNN models.
Pages: 163-177
Citations: 0
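The squeeze-and-excitation idea named in the title, i.e. re-weighting channels with a learned gate, can be shown in a few lines of dependency-free Python; the two tiny dense layers and their weights below are illustrative stand-ins for learned parameters, not the paper's network.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_excite(feature_maps, w1, w2):
    """Toy squeeze-and-excitation gate over a list of 2-D channel maps.

    Squeeze: global average pool each channel to one scalar. Excite: a
    bottleneck dense layer (ReLU) then an expansion back to one sigmoid scale
    per channel. w1 holds the hidden-unit weight vectors (each of length
    n_channels); w2 holds one weight vector per channel (each of length
    n_hidden). Channels the gate scores low are suppressed.
    """
    # Squeeze: one scalar per channel.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
         for ch in feature_maps]
    # Excite: bottleneck, then per-channel sigmoid scales.
    hidden = [max(0.0, sum(zi * w for zi, w in zip(z, col))) for col in w1]
    scales = [sigmoid(sum(hi * w for hi, w in zip(hidden, col))) for col in w2]
    # Reweight each channel by its scale.
    return [[[v * s for v in row] for row in ch]
            for ch, s in zip(feature_maps, scales)]
```

In the paper's architecture this gating sits inside wide residual blocks, while the CrossViT branch supplies the long-range self-attention a plain CNN lacks.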
Feature matching for 3D AR: Review from handcrafted methods to deep learning
Pub Date : 2022-04-11 DOI: 10.3233/his-220001
Houssam Halmaoui, A. Haqiq
3D augmented reality (AR) has a photometric aspect (3D rendering) and a geometric aspect (camera tracking). In this paper, we discuss the second aspect, which involves feature matching for stable 3D object insertion. We present the different types of image matching approaches, from handcrafted feature algorithms and machine learning methods to recent deep learning approaches using various CNN architectures and more modern end-to-end models. These methods are compared according to criteria of real-time performance and accuracy, to guide the choice of the most relevant methods for a 3D AR system.
Pages: 143-162
Citations: 0
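A classic handcrafted-era building block surveyed in such reviews is brute-force descriptor matching with Lowe's ratio test, sketched below under the assumption of plain Euclidean descriptors.

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Brute-force nearest-neighbour matching with Lowe's ratio test.

    A match (i, j) is kept only when the best distance is clearly smaller
    than the second best, which filters out ambiguous correspondences; the
    0.75 threshold is a common illustrative choice.
    """
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

Deep-learning matchers replace both the handcrafted descriptors and this nearest-neighbour step, but the ratio test remains a useful baseline for the real-time/accuracy comparison the paper performs.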
Comparison of optimization algorithms based on swarm intelligence applied to convolutional neural networks for face recognition
Pub Date : 2022-01-01 DOI: 10.3233/HIS-220010
P. Melin, D. Sánchez, O. Castillo
Pages: 161-171
Citations: 1
Towards data warehouse from open data: Case of COVID-19
Pub Date : 2021-11-26 DOI: 10.3233/his-210010
Senda Bouaziz, Ahlem Nabli, F. Gargouri
Since December 2019, we have witnessed the appearance of a new virus, responsible for COVID-19, which has spread throughout the world. Everyone today gives major importance to this new virus. Although we still have little knowledge of the disease, doctors and specialists make decisions every day that have a significant impact on public health. The open data available in this context are numerous and varied, but scattered and distributed, so we need to consolidate all this information in a data warehouse. In this paper, we therefore propose an approach to create a data warehouse from open data, specifically COVID-19 data. We start by identifying the relevant sources among the various open data. Then, we collect the pertinent data. After that, we identify the multidimensional concepts used to design the data warehouse schema related to COVID-19 data. Finally, we transform our data warehouse into a logical model and create our NoSQL data warehouse with Talend Open Studio for Big Data (TOS_BD).
Pages: 129-142
Citations: 0
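A star schema of the kind the abstract describes (a fact table plus date and location dimensions) can be sketched with the standard library's sqlite3; all table names, column names and sample rows here are assumptions for illustration, not the schema derived in the paper.

```python
import sqlite3

# In-memory star schema: two dimensions and one COVID-19 fact table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_date (
    date_id INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
CREATE TABLE dim_location (
    location_id INTEGER PRIMARY KEY, country TEXT, region TEXT);
CREATE TABLE fact_covid (
    date_id INTEGER REFERENCES dim_date(date_id),
    location_id INTEGER REFERENCES dim_location(location_id),
    confirmed INTEGER, deaths INTEGER, recovered INTEGER);
""")

# One illustrative row per table, as if loaded from an open-data source.
cur.execute("INSERT INTO dim_date VALUES (1, '2020-03-01', '2020-03', 2020)")
cur.execute("INSERT INTO dim_location VALUES (1, 'Tunisia', 'Sfax')")
cur.execute("INSERT INTO fact_covid VALUES (1, 1, 120, 3, 40)")

# A typical multidimensional query: confirmed cases rolled up by country.
total = cur.execute("""
    SELECT l.country, SUM(f.confirmed)
    FROM fact_covid f
    JOIN dim_location l ON f.location_id = l.location_id
    GROUP BY l.country
""").fetchone()
```

The paper targets a NoSQL store built with TOS_BD rather than SQLite; the relational sketch only makes the fact/dimension split concrete.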
Comparing different metrics on an anisotropic depth completion model
Pub Date : 2021-06-25 DOI: 10.3233/his-210006
V. Lazcano, F. Calderero, C. Ballester
This paper discusses an anisotropic interpolation model that fills in depth data in a largely empty region of a depth map. We consider an image with an anisotropic metric g_ij that incorporates spatial and photometric data. We propose a numerical implementation of our model based on an "eikonal" operator, which computes the solution of a degenerate partial differential equation (the biased infinity Laplacian, or biased Absolutely Minimizing Lipschitz Extension, bAMLE). The solution of this equation creates exponential cones based on the available data, extending the available depth data and completing the depth map image; because of this, the operator is well suited to interpolating smooth surfaces. To perform this task, we assume we have at our disposal a reference color image and a depth map. We carried out an experimental comparison of the AMLE and bAMLE using various metrics with square-root, absolute-value and quadratic terms, considering the sRGB, XYZ, CIE-L*a*b* and CMY color spaces. We also present a proposal to extend the AMLE and bAMLE to the time domain. Finally, for the parameter estimation of the model, we compared EHO and PSO. The combination of sRGB and the square-root metric produces the best results, demonstrating that our bAMLE model outperforms the AMLE model and other contemporary models on the KITTI depth completion suite dataset. This type of model, such as AMLE and bAMLE, is simple to implement and represents a low-cost option for similar applications.
Pages: 87-99
Citations: 3
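The AMLE-style completion the abstract describes can be illustrated with a minimal numerical sketch. The code below is not the authors' implementation: it iterates the classic (unbiased) AMLE update on a small grid, where each unknown pixel becomes a weighted average of its steepest-ascent and steepest-descent 4-neighbours, and the anisotropic distance to a neighbour combines a unit spatial step with a photometric term from a grayscale guide image (the square-root metric, which the paper reports as best-performing). The function name, the weight `lam`, and the grid setup are illustrative assumptions; the biased variant adds a bias term to this update.

```python
import numpy as np

def amle_fill(depth, known, color, lam=10.0, iters=200):
    """Illustrative AMLE-style depth completion on a 2D grid (sketch only).

    depth : 2D array, valid where `known` is True.
    known : boolean mask of valid depth samples.
    color : 2D grayscale guide image in [0, 1].
    Distance to a 4-neighbour (square-root metric, an assumption here):
        d = sqrt(1 + lam * |I(x) - I(y)|)
    """
    u = depth.astype(float).copy()
    h, w = u.shape
    u[~known] = depth[known].mean()  # initialise unknowns with the mean depth
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(iters):
        new = u.copy()
        for i in range(h):
            for j in range(w):
                if known[i, j]:
                    continue  # never overwrite measured samples
                vals, dists = [], []
                for di, dj in offsets:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        vals.append(u[ni, nj])
                        dists.append(np.sqrt(1.0 + lam * abs(color[i, j] - color[ni, nj])))
                vals, dists = np.array(vals), np.array(dists)
                slopes = (vals - u[i, j]) / dists
                p, m = np.argmax(slopes), np.argmin(slopes)
                # AMLE update: cross-weighted average of the steepest-ascent
                # and steepest-descent neighbours; values stay within the
                # range of the boundary data (comparison principle).
                new[i, j] = (dists[m] * vals[p] + dists[p] * vals[m]) / (dists[m] + dists[p])
        u = new
    return u
```

With a uniform guide image and boundary depths 0 (left column) and 1 (right column), the iteration converges towards a linear ramp between the two boundaries, which is the behaviour one expects from an absolutely minimizing Lipschitz extension.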
An enhanced auto adaptive vector evaluated-based metaheuristic for solving real-world problems
Pub Date : 2021-06-25 DOI: 10.3233/his-210007
L. F. Costa, O. Cortes, João Pedro Augusto Costa
This article investigates the enhancement of a vector evaluated-based adaptive metaheuristic for solving two multiobjective problems: environmental-economic dispatch and portfolio optimization. The idea is to evolve two populations independently and exchange information between them, i.e., the first population evolves according to the best individual of the second population, and vice versa. The choice of which algorithm is executed in each generation is made stochastically among three evolutionary algorithms well known in the literature: PSO, DE, and ABC. To assess the results, we used an established metric for multiobjective evolutionary algorithms called the hypervolume. Tests on the referred problems have shown that the new approach reaches the best hypervolumes on power systems comprising six and forty generators and on five different portfolio-optimization datasets. The experiments were performed 31 times, using 250, 500, and 1000 iterations for both problems. Results have also shown that our proposal tends to outperform a variant of a hybrid SPEA2 compared with its cooperative and competitive approaches.
{"title":"An enhanced auto adaptive vector evaluated-based metaheuristic for solving real-world problems","authors":"L. F. Costa, O. Cortes, João Pedro Augusto Costa","doi":"10.3233/his-210007","DOIUrl":"https://doi.org/10.3233/his-210007","abstract":"This article investigates the enhancement of a vector evaluated-based adaptive metaheuristic for solving two multiobjective problems: environmental-economic dispatch and portfolio optimization. The idea is to evolve two populations independently and exchange information between them, i.e., the first population evolves according to the best individual of the second population, and vice versa. The choice of which algorithm is executed in each generation is made stochastically among three evolutionary algorithms well known in the literature: PSO, DE, and ABC. To assess the results, we used an established metric for multiobjective evolutionary algorithms called the hypervolume. Tests on the referred problems have shown that the new approach reaches the best hypervolumes on power systems comprising six and forty generators and on five different portfolio-optimization datasets. The experiments were performed 31 times, using 250, 500, and 1000 iterations for both problems. 
Results have also shown that our proposal tends to outperform a variant of a hybrid SPEA2 compared with its cooperative and competitive approaches.","PeriodicalId":"88526","journal":{"name":"International journal of hybrid intelligent systems","volume":"9 1","pages":"101-112"},"PeriodicalIF":0.0,"publicationDate":"2021-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81084229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
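The hypervolume indicator the authors use to compare fronts has a simple form in the two-objective minimisation case: sort the non-dominated points by the first objective and sum the rectangles each point dominates up to a reference point. The sketch below is illustrative only (function name and API are assumptions, not from the paper):

```python
def hypervolume_2d(front, ref):
    """2-D hypervolume (minimisation): area dominated by `front` up to `ref`.

    front : iterable of (f1, f2) points, each assumed to dominate `ref`.
    ref   : reference point (r1, r2).
    """
    # Sort by f1 ascending and keep only non-dominated points,
    # so f2 is strictly decreasing along the kept sequence.
    nd, best_f2 = [], float("inf")
    for f1, f2 in sorted(set(front)):
        if f2 < best_f2:
            nd.append((f1, f2))
            best_f2 = f2
    # Sweep left to right: each point contributes the strip between its
    # f1 and the next point's f1 (or the reference), times its f2 gap.
    hv = 0.0
    for i, (f1, f2) in enumerate(nd):
        next_f1 = nd[i + 1][0] if i + 1 < len(nd) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)
    return hv
```

For example, the front {(1, 3), (2, 1)} with reference point (4, 4) dominates an area of 7, and adding a dominated point such as (3, 3) leaves the value unchanged. A larger hypervolume indicates a front that is closer to the Pareto-optimal set and better spread, which is why the article uses it to rank the compared metaheuristics.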