Metaheuristics and machine learning: an approach with reinforcement learning assisting neural architecture search
Sandra Mara Scós Venske, Carolina Paula de Almeida, Myriam Regattieri Delgado
Journal of Heuristics, Q4 (Computer Science, Artificial Intelligence), Impact Factor 1.1
DOI: 10.1007/s10732-024-09526-1
Published: 2024-04-16 (Journal Article)
Citations: 0
Abstract
Metaheuristics (MHs) are techniques widely used for solving complex optimization problems. In recent years, interest in combining MHs and machine learning (ML) has grown. This integration can occur mainly in two ways: ML-in-MH and MH-in-ML. In the present work, we combine the techniques in both ways (ML-in-MH-in-ML), providing an approach in which ML is used to improve the performance of an evolutionary algorithm (EA) whose solutions encode the parameters of an ML model, an artificial neural network (ANN). Our approach, called TS\(_{in}\)EA\(_{in}\)ANN, employs a reinforcement learning neighborhood (RLN) mutation based on Thompson sampling (TS). TS is a parameterless reinforcement learning method, used here to boost the EA's performance. In the experiments, every candidate ANN solves a regression problem known as protein structure prediction deviation. We consider two protein datasets, one with 16,382 samples and the other with 45,730. The results show that TS\(_{in}\)EA\(_{in}\)ANN performs significantly better than a canonical genetic algorithm (GA\(_{in}\)ANN) and an evolutionary algorithm without reinforcement learning (EA\(_{in}\)ANN). Analyses of parameter frequencies are also performed to compare the approaches. Finally, comparisons with the literature show that, except for one particular case in the largest dataset, TS\(_{in}\)EA\(_{in}\)ANN outperforms the approaches considered state of the art for the addressed datasets.
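The abstract itself gives no implementation details, but the core mechanism it names (a Thompson-sampling-based choice among mutation neighborhoods inside an EA that encodes ANN hyperparameters) can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the authors' TS\(_{in}\)EA\(_{in}\)ANN implementation: the neighborhood names, hyperparameter ranges, (1+1)-EA loop, and stand-in fitness function are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code) of Thompson sampling
# selecting a mutation "neighborhood" inside an EA whose individuals encode
# ANN hyperparameters. Neighborhoods, ranges and fitness are assumptions.
import random

# Each neighborhood is a different way of perturbing the encoded ANN parameters.
NEIGHBORHOODS = ["tweak_hidden_units", "tweak_learning_rate", "swap_activation"]

# One Beta(successes + 1, failures + 1) arm per neighborhood; Thompson sampling
# needs no exploration-rate parameter to tune.
stats = {n: {"success": 0, "failure": 0} for n in NEIGHBORHOODS}

def select_neighborhood():
    """Draw one sample from each neighborhood's Beta posterior; pick the largest."""
    draws = {n: random.betavariate(s["success"] + 1, s["failure"] + 1)
             for n, s in stats.items()}
    return max(draws, key=draws.get)

def mutate(individual, neighborhood):
    """Apply the chosen perturbation to a dict of ANN hyperparameters."""
    child = dict(individual)
    if neighborhood == "tweak_hidden_units":
        child["hidden_units"] = max(1, child["hidden_units"] + random.choice([-8, 8]))
    elif neighborhood == "tweak_learning_rate":
        child["learning_rate"] *= random.choice([0.5, 2.0])
    else:  # swap_activation
        child["activation"] = random.choice(["relu", "tanh", "sigmoid"])
    return child

def evolve(fitness, generations=50):
    """Toy (1+1)-EA loop: reward a neighborhood whenever its child improves fitness."""
    parent = {"hidden_units": 32, "learning_rate": 0.01, "activation": "relu"}
    best = fitness(parent)
    for _ in range(generations):
        n = select_neighborhood()
        child = mutate(parent, n)
        f = fitness(child)
        if f < best:                      # assume minimization (e.g. regression error)
            parent, best = child, f
            stats[n]["success"] += 1      # credit the neighborhood that helped
        else:
            stats[n]["failure"] += 1
    return parent, best

if __name__ == "__main__":
    # Stand-in fitness: in the paper, each candidate ANN would instead be trained
    # and scored on the protein structure prediction deviation regression task.
    toy = lambda ind: abs(ind["hidden_units"] - 64) + abs(ind["learning_rate"] - 0.001)
    print(evolve(toy))
```

Because each arm is updated only with success/failure counts and selection is a posterior draw, the rule has no exploration rate or learning rate of its own, which is consistent with the abstract's description of TS as a parameterless reinforcement learning method.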
About the journal:
The Journal of Heuristics provides a forum for advancing the state of the art in the theory and practical application of techniques for approximately solving problems that cannot be solved exactly. It fosters the development, understanding, and practical use of heuristic solution techniques for solving business, engineering, and societal problems. It considers the importance of theoretical, empirical, and experimental work related to the development of heuristics.
The journal presents practical applications, theoretical developments, decision analysis models that consider issues of rational decision making with limited information, artificial intelligence-based heuristics applied to a wide variety of problems, learning paradigms, and computational experimentation.
Officially cited as: J Heuristics