A. Mexicano, J. C. Carmona, Nelva N. Almaza, Lilia Garcia, Ricardo D. Lopez
Digital Signal Processing and Artificial Intelligence for Automatic Learning, Journal Article, published 2022-06-01. DOI: 10.6025/dspaial/2022/1/1/1-10
A Parallel Version of the JADE Algorithm using GPUs
Abstract: This work presents a parallel implementation of JADE (Adaptive Differential Evolution with Optional External Archive) using the Compute Unified Device Architecture (CUDA), in order to reduce the algorithm's execution time. The algorithm was tested on the well-known Sphere function, and its run time was compared against that of the sequential version. The results, measured in terms of speed-up, show that the use of CUDA can reduce the execution time significantly, a benefit that is most visible when working with large amounts of data. However, the largest population does not necessarily achieve the best performance.
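To make the benchmark concrete, the following is a minimal illustrative sketch (not the authors' CUDA implementation) of the Sphere function and of one step of the "DE/current-to-pbest/1" mutation strategy that JADE is built on; JADE's parameter adaptation and optional external archive are omitted, and all names here are the sketch's own. The per-individual independence of the evaluation and mutation is what makes the GPU mapping natural.

```python
import numpy as np

def sphere(x):
    """Sphere benchmark: f(x) = sum_i x_i^2, with minimum 0 at the origin."""
    return np.sum(x * x, axis=-1)

def jade_mutation(pop, fitness, F=0.5, p=0.1, rng=None):
    """One 'DE/current-to-pbest/1' mutation step (the core JADE strategy).

    v_i = x_i + F * (x_pbest - x_i) + F * (x_r1 - x_r2),
    where x_pbest is drawn from the best 100p% of the population.
    Sketch only: JADE's adaptive F/CR and external archive are left out.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(pop)
    top = max(1, int(p * n))
    # Indices of the top-100p% individuals (minimization).
    pbest_idx = rng.choice(np.argsort(fitness)[:top], size=n)
    r1 = rng.integers(0, n, size=n)
    r2 = rng.integers(0, n, size=n)
    return pop + F * (pop[pbest_idx] - pop) + F * (pop[r1] - pop[r2])

# Evaluate the whole population in one vectorized call; on a GPU each
# individual would map to an independent thread in the same way.
pop = np.random.default_rng(0).standard_normal((64, 10))
fit = sphere(pop)
mutants = jade_mutation(pop, fit)
```

In a CUDA version, the vectorized operations above would become kernels launched over the population, which is why larger populations amortize launch overhead better, consistent with the speed-up behavior the abstract reports.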