{"title":"带有边缘混沌超参数的GPU加速器平台上的剪枝遗传nas","authors":"Anand Ravishankar, S. Natarajan, A. B. Malakreddy","doi":"10.1109/ICMLA52953.2021.00158","DOIUrl":null,"url":null,"abstract":"Deep Neural Networks (DNNs) is an extremely attractive subset of computational models due to their remarkable ability to provide promising results for a wide variety of problems. However, the performance delivered by DNNs often overshadows the work done before training the network, which includes Network Architecture Search (NAS) and its suitability concerning the task. This paper presents a modified Genetic-NAS framework designed to prevent network stagnation and reduce training loss. The network hyperparameters are initialized in a “Chaos on Edge” region, preventing premature convergence through reverse biases. The Genetic-NAS and parameter space exploration process is co-evolved by applying genetic operators and subjugating them to layer-wise competition. The inherent parallelism offered by both the neural network and its genetic extension is exploited by deploying the model on a GPU which improves the throughput. the GPU device provides an acceleration of 8.4x with 92.9% of the workload placed on the GPU device for the text-based datasets. On average, the task of classifying an image-based dataset takes 3 GPU hours.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"28 1","pages":"958-963"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Pruned Genetic-NAS on GPU Accelerator Platforms with Chaos-on-Edge Hyperparameters\",\"authors\":\"Anand Ravishankar, S. Natarajan, A. B. Malakreddy\",\"doi\":\"10.1109/ICMLA52953.2021.00158\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep Neural Networks (DNNs) is an extremely attractive subset of computational models due to their remarkable ability to provide promising results for a wide variety of problems. However, the performance delivered by DNNs often overshadows the work done before training the network, which includes Network Architecture Search (NAS) and its suitability concerning the task. This paper presents a modified Genetic-NAS framework designed to prevent network stagnation and reduce training loss. The network hyperparameters are initialized in a “Chaos on Edge” region, preventing premature convergence through reverse biases. The Genetic-NAS and parameter space exploration process is co-evolved by applying genetic operators and subjugating them to layer-wise competition. The inherent parallelism offered by both the neural network and its genetic extension is exploited by deploying the model on a GPU which improves the throughput. the GPU device provides an acceleration of 8.4x with 92.9% of the workload placed on the GPU device for the text-based datasets. 
On average, the task of classifying an image-based dataset takes 3 GPU hours.\",\"PeriodicalId\":6750,\"journal\":{\"name\":\"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)\",\"volume\":\"28 1\",\"pages\":\"958-963\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMLA52953.2021.00158\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA52953.2021.00158","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Neural Networks (DNNs) are an extremely attractive class of computational models due to their remarkable ability to provide promising results for a wide variety of problems. However, the performance delivered by DNNs often overshadows the work done before training the network, which includes Network Architecture Search (NAS) and an assessment of its suitability for the task. This paper presents a modified Genetic-NAS framework designed to prevent network stagnation and reduce training loss. The network hyperparameters are initialized in a “Chaos-on-Edge” region, which prevents premature convergence through reverse biases. The Genetic-NAS and parameter-space exploration processes are co-evolved by applying genetic operators and subjecting the candidates to layer-wise competition. The inherent parallelism offered by both the neural network and its genetic extension is exploited by deploying the model on a GPU, which improves throughput. For the text-based datasets, the GPU provides an 8.4x acceleration with 92.9% of the workload placed on the device. On average, classifying an image-based dataset takes 3 GPU hours.
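To make the “Chaos-on-Edge” initialization concrete, below is a minimal sketch of one common way such an initialization is realized: drawing hyperparameter seeds from a logistic map driven just past the onset of chaos (r ≈ 3.57). The choice of map, the r value, the burn-in length, and the function names (logistic_map_sequence, chaos_on_edge_init) are illustrative assumptions, not the authors' implementation.

import numpy as np

# Illustrative sketch only: the logistic map enters its chaotic regime
# near r ~= 3.5699, so sampling a trajectory at r = 3.57 yields values
# that wander the attractor rather than settling on a fixed point.
EDGE_OF_CHAOS_R = 3.57


def logistic_map_sequence(x0: float, r: float, n: int) -> np.ndarray:
    """Iterate x_{t+1} = r * x_t * (1 - x_t) and return the trajectory."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs


def chaos_on_edge_init(n_params: int, low: float, high: float,
                       seed: float = 0.42) -> np.ndarray:
    """Map a burned-in edge-of-chaos trajectory onto [low, high]."""
    traj = logistic_map_sequence(seed, EDGE_OF_CHAOS_R, n_params + 100)
    traj = traj[100:]  # discard the transient so samples cover the attractor
    return low + (high - low) * traj


if __name__ == "__main__":
    # Example: initialize 5 learning-rate candidates in [1e-4, 1e-1].
    print(chaos_on_edge_init(5, 1e-4, 1e-1))

Because nearby seeds diverge quickly under a chaotic map, candidates initialized this way spread across the hyperparameter range instead of clustering, which is consistent with the paper's stated goal of avoiding premature convergence.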