Pruned Genetic-NAS on GPU Accelerator Platforms with Chaos-on-Edge Hyperparameters

Anand Ravishankar, S. Natarajan, A. B. Malakreddy
{"title":"Pruned Genetic-NAS on GPU Accelerator Platforms with Chaos-on-Edge Hyperparameters","authors":"Anand Ravishankar, S. Natarajan, A. B. Malakreddy","doi":"10.1109/ICMLA52953.2021.00158","DOIUrl":null,"url":null,"abstract":"Deep Neural Networks (DNNs) is an extremely attractive subset of computational models due to their remarkable ability to provide promising results for a wide variety of problems. However, the performance delivered by DNNs often overshadows the work done before training the network, which includes Network Architecture Search (NAS) and its suitability concerning the task. This paper presents a modified Genetic-NAS framework designed to prevent network stagnation and reduce training loss. The network hyperparameters are initialized in a “Chaos on Edge” region, preventing premature convergence through reverse biases. The Genetic-NAS and parameter space exploration process is co-evolved by applying genetic operators and subjugating them to layer-wise competition. The inherent parallelism offered by both the neural network and its genetic extension is exploited by deploying the model on a GPU which improves the throughput. the GPU device provides an acceleration of 8.4x with 92.9% of the workload placed on the GPU device for the text-based datasets. On average, the task of classifying an image-based dataset takes 3 GPU hours.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"28 1","pages":"958-963"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA52953.2021.00158","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Deep Neural Networks (DNNs) are an extremely attractive class of computational models due to their remarkable ability to provide promising results for a wide variety of problems. However, the performance delivered by DNNs often overshadows the work done before training the network, which includes Network Architecture Search (NAS) and assessing its suitability for the task. This paper presents a modified Genetic-NAS framework designed to prevent network stagnation and reduce training loss. The network hyperparameters are initialized in a "Chaos-on-Edge" region, preventing premature convergence through reverse biases. The Genetic-NAS and parameter-space exploration processes are co-evolved by applying genetic operators and subjecting them to layer-wise competition. The inherent parallelism offered by both the neural network and its genetic extension is exploited by deploying the model on a GPU, which improves throughput. For the text-based datasets, the GPU device provides an 8.4x acceleration with 92.9% of the workload placed on the device. On average, classifying an image-based dataset takes 3 GPU hours.
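The abstract does not spell out how the "Chaos-on-Edge" initialization or the layer-wise genetic competition are implemented, so the sketch below is only an illustrative reading of those ideas, not the authors' method. It assumes a logistic map driven near the edge of chaos (r ≈ 3.57) as the chaotic source for per-layer hyperparameter initialization, and stands in a placeholder fitness function and a simple mutation operator for the paper's actual pruning and competition rules; all names (`logistic_sequence`, `init_hyperparameters`, `genetic_nas`) are hypothetical.

```python
# Minimal, hypothetical sketch of chaos-driven hyperparameter initialization
# followed by a pruned genetic search loop. Placeholder fitness only; the
# paper's real operators, pruning criteria, and GPU deployment are omitted.
import random

EDGE_OF_CHAOS_R = 3.57  # assumed logistic-map control parameter near the chaotic boundary

def logistic_sequence(seed: float, length: int, r: float = EDGE_OF_CHAOS_R):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the trajectory in (0, 1)."""
    x, seq = seed, []
    for _ in range(length):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def init_hyperparameters(n_layers: int, seed: float = 0.314):
    """Map a chaotic trajectory onto per-layer hyperparameters (illustrative ranges)."""
    traj = logistic_sequence(seed, 2 * n_layers)
    return [
        {
            "units": int(16 + traj[2 * i] * 240),                # 16 .. 256 units
            "learning_rate": 10 ** (-4 + traj[2 * i + 1] * 3),   # 1e-4 .. 1e-1
        }
        for i in range(n_layers)
    ]

def fitness(genome):
    """Placeholder fitness: stands in for validation accuracy after short training."""
    return -sum(abs(layer["units"] - 128) for layer in genome)

def mutate(genome, rate=0.2):
    """Layer-wise mutation: each layer is independently perturbed and clamped."""
    child = [dict(layer) for layer in genome]
    for layer in child:
        if random.random() < rate:
            layer["units"] = max(16, min(256, layer["units"] + random.randint(-32, 32)))
    return child

def genetic_nas(pop_size=8, n_layers=4, generations=10):
    """Evolve a population of layer configurations, pruning the weaker half each generation."""
    population = [init_hyperparameters(n_layers, seed=0.1 + 0.05 * i) for i in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]          # prune weak candidates
        population = survivors + [mutate(g) for g in survivors]
    return max(population, key=fitness)

if __name__ == "__main__":
    print(genetic_nas())
```

The chaotic trajectory here merely replaces a uniform random initializer; whether the paper uses a logistic map, another chaotic system, or a different notion of the "Chaos-on-Edge" region is not stated in the abstract.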