Web-aided data set expansion in deep learning: evaluating trainable activation functions in ResNet for improved image classification

IF 2.5 · Q2 (Computer Science, Information Systems) · International Journal of Web Information Systems · Pub Date: 2024-07-12 · DOI: 10.1108/ijwis-05-2024-0135
Zhiqiang Zhang, Xiaoming Li, Xinyi Xu, Chengjie Lu, Yihe Yang, Zhiyong Shi
{"title":"深度学习中的网络辅助数据集扩展:评估 ResNet 中的可训练激活函数以改进图像分类","authors":"Zhiqiang Zhang, Xiaoming Li, Xinyi Xu, Chengjie Lu, Yihe Yang, Zhiyong Shi","doi":"10.1108/ijwis-05-2024-0135","DOIUrl":null,"url":null,"abstract":"\nPurpose\nThe purpose of this study is to explore the potential of trainable activation functions to enhance the performance of deep neural networks, specifically ResNet architectures, in the task of image classification. By introducing activation functions that adapt during training, the authors aim to determine whether such flexibility can lead to improved learning outcomes and generalization capabilities compared to static activation functions like ReLU. This research seeks to provide insights into how dynamic nonlinearities might influence deep learning models' efficiency and accuracy in handling complex image data sets.\n\n\nDesign/methodology/approach\nThis research integrates three novel trainable activation functions – CosLU, DELU and ReLUN – into various ResNet-n architectures, where “n” denotes the number of convolutional layers. Using CIFAR-10 and CIFAR-100 data sets, the authors conducted a comparative study to assess the impact of these functions on image classification accuracy. The approach included modifying the traditional ResNet models by replacing their static activation functions with the trainable variants, allowing for dynamic adaptation during training. The performance was evaluated based on accuracy metrics and loss profiles across different network depths.\n\n\nFindings\nThe findings indicate that trainable activation functions, particularly CosLU, can significantly enhance the performance of deep learning models, outperforming the traditional ReLU in deeper network configurations on the CIFAR-10 data set. CosLU showed the highest improvement in accuracy, whereas DELU and ReLUN offered varying levels of performance enhancements. These functions also demonstrated potential in reducing overfitting and improving model generalization across more complex data sets like CIFAR-100, suggesting that the adaptability of activation functions plays a crucial role in the training dynamics of deep neural networks.\n\n\nOriginality/value\nThis study contributes to the field of deep learning by introducing and evaluating the impact of three novel trainable activation functions within widely used ResNet architectures. Unlike previous works that primarily focused on static activation functions, this research demonstrates that incorporating trainable nonlinearities can lead to significant improvements in model performance and adaptability. 
The introduction of CosLU, DELU and ReLUN provides a new pathway for enhancing the flexibility and efficiency of neural networks, potentially setting a new standard for future deep learning applications in image classification and beyond.\n","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":null,"pages":null},"PeriodicalIF":2.5000,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Web-aided data set expansion in deep learning: evaluating trainable activation functions in ResNet for improved image classification\",\"authors\":\"Zhiqiang Zhang, Xiaoming Li, Xinyi Xu, Chengjie Lu, Yihe Yang, Zhiyong Shi\",\"doi\":\"10.1108/ijwis-05-2024-0135\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\nPurpose\\nThe purpose of this study is to explore the potential of trainable activation functions to enhance the performance of deep neural networks, specifically ResNet architectures, in the task of image classification. By introducing activation functions that adapt during training, the authors aim to determine whether such flexibility can lead to improved learning outcomes and generalization capabilities compared to static activation functions like ReLU. This research seeks to provide insights into how dynamic nonlinearities might influence deep learning models' efficiency and accuracy in handling complex image data sets.\\n\\n\\nDesign/methodology/approach\\nThis research integrates three novel trainable activation functions – CosLU, DELU and ReLUN – into various ResNet-n architectures, where “n” denotes the number of convolutional layers. Using CIFAR-10 and CIFAR-100 data sets, the authors conducted a comparative study to assess the impact of these functions on image classification accuracy. The approach included modifying the traditional ResNet models by replacing their static activation functions with the trainable variants, allowing for dynamic adaptation during training. The performance was evaluated based on accuracy metrics and loss profiles across different network depths.\\n\\n\\nFindings\\nThe findings indicate that trainable activation functions, particularly CosLU, can significantly enhance the performance of deep learning models, outperforming the traditional ReLU in deeper network configurations on the CIFAR-10 data set. CosLU showed the highest improvement in accuracy, whereas DELU and ReLUN offered varying levels of performance enhancements. These functions also demonstrated potential in reducing overfitting and improving model generalization across more complex data sets like CIFAR-100, suggesting that the adaptability of activation functions plays a crucial role in the training dynamics of deep neural networks.\\n\\n\\nOriginality/value\\nThis study contributes to the field of deep learning by introducing and evaluating the impact of three novel trainable activation functions within widely used ResNet architectures. Unlike previous works that primarily focused on static activation functions, this research demonstrates that incorporating trainable nonlinearities can lead to significant improvements in model performance and adaptability. 
The introduction of CosLU, DELU and ReLUN provides a new pathway for enhancing the flexibility and efficiency of neural networks, potentially setting a new standard for future deep learning applications in image classification and beyond.\\n\",\"PeriodicalId\":44153,\"journal\":{\"name\":\"International Journal of Web Information Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2024-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Web Information Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1108/ijwis-05-2024-0135\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Web Information Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/ijwis-05-2024-0135","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Purpose
The purpose of this study is to explore the potential of trainable activation functions to enhance the performance of deep neural networks, specifically ResNet architectures, in the task of image classification. By introducing activation functions that adapt during training, the authors aim to determine whether such flexibility can lead to improved learning outcomes and generalization capabilities compared to static activation functions like ReLU. This research seeks to provide insights into how dynamic nonlinearities might influence deep learning models' efficiency and accuracy in handling complex image data sets.

Design/methodology/approach
This research integrates three novel trainable activation functions – CosLU, DELU and ReLUN – into various ResNet-n architectures, where "n" denotes the number of convolutional layers. Using the CIFAR-10 and CIFAR-100 data sets, the authors conducted a comparative study to assess the impact of these functions on image classification accuracy. The approach included modifying traditional ResNet models by replacing their static activation functions with the trainable variants, allowing for dynamic adaptation during training. Performance was evaluated based on accuracy metrics and loss profiles across different network depths.

Findings
The findings indicate that trainable activation functions, particularly CosLU, can significantly enhance the performance of deep learning models, outperforming the traditional ReLU in deeper network configurations on the CIFAR-10 data set. CosLU showed the highest improvement in accuracy, whereas DELU and ReLUN offered varying levels of performance enhancement. These functions also demonstrated potential for reducing overfitting and improving model generalization on more complex data sets like CIFAR-100, suggesting that the adaptability of activation functions plays a crucial role in the training dynamics of deep neural networks.

Originality/value
This study contributes to the field of deep learning by introducing and evaluating the impact of three novel trainable activation functions within widely used ResNet architectures. Unlike previous works that primarily focused on static activation functions, this research demonstrates that incorporating trainable nonlinearities can lead to significant improvements in model performance and adaptability. The introduction of CosLU, DELU and ReLUN provides a new pathway for enhancing the flexibility and efficiency of neural networks, potentially setting a new standard for future deep learning applications in image classification and beyond.
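The abstract does not include the authors' implementation, so the following is a minimal PyTorch sketch of two of the named activations, assuming the functional forms reported in the trainable-activations literature: CosLU(x) = (x + a·cos(b·x))·σ(x) with trainable scalars a and b, and ReLUN(x) = min(max(0, x), n) with a trainable upper bound n. The forms and the initial values (a = b = 1, n = 6) are assumptions, not the paper's verified code; DELU is omitted because its exact parameterisation is not recoverable from the abstract alone.

```python
import torch
import torch.nn as nn


class CosLU(nn.Module):
    """Cosine Linear Unit: (x + a*cos(b*x)) * sigmoid(x).

    a and b are trainable scalars; this form is assumed from the
    trainable-activations literature, not taken from the paper's code.
    """

    def __init__(self, a: float = 1.0, b: float = 1.0) -> None:
        super().__init__()
        self.a = nn.Parameter(torch.tensor(a))
        self.b = nn.Parameter(torch.tensor(b))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (x + self.a * torch.cos(self.b * x)) * torch.sigmoid(x)


class ReLUN(nn.Module):
    """ReLU clipped at a trainable upper bound n (a learnable ReLU6)."""

    def __init__(self, n: float = 6.0) -> None:
        super().__init__()
        self.n = nn.Parameter(torch.tensor(n))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # torch.minimum broadcasts the 0-dim parameter across the input.
        return torch.minimum(torch.relu(x), self.n)
```

Because a, b and n are ordinary nn.Parameter objects, the optimizer updates them alongside the convolutional weights, which is what makes these nonlinearities "trainable" in the sense the abstract describes.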
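To illustrate the methodology's "replace the static activation functions with the trainable variants" step, one hedged approach is to walk a standard torchvision ResNet and substitute every nn.ReLU. The helper below reuses the CosLU class from the previous sketch; the choice of resnet18 and the per-site (rather than shared) parameterisation are illustrative assumptions, since the paper evaluates its own ResNet-n variants.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18


def replace_activations(module: nn.Module, factory) -> None:
    """Recursively swap every nn.ReLU in `module` for factory().

    Each replaced site gets its own parameters. Note that torchvision
    blocks reuse one ReLU object per block, so the swapped module is
    likewise applied everywhere that block's forward() used self.relu.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, factory())
        else:
            replace_activations(child, factory)


# Illustrative smoke test on CIFAR-10-shaped inputs (10 classes, 32x32 RGB).
model = resnet18(num_classes=10)
replace_activations(model, CosLU)  # CosLU as defined in the previous sketch

x = torch.randn(2, 3, 32, 32)
print(model(x).shape)  # expected: torch.Size([2, 10])
```

From here, training proceeds as for any classifier (cross-entropy loss, SGD or Adam); the activation parameters are returned by model.parameters() and therefore adapt during training exactly as the abstract's premise requires.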
Source journal
International Journal of Web Information Systems (Computer Science, Information Systems)
CiteScore: 4.60 · Self-citation rate: 0.00% · Articles: 19

About the journal
The Global Information Infrastructure is a daily reality. In spite of the many applications in all domains of our societies: e-business, e-commerce, e-learning, e-science, and e-government, for instance, and in spite of the tremendous advances by engineers and scientists, the seamless development of Web information systems and services remains a major challenge. The journal examines how the current shared vision for the future is one of semantically-rich information and service oriented architecture for global information systems. This vision is at the convergence of progress in technologies such as XML, Web services, RDF, OWL, of multimedia, multimodal, and multilingual information retrieval, and of distributed, mobile and ubiquitous computing.

Topicality
While the International Journal of Web Information Systems covers a broad range of topics, the journal welcomes papers that provide a perspective on all aspects of Web information systems: Web semantics and Web dynamics, Web mining and searching, Web databases and Web data integration, Web-based commerce and e-business, Web collaboration and distributed computing, Internet computing and networks, performance of Web applications, and Web multimedia services and Web-based education.