A step function based recursion method for 0/1 deep neural networks

Applied Mathematics and Computation, Vol. 488, Article 129129. Published: 2025-03-01 (Epub: 2024-10-21). DOI: 10.1016/j.amc.2024.129129
Impact Factor: 3.4 · JCR Q1 (Mathematics, Applied) · CAS Zone 2 (Mathematics)
Hui Zhang, Shenglong Zhou, Geoffrey Ye Li, Naihua Xiu, Yiju Wang
Citations: 0

Abstract

The deep neural network with step function activation (0/1 DNN) is a fundamental composite model in deep learning with high efficiency and robustness to outliers. However, because the 0/1 DNN model is discontinuous and lacks subgradient information, prior research has largely focused on designing continuous functions to approximate the step activation and on developing continuous optimization methods. In this paper, by introducing two sets of network node variables into the 0/1 DNN and exploiting the composite structure of the resulting model, the 0/1 DNN is decomposed into a unary optimization model associated with the step function and three derived optimization subproblems associated with the other variables. For the unary optimization model and two of the derived subproblems, we present closed-form solutions; for the third derived subproblem, we propose an efficient proximal method. Building on this, a globally convergent step-function-based recursion method for the 0/1 DNN is developed. The efficiency and performance of the proposed algorithm are validated via theoretical analysis as well as illustrative numerical examples on classifying the MNIST, FashionMNIST, and CIFAR-10 datasets.
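To make the setting concrete, the sketch below shows a forward pass through a tiny network whose hidden activations are the 0/1 step function. This is a generic illustration of why the model is hard to train with gradient methods (the step function's derivative is zero almost everywhere and undefined at the origin), not the authors' algorithm; the architecture, weights, and function names are all hypothetical.

```python
import numpy as np

def step(z):
    # Heaviside-style 0/1 step activation: 1 where z >= 0, else 0.
    # Its derivative is 0 almost everywhere, so backpropagation
    # provides no useful gradient signal through these units.
    return (z >= 0).astype(z.dtype)

def forward_01_dnn(x, weights, biases):
    """Forward pass of a toy 0/1 DNN: binary hidden layers,
    linear output layer. Illustrative only."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = step(W @ a + b)            # hidden activations are 0 or 1
    return weights[-1] @ a + biases[-1]  # final layer stays linear

# Tiny 2-4-1 example with random fixed weights.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 2)), rng.standard_normal((1, 4))]
bs = [rng.standard_normal(4), rng.standard_normal(1)]
y = forward_01_dnn(np.array([0.5, -1.0]), Ws, bs)
```

Because the hidden outputs are exactly 0 or 1, small input perturbations either leave the output unchanged or flip it discretely, which is the source of both the model's robustness to outliers and its non-differentiability; the paper's recursion method sidesteps this by solving step-function subproblems in closed form rather than differentiating through them.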
Source journal: Applied Mathematics and Computation
CiteScore: 7.90 · Self-citation rate: 10.00% · Articles published: 755 · Review time: 36 days
About the journal: Applied Mathematics and Computation addresses work at the interface between applied mathematics, numerical computation, and applications of systems-oriented ideas to the physical, biological, social, and behavioral sciences, and emphasizes papers of a computational nature focusing on new algorithms, their analysis, and numerical results. In addition to presenting research papers, Applied Mathematics and Computation publishes review articles and single-topic issues.