Stability of accuracy for the training of DNNs via the uniform doubling condition

IF 1.2 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Annals of Mathematics and Artificial Intelligence | Pub Date: 2024-01-19 | DOI: 10.1007/s10472-023-09919-1
Yitzchak Shmalo
{"title":"Stability of accuracy for the training of DNNs via the uniform doubling condition","authors":"Yitzchak Shmalo","doi":"10.1007/s10472-023-09919-1","DOIUrl":null,"url":null,"abstract":"<div><p>We study the stability of accuracy during the training of deep neural networks (DNNs). In this context, the training of a DNN is performed via the minimization of a cross-entropy loss function, and the performance metric is accuracy (the proportion of objects that are classified correctly). While training results in a decrease of loss, the accuracy does not necessarily increase during the process and may sometimes even decrease. The goal of achieving stability of accuracy is to ensure that if accuracy is high at some initial time, it remains high throughout training. A recent result by Berlyand, Jabin, and Safsten introduces a doubling condition on the training data, which ensures the stability of accuracy during training for DNNs using the absolute value activation function. For training data in <span>\\(\\mathbb {R}^n\\)</span>, this doubling condition is formulated using slabs in <span>\\(\\mathbb {R}^n\\)</span> and depends on the choice of the slabs. The goal of this paper is twofold. First, to make the doubling condition uniform, that is, independent of the choice of slabs. This leads to sufficient conditions for stability in terms of training data only. In other words, for a training set <i>T</i> that satisfies the uniform doubling condition, there exists a family of DNNs such that a DNN from this family with high accuracy on the training set at some training time <span>\\(t_0\\)</span> will have high accuracy for all time <span>\\(t&gt;t_0\\)</span>. Moreover, establishing uniformity is necessary for the numerical implementation of the doubling condition. We demonstrate how to numerically implement a simplified version of this uniform doubling condition on a dataset and apply it to achieve stability of accuracy using a few model examples. The second goal is to extend the original stability results from the absolute value activation function to a broader class of piecewise linear activation functions with finitely many critical points, such as the popular Leaky ReLU.</p></div>","PeriodicalId":7971,"journal":{"name":"Annals of Mathematics and Artificial Intelligence","volume":"92 2","pages":"439 - 483"},"PeriodicalIF":1.2000,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Mathematics and Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10472-023-09919-1","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

We study the stability of accuracy during the training of deep neural networks (DNNs). In this context, the training of a DNN is performed via the minimization of a cross-entropy loss function, and the performance metric is accuracy (the proportion of objects that are classified correctly). While training results in a decrease of loss, the accuracy does not necessarily increase during the process and may sometimes even decrease. The goal of achieving stability of accuracy is to ensure that if accuracy is high at some initial time, it remains high throughout training. A recent result by Berlyand, Jabin, and Safsten introduces a doubling condition on the training data, which ensures the stability of accuracy during training for DNNs using the absolute value activation function. For training data in \(\mathbb {R}^n\), this doubling condition is formulated using slabs in \(\mathbb {R}^n\) and depends on the choice of the slabs. The goal of this paper is twofold. First, to make the doubling condition uniform, that is, independent of the choice of slabs. This leads to sufficient conditions for stability in terms of training data only. In other words, for a training set T that satisfies the uniform doubling condition, there exists a family of DNNs such that a DNN from this family with high accuracy on the training set at some training time \(t_0\) will have high accuracy for all time \(t>t_0\). Moreover, establishing uniformity is necessary for the numerical implementation of the doubling condition. We demonstrate how to numerically implement a simplified version of this uniform doubling condition on a dataset and apply it to achieve stability of accuracy using a few model examples. The second goal is to extend the original stability results from the absolute value activation function to a broader class of piecewise linear activation functions with finitely many critical points, such as the popular Leaky ReLU.
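
To make the slab-based condition concrete, below is a minimal numerical sketch of a simplified doubling check in the spirit of the abstract. It tests, over a finite grid of slabs \(S_w = \{x \in \mathbb {R}^n : |\nu \cdot x - c| \le w\}\), whether the number of training points in the doubled slab \(S_{2w}\) is at most \(D\) times the number in \(S_w\). The constant \(D\), the direction/center/width grids, and the name check_uniform_doubling are illustrative assumptions, not the paper's exact construction; see the paper for the precise uniform doubling condition.

```python
import numpy as np

def check_uniform_doubling(X, directions, widths, centers, D=4.0, min_pts=1):
    """Empirically test a simplified doubling inequality over a finite
    grid of slabs S_w = {x : |<nu, x> - c| <= w}: require
    |T ∩ S_{2w}| <= D * |T ∩ S_w| whenever S_w contains at least min_pts
    points. Returns the worst observed ratio and whether it stays below D.
    (Simplified illustration; not the paper's exact definition.)"""
    worst = 0.0
    for nu in directions:
        proj = X @ nu                      # project data onto direction nu
        for c in centers:
            dist = np.abs(proj - c)        # distance to the slab's center plane
            for w in widths:
                inner = np.count_nonzero(dist <= w)
                if inner < min_pts:
                    continue               # skip (nearly) empty slabs
                outer = np.count_nonzero(dist <= 2 * w)
                worst = max(worst, outer / inner)
    return worst, worst <= D

# Toy check: 500 standard-normal points in R^3, axis-aligned slab directions.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
worst, ok = check_uniform_doubling(X, np.eye(3),
                                   widths=[0.1, 0.25, 0.5],
                                   centers=np.linspace(-2.0, 2.0, 9))
print(f"worst doubling ratio: {worst:.2f}; bounded by D: {ok}")
```

On a dataset where some slab exhibits a ratio far above \(D\), the stability guarantee would not be expected to apply under this simplified reading; the paper's numerical experiments implement an analogous (simplified) version of the uniform condition.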

Source journal:
Annals of Mathematics and Artificial Intelligence (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 3.00
Self-citation rate: 8.30%
Annual article count: 37
Review time: >12 weeks
Journal description: Annals of Mathematics and Artificial Intelligence presents a range of topics of concern to scholars applying quantitative, combinatorial, logical, algebraic and algorithmic methods to diverse areas of Artificial Intelligence, from decision support, automated deduction, and reasoning, to knowledge-based systems, machine learning, computer vision, robotics and planning. The journal features collections of papers appearing either in volumes (400 pages) or in separate issues (100-300 pages), which focus on one topic and have one or more guest editors. Annals of Mathematics and Artificial Intelligence hopes to influence the spawning of new areas of applied mathematics and strengthen the scientific underpinnings of Artificial Intelligence.
Latest articles in this journal:
Time-penalised trees (TpT): introducing a new tree-based data mining algorithm for time-varying covariates
Conformal test martingales for hypergraphical models
Costly information providing in binary contests
Tumato 2.0 - a constraint-based planning approach for safe and robust robot behavior
Calibration methods in imbalanced binary classification