Deep Learning to Predict Late-Onset Breast Cancer Metastasis: the Single Hyperparameter Grid Search (SHGS) Strategy for Meta Tuning Concerning Deep Feed-forward Neural Network

Yijun Zhou, Om Arora-Jain, Xia Jiang
arXiv:2408.15498 · arXiv - QuanBio - Quantitative Methods · Published 2024-08-28 · https://doi.org/arxiv-2408.15498

Abstract

While machine learning has advanced in medicine, its widespread use in clinical applications, especially in predicting breast cancer metastasis, is still limited. We have been dedicated to constructing a DFNN (deep feed-forward neural network) model to predict breast cancer metastasis n years in advance. However, the challenge lies in efficiently identifying optimal hyperparameter values through grid search, given the constraints of time and resources. Issues such as the infinite possibilities for continuous hyperparameters like L1 and L2, as well as the time-consuming and costly search process, further complicate the task. To address these challenges, we developed the Single Hyperparameter Grid Search (SHGS) strategy, which serves as a preselection method before grid search. Our experiments with SHGS applied to DFNN models for breast cancer metastasis prediction focus on eight target hyperparameters: epochs, batch size, dropout, L1, L2, learning rate, decay, and momentum. We created three figures, each depicting the experimental results obtained from one of the three LSM-I-10-Plus-year datasets. These figures illustrate the relationship between model performance and the target hyperparameter values. For each hyperparameter, we analyzed whether changing it affects model performance, examined whether specific patterns emerge, and explored how to choose values for it. Our experimental findings reveal that the optimal value of a hyperparameter depends not only on the dataset but also, significantly, on the settings of the other hyperparameters. Additionally, our experiments suggest reduced value ranges for each target hyperparameter, which may be helpful for low-budget grid search. This approach provides prior experience and a foundation for the subsequent use of grid search to enhance model performance.
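The core idea of the SHGS strategy described above — varying a single target hyperparameter while holding all others fixed, then reading off a narrowed value range for later full grid search — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `evaluate` callback and the toy scoring function stand in for actual DFNN training and validation on an LSM dataset.

```python
def shgs(base_config, target_name, candidate_values, evaluate):
    """Single Hyperparameter Grid Search (sketch): vary only the target
    hyperparameter, keeping every other setting fixed at its base value,
    and return candidates ranked by the evaluation score (best first)."""
    results = []
    for value in candidate_values:
        config = dict(base_config, **{target_name: value})
        score = evaluate(config)  # e.g. validation AUC of a trained DFNN
        results.append((value, score))
    # The top-scoring candidates suggest a reduced range for the target
    # hyperparameter in a subsequent low-budget full grid search.
    return sorted(results, key=lambda vs: vs[1], reverse=True)

# Base settings covering the eight hyperparameters studied in the paper
# (the specific values here are placeholders, not the authors' choices).
base = {"epochs": 100, "batch_size": 32, "dropout": 0.0, "l1": 0.0,
        "l2": 0.0, "learning_rate": 0.01, "decay": 0.0, "momentum": 0.9}

# Toy objective standing in for model training: peaks at dropout = 0.3.
fake_eval = lambda cfg: 0.8 - abs(cfg["dropout"] - 0.3)

ranked = shgs(base, "dropout", [0.0, 0.1, 0.3, 0.5], fake_eval)
print(ranked[0][0])  # 0.3 scores highest under the toy objective
```

In a real run, `evaluate` would train the DFNN with the given configuration and return a validation metric; repeating the procedure for each of the eight target hyperparameters yields the per-hyperparameter performance curves the figures depict.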