Issues in the Reproducibility of Deep Learning Results

S. Jean-Paul, T. Elseify, I. Obeid, Joseph Picone
{"title":"深度学习结果再现性中的问题","authors":"S. Jean-Paul, T. Elseify, I. Obeid, Joseph Picone","doi":"10.1109/SPMB47826.2019.9037840","DOIUrl":null,"url":null,"abstract":"The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1] . This heterogeneous cluster uses innovative scheduling technology, Slurm [2] , that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2] . We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process.","PeriodicalId":143197,"journal":{"name":"2019 IEEE Signal Processing in Medicine and Biology Symposium (SPMB)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Issues in the Reproducibility of Deep Learning Results\",\"authors\":\"S. Jean-Paul, T. Elseify, I. Obeid, Joseph Picone\",\"doi\":\"10.1109/SPMB47826.2019.9037840\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1] . This heterogeneous cluster uses innovative scheduling technology, Slurm [2] , that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2] . We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process.\",\"PeriodicalId\":143197,\"journal\":{\"name\":\"2019 IEEE Signal Processing in Medicine and Biology Symposium (SPMB)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE Signal Processing in Medicine and Biology Symposium (SPMB)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SPMB47826.2019.9037840\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Signal Processing in Medicine and Biology Symposium (SPMB)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPMB47826.2019.9037840","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8

Abstract

The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer-grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process.
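The abstract describes training accelerated across multiple GPUs under TensorFlow, and the paper's subject is the reproducibility of such runs. Below is a minimal sketch, not taken from the paper, of how one might seed a TensorFlow experiment and replicate it across the visible GPUs with tf.distribute.MirroredStrategy; the seed value, the toy model, and the synthetic data are placeholders introduced here for illustration.

```python
# A minimal sketch (not from the paper) of seeding a TensorFlow training
# run and replicating it across the visible GPUs. The seed, model, and
# synthetic data below are placeholders for illustration only.
import random

import numpy as np
import tensorflow as tf

SEED = 1337  # hypothetical seed value

# Seed every generator the pipeline touches (Python, NumPy, TensorFlow).
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)

# MirroredStrategy copies the model to every visible GPU and averages
# gradients; with no GPU present it falls back to the CPU.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(16,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in data; a real experiment would load a corpus such as TUH EEG.
x = np.random.rand(256, 16).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```

Even with all seeds fixed, multi-GPU gradient reductions and non-deterministic GPU kernels can still introduce run-to-run differences, which is the kind of issue the title suggests the paper examines.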