{"title":"用于安全保密神经成像分析的联合学习架构","authors":"","doi":"10.1016/j.patter.2024.101031","DOIUrl":null,"url":null,"abstract":"<p>The amount of biomedical data continues to grow rapidly. However, collecting data from multiple sites for joint analysis remains challenging due to security, privacy, and regulatory concerns. To overcome this challenge, we use federated learning, which enables distributed training of neural network models over multiple data sources without sharing data. Each site trains the neural network over its private data for some time and then shares the neural network parameters (i.e., weights and/or gradients) with a federation controller, which in turn aggregates the local models and sends the resulting community model back to each site, and the process repeats. Our federated learning architecture, MetisFL, provides strong security and privacy. First, sample data never leave a site. Second, neural network parameters are encrypted before transmission and the global neural model is computed under fully homomorphic encryption. Finally, we use information-theoretic methods to limit information leakage from the neural model to prevent a “curious” site from performing model inversion or membership attacks. We present a thorough evaluation of the performance of secure, private federated learning in neuroimaging tasks, including for predicting Alzheimer’s disease and for brain age gap estimation (BrainAGE) from magnetic resonance imaging (MRI) studies in challenging, heterogeneous federated environments where sites have different amounts of data and statistical distributions.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.7000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A federated learning architecture for secure and private neuroimaging analysis\",\"authors\":\"\",\"doi\":\"10.1016/j.patter.2024.101031\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The amount of biomedical data continues to grow rapidly. However, collecting data from multiple sites for joint analysis remains challenging due to security, privacy, and regulatory concerns. To overcome this challenge, we use federated learning, which enables distributed training of neural network models over multiple data sources without sharing data. Each site trains the neural network over its private data for some time and then shares the neural network parameters (i.e., weights and/or gradients) with a federation controller, which in turn aggregates the local models and sends the resulting community model back to each site, and the process repeats. Our federated learning architecture, MetisFL, provides strong security and privacy. First, sample data never leave a site. Second, neural network parameters are encrypted before transmission and the global neural model is computed under fully homomorphic encryption. Finally, we use information-theoretic methods to limit information leakage from the neural model to prevent a “curious” site from performing model inversion or membership attacks. 
We present a thorough evaluation of the performance of secure, private federated learning in neuroimaging tasks, including for predicting Alzheimer’s disease and for brain age gap estimation (BrainAGE) from magnetic resonance imaging (MRI) studies in challenging, heterogeneous federated environments where sites have different amounts of data and statistical distributions.</p>\",\"PeriodicalId\":36242,\"journal\":{\"name\":\"Patterns\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2024-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Patterns\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1016/j.patter.2024.101031\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Patterns","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.patter.2024.101031","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
A federated learning architecture for secure and private neuroimaging analysis
The amount of biomedical data continues to grow rapidly. However, collecting data from multiple sites for joint analysis remains challenging due to security, privacy, and regulatory concerns. To overcome this challenge, we use federated learning, which enables distributed training of neural network models over multiple data sources without sharing data. Each site trains the neural network over its private data for some time and then shares the neural network parameters (i.e., weights and/or gradients) with a federation controller, which in turn aggregates the local models and sends the resulting community model back to each site, and the process repeats. Our federated learning architecture, MetisFL, provides strong security and privacy. First, sample data never leave a site. Second, neural network parameters are encrypted before transmission and the global neural model is computed under fully homomorphic encryption. Finally, we use information-theoretic methods to limit information leakage from the neural model to prevent a “curious” site from performing model inversion or membership attacks. We present a thorough evaluation of the performance of secure, private federated learning in neuroimaging tasks, including for predicting Alzheimer’s disease and for brain age gap estimation (BrainAGE) from magnetic resonance imaging (MRI) studies in challenging, heterogeneous federated environments where sites have different amounts of data and statistical distributions.
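The protocol described above follows a familiar round structure: each site trains on its private data, the federation controller aggregates the resulting parameters, and the community model is sent back for the next round. The sketch below illustrates that loop in plain Python/NumPy on a toy linear model; it is not the MetisFL API, the function names (local_update, federated_round) and the one-layer model are illustrative assumptions, and the encryption step (the paper computes the global model under fully homomorphic encryption) is omitted here.

```python
# Minimal sketch of FedAvg-style rounds, assuming plain (unencrypted) weighted
# averaging; in the architecture described above, parameters would be encrypted
# before transmission and aggregated as ciphertexts.
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    # Hypothetical local training: one-layer linear model, squared loss,
    # plain gradient descent on this site's private data only.
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_weights, sites):
    # One round: every site trains locally, then the controller forms a
    # community model as a sample-count-weighted average of local models.
    total = sum(n for _, _, n in sites)
    updates = []
    for data, labels, n in sites:
        updates.append((local_update(global_weights, data, labels), n))
    return sum(w * (n / total) for w, n in updates)

# Toy usage: three sites with different amounts of data, as in the
# heterogeneous setting described in the abstract.
rng = np.random.default_rng(0)
d = 8
w_true = rng.normal(size=d)
sites = []
for n in (50, 200, 500):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    sites.append((X, y, n))

w_global = np.zeros(d)
for _ in range(20):
    w_global = federated_round(w_global, sites)
print(np.linalg.norm(w_global - w_true))  # error should drop close to zero
```

Weighting each site's contribution by its sample count is one common way to handle sites with different amounts of data; raw samples never leave a site, and only model parameters cross the network, matching the data-minimization goal of the architecture.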