
Distributed, collaborative, and federated learning, and affordable AI and healthcare for resource diverse global health : Third MICCAI Workshop, DeCaF 2022 and Second MICCAI Workshop, FAIR 2022, held in conjunction with MICCAI 2022, Sin... - Latest Publications

Federated Learning: Fundamentals and Advances
Yaochu Jin, Hangyu Zhu, Jinjin Xu, Yang Chen
DOI: 10.1007/978-981-19-7083-2 · Published 2023-01-01
Citations: 2
Incremental Learning Meets Transfer Learning: Application to Multi-site Prostate MRI Segmentation.
Chenyu You, Jinlin Xiang, Kun Su, Xiaoran Zhang, Siyuan Dong, John Onofrey, Lawrence Staib, James S Duncan

Many medical datasets have recently been created for medical image segmentation tasks, and it is natural to ask whether we can use them to sequentially train a single model that (1) performs better on all these datasets, and (2) generalizes well and transfers better to an unknown target site domain. Prior works have pursued this goal by jointly training one model on multi-site datasets, which achieves competitive performance on average, but such methods rely on the assumption that all training data are available, limiting their effectiveness in practical deployment. In this paper, we propose a novel multi-site segmentation framework called incremental-transfer learning (ITL), which learns a model from multi-site datasets in an end-to-end sequential fashion. Specifically, "incremental" refers to training on sequentially constructed datasets, and "transfer" is achieved by leveraging useful information from the linear combination of embedding features on each dataset. In addition, we introduce our ITL framework, in which we train a network comprising a site-agnostic encoder with pretrained weights and at most two segmentation decoder heads. We also design a novel site-level incremental loss in order to generalize well on the target domain. Second, we show for the first time that our ITL training scheme is able to alleviate the challenging catastrophic forgetting problem in incremental learning. We conduct experiments on five challenging benchmark datasets to validate the effectiveness of our incremental-transfer learning approach. Our approach makes minimal assumptions about computation resources and domain-specific expertise, and hence constitutes a strong starting point for multi-site medical image segmentation.

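To make the training schedule concrete, here is a minimal, heavily simplified sketch of the sequential scheme the abstract describes: a frozen, pretrained site-agnostic encoder, one decoder head fitted per site in sequence (so at most two heads, current and previous, are alive at a time), and the previous head's output blended with the current one as a stand-in for the transfer step. The linear encoder, the least-squares head fitting, the 0.5/0.5 blend, and all data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained, site-agnostic encoder: a frozen linear map
# from 8 input features to a 4-dimensional embedding.
encoder = rng.normal(size=(8, 4))

def fit_head(X, y):
    """Fit a linear decoder head on frozen encoder features
    (least squares stands in for gradient-based training)."""
    Z = X @ encoder
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return w

# Three toy "sites", visited one after another; only the current head
# is updated, while the previous one is kept frozen.
sites = [(rng.normal(size=(32, 8)), rng.normal(size=32)) for _ in range(3)]
prev_head = None
for X, y in sites:
    head = fit_head(X, y)
    if prev_head is not None:
        # "Transfer": blend the current prediction with the frozen
        # previous head's output (an assumed, simplified form of the idea).
        pred = 0.5 * (X @ encoder @ head) + 0.5 * (X @ encoder @ prev_head)
    else:
        pred = X @ encoder @ head
    prev_head = head
```

In the actual framework the encoder and decoder heads are deep networks trained with the site-level incremental loss; this sketch only mirrors the control flow of sequential, two-head training.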
DOI: 10.1007/978-3-031-18523-6_1 · Vol. 13573, pp. 3-16 · Published 2022-09-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10323962/pdf/nihms-1913002.pdf
Citations: 0
Towards More Efficient Data Valuation in Healthcare Federated Learning using Ensembling
Sourav Kumar, A. Lakshminarayanan, Ken Chang, Feri Guretno, Ivan Ho Mien, Jayashree Kalpathy-Cramer, Pavitra Krishnaswamy, Praveer Singh
Federated Learning (FL), wherein multiple institutions collaboratively train a machine learning model without sharing data, is becoming popular. Participating institutions might not contribute equally: some contribute more data, some better-quality data, and some more diverse data. To fairly rank the contributions of different institutions, the Shapley value (SV) has emerged as the method of choice. Exact SV computation is prohibitively expensive, especially when there are hundreds of contributors, so existing SV computation techniques use approximations. However, in healthcare, where the number of contributing institutions is likely not of a colossal scale, computing exact SVs is still exorbitantly expensive, but not impossible. For such settings, we propose an efficient SV computation technique called SaFE (Shapley Value for Federated Learning using Ensembling). We empirically show that SaFE computes values that are close to exact SVs and that it performs better than current SV approximations. This is particularly relevant in medical imaging settings, where heterogeneity across institutions is widespread and fast, accurate data valuation is required to determine the contribution of each participant in multi-institutional collaborative learning.
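Since this entry contrasts exact Shapley values with approximations, a small sketch of the exact computation helps show why it explodes: every one of the n! orderings of institutions contributes a marginal-utility term. The toy utility function and data sizes below are hypothetical, chosen only to illustrate the definition; SaFE's ensembling-based speedup is not reproduced here.

```python
from itertools import permutations

def exact_shapley(players, utility):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings of the participating institutions."""
    values = {p: 0.0 for p in players}
    perms = list(permutations(players))  # n! orderings: the cost driver
    for order in perms:
        coalition = []
        prev = utility(frozenset(coalition))
        for p in order:
            coalition.append(p)
            cur = utility(frozenset(coalition))
            values[p] += cur - prev
            prev = cur
    return {p: v / len(perms) for p, v in values.items()}

# Toy utility: a saturating "model quality" that grows with the total
# amount of data the coalition contributes (hypothetical numbers).
data = {"A": 100, "B": 50, "C": 50}

def utility(coalition):
    total = sum(data[p] for p in coalition)
    return total / (total + 100)

sv = exact_shapley(list(data), utility)
```

Two sanity properties follow from the definition: institutions B and C, being interchangeable here, receive equal values (symmetry), and the values sum to the grand coalition's utility (efficiency). With hundreds of contributors the n! orderings make this loop infeasible, which is the gap approximation methods, and SaFE, aim to close.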
DOI: 10.48550/arXiv.2209.05424 · pp. 119-129 · Published 2022-09-01
Citations: 4
Towards More Efficient Data Valuation in Healthcare Federated Learning using Ensembling.
Sourav Kumar, A Lakshminarayanan, Ken Chang, Feri Guretno, Ivan Ho Mien, Jayashree Kalpathy-Cramer, Pavitra Krishnaswamy, Praveer Singh

DOI: 10.1007/978-3-031-18523-6_12 · Vol. 13573, pp. 119-129 · Published 2022-09-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9890952/pdf/nihms-1859434.pdf
Citations: 0
Incremental Learning Meets Transfer Learning: Application to Multi-site Prostate MRI Segmentation
Chenyu You, Jinlin Xiang, Kun Su, Xiaoran Zhang, Siyuan Dong, John A. Onofrey, L. Staib, J. Duncan
DOI: 10.48550/arXiv.2206.01369 · pp. 3-16 · Published 2022-06-03
Citations: 22
Security and Robustness in Federated Learning
Ambrish Rawat, Giulio Zizzo, Muhammad Zaid Hameed, Luis Muñoz-González
DOI: 10.1007/978-3-030-96896-0_16 · pp. 363-390 · Published 2022-01-01
Citations: 0
Tree-Based Models for Federated Learning Systems
Yuya Jeremy Ong, N. Baracaldo, Yi Zhou
DOI: 10.1007/978-3-030-96896-0_2 · pp. 27-52 · Published 2022-01-01
Citations: 2
Federated Reinforcement Learning for Portfolio Management
Pengqian Yu, L. Wynter, Shiau Hong Lim
DOI: 10.1007/978-3-030-96896-0_21 · pp. 467-482 · Published 2022-01-01
Citations: 2
Protecting Against Data Leakage in Federated Learning: What Approach Should You Choose?
N. Baracaldo, Runhua Xu
DOI: 10.1007/978-3-030-96896-0_13 · pp. 281-312 · Published 2022-01-01
Citations: 0
Introduction to Federated Learning Systems
Syed Zawad, Feng Yan, A. Anwar
DOI: 10.1007/978-3-030-96896-0_9 · pp. 195-212 · Published 2022-01-01
Citations: 2