Shubham Malaviya, Manish Shukla, Pratik Korat, S. Lodha
{"title":"联邦半监督学习中基于模型对比学习的数据增强自由框架","authors":"Shubham Malaviya, Manish Shukla, Pratik Korat, S. Lodha","doi":"10.1145/3555776.3577613","DOIUrl":null,"url":null,"abstract":"Federated learning has emerged as a privacy-preserving technique to learn a machine learning model without requiring users to share their data. Our paper focuses on Federated Semi-Supervised Learning (FSSL) setting wherein users do not have domain expertise or incentives to label data on their device, and the server has access to some labeled data that is annotated by experts. The existing work in FSSL require data augmentation for model training. However, data augmentation is not well defined for prevalent domains like text and graphs. Moreover, non independent and identically distributed (non-i.i.d.) data across users is a significant challenge in federated learning. We propose a generalized framework based on model contrastive learning called FedFAME which does not require data augmentation, thus making it easy to adapt to different domains. Our experiments on image and text datasets show the robustness of FedFAME towards non-i.i.d. data. We have validated our approach by varying data imbalance across users and the number of labeled instances on the server.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":0.4000,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FedFAME: A Data Augmentation Free Framework based on Model Contrastive Learning for Federated Semi-Supervised Learning\",\"authors\":\"Shubham Malaviya, Manish Shukla, Pratik Korat, S. Lodha\",\"doi\":\"10.1145/3555776.3577613\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning has emerged as a privacy-preserving technique to learn a machine learning model without requiring users to share their data. Our paper focuses on Federated Semi-Supervised Learning (FSSL) setting wherein users do not have domain expertise or incentives to label data on their device, and the server has access to some labeled data that is annotated by experts. The existing work in FSSL require data augmentation for model training. However, data augmentation is not well defined for prevalent domains like text and graphs. Moreover, non independent and identically distributed (non-i.i.d.) data across users is a significant challenge in federated learning. We propose a generalized framework based on model contrastive learning called FedFAME which does not require data augmentation, thus making it easy to adapt to different domains. Our experiments on image and text datasets show the robustness of FedFAME towards non-i.i.d. data. 
We have validated our approach by varying data imbalance across users and the number of labeled instances on the server.\",\"PeriodicalId\":42971,\"journal\":{\"name\":\"Applied Computing Review\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.4000,\"publicationDate\":\"2023-03-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Computing Review\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3555776.3577613\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Computing Review","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3555776.3577613","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
FedFAME: A Data Augmentation Free Framework based on Model Contrastive Learning for Federated Semi-Supervised Learning
Federated learning has emerged as a privacy-preserving technique for training a machine learning model without requiring users to share their data. Our paper focuses on the Federated Semi-Supervised Learning (FSSL) setting, wherein users have neither the domain expertise nor the incentive to label data on their devices, while the server has access to some labeled data annotated by experts. Existing work in FSSL requires data augmentation for model training. However, data augmentation is not well defined for prevalent domains such as text and graphs. Moreover, non-independent and identically distributed (non-i.i.d.) data across users is a significant challenge in federated learning. We propose a generalized framework based on model contrastive learning, called FedFAME, which does not require data augmentation and is therefore easy to adapt to different domains. Our experiments on image and text datasets show the robustness of FedFAME to non-i.i.d. data. We have validated our approach by varying the data imbalance across users and the number of labeled instances on the server.
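The abstract does not spell out FedFAME's training objective, but a minimal sketch can illustrate what augmentation-free model contrastive learning looks like in general. The loss below follows the MOON-style formulation (Li et al., 2021), in which the contrasting "views" of an input come from different models (the current local model, the global model, and the previous local model) rather than from transformed inputs; the function and variable names here are hypothetical illustrations, not FedFAME's actual API.

```python
import torch
import torch.nn.functional as F

def model_contrastive_loss(z_local, z_global, z_prev, temperature=0.5):
    """Hypothetical model-contrastive loss in the MOON style: pull the
    current local representation z_local toward the global model's
    representation z_global (positive pair) and push it away from the
    previous local model's representation z_prev (negative pair).
    No input-level data augmentation is needed, because the contrasting
    views come from different models evaluated on the same mini-batch.
    All inputs are (batch, dim) feature tensors.
    """
    sim_pos = F.cosine_similarity(z_local, z_global, dim=-1) / temperature
    sim_neg = F.cosine_similarity(z_local, z_prev, dim=-1) / temperature
    logits = torch.stack([sim_pos, sim_neg], dim=1)  # positive pair at index 0
    labels = torch.zeros(z_local.size(0), dtype=torch.long)  # target: index 0
    return F.cross_entropy(logits, labels)
```

Because this objective operates on model representations rather than on augmented inputs, it applies unchanged to images, text, or graphs, which is what makes the augmentation-free approach easy to carry across domains.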