Improving Security and Fairness in Federated Learning Systems
Andrew R. Short, Theofanis G. Orfanoudakis, Helen C. Leligou
International Journal of Network Security & Its Applications, 2021-11-30
DOI: 10.5121/ijnsa.2021.13604
Citations: 2
Abstract
The ever-increasing use of Artificial Intelligence applications has made apparent that the quality of the training datasets affects the performance of the models. To this end, Federated Learning aims to engage multiple entities to contribute to the learning process with locally maintained data, without requiring them to share the actual datasets. Since the parameter server does not have access to the actual training datasets, it becomes challenging to offer rewards to users by directly inspecting the dataset quality. Instead, this paper focuses on ways to strengthen user engagement by offering “fair” rewards, proportional to the model improvement (in terms of accuracy) they offer. Furthermore, to enable objective judgment of the quality of contribution, we devise a point system to record user performance, assisted by blockchain technologies. More precisely, we have developed a verification algorithm that evaluates the performance of users’ contributions by comparing the resulting accuracy of the global model against a verification dataset, and we demonstrate how this metric can be used to offer security improvements in a Federated Learning process. Finally, we implement the solution in a simulation environment to assess its feasibility and collect baseline results using datasets of varying quality.
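The core mechanism the abstract describes, scoring each client's contribution by the accuracy change it produces on a held-out verification dataset and paying rewards in proportion to that improvement, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the linear toy model, the function names (`accuracy`, `verify_and_reward`), and the reward-pool parameter are all assumptions, and the returned dictionary merely stands in for the blockchain-backed point system mentioned in the paper.

```python
# Hypothetical sketch: score client updates on a verification set and
# split a reward pool proportionally to the accuracy gain each delivers.

def accuracy(weights, dataset):
    """Fraction of (x, label) pairs a simple linear model classifies correctly."""
    correct = 0
    for x, label in dataset:
        pred = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0
        correct += (pred == label)
    return correct / len(dataset)

def verify_and_reward(global_weights, client_updates, verification_set,
                      reward_pool=100.0):
    """Evaluate each client's proposed model against the verification set.

    Only improvements over the current global model earn anything; clients
    whose updates hurt accuracy (e.g. poisoned or low-quality data) get 0.
    Returns {client_id: reward} -- a stand-in for the paper's point ledger.
    """
    base = accuracy(global_weights, verification_set)
    gains = {cid: max(0.0, accuracy(w, verification_set) - base)
             for cid, w in client_updates.items()}
    total = sum(gains.values())
    if total == 0:
        return {cid: 0.0 for cid in client_updates}
    return {cid: reward_pool * g / total for cid, g in gains.items()}

# Toy usage: labels follow the sign of the single feature.
verification_set = [([1.0], 1), ([-1.0], 0), ([2.0], 1), ([-2.0], 0)]
rewards = verify_and_reward(
    global_weights=[0.0],                      # baseline accuracy 0.5
    client_updates={"A": [1.0], "B": [-1.0]},  # A helps, B hurts
    verification_set=verification_set,
)
print(rewards)  # A earns the full pool; B's harmful update earns nothing
```

Clamping negative gains to zero is what gives the scheme its security flavor: a client submitting a degraded model cannot dilute honest contributors' rewards, and its persistent zero score in the ledger flags it for exclusion.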