Macro Ethics Principles for Responsible AI Systems: Taxonomy and Directions
Jessica Woodgate, Nirav Ajmeri
ACM Computing Surveys (2024-06-13). DOI: 10.1145/3672394

Responsible AI must be able to make or support decisions that consider human values and can be justified by human morals. Accommodating values and morals in responsible decision making is supported by adopting a perspective of macro ethics, which views ethics through a holistic lens incorporating social context. Normative ethical principles inferred from philosophy can be used to methodically reason about ethics and make ethical judgements in specific contexts. Operationalising normative ethical principles thus promotes responsible reasoning under the perspective of macro ethics. We survey the AI and computer science literature and develop a taxonomy of 21 normative ethical principles that can be operationalised in AI. We describe how each principle has previously been operationalised, highlighting key themes that AI practitioners seeking to implement ethical principles should be aware of. We envision that this taxonomy will facilitate the development of methodologies for incorporating normative ethical principles into the reasoning capacities of responsible AI systems.
Benchmarking Instance-Centric Counterfactual Algorithms for XAI: From White Box to Black Box
Catarina Moreira, Yu-Liang Chou, Chihcheng Hsieh, Chun Ouyang, João Pereira, Joaquim Jorge
ACM Computing Surveys (2024-06-12). DOI: 10.1145/3672553
This study investigates the impact of machine learning models on the generation of counterfactual explanations by conducting a benchmark evaluation over three types of models: a decision tree (fully transparent, interpretable, white-box model), a random forest (semi-interpretable, grey-box model), and a neural network (fully opaque, black-box model). We tested the counterfactual generation process using four algorithms from the literature (DiCE, WatcherCF, prototype, and GrowingSpheresCF) on 25 different datasets. Our findings indicate that: (1) different machine learning models have little impact on the generation of counterfactual explanations; (2) counterfactual algorithms based solely on proximity loss functions are not actionable and do not provide meaningful explanations; (3) meaningful evaluation results cannot be obtained without guaranteeing plausibility in counterfactual generation, and algorithms that do not consider plausibility in their internal mechanisms will lead to biased and unreliable conclusions if evaluated with the current state-of-the-art metrics; and (4) a counterfactual inspection analysis is strongly recommended to ensure a robust examination of counterfactual explanations and the potential identification of biases.
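For readers who want to try a small piece of this pipeline, below is a hedged sketch of counterfactual generation with the open-source dice-ml library (the implementation of DiCE, one of the four benchmarked algorithms); the dataset, column names, and method choice are placeholder assumptions, not the paper's exact experimental setup:

```python
# Minimal sketch of counterfactual generation with DiCE (pip install dice-ml).
# "loans.csv" and its columns are hypothetical stand-ins.
import dice_ml
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("loans.csv")  # hypothetical tabular data with binary 'approved'
clf = RandomForestClassifier().fit(df.drop(columns=["approved"]), df["approved"])

data = dice_ml.Data(dataframe=df,
                    continuous_features=["income", "age"],  # placeholder names
                    outcome_name="approved")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# Ask for 3 counterfactuals that flip the model's prediction for one instance.
query = df.drop(columns=["approved"]).iloc[[0]]
cfs = explainer.generate_counterfactuals(query, total_CFs=3,
                                         desired_class="opposite")
cfs.visualize_as_dataframe()
```

Per finding (3), counterfactuals generated this way should additionally be inspected for plausibility before any conclusions are drawn from them.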
{"title":"Benchmarking Instance-Centric Counterfactual Algorithms for XAI: From White Box to Black Box","authors":"Catarina Moreira, Yu-Liang Chou, Chihcheng Hsieh, Chun Ouyang, João Pereira, Joaquim Jorge","doi":"10.1145/3672553","DOIUrl":"https://doi.org/10.1145/3672553","url":null,"abstract":"<p>This study investigates the impact of machine learning models on the generation of counterfactual explanations by conducting a benchmark evaluation over three different types of models: a decision tree (fully transparent, interpretable, white-box model), a random forest (semi-interpretable, grey-box model), and a neural network (fully opaque, black-box model). We tested the counterfactual generation process using four algorithms (DiCE, WatcherCF, prototype, and GrowingSpheresCF) in the literature in 25 different datasets. Our findings indicate that: (1) Different machine learning models have little impact on the generation of counterfactual explanations; (2) Counterfactual algorithms based uniquely on proximity loss functions are not actionable and will not provide meaningful explanations; (3) One cannot have meaningful evaluation results without guaranteeing plausibility in the counterfactual generation. Algorithms that do not consider plausibility in their internal mechanisms will lead to biased and unreliable conclusions if evaluated with the current state-of-the-art metrics; (4) A counterfactual inspection analysis is strongly recommended to ensure a robust examination of counterfactual explanations and the potential identification of biases.</p>","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"2014 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141308990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Practical Tutorial on Explainable AI Techniques
Adrien Bennetot, Ivan Donadello, Ayoub El Qadi El Haouari, Mauro Dragoni, Thomas Frossard, Benedikt Wagner, Anna Sarranti, Silvia Tulli, Maria Trocan, Raja Chatila, Andreas Holzinger, Artur d'Avila Garcez, Natalia Díaz-Rodríguez
ACM Computing Surveys (2024-06-12). DOI: 10.1145/3670685
Recent years have been characterized by an upsurge of opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although DNNs have great generalization and prediction abilities, it is difficult to obtain detailed explanations for their behaviour. As opaque machine learning models are increasingly being employed to make important predictions in critical domains, there is a danger of creating and using decisions that are not justifiable or legitimate. There is therefore general agreement on the importance of endowing DNNs with explainability. EXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency, and fairness. This guide is intended as the go-to handbook for anyone with a computer science background who wants to obtain intuitive insights from machine learning models, accompanied by out-of-the-box explanations. The article aims to rectify the lack of a practical XAI guide by applying XAI techniques to particular day-to-day models, datasets, and use cases. In each chapter, the reader will find a description of the proposed method as well as one or several examples of use with Python notebooks, which can easily be modified and applied to specific applications. We also explain the prerequisites for using each technique, what the user will learn from it, and which tasks it is aimed at.
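In the spirit of the tutorial's notebook-driven format, here is a hedged sketch of one widely used XAI technique, SHAP feature attributions for a tree ensemble (the dataset and model are stand-ins chosen for self-containment, not the tutorial's own notebooks):

```python
# Minimal sketch: post-hoc feature attributions with SHAP for a tree model.
# pip install shap scikit-learn matplotlib
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)         # exploits tree structure for fast Shapley values
shap_values = explainer.shap_values(X.iloc[:100])

shap.summary_plot(shap_values, X.iloc[:100])  # global view: which features drive predictions
```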
{"title":"A Practical tutorial on Explainable AI Techniques","authors":"Adrien Bennetot, Ivan Donadello, Ayoub El Qadi El Haouari, Mauro Dragoni, Thomas Frossard, Benedikt Wagner, Anna Sarranti, Silvia Tulli, Maria Trocan, Raja Chatila, Andreas Holzinger, Artur d'Avila Garcez, Natalia Díaz-Rodríguez","doi":"10.1145/3670685","DOIUrl":"https://doi.org/10.1145/3670685","url":null,"abstract":"<p>The past years have been characterized by an upsurge in opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although DNNs have great generalization and prediction abilities, it is difficult to obtain detailed explanations for their behaviour. As opaque Machine Learning models are increasingly being employed to make important predictions in critical domains, there is a danger of creating and using decisions that are not justifiable or legitimate. Therefore, there is a general agreement on the importance of endowing DNNs with explainability. EXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency and fairness. This guide is intended to be the go-to handbook for anyone with a computer science background aiming to obtain an intuitive insight from Machine Learning models accompanied by explanations out-of-the-box. The article aims to rectify the lack of a practical XAI guide by applying XAI techniques in particular day-to-day models, datasets and use-cases. In each chapter, the reader will find a description of the proposed method as well as one or several examples of use with Python notebooks. These can be easily modified in order to be applied to specific applications. We also explain what the prerequisites are for using each technique, what the user will learn about them, and which tasks they are aimed at.</p>","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"6 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141308996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Survey of Hardware Improvements to Secure Program Execution
Lianying Zhao, He Shuang, Shengjie Xu, Wei Huang, Rongzhen Cui, Pushkar Bettadpur, David Lie
ACM Computing Surveys (2024-06-12). DOI: 10.1145/3672392
Hardware has been constantly augmented for security considerations since the advent of computers. There is also a common perception among computer users that hardware does a relatively better job of security assurance than software. Yet the community has long lacked a comprehensive study answering questions such as how hardware security support contributes to security, what improvements have been introduced to strengthen such support, and what its advantages and disadvantages are.

By generalizing various security goals, we taxonomize hardware security features and the security properties with which they can aid in securing program execution, considered from three aspects: state correctness, runtime protection, and input/output protection. Based on this taxonomy, the survey systematically examines 1) the roles: how hardware is applied to achieve security; and 2) the problems: how reported attacks have exploited certain defects in hardware. We find that hardware's unique advantages and problems co-exist, and that which type to use depends heavily on the desired security purpose. Among the survey's findings are also that code shipped as part of hardware (i.e., firmware) should be treated differently to ensure security by design, and that research proposals have driven the advancement of commodity hardware features.
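As one concrete example of the state-correctness aspect, measured boot on TPM-equipped platforms records each boot component in a hash chain held in Platform Configuration Registers (PCRs). The sketch below mimics the PCR extend operation in plain Python as a conceptual illustration; it does not talk to real TPM hardware, and the component blobs are stand-ins:

```python
# Conceptual sketch of a TPM-style PCR extend: each measurement is folded
# into a running hash, so the final value commits to the whole boot sequence.
import hashlib

def pcr_extend(pcr: bytes, component: bytes) -> bytes:
    """PCR_new = SHA-256(PCR_old || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at platform reset
for component in [b"bootloader-image", b"kernel-image", b"initrd-image"]:
    pcr = pcr_extend(pcr, component)

print(pcr.hex())  # a verifier compares this against the expected "golden" value
```

Because the chain is order-sensitive and one-way, tampering with any component (or reordering the sequence) yields a different final PCR value.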
{"title":"A Survey of Hardware Improvements to Secure Program Execution","authors":"Lianying Zhao, He Shuang, Shengjie Xu, Wei Huang, Rongzhen Cui, Pushkar Bettadpur, David Lie","doi":"10.1145/3672392","DOIUrl":"https://doi.org/10.1145/3672392","url":null,"abstract":"<p>Hardware has been constantly augmented for security considerations since the advent of computers. There is also a common perception among computer users that hardware does a relatively better job on security assurance compared to software. Yet, the community has long lacked a comprehensive study to answer questions such as how hardware security support contributes to security, what kind of improvements have been introduced to improve such support and what its advantages/disadvantages are. </p><p>By generalizing various security goals, we taxonomize hardware security features and their security properties that can aid in securing program execution, considered as three aspects, i.e., state correctness, runtime protection and input/output protection. Based on this taxonomy, the survey systematically examines 1) the roles: how hardware is applied to achieve security; and 2) the problems: how reported attacks have exploited certain defects in hardware. We see that hardware’s unique advantages and problems co-exist and it highly depends on the desired security purpose as to which type to use. Among the survey findings are also that code as part of hardware (aka. firmware) should be treated differently to ensure security by design; and how research proposals have driven the advancement of commodity hardware features.</p>","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"36 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141308991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lexical Semantic Change through Large Language Models: a Survey
Francesco Periti, Stefano Montanelli
ACM Computing Surveys (2024-06-10). DOI: 10.1145/3672393

Lexical Semantic Change (LSC) is the task of identifying, interpreting, and assessing possible changes over time in the meanings of a target word. Traditionally, LSC has been addressed by linguists and social scientists through manual, time-consuming analyses, which are thus limited in the volume, genres, and time frames that can be considered. In recent years, computational approaches based on Natural Language Processing have gained increasing attention as a way to automate LSC as much as possible. Significant advances have been made by relying on Large Language Models (LLMs), which can handle the multiple usages of words and better capture the related semantic change. In this article, we survey LLM-based approaches to LSC and propose a classification framework characterized by three dimensions: meaning representation, time-awareness, and learning modality. The framework is used to i) review the measures for change assessment, ii) compare the approaches on performance, and iii) discuss current issues in terms of scalability, interpretability, and robustness. Open challenges and future research directions for the use of LLMs in LSC are finally outlined.
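A common form-based approach in this literature measures change as the distance between a word's contextual embeddings in different time periods. The sketch below is a hedged illustration (the bert-base-uncased model, mean pooling, tiny example sentences, and single-wordpiece assumption are all simplifications, not any specific surveyed system):

```python
# Sketch: semantic change of a target word as the cosine distance between its
# mean contextual embeddings in two periods. pip install transformers torch
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def word_embedding(sentence: str, target: str) -> torch.Tensor:
    enc = tok(sentence, return_tensors="pt")
    target_id = tok.convert_tokens_to_ids(target)    # assumes target is one wordpiece
    pos = (enc["input_ids"][0] == target_id).nonzero()[0]
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    return hidden[pos].squeeze(0)

old = [word_embedding(s, "gay") for s in
       ["a gay and cheerful tune", "the gay crowd danced all night"]]
new = [word_embedding(s, "gay") for s in
       ["a gay rights march", "the gay community responded"]]
change = 1 - torch.cosine_similarity(torch.stack(old).mean(0),
                                     torch.stack(new).mean(0), dim=0)
print(float(change))  # larger distance suggests stronger semantic change
```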
{"title":"Lexical Semantic Change through Large Language Models: a Survey","authors":"Francesco Periti, Stefano Montanelli","doi":"10.1145/3672393","DOIUrl":"https://doi.org/10.1145/3672393","url":null,"abstract":"<p>Lexical Semantic Change (LSC) is the task of identifying, interpreting, and assessing the possible change over time in the meanings of a target word. Traditionally, LSC has been addressed by linguists and social scientists through manual and time-consuming analyses, which have thus been limited in terms of the volume, genres, and time-frame that can be considered. In recent years, computational approaches based on Natural Language Processing have gained increasing attention to automate LSC as much as possible. Significant advancements have been made by relying on Large Language Models (LLMs), which can handle the multiple usages of the words and better capture the related semantic change. In this article, we survey the approaches based on LLMs for LSC and we propose a classification framework characterized by three dimensions: <i>meaning representation</i>, <i>time-awareness</i>, and <i>learning modality</i>. The framework is exploited to i) review the measures for change assessment, ii) compare the approaches on performance, and iii) discuss the current issues in terms of scalability, interpretability, and robustness. Open challenges and future research directions about the use of LLMs for LSC are finally outlined.</p>","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"31 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141299114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Databases in Edge and Fog Environments: A Survey
Luís Manuel Meruje Ferreira, Fabio Coelho, José Pereira
ACM Computing Surveys (2024-06-04). DOI: 10.1145/3666001
While a significant number of databases are deployed in cloud environments, pushing part or all of the data storage and querying planes closer to their sources (i.e., to the edge) can provide advantages in latency, connectivity, privacy, energy, and scalability. This article dissects the advantages provided by databases in edge and fog environments by surveying application domains and discussing the key drivers for pushing database systems to the edge. At the same time, it identifies the main challenges developers face in this new environment and analyses the mechanisms employed to deal with them. By providing an overview of the current state of edge and fog databases, this survey offers valuable insights into future research directions.
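To make the latency driver concrete, here is a hedged toy sketch (purely illustrative, not any surveyed system) of an edge-first store that serves reads locally and replicates writes to the cloud asynchronously:

```python
# Toy sketch of an edge-first key-value layer: reads are served locally for
# low latency; writes are queued and pushed to the cloud tier in the background.
import queue, threading, time

class EdgeStore:
    def __init__(self, cloud_put):
        self.local = {}                  # edge-resident data plane
        self.outbox = queue.Queue()      # write-behind replication queue
        self.cloud_put = cloud_put       # callable standing in for the cloud API
        threading.Thread(target=self._replicate, daemon=True).start()

    def put(self, key, value):
        self.local[key] = value          # acknowledge at the edge immediately
        self.outbox.put((key, value))    # replicate later, tolerating disconnection

    def get(self, key):
        return self.local.get(key)       # local read: no WAN round trip

    def _replicate(self):
        while True:
            key, value = self.outbox.get()
            self.cloud_put(key, value)   # retries and conflict handling omitted

store = EdgeStore(cloud_put=lambda k, v: time.sleep(0.1))  # fake slow cloud write
store.put("sensor:42", 21.5)
print(store.get("sensor:42"))            # returns instantly from the edge
```

Real edge databases must additionally handle conflict resolution, retries, and consistency guarantees, which are among the challenges the survey discusses.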
{"title":"Databases in Edge and Fog Environments : A Survey","authors":"Luís Manuel Meruje Ferreira, Fabio Coelho, José Pereira","doi":"10.1145/3666001","DOIUrl":"https://doi.org/10.1145/3666001","url":null,"abstract":"<p>While a significant number of databases are deployed in cloud environments, pushing part or all data storage and querying planes closer to their sources (i.e., to the edge) can provide advantages in latency, connectivity, privacy, energy and scalability. This article dissects the advantages provided by databases in edge and fog environments, by surveying application domains and discussing the key drivers for pushing database systems to the edge. At the same time, it also identifies the main challenges faced by developers in this new environment, and analysis the mechanisms employed to deal with them. By providing an overview of the current state of edge and fog databases, this survey provides valuable insights into future research directions.</p>","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"69 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141251753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine Learning with Confidential Computing: A Systematization of Knowledge
Fan Mo, Zahra Tarkhani, Hamed Haddadi
ACM Computing Surveys (2024-06-03). DOI: 10.1145/3670007

Privacy and security challenges in Machine Learning (ML) have become increasingly severe, along with ML's pervasive development and the recent demonstration of large attack surfaces. As a mature system-oriented approach, Confidential Computing has been utilized in both academia and industry to mitigate privacy and security issues in various ML scenarios. In this paper, the conjunction between ML and Confidential Computing is investigated. We systematize the prior work on Confidential Computing-assisted ML techniques that provide i) confidentiality guarantees and ii) integrity assurances, and discuss their advanced features and drawbacks. Key challenges are further identified, and we provide dedicated analyses of the limitations of existing Trusted Execution Environment (TEE) systems for ML use cases. Finally, prospective works are discussed, including grounded privacy definitions for closed-loop protection, partitioned execution of efficient ML, dedicated TEE-assisted designs for ML, TEE-aware ML, and full-pipeline guarantees for ML. By providing these potential solutions in our systematization of knowledge, we aim to build a bridge toward much stronger TEE-enabled ML that delivers privacy guarantees without introducing additional computation and system costs.
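As a conceptual illustration of the integrity assurances TEEs provide, the sketch below mimics remote attestation: an enclave reports a measurement of its code, bound to a verifier-chosen nonce. This is heavily simplified; an HMAC with a shared key stands in for the hardware-rooted quote signature used by real TEEs, and the binary and key are stand-ins:

```python
# Conceptual sketch of TEE remote attestation verification (simplified).
import hashlib, hmac, os

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-ml-enclave-binary").hexdigest()
ATTESTATION_KEY = os.urandom(32)  # stand-in for a hardware-rooted signing key

def make_quote(enclave_binary, nonce):
    """Enclave side: report its code measurement, bound to the verifier's nonce."""
    measurement = hashlib.sha256(enclave_binary).hexdigest()
    tag = hmac.new(ATTESTATION_KEY, measurement.encode() + nonce, "sha256").digest()
    return measurement, tag

def verify_quote(measurement, tag, nonce):
    """Verifier side: check quote integrity and that the code is on the allowlist."""
    expected = hmac.new(ATTESTATION_KEY, measurement.encode() + nonce, "sha256").digest()
    return hmac.compare_digest(tag, expected) and measurement == EXPECTED_MEASUREMENT

nonce = os.urandom(16)  # fresh nonce prevents replay of old quotes
print(verify_quote(*make_quote(b"approved-ml-enclave-binary", nonce), nonce))  # True
```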
{"title":"Machine Learning with Confidential Computing: A Systematization of Knowledge","authors":"Fan Mo, Zahra Tarkhani, Hamed Haddadi","doi":"10.1145/3670007","DOIUrl":"https://doi.org/10.1145/3670007","url":null,"abstract":"<p>Privacy and security challenges in Machine Learning (ML) have become increasingly severe, along with ML’s pervasive development and the recent demonstration of large attack surfaces. As a mature system-oriented approach, Confidential Computing has been utilized in both academia and industry to mitigate privacy and security issues in various ML scenarios. In this paper, the conjunction between ML and Confidential Computing is investigated. We systematize the prior work on Confidential Computing-assisted ML techniques that provide <i>i</i>) <i>confidentiality guarantees</i> and <i>ii</i>) <i>integrity assurances</i>, and discuss their advanced features and drawbacks. Key challenges are further identified, and we provide dedicated analyses of the <i>limitations</i> in existing <i>Trusted Execution Environment</i> (TEE) systems for ML use cases. Finally, prospective works are discussed, including grounded privacy definitions for closed-loop protection, partitioned executions of efficient ML, dedicated TEE-assisted designs for ML, TEE-aware ML, and ML full pipeline guarantees. By providing these potential solutions in our systematization of knowledge, we aim to build the bridge to help achieve a much stronger TEE-enabled ML for privacy guarantees without introducing computation and system costs.</p>","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"2013 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141251607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
“Are you feeling sick?” A systematic literature review of cybersickness in virtual reality
Nilotpal Biswas, Anamitra Mukherjee, Samit Bhattacharya
ACM Computing Surveys (2024-06-03). DOI: 10.1145/3670008

Cybersickness (CS), also known as visually induced motion sickness (VIMS), is a condition that can affect individuals when they interact with virtual reality (VR) technology. It is characterized by symptoms such as nausea, dizziness, headaches, and eye fatigue, and can be caused by a variety of factors. Finding feasible ways to reduce the impact of CS is extremely important, as doing so would greatly enhance the overall user experience and make VR appealing to a wider range of people. We have carefully compiled a list of 223 highly pertinent studies to review the current state of research on the most essential aspects of CS. We provide a novel taxonomy that encapsulates the various CS measurement techniques found in the literature, propose a set of CS mitigation guidelines for both developers and users, and discuss various CS-inducing factors together with a taxonomy that captures them. Overall, our work provides a comprehensive overview of the current state of research in CS, with particular emphasis on measurement techniques and mitigation strategies, identifies research gaps in the literature, and provides recommendations for future research in the field.
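Among the questionnaire-based measurement techniques covered in this literature, the Simulator Sickness Questionnaire (SSQ) of Kennedy et al. (1993) is the de facto standard. Below is a hedged sketch of its standard scoring, assuming the unweighted per-cluster item sums have already been computed from the 0-3 symptom ratings:

```python
# Sketch of Simulator Sickness Questionnaire (SSQ) scoring. Inputs are the
# unweighted sums of 0-3 item ratings per symptom cluster; the multipliers
# are the standard SSQ weights (Kennedy et al., 1993).
def ssq_scores(nausea_sum: int, oculomotor_sum: int, disorientation_sum: int) -> dict:
    return {
        "nausea": nausea_sum * 9.54,
        "oculomotor": oculomotor_sum * 7.58,
        "disorientation": disorientation_sum * 13.92,
        "total": (nausea_sum + oculomotor_sum + disorientation_sum) * 3.74,
    }

# Example: a participant with mild symptoms after a VR session.
print(ssq_scores(nausea_sum=3, oculomotor_sum=4, disorientation_sum=2))
```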
{"title":"“Are you feeling sick?” A systematic literature review of cybersickness in virtual reality","authors":"Nilotpal Biswas, Anamitra Mukherjee, Samit Bhattacharya","doi":"10.1145/3670008","DOIUrl":"https://doi.org/10.1145/3670008","url":null,"abstract":"<p>Cybersickness (CS), also known as visually induced motion sickness (VIMS) is a condition that can affect individuals when they interact with virtual reality (VR) technology. This condition is characterized by symptoms such as nausea, dizziness, headaches, eye fatigue, etc., and can be caused by a variety of factors. Finding a feasible solution to reduce the impact of CS is extremely important as it will greatly enhance the overall user experience and make VR more appealing to a wider range of people. We have carefully compiled a list of 223 highly pertinent studies to review the current state of research on the most essential aspects of CS. We have provided a novel taxonomy that encapsulates various aspects of CS measurement techniques found in the literature. We have proposed a set of CS mitigation guidelines for both developers and users. We have also discussed various CS-inducing factors and provided a taxonomy that tries to capture the same. Overall, our work provides a comprehensive overview of the current state of research in CS with a particular emphasis on different measurement techniques and CS mitigation strategies, identifies research gaps in the literature, and provides recommendations for future research in the field.</p>","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"70 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141251747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advancements in Federated Learning: Models, Methods, and Privacy
Huiming Chen, Huandong Wang, Qingyue Long, Depeng Jin, Yong Li
ACM Computing Surveys (2024-06-01). DOI: 10.1145/3664650
Federated learning (FL) is a promising technique for addressing rising privacy and security concerns. Its main ingredient is to learn a model cooperatively among distributed clients without uploading any sensitive data. In this paper, we conduct a thorough review of the related work, following the development context and mining the key technologies behind FL from the perspectives of theory and application. Specifically, we first classify the existing work on FL architecture based on the network topology of FL systems, with detailed analysis and summarization. Next, we abstract the current application problems, summarize the general techniques, and frame the application problems in the general paradigm of FL base models. Moreover, we provide our proposed solutions for model training via FL: we summarize and analyze the existing FedOpt algorithms, reveal the algorithmic development principles of many first-order algorithms in depth, and propose a more generalized algorithm design framework from which FedOpt algorithms can be straightforwardly instantiated. As privacy and security are fundamental requirements in FL, we also cover the existing attack scenarios and defense methods. To the best of our knowledge, we are among the first to review the theoretical methodology and propose our strategies, since very few works have surveyed the theoretical approaches. Our survey aims to motivate the development of high-performance, privacy-preserving, and secure methods for integrating FL into real-world applications.
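As a reference point for the FedOpt family, the canonical first-order baseline is FedAvg; below is a hedged numpy sketch of its communication rounds (the least-squares local task, learning rate, and client construction are illustrative assumptions, not the paper's framework):

```python
# Minimal sketch of FedAvg: clients run local SGD on private data; the server
# aggregates parameters weighted by local dataset size. Raw data never moves.
import numpy as np

true_w = np.array([2.0, -1.0])  # ground truth for the synthetic task

def local_update(weights, X, y, lr=0.1, epochs=5):
    w = weights.copy()
    for _ in range(epochs):  # plain least-squares gradient steps as a stand-in task
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_round(global_w, clients):
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)  # size-weighted averaging

rng = np.random.default_rng(0)

def make_client(n=50):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w + 0.1 * rng.normal(size=n)

clients = [make_client() for _ in range(4)]
w = np.zeros(2)
for _ in range(20):  # 20 communication rounds
    w = fedavg_round(w, clients)
print(w)  # approaches [2, -1] without centralising any client data
```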
{"title":"Advancements in Federated Learning: Models, Methods, and Privacy","authors":"Huiming Chen, Huandong Wang, Qingyue Long, Depeng Jin, Yong Li","doi":"10.1145/3664650","DOIUrl":"https://doi.org/10.1145/3664650","url":null,"abstract":"<p>Federated learning (FL) is a promising technique for resolving the rising privacy and security concerns. Its main ingredient is to cooperatively learn the model among the distributed clients without uploading any sensitive data. In this paper, we conducted a thorough review of the related works, following the development context and deeply mining the key technologies behind FL from the perspectives of theory and application. Specifically, we first classify the existing works in FL architecture based on the network topology of FL systems with detailed analysis and summarization. Next, we abstract the current application problems, summarize the general techniques and frame the application problems into the general paradigm of FL base models. Moreover, we provide our proposed solutions for model training via FL. We have summarized and analyzed the existing FedOpt algorithms, and deeply revealed the algorithmic development principles of many first-order algorithms in depth, proposing a more generalized algorithm design framework. With the instantiation of these frameworks, FedOpt algorithms can be simply developed. As privacy and security is the fundamental requirement in FL, we provide the existing attack scenarios and the defense methods. To the best of our knowledge, we are among the first tier to review the theoretical methodology and propose our strategies since there are very few works surveying the theoretical approaches. Our survey targets motivating the development of high-performance, privacy-preserving, and secure methods to integrate FL into real-world applications.</p>","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"21 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141251632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research Progress of EEG-Based Emotion Recognition: A Survey
Yiming Wang, Bin Zhang, Lamei Di
ACM Computing Surveys (2024-05-28). DOI: 10.1145/3666002

Emotion recognition based on electroencephalography (EEG) signals has emerged as a prominent research field, facilitating objective evaluation of diseases such as depression as well as emotion detection for healthy people. This survey starts from the basic concepts of temporal-frequency-spatial features in EEG and the methods for cross-domain feature fusion. It then extends the overfitting challenge of single-modal EEG to the problem of heterogeneous modality modeling in multi-modal conditions, exploring issues such as feature selection, sample scarcity, cross-subject emotional transfer, physiological knowledge discovery, multi-modal fusion methods, and missing modalities. These findings provide clues for researchers to further investigate emotion recognition based on EEG signals.
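As a concrete instance of the temporal-frequency features this line of work starts from, here is a hedged sketch of band-power extraction with Welch's method (the synthetic signal, sampling rate, and band edges are illustrative assumptions):

```python
# Sketch: theta/alpha/beta band-power features from one EEG channel using
# Welch's PSD estimate. pip install numpy scipy
import numpy as np
from scipy.signal import welch

fs = 128                                 # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz "alpha" + noise

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])   # integrate PSD over the band

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(lo, hi) for name, (lo, hi) in bands.items()}
print(features)                          # alpha should dominate for this signal
```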
{"title":"Research Progress of EEG-Based Emotion Recognition: A Survey","authors":"Yiming Wang, Bin Zhang, Lamei Di","doi":"10.1145/3666002","DOIUrl":"https://doi.org/10.1145/3666002","url":null,"abstract":"<p>Emotion recognition based on electroencephalography (EEG) signals has emerged as a prominent research field, facilitating objective evaluation of diseases like depression and motion detection for heathy people. Starting from the basic concepts of temporal-frequency-spatial features in EEG and the methods for cross-domain feature fusion. This survey then extends the overfitting challenge of EEG single-modal to the problem of heterogeneous modality modeling in multi-modal conditions. It explores issues such as feature selection, sample scarcity, cross-subject emotional transfer, physiological knowledge discovery, multi-modal fusion methods and modality missing. These findings provide clues for researchers to further investigate emotion recognition based on EEG signals.</p>","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"26 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141251551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}