Digital forgetting in large language models: a survey of unlearning methods
Alberto Blanco-Justicia, Najeeb Jebreel, Benet Manzanares-Salor, David Sánchez, Josep Domingo-Ferrer, Guillem Collell, Kuan Eeik Tan
Artificial Intelligence Review, vol. 58, no. 3. Published 2025-01-13 (journal article).
DOI: 10.1007/s10462-024-11078-6
Full text: https://link.springer.com/article/10.1007/s10462-024-11078-6
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10462-024-11078-6.pdf
JCR: Q1 (Computer Science, Artificial Intelligence); Impact Factor: 10.7
Citations: 0
Abstract
Large language models (LLMs) have become the state of the art in natural language processing. The massive adoption of generative LLMs and the capabilities they have shown have prompted public concerns regarding their impact on the labor market, privacy, the use of copyrighted work, and how these models align with human ethics and the rule of law. In response, new regulations are being introduced that require developers and service providers to evaluate, monitor, and forestall or at least mitigate the risks posed by their models. One mitigation strategy is digital forgetting: given a model with undesirable knowledge or behavior, the goal is to obtain a new model in which the detected issues are no longer present. Digital forgetting is usually enforced via machine unlearning techniques, which modify trained machine learning models so that they behave as if trained only on a subset of the original training data. In this work, we describe the motivations and desirable properties of digital forgetting when applied to LLMs, and we survey recent works on machine unlearning. Specifically, we propose a taxonomy of unlearning methods based on the reach and depth of the modifications made to the models, we discuss and compare the effectiveness of the machine unlearning methods for LLMs proposed so far, and we survey their evaluation. Finally, we describe open problems of machine unlearning applied to LLMs and we put forward recommendations for developers and practitioners.
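The abstract's definition of unlearning — that the modified model should behave as a model trained only on the retained data — can be illustrated with a toy sketch. This is not one of the survey's methods, just a minimal illustration with a hypothetical one-parameter least-squares model, where "exact" unlearning is possible by subtracting the forget points' sufficient statistics; the result is verified against retraining from scratch on the retain set, the gold standard the survey's methods are compared to.

```python
# Toy sketch of exact unlearning (illustrative, not from the survey):
# a 1-D least-squares slope through the origin, w = sum(x*y) / sum(x*x).

def fit_slope(xs, ys):
    """Train from scratch: closed-form least-squares slope."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def unlearn_slope(xs_all, ys_all, forget_idx):
    """Remove the forget points' contributions from the sufficient
    statistics instead of retraining on the retain set."""
    sxy = sum(x * y for x, y in zip(xs_all, ys_all))
    sxx = sum(x * x for x in xs_all)
    for i in forget_idx:
        sxy -= xs_all[i] * ys_all[i]
        sxx -= xs_all[i] ** 2
    return sxy / sxx

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]
w_full = fit_slope(xs, ys)

# Unlearn the last point, then compare with retraining from scratch
# on the retain set: the two must coincide for exact unlearning.
w_unlearned = unlearn_slope(xs, ys, forget_idx=[3])
w_retrained = fit_slope(xs[:3], ys[:3])
assert abs(w_unlearned - w_retrained) < 1e-12
```

For LLMs no such closed form exists, which is why the approximate methods surveyed here trade off forgetting quality, retained utility, and cost against this retraining-from-scratch baseline.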
About the journal
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.