Formal Modeling and Analysis of Small-Scale Data Centers Integrating Renewable Energy Using Timed Automata
Pub Date: 2025-12-16 | DOI: 10.1109/TSUSC.2025.3645150
Ismael Samaye;Gilles Sassatelli;Abdoulaye Gamatié
Integrating renewable energy into data centers is essential for reducing reliance on fossil fuels and minimizing the environmental impact of digital infrastructures. However, the variability and unpredictability of renewable sources pose significant design and operational challenges. This paper introduces a formal modeling framework for solar-powered small-scale data centers, using stochastic timed automata and statistical model checking for mathematical analysis. The solution supports efficient resource sizing, reduces grid energy consumption through optimized workload scheduling and server renewal strategies, and enables robustness evaluation under component-failure scenarios. A case study demonstrates the applicability, flexibility, and scalability of the framework for distributed system topologies and energy-aware design exploration.
{"title":"Formal Modeling and Analysis of Small-Scale Data Centers Integrating Renewable Energy Using Timed Automata","authors":"Ismael Samaye;Gilles Sassatelli;Abdoulaye Gamatié","doi":"10.1109/TSUSC.2025.3645150","DOIUrl":"https://doi.org/10.1109/TSUSC.2025.3645150","url":null,"abstract":"Integrating renewable energy into data centers is essential for reducing reliance on fossil fuels and minimize the environmental impact of digital infrastructures. However, the variability and unpredictability of renewable sources come with significant design and operational challenges. This paper introduces a formal modeling framework for solar-powered small-scale data centers, using stochastic timed automata and statistical model checking for mathematical analysis. The solution supports efficient resource sizing, reduces grid energy consumption through optimized workload scheduling and server renewal strategies. It enables robustness evaluation under component failure scenarios. A case study demonstrates the applicability, flexibility, and scalability of the framework for distributed system topologies and energy-aware design exploration.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"11 1","pages":"57-71"},"PeriodicalIF":3.9,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146116874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Load Balancing in Vehicular Edge Computing Using Deep Reinforcement Learning and Model Compression
Pub Date: 2025-12-11 | DOI: 10.1109/TSUSC.2025.3643431
Liang Zhao;Jiating Xu;Ammar Hawbani;Zhi Liu;Keping Yu;Yuanguo Bi
In Vehicular Edge Computing (VEC), load imbalances among edge servers, driven by varying traffic densities and computational demands across geographic areas, can lead to significant delays, decreased efficiency, and potential service disruptions, adversely affecting both user experience and system reliability. This study proposes an innovative adaptive load balancing method that integrates deep reinforcement learning with predictive analytics to optimize resource allocation in VEC. The framework comprises a predictive model called ST-ChebNet, enhanced with Chebyshev polynomials in graph convolutional networks for accurate workload forecasting, and an adaptive model compression strategy utilizing knowledge distillation to dynamically adjust compression ratios based on anticipated workloads. Additionally, the integration of this predictive model with the Soft Actor-Critic (SAC) algorithm, termed GC-SAC, effectively combines graph-based predictive insights with reinforcement learning techniques to tailor resource distribution, minimizing computational delays and enhancing system responsiveness. The simulation results show that the GC-SAC algorithm can significantly reduce the average delay and average energy consumption of vehicular tasks, as well as the workload rate of edge servers.
{"title":"Adaptive Load Balancing in Vehicular Edge Computing Using Deep Reinforcement Learning and Model Compression","authors":"Liang Zhao;Jiating Xu;Ammar Hawbani;Zhi Liu;Keping Yu;Yuanguo Bi","doi":"10.1109/TSUSC.2025.3643431","DOIUrl":"https://doi.org/10.1109/TSUSC.2025.3643431","url":null,"abstract":"In Vehicular Edge Computing (VEC), load imbalances among edge servers, driven by varying traffic densities and computational demands across geographic areas, can lead to significant delays, decreased efficiency, and potential service disruptions, adversely affecting both user experience and system reliability. This study proposes an innovative adaptive load balancing method that integrates deep reinforcement learning with predictive analytics to optimize resource allocation in VEC. The framework comprises a predictive model called ST-ChebNet, enhanced with Chebyshev polynomials in graph convolutional networks for accurate workload forecasting, and an adaptive model compression strategy utilizing knowledge distillation to dynamically adjust compression ratios based on anticipated workloads. Additionally, the integration of this predictive model with the Soft Actor-Critic (SAC) algorithm, termed GC-SAC, effectively combines graph-based predictive insights with reinforcement learning techniques to tailor resource distribution, minimizing computational delays and enhancing system responsiveness. The simulation results show that the GC-SAC algorithm can significantly reduce the average delay and average energy consumption of vehicular tasks, as well as the workload rate of edge servers.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"11 1","pages":"42-56"},"PeriodicalIF":3.9,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146116898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Overlapping Coalition Formation-Enabled Noncooperative Game-Combined Multi-Agent DRL for UAV-Assisted Resource Allocation
Pub Date: 2025-12-10 | DOI: 10.1109/TSUSC.2025.3642616
Bing Ai;Guodong Ye;Zijun Wu;Yu Sun
The Coalition Formation (CF) game has emerged as a promising framework for resource allocation among uncrewed aerial vehicles (UAVs) equipped with various types of complementary resources. However, both overlapping-enabled collaborative CF and inter-coalition competitive behaviors significantly affect system performance in complex multi-UAV scenarios. In this paper, we propose a Multiple Overlapping Coalitions (MOC) noncooperative game. Specifically, we first establish an optimization model encompassing coupled resource constraints. Subsequently, a task-priority-based incentive mechanism is designed to better motivate participation. To reach the Nash equilibrium, a two-step solution technique incorporating relaxation and fine-tuning of resource granularity is designed. We further propose a MOC noncooperative game-combined Multi-agent Proximal Policy Optimization algorithm (MAOPPPO). The simulation results show that our approach outperforms five other state-of-the-art learning baselines in terms of average reward, with a gain of up to 4.59% after 800 training episodes. In terms of throughput, the proposed MOC noncooperative game achieves improvements of 66.67%, 93.68%, and 11.76% over the CF noncooperative game, the non-CF noncooperative game, and a consensus-based algorithm, respectively. For total resource contribution, the improvements are 62.99%, 94.59%, and 23.16%, respectively. Energy efficiency improves by 6.82%, 23.68%, and 4.78% over the same three baselines.
{"title":"Overlapping Coalition Formation-Enabled Noncooperative Game-Combined Multi-Agent DRL for UAV-Assisted Resource Allocation","authors":"Bing Ai;Guodong Ye;Zijun Wu;Yu Sun","doi":"10.1109/TSUSC.2025.3642616","DOIUrl":"https://doi.org/10.1109/TSUSC.2025.3642616","url":null,"abstract":"Coalition Formation (CF) game emerges as a pioneering framework for resource allocation in uncrewed aerial vehicles (UAVs) equipped with various types of complementary resources. However, both the overlapping-enabled collaborative CF and inter-coalition competitive behaviors significantly impact the system performance in complex multi-UAV scenarios. In this paper, we propose a Multiple Overlapping Coalitions (MOC) noncooperative game. Specifically, we first establish an optimization model encompassing coupled resource constraints. Subsequently, a task-priority-based incentive mechanism is designed to better motivate participation. To achieve the Nash equilibrium, a two-step solution technique incorporating relaxation and fine-tuning of resource granularity is designed. We propose a MOC noncooperative game-combined Multi-agent Proximal Policy Optimization (MAOPPPO). The simulation results substantiate that our approach outperforms the other five state-of-the-art learning countermeasures in terms of average reward with a gain of up to 4.59% after 800 training episodes. In terms of throughput, the proposed MOC noncooperative game increases by 66.67%, 93.68%, and 11.76% compared with that of CF noncooperative game, non-CF noncooperative game, and consensus-based algorithm, respectively. For total resource contribution, the improvements are 62.99%, 94.59%, and 23.16%, respectively. The energy efficiency enhances by 6.82%, 23.68%, and 4.78% compared to the other three baselines, respectively.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"11 1","pages":"29-41"},"PeriodicalIF":3.9,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146116891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy-Efficient Joint Deployment and Routing for Delay-Sensitive Microservices in Edge Computing
Pub Date: 2025-11-17 | DOI: 10.1109/TSUSC.2025.3633312
Kai Peng;Hao Wen;Zhiyong Guo;Hanfang Ge;Chao Cai;Bo Zhou;Menglan Hu
The microservice architecture has been widely adopted in edge computing to support latency-sensitive online applications. Unfortunately, deploying numerous microservices creates complex invocation chains and requires frequent communication, which poses significant challenges for service deployment and request routing. Moreover, the strict requirements for low energy consumption and low latency in edge computing further exacerbate these difficulties. In this context, it is crucial to optimize microservice deployment and request routing jointly, using a meticulous and energy-efficient approach. However, existing studies often overlook their interdependence and treat them as separate problems. We therefore propose a fine-grained approach that jointly optimizes the deployment and request routing of microservices in edge computing scenarios. First, we use queuing networks for detailed modeling and mathematical analysis of the complex invocation relationships, microservice instance sharing, and communication latency. Second, we propose an energy-efficient microservice orchestration algorithm, referred to as the Cluster-Processing-based Adaptive Memory Procedure. This algorithm maintains a memory of elite solution elements and iteratively selects suitable elements from the memory to construct superior solutions. Finally, extensive simulation experiments demonstrate that the proposed algorithm significantly outperforms baseline algorithms in terms of response latency and energy consumption.
{"title":"Energy-Efficient Joint Deployment and Routing for Delay-Sensitive Microservices in Edge Computing","authors":"Kai Peng;Hao Wen;Zhiyong Guo;Hanfang Ge;Chao Cai;Bo Zhou;Menglan Hu","doi":"10.1109/TSUSC.2025.3633312","DOIUrl":"https://doi.org/10.1109/TSUSC.2025.3633312","url":null,"abstract":"Microservice as a promising architecture has been widely employed in edge computing to support sensitive-latency online applications. Unfortunately, the deployment of numerous microservices creates complex invocations and requires frequent communications, which brings significant challenges to service deployment and request routing. Moreover, the strict requirements for low energy consumption and low latency in edge computing further exacerbate these difficulties. In this case, it is crucial to optimize the joint microservices deployment and request routing using a meticulous and energy-efficient approach. However, existing studies often overlook their interdependence and treat them as separate problems. Therefore, we propose a fine-grained approach in this paper to jointly optimize the deployment and request routing of microservices within edge computing scenarios. First, we utilize queuing networks to conduct detailed modeling and mathematical analysis that study the complex invocation relationships, microservice instance sharing, and communication latency. Second, we propose an energy-efficient microservice orchestration algorithm, referred to as Cluster-Processing-based Adaptive Memory Procedure. This algorithm maintains a memory storing elite solution elements, and it iteratively picks up suitable elements from the memory to construct superior solutions. Finally, extensive simulation experiments demonstrate that the proposed algorithm outperforms baseline algorithms significantly in terms of response latency and energy consumption.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"11 1","pages":"1-14"},"PeriodicalIF":3.9,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146116821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantifying Robustness and Sustainability Trade-Off in Federated Adversarial Learning for Cyber-Physical Systems
Pub Date: 2025-11-17 | DOI: 10.1109/TSUSC.2025.3633995
Syed Mhamudul Hasan;Ahmed Imteaj;Abdur R. Shahid
Cyber-Physical Systems (CPS) are increasingly leveraging Federated Learning (FL) to enable decentralized intelligence while preserving privacy across distributed devices. Federated adversarial learning (FAL) combines FL with adversarial training to enhance model robustness against adversarial attacks while maintaining data privacy across decentralized, heterogeneous devices. While FAL strengthens CPS resilience against adversarial threats, variations in energy constraints, carbon emissions, computational capabilities, and latency requirements introduce additional complexity. These variations affect energy consumption, carbon emissions, and power-source efficiency, creating a complex trade-off between sustainability and robustness. This underscores the critical need for standardized metrics to systematically evaluate and balance these competing factors in FAL-enabled CPS. In this paper, we propose three novel robustness metrics designed to quantify the interplay between energy efficiency, sustainability dimensions, and adversarial robustness in FAL setups for CPS. The proposed methodology accounts for diverse CPS scenarios, addressing factors such as emissions, energy consumption, latency, renewable energy, and low-energy devices with critical latency needs. We validate our approach through simulations in two setups: a single-device environment to isolate device variability and a heterogeneous multi-device environment to evaluate architectural impacts. The results demonstrate the effectiveness of the proposed metrics in systematically quantifying the trade-off between sustainability and robustness in FAL-based CPS.
{"title":"Quantifying Robustness and Sustainability Trade-Off in Federated Adversarial Learning for Cyber-Physical Systems","authors":"Syed Mhamudul Hasan;Ahmed Imteaj;Abdur R. Shahid","doi":"10.1109/TSUSC.2025.3633995","DOIUrl":"https://doi.org/10.1109/TSUSC.2025.3633995","url":null,"abstract":"Cyber-Physical Systems (CPS) are increasingly leveraging Federated Learning (FL) to enable decentralized intelligence while preserving privacy across distributed devices. Federated adversarial learning (FAL) leverages FL and adversarial training to enhance model robustness against adversarial attacks while maintaining data privacy across decentralized, heterogeneous devices. While FAL strengthens CPS resilience against adversarial threats, variations in energy constraints, carbon emissions, computational capabilities, and latency requirements introduce additional complexity. These variations impact energy consumption, carbon emissions, and power source efficiency, creating a complex trade-off between sustainability and robustness. This underscores the critical need for standardized metrics to systematically evaluate and balance these competing factors in FAL-enabled CPS. In this paper, we propose three novel robustness metrics designed to quantify the interplay between energy efficiency, sustainability dimensions, and adversarial robustness in FAL setups for CPS. The proposed methodology accounts for diverse CPS scenarios, addressing factors such as emissions, energy consumption, latency, renewable energy, and low-energy devices with critical latency needs. We validate our approach through simulations in two setups, including a single-device environment to isolate device variability and a heterogeneous multi-device environment to evaluate architectural impacts. The results demonstrate the effectiveness of our proposed metrics in systemically quantifying the trade-off between sustainability and robustness in FAL-based CPS.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"11 1","pages":"15-28"},"PeriodicalIF":3.9,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146116923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Energy Efficiency of Graph Processing on Shared-Memory Systems
Pub Date: 2025-11-13 | DOI: 10.1109/TSUSC.2025.3632842
Tao Jiang;Le Luo;Chao Li;Jinyang Guo;Sheng Xu
As the number of cores in shared-memory systems grows, the energy consumption of parallel computing on these systems is becoming increasingly prominent. Researchers currently focus on performance optimization while ignoring the energy efficiency of graph processing. Meanwhile, existing work on energy-efficiency optimization mainly targets general benchmarks using dynamic voltage and frequency scaling (DVFS) and thread throttling. However, these methods cannot be directly transplanted to graph processing: most graph algorithms converge in few iterations, so traditional energy-efficiency optimization methods are not applicable and introduce considerable overhead, with the loss outweighing the gain. Moreover, some energy-saving methods estimate the subsequent CPU frequency from the run-time system state, which leads to inaccurate prediction of the optimal energy-saving frequency. To address these issues, we propose a pre-allocated thread throttling method and a static frequency scaling method. The former achieves thread throttling through a pre-allocated scheduling scheme that computes the optimal energy-saving number of threads as soon as the graph is loaded. On this basis, to reduce the cost of dynamic frequency setting at runtime and further improve energy efficiency, the latter applies static frequency scaling that slows the execution of some tasks by relaxing their thread execution time. Experimental results show that the pre-allocated thread throttling method improves energy efficiency by about 10% compared to the original framework, and the static frequency scaling method further improves it by about 20% with trivial performance loss.
{"title":"Improving Energy Efficiency of Graph Processing on Shared-Memory Systems","authors":"Tao Jiang;Le Luo;Chao Li;Jinyang Guo;Sheng Xu","doi":"10.1109/TSUSC.2025.3632842","DOIUrl":"https://doi.org/10.1109/TSUSC.2025.3632842","url":null,"abstract":"With the number of cores increasing in shared-memory systems, the energy consumption of parallel computing on them is becoming increasingly prominent. Currently, researchers concern with the performance optimization, while ignoring the energy efficiency of graph processing. Meanwhile, existing works that optimize energy efficiency involve mainly the general benchmarks by using dynamic voltage and frequency scaling and thread throttling methods. However, these methods cannot be directly transplanted to graph processing, because most graph algorithms converge in fewer iterations and traditional energy efficiency optimization methods are not applicable to them and will produce much overhead, resulting in the fact that the loss outweighs the gain. And some energy-saving methods estimate the subsequent CPU frequency based on the run-time system state, which leads to an inaccurate prediction of the optimal energy-saving CPU frequency. In view of the above issues, we propose a pre-allocated thread throttling method and a static frequency scaling method. The former achieves thread throttling by establishing a pre-allocated scheduling method, which calculates the optimal energy-saving number of threads promptly when the graph is loaded; On this basis, in order to reduce the cost of dynamic frequency setting at runtime and improve the energy efficiency further, the latter introduces the static frequency scaling method to reduce the execution speed of some tasks by relaxing thread execution time. The experimental results show that the pre-allocated thread throttling method improves the energy efficiency by about 10% compared to the original framework, and the static frequency scaling method further improves it by about 20% with trivial performance loss.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"10 6","pages":"1449-1460"},"PeriodicalIF":3.9,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145712560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FedFusionQuant (FFQ): Federated Learning With Feature Fusion and Model Quantisation for Human Activity Recognition Using CSI
Pub Date: 2025-10-31 | DOI: 10.1109/TSUSC.2025.3627484
Ahsan Raza Khan;Rao Naveed Bin Rais;Sarmad Sohaib;Sajjad Hussain;Ahmed Zoha
Human Activity Recognition (HAR) using Channel State Information (CSI) enables energy-efficient and non-invasive healthcare monitoring. However, conventional HAR systems rely on centralised model training, which requires the sharing of raw data, leading to privacy risks, excessive bandwidth usage, and high communication latency that limit scalability. This paper proposes FedFusionQuant (FFQ), a federated learning (FL) framework that jointly performs feature fusion, adaptive aggregation, and quantisation-aware compression during training. A novel federated distance (FedDist) mechanism dynamically adjusts parameter updates using neuron dissimilarity metrics, enhancing generalisation across heterogeneous clients. Meanwhile, quantisation-aware training (QAT) reduces model size and transmission cost while preserving accuracy. Extensive experiments on real CSI data from 30 participants demonstrate that FFQ improves multi-class HAR accuracy by 4.29% and binary fall detection by 5.55% compared to raw fusion models. Furthermore, model compression with QAT achieves a 47% reduction in communication overhead while maintaining accuracy comparable to state-of-the-art methods.
{"title":"FedFusionQuant (FFQ): Federated Learning With Feature Fusion and Model Quantisation for Human Activity Recognition Using CSI","authors":"Ahsan Raza Khan;Rao Naveed Bin Rais;Sarmad Sohaib;Sajjad Hussain;Ahmed Zoha","doi":"10.1109/TSUSC.2025.3627484","DOIUrl":"https://doi.org/10.1109/TSUSC.2025.3627484","url":null,"abstract":"Human Activity Recognition (HAR) using Channel State Information (CSI) enables energy-efficient and non-invasive healthcare monitoring. However, conventional HAR systems rely on centralised model training, which requires the sharing of raw data, leading to privacy risks, excessive bandwidth usage, and high communication latency that limit scalability. This paper proposes <bold>FedFusionQuant (FFQ)</b>, a federated learning (FL) framework that jointly performs feature fusion, adaptive aggregation, and quantisation-aware compression during training. A novel <bold>federated distance (FedDist)</b> mechanism dynamically adjusts parameter updates using neuron dissimilarity metrics, enhancing generalisation across heterogeneous clients. Meanwhile, <bold>quantisation-aware training (QAT)</b> reduces model size and transmission cost while preserving accuracy. Extensive experiments on real CSI data from 30 participants demonstrate that FFQ improves multi-class HAR accuracy by <bold>4.29%</b> and binary fall detection by <bold>5.55%</b> compared to raw fusion models. Furthermore, model compression with QAT achieves a <bold>47% reduction in communication overhead</b> while maintaining accuracy comparable to state-of-the-art methods.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"10 6","pages":"1421-1434"},"PeriodicalIF":3.9,"publicationDate":"2025-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145712547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OSPDP: One-Sided Personalized Differential Privacy
Pub Date: 2025-10-31 | DOI: 10.1109/TSUSC.2025.3626773
Jiajun Chen;Chunqiang Hu;Huijun Zhuang;Ruifeng Zhao;Jiguo Yu
Differential privacy has received considerable attention as a privacy concept for releasing statistical information from datasets. While differential privacy provides strict statistical guarantees, it is equally crucial to investigate how these guarantees interact with individual privacy preferences and privacy policies. Existing solutions, such as one-sided differential privacy, treat all sensitive records equally in terms of privacy protection, although datasets can be classified according to predetermined privacy policies that differentiate between sensitive and non-sensitive records. In this paper, we present a novel privacy concept termed One-sided Personalized Differential Privacy (OSPDP), offering verifiable privacy assurances at the user level for sensitive records identified by privacy policies. Specifically, OSPDP enables data owners to articulate their privacy needs more flexibly, avoiding a one-size-fits-all approach to privacy protection while potentially establishing a dichotomous privacy policy regarding the sensitivity of records. Furthermore, the truthful release or legitimate disclosure of non-sensitive records reduces unnecessary privacy consumption and can be utilized to significantly enhance data utility. Additionally, we present several well-performing mechanisms for achieving OSPDP. Finally, we evaluate and analyze the trade-off between privacy and utility of the proposed mechanisms through extensive experiments.
{"title":"OSPDP: One-Sided Personalized Differential Privacy","authors":"Jiajun Chen;Chunqiang Hu;Huijun Zhuang;Ruifeng Zhao;Jiguo Yu","doi":"10.1109/TSUSC.2025.3626773","DOIUrl":"https://doi.org/10.1109/TSUSC.2025.3626773","url":null,"abstract":"Differential privacy has received considerable attention as a privacy concept for releasing statistical information from datasets. While differential privacy provides strict statistical guarantees, it is equally crucial to investigate how these guarantees interact with individual privacy preferences and privacy policies. Existing solutions, such as one-sided differential privacy, treat all sensitive records equally in terms of privacy protection, although datasets can be classified based on predetermined privacy policies that differentiate between sensitive and insensitive records. In this paper, we present a novel concept of privacy termed One-sided Personalized Differential Privacy (OSPDP), offering verifiable privacy assurances at the user level for sensitive records derived from privacy policies. Specifically, OSPDP enables data owners to articulate their privacy needs more flexibly, avoiding a one-size-fits-all approach to privacy protection and potentially establishing a dichotomous privacy policy regarding the sensitivity of records. Furthermore, the truthful release or legitimate disclosure of non-sensitive records reduces unnecessary privacy consumption and can be utilized to significantly enhance data utility. Additionally, we present several well-performing mechanisms for achieving OSPDP. Finally, we evaluate and analyze the trade-off between privacy and utility of the proposed mechanisms through extensive experiments.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"10 6","pages":"1435-1448"},"PeriodicalIF":3.9,"publicationDate":"2025-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145712554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FedUP: Federated Unlearning With Prototypes
Pub Date: 2025-09-19 | DOI: 10.1109/TSUSC.2025.3612138
Yuhong Huang;Xue Li;Songle Chen;Siguang Chen
As an extension of machine unlearning to distributed scenarios, federated unlearning has gained significant attention. However, federated unlearning remains challenging, as many approaches require additional resources, such as auxiliary datasets or extra storage, to achieve high-quality models. These requirements incur extra costs and are often difficult to satisfy in practical applications. To address these issues, we propose a flexible client-level federated unlearning algorithm with prototypes, called FedUP. Specifically, our algorithm consists of two components: prototype-based unlearning and model recovery. First, we design a prototype-based unlearning strategy that uses prototypes of the erased client to guide the unlearning process and maximizes the prototype loss between the remaining and erased clients to unlearn the corresponding information. It relies neither on stored historical updates nor on additional standard datasets, making the unlearning process more streamlined. To mitigate the performance degradation caused by unlearning, we develop a lightweight model recovery approach guided by global prototypes to swiftly and efficiently restore model accuracy on the remaining datasets. Unlike other unlearning algorithms, our approach exchanges prototypes instead of model parameters, significantly reducing communication overhead. Finally, we empirically evaluate the proposed algorithm from multiple perspectives on two datasets, demonstrating that it achieves high-quality unlearned models with minimal communication cost.
{"title":"FedUP: Federated Unlearning With Prototypes","authors":"Yuhong Huang;Xue Li;Songle Chen;Siguang Chen","doi":"10.1109/TSUSC.2025.3612138","DOIUrl":"https://doi.org/10.1109/TSUSC.2025.3612138","url":null,"abstract":"As an extension of machine unlearning in distributed scenarios, federated unlearning gains significant attention. However, federated unlearning remains challenging, as many studies require additional resources, such as auxiliary dataset or storage, to achieve high-quality models. These requirements incur extra costs and are often difficult to satisfy in practical applications. To address these issues, we propose a flexible client-level federated unlearning algorithm with prototypes, called FedUP. Specifically, our algorithm consists of two components: prototype-based unlearning and model recovering. First, we design a prototype-based unlearning strategy that uses prototypes of the erased client to guide the unlearning process, and maximizes the prototype loss between the remaining and erased clients to unlearn the information. It does not rely on historical storage updates or additional standard datasets, making the unlearning process more streamlined. To mitigate performance degradation from the unlearning process, we develop a brief model recovering approach guided by global prototypes to swiftly and efficiently restore models’ accuracy on the remaining datasets. Unlike other unlearning algorithms, our approach exchanges prototypes instead of model parameters, significantly reducing communication overhead. Finally, we empirically evaluate the proposed algorithm from multiple perspectives on two datasets, demonstrating that our algorithm can achieve high-quality unlearned models with minimal communication cost.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"10 6","pages":"1461-1467"},"PeriodicalIF":3.9,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145712546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}