Share Your Preprint Research with the World!
IEEE Transactions on Intelligent Vehicles, vol. 10, no. 4, p. 2932. Pub Date: 2025-08-26. DOI: 10.1109/TIV.2025.3592517. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11142487
SAEV-FL: Lightweight Secure Aggregation and Efficient Verification Scheme for Federated Learning in Cloud-Edge Collaborative Environment
Shiwen Zhang; Feixiang Ren; Wei Liang; Kuanching Li; Nam Ling
IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 133-149. Pub Date: 2025-08-21. DOI: 10.1109/TIV.2025.3599909

Abstract: In a cloud-edge collaborative environment, especially the Internet of Vehicles, federated learning (FL) has attracted widespread attention because of its training process: users upload only trained parameters and never transmit their local data, making FL a promising privacy-preserving distributed machine learning paradigm. FL still faces challenges, however. The local gradients and global parameters (i.e., the global model, weights, or gradients) exchanged during training may leak users' private information, and malicious or lazy aggregation servers may forge or tamper with the uploaded parameters, producing incorrect aggregated results and reducing the availability of the global model. Moreover, network fluctuations or device failures may cause users to drop out during training. Existing privacy-protection schemes based on complex cryptographic primitives are costly and pay little attention to protecting the global parameters, while existing verification schemes for aggregation results face overhead and security challenges. To address these issues, we propose SAEV-FL, a lightweight secure aggregation and efficient verification scheme for federated learning. We design a single-masking protocol based on the Chinese Remainder Theorem (CRT) and a perturbation technique to protect both local gradients and global parameters at low overhead. To verify the correctness of the aggregated results, we combine homomorphic hash functions with random numbers to build a secure verification mechanism that does not disclose users' privacy. Detailed theoretical analysis and comprehensive experiments show that the proposed scheme outperforms similar works in both security and efficiency.
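The two building blocks named in the abstract, CRT-based packing for aggregation and homomorphic hashing for verification, can be illustrated with a toy sketch. This is a simplified illustration under assumed parameters (the moduli, the prime `P`, and the base `G` are arbitrary choices here), not the paper's actual SAEV-FL protocol, which additionally masks and perturbs the values.

```python
from math import prod

# Toy sketch of two ideas from the abstract. All parameters below are
# illustrative assumptions, not the SAEV-FL construction itself.

# --- CRT packing: encode a small gradient vector as one integer --------
MODULI = [10007, 10009, 10037]  # pairwise coprime, each larger than any coordinate sum
M = prod(MODULI)

def crt_pack(vec):
    """Return x with x % m_j == vec[j] (standard CRT reconstruction)."""
    x = 0
    for v, m in zip(vec, MODULI):
        Mj = M // m
        x = (x + v * Mj * pow(Mj, -1, m)) % M
    return x

def crt_unpack(x):
    return [x % m for m in MODULI]

a = crt_pack([1, 2, 3])
b = crt_pack([4, 5, 6])
# Adding packed integers adds each coordinate, so the server can
# aggregate one number per client instead of a whole vector.
assert crt_unpack((a + b) % M) == [5, 7, 9]

# --- Homomorphic hash: check an aggregate without seeing the inputs ----
P = 2**127 - 1  # a Mersenne prime used as the hash modulus
G = 3           # a fixed public base

def hh(x):
    """H(x) = G^x mod P, so H(x + y) = H(x) * H(y) mod P."""
    return pow(G, x, P)

gradients = [17, 42, 5]               # quantized client values
digests = [hh(x) for x in gradients]  # published alongside the uploads

def verify(claimed_sum, digests):
    expected = 1
    for d in digests:
        expected = (expected * d) % P
    return hh(claimed_sum) == expected

assert verify(sum(gradients), digests)          # honest aggregation passes
assert not verify(sum(gradients) + 1, digests)  # a forged sum is rejected
```

Note that in this bare form the digests do not hide small inputs (a discrete-log search would recover them), which is one reason a full scheme pairs verification with masking and perturbation.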
Securing End-to-End Reinforcement Learning-Driven Autonomous Driving: A Control Command Utility-Based Intrusion Response System
Qisheng Zhang; Han Jun Yoon; Terrence J. Moore; Seunghyun Yoon; Dan Dongseong Kim; Hyuk Lim; Frederica Nelson; Jin-Hee Cho
IEEE Transactions on Intelligent Vehicles, vol. 11, no. 1, pp. 122-132. Pub Date: 2025-08-13. DOI: 10.1109/TIV.2025.3598768

Abstract: End-to-end autonomous driving with deep reinforcement learning (DRL) faces significant security and safety challenges, which are critical to the automotive industry's adoption of autonomous technologies. This paper introduces a novel intrusion response system (IRS), the control command utility-based IRS (CCU), designed specifically for DRL-based autonomous systems. CCU provides a lightweight yet powerful defense against false data injection attacks on the in-vehicle CAN (controller area network) bus, improving both security and driving performance by making intelligent, context-aware decisions based on control command utilities derived from DRL outputs. We rigorously evaluated CCU against other state-of-the-art IRSs on two DRL autonomous driving models, Rails and Roach. Equipped with an additional confidence score-based filter, CCU effectively minimizes false alarms and improves critical driving metrics such as driving score, route completion, and infraction penalties, all while lowering defense costs. CCU also remains resilient in hostile environments with varying attack probabilities, underscoring its reliability in complex scenarios. This work addresses essential security and safety challenges in autonomous driving and accelerates the path toward safer, more reliable autonomous vehicle deployment.
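The response logic the abstract describes, a utility-driven override gated by a confidence-score filter, can be sketched as follows. The function name, thresholds, and override policy are illustrative assumptions for exposition, not the exact CCU algorithm from the paper.

```python
# Hedged sketch of one utility-based intrusion response step. The
# thresholds and the override rule are assumptions, not the CCU paper's
# exact algorithm; utilities are assumed to come from the DRL policy.

def respond(received_cmd, utilities, anomaly_score,
            anomaly_threshold=0.8, utility_margin=0.6):
    """Decide whether to pass through or override a CAN control command.

    utilities: dict mapping candidate commands to DRL-derived utilities.
    anomaly_score: detector's belief in [0, 1] that the command is injected.
    """
    # Confidence-score filter: ignore weak alarms to limit false positives.
    if anomaly_score < anomaly_threshold:
        return received_cmd
    best_cmd = max(utilities, key=utilities.get)
    # Context-aware check: override only when the received command's
    # utility is clearly worse than the best alternative.
    margin = utilities[best_cmd] - utilities.get(received_cmd, float("-inf"))
    if margin > utility_margin:
        return best_cmd
    return received_cmd

utilities = {"brake": 0.9, "steer_left": 0.2, "accelerate": 0.1}
# A strongly flagged, low-utility 'accelerate' is replaced by 'brake'.
assert respond("accelerate", utilities, anomaly_score=0.95) == "brake"
# A weak alarm passes the command through unchanged.
assert respond("accelerate", utilities, anomaly_score=0.3) == "accelerate"
```

Gating the override on both the alarm strength and the utility gap is what lets such a responder cut false alarms: a suspicious but near-optimal command is left alone, so benign traffic is rarely disturbed.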