
Journal of Information Security and Applications: Latest Publications

Efficient adaptive defense scheme for differential privacy in federated learning
IF 3.8 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-02-10 · DOI: 10.1016/j.jisa.2025.103992
Fangfang Shan , Yanlong Lu , Shuaifeng Li , Shiqi Mao , Yuang Li , Xin Wang
Federated learning, an emerging technology in the field of artificial intelligence, effectively addresses the problem of data silos while ensuring privacy protection. However, studies have shown that leaked gradient information can still be used to reconstruct original data by analyzing gradient updates, thereby exposing private information. In recent years, differential privacy techniques have been widely applied to federated learning to strengthen data privacy protection; however, the noise they introduce often significantly degrades learning performance. Previous studies typically employed a fixed gradient clipping strategy with fixed added noise. Although this approach offers privacy protection, it remains vulnerable to gradient leakage attacks, and training performance is often subpar. Subsequent proposals of dynamic differential privacy parameters aim to address model utility, but frequent parameter adjustments reduce efficiency. To solve these issues, this paper proposes an efficient federated learning differential privacy protection framework with noise attenuation and automatic pruning (EADS-DPFL). This framework not only effectively defends against gradient leakage attacks but also significantly improves the training performance of federated learning models.
Extensive experimental results demonstrate that our framework outperforms existing differential privacy federated learning schemes in terms of model accuracy, convergence speed, and resistance to attacks.
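As a rough illustration of the noise-attenuation idea described above, the following sketch clips each client's gradient and adds Gaussian noise whose scale decays across training rounds. The function name, decay schedule, and parameter values are assumptions for illustration, not the authors' EADS-DPFL implementation.

```python
import numpy as np

def clip_and_noise(gradient, clip_norm, sigma0, decay, round_t):
    """Clip a client's gradient vector to clip_norm, then add Gaussian
    noise whose scale decays with the training round (noise attenuation)."""
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / (norm + 1e-12))
    sigma_t = sigma0 * (decay ** round_t)  # attenuated noise scale
    return clipped + np.random.normal(0.0, sigma_t * clip_norm, size=clipped.shape)

# Noise shrinks as rounds progress, preserving late-stage model utility.
g = np.random.randn(1000)
noisy_g = clip_and_noise(g, clip_norm=1.0, sigma0=1.2, decay=0.98, round_t=50)
```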
{"title":"Efficient adaptive defense scheme for differential privacy in federated learning","authors":"Fangfang Shan ,&nbsp;Yanlong Lu ,&nbsp;Shuaifeng Li ,&nbsp;Shiqi Mao ,&nbsp;Yuang Li ,&nbsp;Xin Wang","doi":"10.1016/j.jisa.2025.103992","DOIUrl":"10.1016/j.jisa.2025.103992","url":null,"abstract":"<div><div>Federated learning, as an emerging technology in the field of artificial intelligence, effectively addresses the issue of data islands while ensuring privacy protection. However, studies have shown that by analyzing gradient updates, leaked gradient information can still be used to reconstruct original data, thus inferring private information. In recent years, differential privacy techniques have been widely applied to federated learning to enhance data privacy protection. However, the noise introduced often significantly reduces the learning performance. Previous studies typically employed a fixed gradient clipping strategy with added fixed noise. Although this method offers privacy protection, it remains vulnerable to gradient leakage attacks, and training performance is often subpar. Although subsequent proposals of dynamic differential privacy parameters aim to address the issue of model utility, frequent parameter adjustments lead to reduced efficiency. To solve these issues, this paper proposes an efficient federated learning differential privacy protection framework with noise attenuation and automatic pruning (EADS-DPFL). This framework not only effectively defends against gradient leakage attacks but also significantly improves the training performance of federated learning models.</div><div>Extensive experimental results demonstrate that our framework outperforms existing differential privacy federated learning schemes in terms of model accuracy, convergence speed, and resistance to attacks.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"89 ","pages":"Article 103992"},"PeriodicalIF":3.8,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143376961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DMRP: Privacy-Preserving Deep Learning Model with Dynamic Masking and Random Permutation
IF 3.8 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-02-10 · DOI: 10.1016/j.jisa.2025.103987
Chongzhen Zhang , Zhiwang Hu , Xiangrui Xu , Yong Liu , Bin Wang , Jian Shen , Tao Li , Yu Huang , Baigen Cai , Wei Wang
Large AI models exhibit significant efficiency and precision in addressing complex problems. Despite their considerable advantages in various domains, these models encounter numerous challenges, notably high training costs. Currently, the training of distributed large AI models offers a solution to mitigate these elevated costs. However, distributed large AI models remain susceptible to data reconstruction attacks. A malicious server could leverage the intermediate results uploaded by clients to reconstruct the original data within the framework of distributed large AI models. This study first examines the underlying principles of data reconstruction attacks and proposes a privacy protection scheme. Our approach begins by obfuscating the mapping relationship between embeddings and the original data to ensure privacy protection. Specifically, during the upload of embedding data by clients to the server, genuine embeddings are concealed to prevent unauthorized access by malicious servers. Building on this concept, we introduce DMRP, a defensive mechanism featuring Dynamic Masking and Random Permutation, designed to mitigate data reconstruction attacks while maintaining the accuracy of the primary task. Our experiments, conducted across three models and four datasets, demonstrate the effectiveness of DMRP in countering data reconstruction attacks within distributed large-scale AI models.
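The two operations named in the title can be sketched as follows: mask a random fraction of the embedding rows, then permute their order before upload, so the server never sees the true embedding-to-input mapping. This is a minimal sketch under assumed shapes and ratios, not the paper's DMRP mechanism.

```python
import torch

def dmrp_protect(embeddings: torch.Tensor, mask_ratio: float = 0.2):
    """Illustrative client-side obfuscation: zero out a random fraction of
    embedding rows (dynamic masking), then shuffle row order (random
    permutation) before uploading to the server."""
    n = embeddings.size(0)
    protected = embeddings.clone()
    mask_idx = torch.randperm(n)[: int(mask_ratio * n)]
    protected[mask_idx] = 0.0        # dynamic masking
    perm = torch.randperm(n)         # random permutation
    return protected[perm], perm     # perm stays client-side to restore order

emb = torch.randn(32, 768)           # e.g., 32 token embeddings
protected, perm = dmrp_protect(emb)
```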
{"title":"DMRP: Privacy-Preserving Deep Learning Model with Dynamic Masking and Random Permutation","authors":"Chongzhen Zhang ,&nbsp;Zhiwang Hu ,&nbsp;Xiangrui Xu ,&nbsp;Yong Liu ,&nbsp;Bin Wang ,&nbsp;Jian Shen ,&nbsp;Tao Li ,&nbsp;Yu Huang ,&nbsp;Baigen Cai ,&nbsp;Wei Wang","doi":"10.1016/j.jisa.2025.103987","DOIUrl":"10.1016/j.jisa.2025.103987","url":null,"abstract":"<div><div>Large AI models exhibit significant efficiency and precision in addressing complex problems. Despite their considerable advantages in various domains, these models encounter numerous challenges, notably high training costs. Currently, the training of distributed large AI models offers a solution to mitigate these elevated costs. However, distributed large AI models remain susceptible to data reconstruction attacks. A malicious server could leverage the intermediate results uploaded by clients to reconstruct the original data within the framework of distributed large AI models. This study first examines the underlying principles of data reconstruction attacks and proposes a privacy protection scheme. Our approach begins by obfuscating the mapping relationship between embeddings and the original data to ensure privacy protection. Specifically, during the upload of embedding data by clients to the server, genuine embeddings are concealed to prevent unauthorized access by malicious servers. Building on this concept, we introduce <em>DMRP</em>, a defensive mechanism featuring Dynamic Masking and Random Permutation, designed to mitigate data reconstruction attacks while maintaining the accuracy of the primary task. Our experiments, conducted across three models and four datasets, demonstrate the effectiveness of DMRP in countering data reconstruction attacks within distributed large-scale AI models.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"89 ","pages":"Article 103987"},"PeriodicalIF":3.8,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143376962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Public data-enhanced multi-stage differentially private graph neural networks
IF 3.8 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-02-09 · DOI: 10.1016/j.jisa.2025.103985
Bingbing Zhang , Heyuan Huang , Lingbo Wei , Chi Zhang
Existing differential privacy algorithms for graph neural networks (GNNs) typically rely on adding noise to private graph data to prevent the leakage of sensitive information. While the addition of noise often leads to significant performance degradation, the incorporation of additional public graph data can effectively mitigate these effects, thereby improving the privacy-utility trade-off in differentially private GNNs. To enhance this trade-off, we propose a method that utilizes public graph data in multi-stage training algorithms. First, to improve the ability to extract useful information from graph data, we introduce a public graph and apply an unsupervised pretraining algorithm, which is then integrated into private model training through parameter transfer. Second, we utilize multi-stage GNNs to turn neighborhood aggregation into a preprocessing step, preventing privacy budget accumulation in the embedding layer and hence enhancing model performance under the same privacy constraints. This method is applicable to both node differential privacy and edge differential privacy in GNNs. Third, for edge differential privacy, we introduce an aggregation perturbation mechanism, which trains an edge prediction model on the basis of node features using the public graph data. We apply this trained model to the private graph data to predict potential neighbors for each node. We then calculate an additional aggregation result based on these predicted neighbors and combine it with the aggregation result derived from the true edges, ensuring that the perturbed aggregation retains valuable information even under very low privacy budgets. Our results show that incorporating public graph data can enhance the accuracy of differentially private GNNs by approximately 5% under the same privacy settings.
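The edge-DP aggregation step described above can be sketched as blending a noised aggregation over true edges with a noise-free aggregation over neighbors predicted from public data; the blending weight and names below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def combined_aggregation(x, true_adj, pred_adj, sigma, alpha=0.5):
    """Perturb the aggregation over the private (true) edges with Gaussian
    noise, then blend it with an aggregation over neighbors predicted by a
    model trained on public data, which costs no edge-privacy budget."""
    agg_true = true_adj @ x + np.random.normal(0.0, sigma, size=x.shape)
    agg_pred = pred_adj @ x
    return alpha * agg_true + (1.0 - alpha) * agg_pred

n, d = 100, 16
x = np.random.randn(n, d)                               # node features
true_adj = (np.random.rand(n, n) < 0.05).astype(float)  # private edges
pred_adj = (np.random.rand(n, n) < 0.05).astype(float)  # predicted edges
h = combined_aggregation(x, true_adj, pred_adj, sigma=1.0)
```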
{"title":"Public data-enhanced multi-stage differentially private graph neural networks","authors":"Bingbing Zhang ,&nbsp;Heyuan Huang ,&nbsp;Lingbo Wei ,&nbsp;Chi Zhang","doi":"10.1016/j.jisa.2025.103985","DOIUrl":"10.1016/j.jisa.2025.103985","url":null,"abstract":"<div><div>Existing differential privacy algorithms for graph neural networks (GNNs) typically rely on adding noise to private graph data to prevent the leakage of sensitive information. While the addition of noise often leads to significant performance degradation, the incorporation of additional public graph data can effectively mitigate these effects, thereby improving the privacy-utility trade-off in differentially private GNNs. To enhance the trade-off, we propose a method that utilizes public graph data in multi-stage training algorithms. First, to increase the ability to extract useful information from graph data, we introduce a public graph and apply an unsupervised pretraining algorithm, which is then integrated into the private model training through parameter transfer. Second, we utilize multi-stage GNNs to transform the neighborhood aggregation into a preprocessing step to prevent privacy budget accumulation from occurring in the embedding layer, hence enhancing model performance under the same privacy constraints. This method is applicable to both node differential privacy and edge differential privacy in GNNs. Third, for edge differential privacy, we introduce an aggregation perturbation mechanism, which trains an edge prediction model on a basis of node features using the public graph data. We apply this trained model to the private graph data to predict potential neighbors for each node. We then calculate an additional aggregation result based on these predicted neighbors and combine with the aggregation result derived from the true edges, ensuring that the aggregation perturbation result retains valuable information even under very low privacy budgets. Our results show that incorporating public graph data can enhance the accuracy of differentially private GNNs by approximately 5% under the same privacy settings.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"89 ","pages":"Article 103985"},"PeriodicalIF":3.8,"publicationDate":"2025-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143372980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Perceptual visual security index: Analyzing image content leakage for vision language models
IF 3.8 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-02-08 · DOI: 10.1016/j.jisa.2025.103988
Lishuang Hu , Tao Xiang , Shangwei Guo , Xiaoguo Li , Ying Yang
During the training phase of vision language models (VLMs), the private storage and sharing of images are of paramount importance. While the Visual Security Index (VSI) is commonly used for content leakage analysis, it usually focuses on comparing content similarity between plain and protected or encrypted images, neglecting the threat model of visual security. In this paper, taking the capabilities of human visual perception into account, we comprehensively analyze the system model of VSIs and propose a novel perceptual visual security index (PVSI) to evaluate the content leakage of perceptually encrypted images for VLMs. In particular, we take visual perception (VP) as the adversary's capability and define the VSI under an honest-but-curious threat model. To evaluate the content leakage of encrypted images under the VP assumption, we first present a robust feature descriptor and obtain the semantic content sets of both plain and encrypted images. Then, we propose a systematic method to reduce the impact of different encryption algorithms. We further evaluate the similarity between the semantic content sets to obtain the proposed PVSI. We also analyze the consistency between the proposed visual security definition and the PVSI. Extensive experiments are performed on five publicly available image databases. The results demonstrate that, compared with many existing state-of-the-art visual security metrics, the proposed PVSI performs better not only on images generated by specific image encryption algorithms but also on publicly available image databases.
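The final step, comparing the semantic content sets of a plain image and its encrypted version, reduces to a set-similarity score. A minimal sketch, assuming a Jaccard-style overlap (the paper's exact similarity measure and feature descriptor are not reproduced here):

```python
def content_leakage_score(plain_features: set, cipher_features: set) -> float:
    """Set-overlap similarity between the semantic content extracted from a
    plain image and from its encrypted version; higher means more leakage."""
    union = plain_features | cipher_features
    return len(plain_features & cipher_features) / len(union) if union else 0.0

plain = {"face", "car", "street_sign", "tree"}
cipher = {"tree", "texture_blob"}   # what a descriptor still finds after encryption
print(content_leakage_score(plain, cipher))  # 0.2
```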
{"title":"Perceptual visual security index: Analyzing image content leakage for vision language models","authors":"Lishuang Hu ,&nbsp;Tao Xiang ,&nbsp;Shangwei Guo ,&nbsp;Xiaoguo Li ,&nbsp;Ying Yang","doi":"10.1016/j.jisa.2025.103988","DOIUrl":"10.1016/j.jisa.2025.103988","url":null,"abstract":"<div><div>During the training phase of vision language models (VLMs), the privacy storage and sharing of images are of paramount importance. While the Visual Security Index (VSI) is commonly used for content leakage analysis, it usually focuses on comparing content similarity between plain and protected or encrypted images, neglecting the threat model of visual security. In this paper, considering the functionality of the human visual capability, we comprehensively analyze the system model of VSIs and propose a novel perceptual visual security index (PVSI) to evaluate the content leakage of perceptually encrypted images for VLMs. In particular, we take visual perception (<strong>VP</strong>) as the adversary’s capability and present the definition of VSI under an honest-but-curious threat model. To evaluate the content leakage of encrypted images under the <strong>VP</strong> assumption, we first present a robust feature descriptor and obtain the semantic content sets of both plain and encrypted images. Then, we propose a systematic method to reduce the impact of different encryption algorithms. We further evaluate the similarity between semantic content sets to obtain the proposed PVSI. We also analyze the consistency between the proposed visual security definition and PVSI. Extensive experiments are performed on five publicly available image databases. Our experimental results demonstrate that compared with many existing state-of-the-art visual security metrics, the proposed PVSI exhibits better performance not only on images generated from specific image encryption algorithms but also on publicly available image databases.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"89 ","pages":"Article 103988"},"PeriodicalIF":3.8,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A heuristic assisted cyber attack detection system using multi-scale and attention-based adaptive hybrid network
IF 3.8 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-02-07 · DOI: 10.1016/j.jisa.2025.103970
R. Lakshman Naik , Dr. Sourabh Jain , Dr. Manjula Bairam
Business domains employ distributed platforms and rely on networks and communication services to transmit vital information that must be secured. To ensure confidentiality, an information security system is introduced, spanning data generation, the network, and the hardware systems. Practically all of our daily activities depend on information and communication technology, which is vulnerable to threats. To address these issues, a deep-learning-based cyber security system is developed to protect data from various cyber attacks. First, cyber attacks are detected using a Multi-scale and Attention-based Adaptive Hybrid Network (MA-AHNet), which integrates a Dilated Long Short-Term Memory (LSTM) network and a Deep Temporal Convolutional Network (DTCN). The parameters of MA-AHNet are tuned with the support of the Fitness-based Ebola Optimization Algorithm (FEOA) to improve detection performance. Then, authorized-user detection is carried out via the same MA-AHNet. Finally, risk prediction is performed with the same MA-AHNet to identify the level of risk in the network. Together, the attack detection, user authorization, and risk prediction processes provide stronger security. The experimental findings are validated against traditional cyber security systems on various performance measures.
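A minimal sketch of the hybrid architecture named above, combining a dilated temporal convolution branch (standing in for the DTCN) and an LSTM branch, fused with attention pooling over time. Layer sizes and the fusion scheme are assumptions, and the FEOA-based parameter tuning is not shown.

```python
import torch
import torch.nn as nn

class HybridDetector(nn.Module):
    """Illustrative MA-AHNet-style hybrid: a dilated 1-D convolution and an
    LSTM run in parallel over the traffic sequence, fused with temporal
    attention before classification."""
    def __init__(self, in_dim, hidden=64, n_classes=2):
        super().__init__()
        self.tcn = nn.Conv1d(in_dim, hidden, kernel_size=3, dilation=2, padding=2)
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                                # x: (batch, time, features)
        c = self.tcn(x.transpose(1, 2)).transpose(1, 2)  # (B, T, hidden)
        r, _ = self.lstm(x)                              # (B, T, hidden)
        h = torch.cat([c, r], dim=-1)                    # (B, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)           # attention over time
        return self.head((w * h).sum(dim=1))             # (B, n_classes)

model = HybridDetector(in_dim=20)
logits = model(torch.randn(8, 50, 20))  # 8 flows, 50 time steps, 20 features
```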
{"title":"A heuristic assisted cyber attack detection system using multi-scale and attention-based adaptive hybrid network","authors":"R. Lakshman Naik ,&nbsp;Dr. Sourabh Jain ,&nbsp;Dr. Manjula Bairam","doi":"10.1016/j.jisa.2025.103970","DOIUrl":"10.1016/j.jisa.2025.103970","url":null,"abstract":"<div><div>Business domains have employed distributed platforms, and these domains use networks and communication services to send vital information that must be secured. To secure confidentiality, the information security system is introduced, which is described as the generation of the data, the network, and the hardware systems. Practically, all of our daily activities depend upon information and communication technology, which is vulnerable to threats. To rectify these issues, a deep learning-related cyber security system is developed to protect the data from various cyber-attacks. Initially, the cyber attacks are detected using Multi-scale and Attention-based Adaptive Hybrid Network (MA-AHNet), where the networks such as Dilated Long Short Term Memory (LSTM) and Deep Temporal Convolutional Network (DTCN) are integrated to construct MA-AHNet. The parameters from MA-AHNet are tuned with the support of the Fitness-based Ebola Optimization Algorithm (FEOA) to improve the detection performance. Then, the authorized user detection is carried out via the same MA-AHNet. Finally, the risk prediction is done via the same MA-AHNet to identify the level of risk in the network. These cyber-attacks, user authorization, and risk detection processes provide higher security. The experimental findings are validated with the traditional cyber security systems concerning various performance measures.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"89 ","pages":"Article 103970"},"PeriodicalIF":3.8,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143232569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FP-growth-based signature extraction and unknown variants of DoS/DDoS attack detection on real-time data stream
IF 3.8 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-02-07 · DOI: 10.1016/j.jisa.2025.103996
Arpita Srivastava, Ditipriya Sinha
Protecting sensitive information on the Internet from unknown attacks is challenging: there are no known signatures, historical data is limited, false positives are frequent, and vendor patches are lacking. This paper proposes a statistical method to detect unknown variants of denial-of-service (DoS) / distributed denial-of-service (DDoS) (high-volume) attacks. The method comprises two modules: DoS/DDoS attack signature extraction and detection of unknown DoS/DDoS variants. A laboratory setup at NITP was created to capture real-time traffic of six different variants of DoS or DDoS attacks together with benign network traffic, referred to as RTNITP24. Unique DoS/DDoS attack signatures are extracted by applying the Frequent-Pattern Growth (FP-Growth) algorithm to 71% of the RTNITP24 data (containing both DoS/DDoS attack and benign traffic), under the assumption that these signatures appear primarily in DoS/DDoS attack traffic but rarely in benign traffic. The signatures are stored in a high-volume attack (HVA) knowledge base (KB). The detection module for unknown DoS/DDoS (high-volume) variants uses the HVA knowledge base together with pcap files of new data packets from the remaining 29% of RTNITP24 and from CICIDS2017, none of which were used in the signature extraction module. A Jaccard similarity score is computed between each new data packet and the attack signatures, and two main conditions are checked: whether the similarity score for any single signature is greater than or equal to the rule threshold, or whether the average similarity score over all signatures is greater than or equal to the overall threshold. A packet is flagged as malicious if either condition holds; otherwise, it is benign. The proposed model achieves high accuracy (91.66% and 94.87%) and low false alarm rates (5.32% and 4.98%) on the RTNITP24 and CICIDS2017 datasets, respectively. Additionally, the proposed model is compared with an Apriori-based rule extraction technique and current state-of-the-art methods, and it outperforms both.
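The detection rule quoted above is concrete enough to sketch directly: a packet is flagged if any single signature matches above the rule threshold, or if the mean match over all signatures exceeds the overall threshold. Threshold values and feature names below are illustrative, not the paper's tuned settings.

```python
def jaccard(a: set, b: set) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def is_malicious(packet_features: set, signatures: list,
                 rule_thresh: float = 0.8, overall_thresh: float = 0.5) -> bool:
    """Flag a packet if any signature's Jaccard score >= rule_thresh, or the
    average score over all signatures >= overall_thresh."""
    scores = [jaccard(packet_features, s) for s in signatures]
    if not scores:
        return False
    return max(scores) >= rule_thresh or sum(scores) / len(scores) >= overall_thresh

hva_kb = [{"SYN", "len=0", "win=512"}, {"UDP", "len=1024", "rate_high"}]  # HVA KB
pkt = {"SYN", "len=0", "win=512"}
print(is_malicious(pkt, hva_kb))  # True: first signature matches with score 1.0
```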
{"title":"FP-growth-based signature extraction and unknown variants of DoS/DDoS attack detection on real-time data stream","authors":"Arpita Srivastava,&nbsp;Ditipriya Sinha","doi":"10.1016/j.jisa.2025.103996","DOIUrl":"10.1016/j.jisa.2025.103996","url":null,"abstract":"<div><div>Protecting sensitive information on Internet from unknown attacks is challenging due to no known signatures, limited historical data, a high number of false positives, and a lack of vendor patches. This paper has proposed a statistical method to detect unknown variants of denial-of-service (DoS)/ distributed denial-of-service (DDoS) (high-volume) attacks. The proposed method is primarily divided into two modules: DoS/DDoS attack signature extraction and unknown variants of DoS/DDoS attack detection. A setup in laboratory of NITP is created to capture real-time traffic of six different variants of DoS or DDoS attacks with benign network traffic behavior, referred to as RTNITP24. Unique DoS/DDoS attack signatures are extracted by applying a Frequent-Pattern Growth (FP-Growth) algorithm using 71 % of RTNITP24 data having DoS/DDoS attack and benign traffic, assuming these signatures are primarily present in DoS/DDoS attack traffic but rarely in benign traffic. These signatures are stored in a high-volume attack (HVA) knowledge base (KB). Unknown variants of the DoS/DDoS (high-volume) attack detection module use an HVA knowledge base and pcap files of 29 % RTNITP24 and CICIDS2017 new data packets, which is not considered in the attack signature extraction module. Jaccard similarity score is computed between new data packets and attack signatures and scrutinizes the two main conditions: if similarity score of any of the signatures is greater than or equal to rule threshold or if the average similarity score of all the signatures is greater than or equal to the overall threshold. Packet is detected as malicious if any of aforementioned conditions are true. Otherwise, the packet is benign. Proposed model achieves high accuracy (91.66 % and 94.87 %) and low false alarm rates (5.32 % and 4.98 %) on RTNITP24 and CICIDS2017 datasets, respectively. Additionally, proposed model is compared to apriori-based rule extraction technique and current state-of-the-art methods, revealing that it outperforms both apriori-based and existing methods.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"89 ","pages":"Article 103996"},"PeriodicalIF":3.8,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143232568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Accuracy-aware differential privacy in federated learning of large transformer models
IF 3.8 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-02-06 · DOI: 10.1016/j.jisa.2025.103986
Junyan Ouyang, Rui Han, Xiaojiang Zuo, Yunlai Cheng, Chi Harold Liu
Federated learning with differential privacy (DP-FL) allows distributed clients to collaboratively train a model by exchanging their model parameters with injected noise. Despite its great benefits for privacy protection, DP-FL still suffers from noise that grows linearly with model size; hence, when applied to the large transformers in modern AI systems, DP-FL can cause severe accuracy degradation. Prior art either injects isotropic noise into all model parameters or relies on empirical settings to vary the noise injected into different model parts. In this paper, we propose AccurateDP, which systematically leverages the distinct effects of noise on each unit of model accuracy to improve DP-FL performance. The key idea of AccurateDP is to support noise injection at multiple granularities to minimize accuracy variations under DP. Given a granularity and a privacy budget, AccurateDP further provides an automatic means of finding the optimal noise injection setting, and we give theoretical proofs for our approach. We implemented AccurateDP to support prevalent transformer models. Extensive evaluation against the latest techniques shows that AccurateDP increases accuracy by an average of 7.69% under the same privacy budget and gains an even larger improvement (9.23%) when applied to large models.
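The multi-granularity idea can be sketched as giving each parameter group its own noise scale instead of one isotropic scale for the whole model; the per-group scales below are assumed inputs, whereas AccurateDP searches for them automatically under the privacy budget.

```python
import numpy as np

def inject_noise_per_group(params: dict, sigmas: dict, clip_norm: float) -> dict:
    """Add Gaussian noise to each named parameter group with its own scale,
    so accuracy-sensitive parts of the model can receive less noise."""
    noisy = {}
    for name, w in params.items():
        sigma = sigmas.get(name, 1.0)  # per-granularity scale (assumed given)
        noisy[name] = w + np.random.normal(0.0, sigma * clip_norm, size=w.shape)
    return noisy

params = {"attention.qkv": np.random.randn(64, 64),
          "mlp.fc1": np.random.randn(64, 256)}
sigmas = {"attention.qkv": 0.5, "mlp.fc1": 1.5}  # sensitive block gets less noise
noisy = inject_noise_per_group(params, sigmas, clip_norm=1.0)
```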
{"title":"Accuracy-aware differential privacy in federated learning of large transformer models","authors":"Junyan Ouyang,&nbsp;Rui Han,&nbsp;Xiaojiang Zuo,&nbsp;Yunlai Cheng,&nbsp;Chi Harold Liu","doi":"10.1016/j.jisa.2025.103986","DOIUrl":"10.1016/j.jisa.2025.103986","url":null,"abstract":"<div><div>Federated learning with Differential privacy (DP-FL) allows distributed clients to collaboratively train a model by exchanging their model parameters with injected noises. Despite the great benefits in privacy protection, DP-FL still suffers from large noise that increases linearly with model size. Hence when applying large transformers in modern AI systems, DP-FL may cause severe accuracy degradation. The prior art either injects isotropic noises to all model parameters, or relies on empirical settings to vary noises injected in different model parts. In this paper, we propose AccurateDP to systematically leverage the distinct effects of noises on every unit of model accuracy to improve DP-FL performance. The key of AccurateDP is to support noise injection at multiple granularities to minimize accuracy variations in DP. Given a granularity and a privacy budget, AccurateDP further provides an automatic means to find the optimal noise injection setting and provides theoretical proofs for our approach. We implemented AccurateDP to support prevalent transformer models. Extensive evaluation against latest techniques shows AccurateDP increases accuracy by an average of 7.69% under the same privacy budget and gains more accuracy improvement (9.23%) when applied to large models.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"89 ","pages":"Article 103986"},"PeriodicalIF":3.8,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143232382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A generic cryptographic algorithm identification scheme based on ciphertext features
IF 3.8 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-02-04 · DOI: 10.1016/j.jisa.2025.103984
Jiabao Li , Hanlin Sun , Zhanfei Du , Yaxuan Wang , Ke Yuan , Chunfu Jia
To assist relevant agencies in conducting security assessments of commercial cryptographic applications or establishing security monitoring and early-warning mechanisms for cryptographic systems, this paper proposes a generic cryptographic algorithm identification scheme based on ciphertext features and machine learning. The assessment agency generates a dataset with the information for testing and sends it to the testing server. Subsequently, the target agency's server employs its cryptographic system to generate a ciphertext dataset, which is then transmitted to the testing server. By extracting features from the ciphertext and applying machine learning techniques, the cryptographic algorithms can be accurately identified on the testing server. Finally, the test results are generated and transmitted back to the assessment agency. This paper formally defines the scheme model and presents a detailed implementation. The scheme is primarily used in the security assessment of commercial cryptographic applications, allowing the assessment agency to analyze the obtained ciphertext files, determine whether the cryptographic algorithms meet specified requirements, and assess any potential risks. Notably, this approach avoids physical contact with cryptographic equipment and minimizes disruption to the target agency's normal operations during the assessment.
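A minimal sketch of the testing-server side: derive simple statistical features from raw ciphertext bytes and train a classifier to predict the generating algorithm. The byte-histogram feature, the classifier choice, and the random-byte stand-ins for real ciphertext corpora are all illustrative assumptions, not the paper's feature set.

```python
import os
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def byte_features(ciphertext: bytes) -> np.ndarray:
    """Normalized byte histogram (256 dims) as a simple ciphertext feature."""
    counts = np.bincount(np.frombuffer(ciphertext, dtype=np.uint8), minlength=256)
    return counts / max(len(ciphertext), 1)

# Illustrative training set: ciphertexts labeled by generating algorithm.
# os.urandom stands in for real ciphertext; labels 0/1 are hypothetical.
X = np.stack([byte_features(os.urandom(4096)) for _ in range(200)])
y = np.array([0, 1] * 100)  # e.g., 0 = algorithm A, 1 = algorithm B
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
pred = clf.predict(byte_features(os.urandom(4096)).reshape(1, -1))
```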
{"title":"A generic cryptographic algorithm identification scheme based on ciphertext features","authors":"Jiabao Li ,&nbsp;Hanlin Sun ,&nbsp;Zhanfei Du ,&nbsp;Yaxuan Wang ,&nbsp;Ke Yuan ,&nbsp;Chunfu Jia","doi":"10.1016/j.jisa.2025.103984","DOIUrl":"10.1016/j.jisa.2025.103984","url":null,"abstract":"<div><div>To assist relevant agencies in conducting security assessments of commercial cryptographic applications or establishing security monitoring and early warning mechanisms for cryptographic system, this paper proposes a generic cryptographic algorithm identification scheme based on ciphertext features and machine learning. The assessment agency generates a dataset with the information for testing and sends it to the testing server. Subsequently, the target agency server employs the cryptographic system to generate a ciphertext dataset, which is then transmitted to the testing server. By extracting features from the ciphertext and applying machine learning techniques, the cryptographic algorithms can be accurately identified on the testing server. Finally, the test results are generated and transmitted back to the assessment agency. This paper formally defines the scheme model and presents a detailed implementation. The scheme is primarily used in the security assessment of commercial cryptographic applications, allowing the assessment agency to analyze the obtained ciphertext files and determine whether the cryptographic algorithms meet specified requirements, as well as assess any potential risks. Notably, this approach avoids physical contact with cryptographic equipment and minimizes disruptions to the target agency’s normal operations during the assessment.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"89 ","pages":"Article 103984"},"PeriodicalIF":3.8,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143170896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LDAC: A lightweight data access control scheme with constant size ciphertext in VSNs based on blockchain
IF 3.8 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-02-03 · DOI: 10.1016/j.jisa.2025.103982
Cien Chen, Yanli Ren, Chen Lin
The vehicular social network (VSN) offers diverse services such as traffic management, data sharing, and safe driving. However, malicious users in VSNs may steal and tamper with shared data, which can cause privacy leakage and even serious traffic accidents. The CP-ABE algorithm can effectively protect shared data in VSNs and enable one-to-many data sharing, but it suffers from high computational complexity and high ciphertext storage overhead. To ensure the security and confidentiality of shared data in VSNs, we propose a lightweight data access control scheme (LDAC) with constant-size ciphertext based on blockchain, which greatly reduces the storage and computing overhead of vehicle users. Because of the presence of malicious users and outdated attributes in VSNs, the LDAC scheme supports user revocation and attribute revocation. The multi-authority CP-ABE algorithm is combined with blockchain to enable distributed key distribution and verification of decrypted data integrity. Security analysis indicates that the LDAC scheme effectively protects the security and confidentiality of shared data. Experimental results indicate that, compared with previous schemes, the LDAC scheme achieves more lightweight computation while maintaining constant-size ciphertext.
{"title":"LDAC: A lightweight data access control scheme with constant size ciphertext in VSNs based on blockchain","authors":"Cien Chen,&nbsp;Yanli Ren,&nbsp;Chen Lin","doi":"10.1016/j.jisa.2025.103982","DOIUrl":"10.1016/j.jisa.2025.103982","url":null,"abstract":"<div><div>The vehicular social network (VSN) offers diverse services such as traffic management, data sharing, and safe driving. However, malicious users in VSNs may steal and tamper with shared data, which can bring about privacy leakage issues and even cause serious traffic accidents. The CP-ABE algorithm can effectively protect shared data in VSNs and enable one-to-many data sharing. However, it faces issues of high computational complexity and high ciphertext storage overhead. To ensure the security and confidentiality of shared data in VSNs, we propose a lightweight data access control scheme(LDAC) with constant size ciphertext based on blockchain, which greatly reduces the storage and computing overhead of vehicle users. Due to the presence of malicious users and outdated attributes in VSNs, the LDAC scheme supports user revocation and attribute revocation. The multi-authority CP-ABE algorithm is combined with blockchain to enable distributed key distribution and the verification of decrypted data integrity. Security analysis indicates that the security and confidentiality of shared data can be effectively protected by the LDAC scheme. Experimental results indicate that the LDAC scheme can realize more lightweight calculation while achieving constant size ciphertext in comparison to the previous schemes.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"89 ","pages":"Article 103982"},"PeriodicalIF":3.8,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143170135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BSFL: A blockchain-oriented secure federated learning scheme for 5G
IF 3.8 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-02-01 · DOI: 10.1016/j.jisa.2025.103983
Gang Han , Weiran Ma , Yinghui Zhang , Yuyuan Liu , Shuanggen Liu
Ensuring data security, privacy, and defense against poisoning attacks in 5G intelligent scheduling has become a critical research priority. To address this, this paper proposes BSFL, a verifiable and secure federated learning scheme resistant to poisoning attacks, integrating blockchain technology. This scheme fully leverages the high speed and low latency characteristics of 5G networks, enabling rapid scheduling and real-time processing of smart devices, thus providing robust data support for federated learning. By incorporating the decentralized, immutable, and transparent nature of blockchain, we design a blockchain-based federated learning framework that facilitates verification of feature results and comparison of data features among participants, ensuring the security and reliability of scheduling data. Moreover, it prevents denial-of-service attacks to a certain extent. Experimental results demonstrate that this scheme not only significantly improves the efficiency and accuracy of federated learning but also effectively mitigates the potential threat of poisoning attacks, providing a robust security guarantee for federated learning in 5G intelligent scheduling environments.
{"title":"BSFL: A blockchain-oriented secure federated learning scheme for 5G","authors":"Gang Han ,&nbsp;Weiran Ma ,&nbsp;Yinghui Zhang ,&nbsp;Yuyuan Liu ,&nbsp;Shuanggen Liu","doi":"10.1016/j.jisa.2025.103983","DOIUrl":"10.1016/j.jisa.2025.103983","url":null,"abstract":"<div><div>Ensuring data security, privacy, and defense against poisoning attacks in 5G intelligent scheduling has become a critical research priority. To address this, this paper proposes BSFL, a verifiable and secure federated learning scheme resistant to poisoning attacks, integrating blockchain technology. This scheme fully leverages the high speed and low latency characteristics of 5G networks, enabling rapid scheduling and real-time processing of smart devices, thus providing robust data support for federated learning. By incorporating the decentralized, immutable, and transparent nature of blockchain, we design a blockchain-based federated learning framework that facilitates verification of feature results and comparison of data features among participants, ensuring the security and reliability of scheduling data. Moreover, it prevents denial-of-service attacks to a certain extent. Experimental results demonstrate that this scheme not only significantly improves the efficiency and accuracy of federated learning but also effectively mitigates the potential threat of poisoning attacks, providing a robust security guarantee for federated learning in 5G intelligent scheduling environments.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"89 ","pages":"Article 103983"},"PeriodicalIF":3.8,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143232381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0