Disposable identities: Solving web tracking
Pub Date: 2024-06-18 | DOI: 10.1016/j.jisa.2024.103821
Jacques Bou Abdo, Sherali Zeadally

Interrupting the web tracking kill chain is enough to disrupt a tracker's ability to leverage the collected information; however, it may also disrupt the personalized services many users enjoy. Empowering users to select which domains can be co-tracked gives them the upper hand over web trackers, letting them enjoy personalized services without fearing full inter-domain tracking. To achieve this, we propose a solution that introduces layers of anonymization serving as temporary identities used while browsing. Each identity is used for a limited time (to sustain the customization and user experience resulting from tracking) and then discarded in favor of a new one. This approach lets the user divide activity into profiles, which prevents browsing history from spilling over into other sessions. We prove the security of this approach mathematically and demonstrate its usability with an open-source proof-of-concept built on top of blockchain.

Journal of Information Security and Applications, Volume 84, Article 103821.
Keeping classical distinguisher and neural distinguisher in balance
Pub Date: 2024-06-15 | DOI: 10.1016/j.jisa.2024.103816
Gao Wang, Gaoli Wang

At CRYPTO 2019, Gohr pioneered the use of the neural distinguisher (ND) for differential cryptanalysis, sparking growing interest in this approach. However, a key limitation of the ND is its inability to analyze as many rounds as the classical differential distinguisher (CD). To overcome this, researchers have begun combining the ND with the CD into a classical-neural distinguisher (CND) for differential cryptanalysis. Nevertheless, the optimal integration of CD and ND remains an under-studied and unresolved challenge.

In this paper, we introduce a superior approach for constructing the (r+s)-round differential distinguisher CND_{r+s} by keeping the r-round classical distinguisher CD_r and the s-round neural distinguisher ND_s in balance. Through experimental analysis, we find that the data complexity of CND_{r+s} closely approximates the product of that for CD_r and ND_s. This finding highlights the limitations of current strategies. Subsequently, we introduce an enhanced scheme for constructing CND_{r+s}, which comprises three main components: a new method for searching for suitable differential characteristics, a scheme for constructing the neural distinguisher, and an accelerated strategy for evaluating the data complexity of CND_{r+s}. To validate the effectiveness of our approach, we apply it to round-reduced Simon32, Speck32, and Present64, achieving improved results. Specifically, for Simon32, our CND …

Journal of Information Security and Applications, Volume 84, Article 103816.
Efficient handover authentication protocol with message integrity for mobile clients in wireless mesh networks
Pub Date: 2024-06-14 | DOI: 10.1016/j.jisa.2024.103806
Amit Kumar Roy, Vijayakumar Varadaranjan, Keshab Nath

Wireless mesh networks (WMNs) have become the most favorable choice among various networking options due to their distributed nature. Compared with other conventional networks, they offer continuous Internet service through a self-healing and self-configuring approach. Due to the high mobility of mesh clients, handover authentication is an operation that demands significant attention in WMNs. Through an exchange of messages, mesh clients (MCs) and mesh routers (MRs) initiate the operation, allowing a client to authenticate itself to the foreign mesh router (FMR). In existing protocols these messages are shared in plaintext, making it easy for an attacker to breach their integrity. Therefore, a secure communication method should be established between MCs and MRs for message exchange. In this paper, we propose a protocol that offers efficient authentication while preserving message integrity during the handover operation. Experimental results show that our proposed protocol performs better than, and overcomes the limitations of, existing protocols.

Journal of Information Security and Applications, Volume 84, Article 103806.
Source printer identification from document images acquired using smartphone
Pub Date: 2024-06-13 | DOI: 10.1016/j.jisa.2024.103804
Sharad Joshi, Suraj Saxena, Nitin Khanna

Vast volumes of printed documents continue to be used for important as well as trivial applications. Such applications often rely on information provided in printed text documents, whose integrity verification poses a challenge due to time constraints and lack of resources. Source printer identification provides essential information about the origin and integrity of a printed document in a fast and cost-effective manner. Even when fraudulent documents are identified, information about their origin can help prevent future fraud. If a smartphone camera replaces the scanner in the document acquisition process, document forensics becomes more economical, user-friendly, and even faster in many applications where remote and distributed analysis is beneficial. Building on existing methods, we propose learning a single CNN model from the fusion of letter images and their printer-specific noise residuals. In the absence of any publicly available dataset, we created a new dataset consisting of 2250 images of text documents printed by eighteen printers and acquired with a smartphone camera under five acquisition settings. The proposed method achieves 98.42% document classification accuracy using images of the letter 'e' under a 5 × 2 cross-validation approach. Further, when tested on about half a million letters of all types, it achieves 90.33% letter and 98.01% document classification accuracy, respectively, highlighting its ability to learn a discriminative model without depending on a single letter type. Classification accuracy also remains encouraging under various acquisition settings, including low illumination and changes in the angle between the document and camera planes.

Journal of Information Security and Applications, Volume 84, Article 103804.
A comprehensive evaluation on the benefits of context based password cracking for digital forensics
Pub Date: 2024-06-13 | DOI: 10.1016/j.jisa.2024.103809
Aikaterini Kanta, Iwen Coisel, Mark Scanlon

Password-based authentication systems have many weaknesses, yet they remain overwhelmingly used, and their long-announced disappearance is still undated. System administrators compensate for this imperfection by enforcing a strong password policy and sane password management on the server side. But in the end, the user behind the password is still responsible for its strength. A poor choice can have dramatic consequences for the user, or even for the service behind it, especially in critical infrastructure. On the other hand, law enforcement can benefit from a suspect's weak decisions to recover digital content stored in encrypted form. Generic password-cracking procedures can support law enforcement in this matter; however, these approaches quickly demonstrate their limitations. This article demonstrates that more targeted approaches can be used in combination with traditional strategies to increase the likelihood of success when contextual information is available and can be exploited.

Journal of Information Security and Applications, Volume 84, Article 103809. Open access.
Privacy-preserving geo-tagged image search in edge–cloud computing for IoT
Pub Date: 2024-06-13 | DOI: 10.1016/j.jisa.2024.103808
Zongye Zhang, Fucai Zhou, Ruiwei Hou

The Internet of Things (IoT) generates a significant volume of geo-tagged images via surveillance sensors in edge–cloud computing environments. Image search is essential for information sharing, data analysis, and strategic decision-making. However, outsourced images are typically encrypted for privacy protection, making it challenging to search encrypted images for visual and geographical relevance simultaneously. To address this, this paper proposes an edge-intelligence-empowered privacy-preserving top-k geo-tagged image search scheme for IoT in edge–cloud computing. The scheme presents a novel single-to-multi searchable encryption method for geo-tagged images that enables multiple users to perform secure nearest-neighbor queries on a data source. Additionally, an extended anchor-based position determination method and an inner-product-based distance calculation method are designed to enable geo-tagged image similarity calculation over ciphertext. Finally, a secure pruning method is introduced to improve query performance. Experiments verify the efficiency and accuracy of the scheme's search.

Journal of Information Security and Applications, Volume 84, Article 103808.
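The abstract does not spell out the inner-product-based distance calculation, but such schemes commonly rest on the identity ‖x−y‖² = ⟨x,x⟩ + ⟨y,y⟩ − 2⟨x,y⟩, which rewrites a distance comparison entirely in terms of inner products that searchable-encryption constructions can then protect. A minimal plaintext sketch of that identity (our own illustration, not the paper's construction):

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def sq_dist_direct(x, y):
    """Squared Euclidean distance, computed coordinate-wise."""
    return sum((a - b) ** 2 for a, b in zip(x, y))

def sq_dist_via_inner_products(x, y):
    # ||x - y||^2 = <x, x> + <y, y> - 2 <x, y>
    # Only inner products appear on the right-hand side, which is what
    # lets encrypted-search schemes compare distances over ciphertext.
    return dot(x, x) + dot(y, y) - 2 * dot(x, y)

p, q = [3.0, 4.0], [0.0, 0.0]
print(sq_dist_direct(p, q), sq_dist_via_inner_products(p, q))  # 25.0 25.0
```

Because both sides agree for every pair of vectors, ranking candidates by the inner-product form yields the same top-k result as ranking by true distance.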
PCPHE: A privacy comparison protocol for vulnerability detection based on homomorphic encryption
Pub Date: 2024-06-10 | DOI: 10.1016/j.jisa.2024.103805
Lieyu Lv, Ling Xiong, Fagen Li

Nowadays, many security service providers maintain their own vulnerability databases and consider them corporate property. Ensuring normal client use while protecting the privacy of these assets has become a problem that needs to be solved. This paper introduces a privacy comparison protocol based on BGN homomorphic encryption and a version-number standardization method, which can be used in scenarios of private comparison against vulnerability databases. Our scheme, PCPHE, adds random offsets and special preprocessing to avoid common-factor attacks that may occur during privacy comparison, while ensuring that a client cannot learn the specific contents of the security service provider's vulnerability database within a limited number of queries.

Journal of Information Security and Applications, Volume 84, Article 103805.
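The version-number standardization method is only named in the abstract. One common way to standardize versions so they compare consistently (before any encryption is applied) is to zero-pad each numeric component to a fixed width, making lexicographic order agree with numeric order; the sketch below is our illustrative guess at that general idea, not PCPHE's actual method:

```python
def normalize_version(v, parts=4, width=5):
    """Hypothetical standardization: pad each numeric component of a
    dotted version string to a fixed width and a fixed component count,
    so plain string comparison matches numeric version ordering.
    e.g. "1.2.10" -> "00001.00002.00010.00000"
    """
    nums = [int(p) for p in v.split(".")]
    nums += [0] * (parts - len(nums))          # missing components count as 0
    return ".".join(f"{n:0{width}d}" for n in nums[:parts])

# Naive string comparison gets "1.2.10" < "1.2.9" wrong; padding fixes it.
print(normalize_version("1.2.10") > normalize_version("1.2.9"))  # True
```

Fixing the representation this way also ensures every version occupies the same number of digits, which matters when versions are later encoded as integers for homomorphic comparison.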
A novel link fabrication attack detection method for low-latency SDN networks
Pub Date: 2024-06-10 | DOI: 10.1016/j.jisa.2024.103807
Yuming Liu, Yong Wang, Hao Feng

The application of software-defined networking (SDN) in low-latency scenarios, such as 6G, has received immense attention. Notably, our research reveals that SDN remains susceptible to link fabrication attacks (LFA) in low-latency environments, where existing detection methods fail to detect LFA effectively. To address this issue, we propose a novel detection method called Correlated Link Verification (CLV), composed of three phases. First, we introduce a data processing method to mitigate measurement error and enhance robustness. Second, we present a multipath transmission simulation method that converts the measured performance disparity between correlated links into statistical features. Third, we propose a dynamic threshold calculation method that uses these statistical features to determine thresholds based on extreme value theory and probability distribution fitting. Finally, CLV identifies the fabricated link among correlated links based on the thresholds and the current statistical features. Extensive experiments validate the feasibility, effectiveness, scalability, and robustness of CLV, and the results demonstrate that CLV can effectively detect LFA in low-latency SDN networks.

Journal of Information Security and Applications, Volume 84, Article 103807.
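The abstract does not detail how the dynamic thresholds are derived from extreme value theory and distribution fitting. As a crude stand-in for the idea, the sketch below flags a link whose delay statistic exceeds a threshold computed from the delays observed on its correlated links; the z-score rule and all numbers are illustrative, not CLV's actual procedure:

```python
import statistics

def dynamic_threshold(samples, z=3.0):
    """Illustrative threshold rule (stand-in for CLV's EVT-based one):
    mean + z * stdev of the delay statistics of correlated links.
    A link whose statistic exceeds this is treated as fabricated."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return mu + z * sigma

# Hypothetical per-link delay statistics (ms) from correlated, trusted links.
baseline = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.02, 0.98]
threshold = dynamic_threshold(baseline)

print(max(baseline) <= threshold)   # legitimate links stay under the threshold
print(50.0 > threshold)             # a relayed (fabricated) link's inflated delay exceeds it
```

The appeal of a data-driven threshold is that it adapts to each deployment's latency profile instead of relying on a fixed cutoff, which is exactly what fails in low-latency networks.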
A statistical verification method of random permutations for hiding countermeasure against side-channel attacks
Pub Date: 2024-06-08 | DOI: 10.1016/j.jisa.2024.103797
Jong-Yeon Park, Jang-Won Ju, Wonil Lee, Bo Gyeong Kang, Yasuyuki Kachi, Kouichi Sakurai

Hiding countermeasures are among the best-known secure implementation techniques designed to counteract side-channel attacks; they use a permutation algorithm to shuffle data. In today's post-quantum cryptography (PQC), hiding countermeasures have earned the limelight for their "shufflability" in lattice-based and code-based cryptographic algorithms. In this setting, fast generation of permutations is paramount to both the efficacy and the security of an algorithm. The Fisher–Yates (FY) shuffling method has long been a popular choice for this purpose: the FY method generates randomly shuffled (finite) indices. However, despite its theoretical soundness, we anticipate the following risks of misusing the FY method, each of which can lead to biased shuffling sequences: (i) incorrect implementation, (ii) a poor randomness source, and (iii) a chosen random number that is too small. In this paper, we introduce a new statistical test called the "approximate permutation criterion" (APC) and use it to examine known cases of misused FY shuffling (i–iii). APC accounts for the fact that the super-exponential growth of the factorial function N!, the number of permutations of N indices, defies any meaningful direct statistical test. With APC one can verify whether output permutations are biased at a much lower testing cost. Mathematically, we introduce the so-called "kth-order permutation verification", the notion underpinning APC. We also compare APC with the full sample space to demonstrate how well it captures the statistical randomness of random permutations. We thereby provide a new method that identifies bias in the output permutations of FY shuffling implementations through a visual ratio test and the chi-square (χ²) distribution test.

Journal of Information Security and Applications, Volume 84, Article 103797. Open access.
Pub Date : 2024-06-07DOI: 10.1016/j.jisa.2024.103803
Shuanggen Liu , Yingzi Hu , Xu An Wang , Xukai Liu , Yuqing Yin , Teng Wang
Cloud file sharing (CFS) in cloud storage is one of the essential tools for enterprises to improve their core competitiveness. In the sharing process, dynamic user management and player/reader abuse have long been problems that need to be solved, and malicious encryptors pose a new challenge. Preventing malicious encryption is therefore another way to protect copyright. This paper proposes a traitor tracing scheme with puncturable-based broadcast encryption in cloud storage, which improves on the scheme of Garg et al. (2010). Building on the original completely collusion-resistant traitor tracing scheme, the uniform distribution of the hash output is used to prevent malicious encryptors. In addition, users can perform authentication during the decryption phase to prevent replay attacks. At the same time, a puncture algorithm is introduced so that normal users can dynamically revoke themselves without affecting the normal use of other users. We prove that the scheme is secure under chosen plaintext attack (CPA). Theoretical analysis also shows that our scheme can prevent malicious encryptors in cloud file sharing and allows normal users to dynamically revoke themselves. Experimental verification shows that our scheme offers distinct advantages over the existing one.
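The abstract does not detail the puncture algorithm, so the following is only a background sketch of the general idea behind puncturable primitives (a GGM-style puncturable PRF, a standard building block in this literature, not the authors' construction): after the key is punctured at one point, the holder can still evaluate the function everywhere else, which mirrors how one user's capability can be revoked without affecting other users.

```python
import hmac
import hashlib


def prg(key, bit):
    """Length-doubling PRG via HMAC-SHA256: derive the left (0) or
    right (1) child key in the GGM tree."""
    return hmac.new(key, bytes([bit]), hashlib.sha256).digest()


def ggm_eval(root, bits):
    """Evaluate the GGM PRF at the input given as a tuple of bits."""
    k = root
    for b in bits:
        k = prg(k, b)
    return k


def puncture(root, bits):
    """Puncture at `bits`: return the co-path keys, which suffice to
    evaluate at every input except `bits` itself."""
    keys = {}
    k = root
    for i, b in enumerate(bits):
        keys[bits[:i] + (1 - b,)] = prg(k, 1 - b)
        k = prg(k, b)
    return keys


def punctured_eval(keys, bits):
    """Evaluate with a punctured key; fails only at the punctured point."""
    for prefix, k in keys.items():
        if bits[: len(prefix)] == prefix:
            for b in bits[len(prefix):]:
                k = prg(k, b)
            return k
    raise KeyError("evaluation point was punctured")
```

Handing out `puncture(root, x)` instead of `root` removes the ability to evaluate at `x` while preserving all other points, without re-keying anyone else.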
{"title":"Puncturable-based broadcast encryption with tracking for preventing malicious encryptors in cloud file sharing","authors":"Shuanggen Liu , Yingzi Hu , Xu An Wang , Xukai Liu , Yuqing Yin , Teng Wang","doi":"10.1016/j.jisa.2024.103803","DOIUrl":"https://doi.org/10.1016/j.jisa.2024.103803","url":null,"abstract":"<div><p>Cloud file sharing (CFS) in cloud storage is one of the essential tools for enterprises to improve their core competitiveness. In the sharing process, user dynamic management and players/readers abuse has always been a problem that needs to be solved, but malicious encryptors are also a new challenge. Therefore, preventing malicious encryption is another way to protect copyright issues. This scheme proposes a traitor tracing scheme with puncturable-based broadcast encryption in cloud storage, which is an improved scheme proposed in Ref. Garg et al. (2010). Based on the original completely collusion resistant traitor tracing scheme, the uniform distribution of hash output is used to prevent malicious encryptors. In addition, users can perform authentication during the decryption phase to prevent replay attacks. At the same time, the puncture algorithm is introduced, so that normal users can dynamically revoke themselves without affecting the normal use of other users. We prove that the scheme is secure under chosen plaintext attack (CPA). Theoretical analysis also shows that our scheme can prevent malicious encryptors in cloud file sharing and allow normal users to dynamically revoke. 
After experimental verification, our scheme offers distinct advantages over the existing one.</p></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"84 ","pages":"Article 103803"},"PeriodicalIF":5.6,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141286570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}