We study quantum superposition attacks against permutation-based pseudorandom cryptographic schemes. We first extend Kuwakado and Morii’s attack against the Even–Mansour cipher and exhibit key recovery attacks against a large class of pseudorandom schemes based on a single call to an n-bit permutation, with O(n) (or O(n^2), if the concrete cost of the Hadamard transform is also taken into account) quantum steps. We then consider pseudorandom schemes built on two permutation calls. Using the improved Grover-meet-Simon method, we show that the keys of a wide class of such schemes can be recovered with O(n) superposition queries (the original method requires O(n·2^(n/2))) and O(n·2^(n/2)) quantum steps. We also identify subclasses of “degenerated” schemes that lack certain internal operations and admit more efficient key recovery attacks using either Simon’s algorithm or a collision-search algorithm. Further, using the all-subkeys-recovery idea of Isobe and Shibutani, our results give rise to key recovery attacks against several recently proposed permutation-based PRFs, as well as two-round Even–Mansour ciphers with generic key schedule functions and their tweakable variants. From a constructive perspective, our results establish new quantum Q2 security upper bounds for two permutation-based pseudorandom schemes, as well as sound design choices.
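The structural property behind the Kuwakado–Morii attack can be checked classically: for the Even–Mansour cipher E(x) = P(x XOR k1) XOR k2, the function g(x) = E(x) XOR P(x) satisfies g(x XOR k1) = g(x), so k1 is exactly the hidden period that Simon's algorithm would recover with O(n) superposition queries. A minimal sketch over a toy 8-bit permutation (the permutation and key values are illustrative, not taken from the paper):

```python
import random

random.seed(1)
n = 8
N = 1 << n

# Toy public permutation P on n-bit values.
perm = list(range(N))
random.shuffle(perm)
P = perm.__getitem__

k1, k2 = 0x5A, 0xC3  # secret Even-Mansour whitening keys (illustrative)

def E(x):
    """Even-Mansour encryption: E(x) = P(x XOR k1) XOR k2."""
    return P(x ^ k1) ^ k2

def g(x):
    """g(x) = E(x) XOR P(x) hides the period k1, since
    g(x ^ k1) = P(x) ^ k2 ^ P(x ^ k1) = g(x)."""
    return E(x) ^ P(x)

# Classically verify the hidden period that Simon's algorithm exploits.
assert all(g(x) == g(x ^ k1) for x in range(N))
```

The same check works for any choice of P and k1, which is why the attack applies to the whole class of single-permutation schemes rather than one instance.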
"Superposition Attacks on Pseudorandom Schemes Based on Two or Less Permutations" by Shaoxuan Zhang, Chun Guo, and Qingju Wang. IET Information Security, vol. 2024, no. 1. Published 2024-09-12. DOI: 10.1049/2024/9991841. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/9991841
Syed Imran Akhtar, Abdul Rauf, Muhammad Faisal Amjad, Ifra Batool
Prospects of cloud computing as a technology that optimizes resources, reduces complexity, and provides cost-effective solutions to its consumers are well established. The future of the cloud is the “cloud of clouds,” where cloud service providers (CSPs) collaborate with each other to provide ever-scalable solutions to their customers. However, one of the most restricting factors toward the use of the cloud by its consumers is their concern about data security. An organization’s most sensitive asset is its data, so giving organizations the confidence to put their data in the cloud requires a trustworthy framework. Therefore, this paper proposes an inter-cloud data security framework: a set of controls and a mechanism to measure trust for data sharing based on compliance with those controls. The proposed framework for building inter-cloud trust for data security (FBI-TDS) defines a set of data security controls covering the possible data-related threats linked with various inter-cloud use cases. As part of FBI-TDS, a mechanism is suggested that would enable CSPs to view compliance with data security controls and the overall trustworthiness of other CSPs. This would enable them to decide the level of interaction they might undertake, depending upon their data security commitments. A data security compliance monitor service is proposed that measures compliance with the data security controls. This service communicates with data trust as a service (DTaaS), which measures the trustworthiness of a CSP based on its total compliance value, users’ feedback rating, and cloud security auditor rating. CSPs who subscribe to DTaaS would be able to view the trustworthiness of other CSPs, yet they would be bound to provide access to the service to measure their own as well. This new approach to inter-cloud data security combines data security controls, a measure of compliance with them, and a trust value for each CSP, derived from that compliance, for handling data.
The proposed solution thus promotes the cloud of clouds by securing inter-cloud interactions for data-related use cases.
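The abstract describes a CSP's trustworthiness as a function of three inputs: its total compliance value, users' feedback rating, and the cloud security auditor rating. One plausible aggregation, sketched below, is a weighted average; the weights and the 0-to-1 scales are illustrative assumptions, not the paper's formula:

```python
def trust_score(compliance, feedback, auditor,
                weights=(0.5, 0.25, 0.25)):
    """Combine three normalized ratings into one trust value.

    compliance: fraction of data security controls the CSP complies with
    feedback:   normalized users' feedback rating
    auditor:    normalized cloud security auditor rating
    weights:    hypothetical relative importance of each component
    """
    for v in (compliance, feedback, auditor):
        if not 0.0 <= v <= 1.0:
            raise ValueError("ratings must be normalized to [0, 1]")
    w1, w2, w3 = weights
    return w1 * compliance + w2 * feedback + w3 * auditor

# A CSP that is fully compliant but has mediocre user feedback:
score = trust_score(1.0, 0.6, 0.8)
```

A subscribing CSP could then compare `score` against a policy threshold to decide the level of inter-cloud interaction to allow.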
"Inter-Cloud Data Security Framework to Build Trust Based on Compliance with Controls" by Syed Imran Akhtar, Abdul Rauf, Muhammad Faisal Amjad, and Ifra Batool. IET Information Security, vol. 2024, no. 1. Published 2024-08-30. DOI: 10.1049/2024/6565102. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/6565102
Many lightweight block ciphers have been proposed for IoT devices that have limited resources. SLIM, LBC-IoT, and SLA are lightweight block ciphers developed for IoT systems. The designer of SLIM presented a 7-round differential distinguisher and an 11-round linear trail using a heuristic method. We have comprehensively searched for the longest distinguishers for linear cryptanalysis, zero-correlation linear cryptanalysis, impossible differential attack, and integral attack using mixed-integer linear programming (MILP) on SLIM, LBC-IoT, and SLA. The search led to the discovery of a 16-round linear trail on SLIM, which is five rounds longer than the earlier result. We have also discovered 7-, 7-, and 9-round distinguishers for zero-correlation linear cryptanalysis, impossible differential attack, and integral attack, which are new results for SLIM. We have revealed 9-, 8-, and 11-round distinguishers on LBC-IoT for zero-correlation linear cryptanalysis, impossible differential attack, and integral attack. We have presented full-round distinguishers on SLA for the integral attack using only two chosen plaintexts. We performed a key recovery attack on 16-round SLIM with experimental verification; the verification took 106 s with a success rate of 93%. Moreover, we present a key recovery attack on 19-round SLIM using the 16-round linear trail with correlation 2^(−15): the necessary number of known plaintext–ciphertext pairs is 2^31, the time complexity is 2^64.4 encryptions, and the memory complexity is 2^38 bytes. These results show that this is currently the best key recovery attack on SLIM. Because the recommended number of rounds is 32, SLIM is secure against linear cryptanalysis, as demonstrated herein.
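The reported data complexity is consistent with the standard rule of thumb that a linear attack using a trail of correlation c needs on the order of 1/c^2 known plaintexts: with c = 2^(−15), a small constant multiple of 2^30 pairs gives the quoted 2^31. A quick check (the constant factor of 2 is an assumption for illustration; the exact constant depends on the desired success probability):

```python
import math

def required_known_plaintexts(correlation, constant=2.0):
    """Rule-of-thumb data complexity of linear cryptanalysis:
    N ~ constant / correlation^2 known plaintext-ciphertext pairs."""
    return constant / correlation ** 2

c = 2.0 ** -15          # correlation of the 16-round linear trail
N = required_known_plaintexts(c)
log2_N = math.log2(N)   # 31.0, matching the reported 2^31 pairs
```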
"Bit-Based Evaluation of Lightweight Block Ciphers SLIM, LBC-IoT, and SLA by Mixed Integer Linear Programming" by Nobuyuki Sugio. IET Information Security, vol. 2024, no. 1. Published 2024-08-23. DOI: 10.1049/2024/1741613. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/1741613
The static scanning identification of Android application packages (APKs) has been widely proven to be an effective and scalable method. However, existing identification methods either collect feature values from known APKs for inefficient comparative analysis or use expensive program syntax or semantic analysis methods to extract features. Therefore, this paper proposes an APK static identification method that differs from traditional graph analysis. We map the application programming interface (API) call graph to a complex network and use a dual-centrality analysis method to calculate the importance of sensitive nodes in the API call graph, integrating the global and relative influence of sensitive nodes. Our key insight is that the dual-centrality analysis method can more accurately characterize the graph semantic information of Android malicious APKs. We developed a method named DCDroid and evaluated it on a dataset of 4,428 benign samples and 4,626 malicious samples. The experimental results show that, compared to the four advanced methods Drebin, MaMaDroid, MalScan, and HomeDroid, DCDroid can identify Android malicious APKs with an accuracy of 97.5% and an F1 score of 96.7%, and is two times faster than HomeDroid, eight times faster than Drebin, and 17 times faster than MaMaDroid. We crawled 10,000 APKs from the Google Play Market; DCDroid found 68 malicious APKs, of which 67 were confirmed Android malicious APKs, showing a good ability to identify market-level malicious APKs.
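The dual-centrality idea, weighing a sensitive API node both by its global position in the call graph and by the influence of its immediate neighborhood, can be sketched on a toy call graph. The combination rule below (product of a global degree centrality and the average neighbor degree) is a hypothetical stand-in for the paper's exact formula:

```python
def degree_centrality(graph):
    """Global view: fraction of all other nodes each node touches."""
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

def local_influence(graph, node):
    """Relative view: average degree of the node's neighbors, i.e.,
    how well-connected its immediate callers/callees are."""
    nbrs = graph[node]
    if not nbrs:
        return 0.0
    return sum(len(graph[u]) for u in nbrs) / len(nbrs)

def dual_centrality(graph, node):
    """Hypothetical combination of global and relative influence."""
    return degree_centrality(graph)[node] * local_influence(graph, node)

# Toy undirected API call graph: sendTextMessage sits between two hubs.
call_graph = {
    "onCreate":        {"getDeviceId", "sendTextMessage"},
    "getDeviceId":     {"onCreate", "sendTextMessage"},
    "sendTextMessage": {"onCreate", "getDeviceId", "run"},
    "run":             {"sendTextMessage"},
}
score = dual_centrality(call_graph, "sendTextMessage")
```

Ranking sensitive API nodes by such a score is what lets a classifier weight security-relevant calls more heavily than the rest of the graph.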
"DCDroid: An APK Static Identification Method Based on Naïve Bayes Classifier and Dual-Centrality Analysis" by Lansheng Han, Peng Chen, and Wei Liao. IET Information Security, vol. 2024, no. 1. Published 2024-08-19. DOI: 10.1049/2024/6652217. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/6652217
The data transmission and retrieval process in the cloud is a critical issue because of cyber-attacks. The data in the cloud are highly vulnerable and may fall prey to hackers. Hackers tend to attack data in the public network, deteriorating the confidentiality and authentication of the data. To prevent such attacks on cloud data, this manuscript proposes a crypto deep ring topology firewall to protect the cloud from data breaches. Data transmission is performed using egress ring topology crypto encryption, which solves the difficulty of isolating the traffic path between the edge and cloud network. Moreover, during cloud data retrieval, a data interoperability issue arises due to improper cloud service-level agreements. This is solved using an application programming interface firewall with a fetch intrusion prevention system in the secure transmission technique: data enter the transport and session layers of the firewall and then the intrusion detection and prevention system, where the data are sieved to resolve compliance violations in the cloud network and eliminate the data interoperability issue. The proposed model was implemented on the Python platform and provided better encryption and decryption performance than the existing cloud retrieval model, producing high access speed to the cloud network with data security. The proposed work has proved to be highly robust against cyber-attacks such as man-in-the-middle attacks and spoofing attacks.
"Crypto Deep Ring Topology Firewall in Sensitive Data Transmission and Retrieval in Cloud" by Vikas K. Soman and V. Natarajan. IET Information Security, vol. 2024, no. 1. Published 2024-08-14. DOI: 10.1049/2024/8821086. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/8821086
A great amount of data is generated by the rapid technological improvement of the Internet and communication areas, which expands the size of the network. These cutting-edge technologies can result in novel network attacks that present security risks. Such intrusions launch many attacks on the communication network, which must be monitored. An intrusion detection system (IDS) is a tool to prevent intrusions by inspecting network traffic and ensuring the network's integrity, confidentiality, availability, and robustness. Many researchers have focused on IDSs with machine learning and deep learning approaches to detect intruders. Yet, IDSs face challenges in detecting intruders accurately with a reduced false alarm rate, and in feature selection and detection. High-dimensional data affect the effectiveness and efficiency of feature selection methods. Preprocessing the data into a balanced, normalized, and transformed dataset is done before the feature selection and classification process. Efficient data preprocessing ensures overall IDS performance, with an improved detection rate (DR) and a reduced false alarm rate (FAR). Since datasets are required for the various feature dimensions, this article proposes an efficient data preprocessing method that includes a series of techniques: data balancing using SMOTE, data normalization with power transformation, data encoding using one-hot and ordinal encoding, and feature reduction using a proposed deep sparse autoencoder (DSAE) with differential evolution (DE), applied before feature selection and classification. The efficiency of the transformation methods is evaluated with recursive Pearson correlation-based feature selection and graphical convolution neural network (G-CNN) methods.
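The SMOTE balancing step mentioned above creates synthetic minority-class samples by interpolating between a real sample and one of its nearest minority-class neighbors. A minimal pure-Python sketch of that interpolation (the value of k, the seed, and the toy points are illustrative):

```python
import math
import random

def smote_sample(minority, k=2, rng=random.Random(0)):
    """Create one synthetic minority-class sample, SMOTE-style:
    pick a real sample, pick one of its k nearest minority-class
    neighbors, and interpolate a point between the two."""
    x = rng.choice(minority)
    others = [p for p in minority if p is not x]
    # k nearest neighbors of x within the minority class
    neighbors = sorted(others, key=lambda p: math.dist(p, x))[:k]
    nb = rng.choice(neighbors)
    t = rng.random()  # interpolation factor in [0, 1)
    return tuple(xi + t * (ni - xi) for xi, ni in zip(x, nb))

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
synthetic = smote_sample(minority)
```

Repeating this until the minority class matches the majority class in size is what yields the balanced dataset the rest of the pipeline assumes.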
"Efficient Intrusion Detection System Data Preprocessing Using Deep Sparse Autoencoder with Differential Evolution" by Saranya N. and Anandakumar Haldorai. IET Information Security, vol. 2024, no. 1. Published 2024-08-12. DOI: 10.1049/2024/9937803. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/9937803
As the winner of the NIST lightweight cryptography project, Ascon has undergone extensive self-evaluation and third-party cryptanalysis. In this paper, we use constraint programming (CP) as a tool to analyze the Ascon permutation and propose several differential-based distinguishers. We first propose a search methodology for finding truncated differentials for Ascon with CP, the core of which is modeling with the undisturbed bits of the S-box. Using this method, we find five- and six-round truncated differentials with probabilities of 2^(−44) and 2^(−162), respectively. Considering the application of the permutation in context, we also provide five- and six-round truncated differential distinguishers under the weak-key setting. Then, inspired by our five-round truncated differentials, we propose a six-round boomerang characteristic and, based on this, obtain five- and six-round sandwich distinguishers with complexities of 2^70 and 2^134, respectively. Using the CP tool again and specifying that the “3-3” differential pattern is satisfied in the middle rounds, we propose a six-round differential characteristic with probability 2^(−280), which improves on the best known six-round differential characteristic by a factor of 2^25.
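An undisturbed bit, the notion at the core of the truncated-differential model above, is an output-difference bit of an S-box that takes the same value for every input once the input difference is fixed. The check below uses a toy 3-bit S-box for illustration (Ascon's S-box is 5-bit; the table here is a hypothetical example, not Ascon's):

```python
def undisturbed_bits(sbox, in_diff, nbits):
    """For a fixed input difference, return {bit_position: value}
    for every output-difference bit that is identical across all
    inputs -- these are the S-box's undisturbed bits."""
    out_diffs = [sbox[x] ^ sbox[x ^ in_diff] for x in range(len(sbox))]
    fixed = {}
    for b in range(nbits):
        vals = {(d >> b) & 1 for d in out_diffs}
        if len(vals) == 1:          # bit b is constant over all inputs
            fixed[b] = vals.pop()
    return fixed

# Toy 3-bit S-box (hypothetical, for illustration only).
sbox = [0, 1, 3, 6, 7, 4, 5, 2]
bits = undisturbed_bits(sbox, in_diff=1, nbits=3)
```

Each such fixed bit becomes a hard constraint in the CP model, which is what prunes the truncated-differential search space.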
"New Differential-Based Distinguishers for Ascon via Constraint Programming" by Chan Song, Wenling Wu, and Lei Zhang. IET Information Security, vol. 2024, no. 1. Published 2024-08-05. DOI: 10.1049/2024/6624991. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/6624991
Selvakumar Shanmugam, Rajesh Natarajan, Gururaj H. L., Francesco Flammini, Badria Sulaiman Alfurhood, Anitha Premkumar
Cloud computing (CC) is a network-based concept whereby users access data at a specific time and place. CC comprises servers, storage, databases, networking, software, analytics, and intelligence. Cloud security is the branch of cybersecurity dedicated to securing cloud computing systems. It includes keeping data private and safe across online-based infrastructure, applications, and platforms. Securing these systems involves the efforts of cloud providers and of the clients that use them, whether an individual, a small-to-medium business, or an enterprise. Security is essential for protecting data and cloud resources from malicious activity. A cloud service provider is utilized to provide secure data storage services. Data integrity is a critical issue in cloud computing. However, using data storage services securely and ensuring data integrity on these cloud servers remain issues for cloud users. To solve these issues, we introduce a unique piecewise regressive Kupyna cryptographic hash blockchain (PRKCHB) technique to secure cloud services with higher data integrity. The proposed PRKCHB method involves user registration, a cryptographic hash blockchain, and regression analysis. Initially, the registration process for each cloud user is performed. After registering user particulars, the Davies–Meyer Kupyna cryptographic hash blockchain generates the hash value of the data in each block. When a user requests data from the server, a piecewise regression function is used to validate their identity. Furthermore, a Gaussian kernel function recognizes authorized or unauthorized users for secure cloud information transmission. The regression function recovers the original data with enhanced integrity in the cloud. An analysis of the proposed PRKCHB technique evaluates different existing methods implemented in Python. The results cover different metrics: data confidentiality rate, data integrity rate, authentication time, storage overhead, and execution time.
Compared to conventional techniques, the findings corroborate the assertion that the proposed PRKCHB technique improves data confidentiality and data integrity by up to 9% each while lowering storage overhead, authentication time, and execution time by 10%, 12%, and 12%, respectively.
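The cryptographic-hash-blockchain component can be sketched as a chain of blocks in which each block stores the hash of its predecessor, so any tampering with stored data breaks a link. Python's hashlib has no Kupyna implementation, so SHA-256 stands in for the Davies–Meyer Kupyna hash here, and the block layout is an illustrative assumption:

```python
import hashlib

def block_hash(prev_hash, data):
    """Hash a block's data together with the previous block's hash
    (SHA-256 stands in for Kupyna, which hashlib does not provide)."""
    return hashlib.sha256(prev_hash + data.encode()).hexdigest().encode()

def build_chain(records):
    """Each block is (previous-hash, data); the genesis block links
    to an all-zero hash."""
    chain, prev = [], b"0" * 64
    for data in records:
        chain.append((prev, data))
        prev = block_hash(prev, data)
    return chain

def verify_chain(chain):
    """Recompute every link; returns False if any stored data or
    link was altered after the chain was built."""
    prev = b"0" * 64
    for stored_prev, data in chain:
        if stored_prev != prev:
            return False
        prev = block_hash(prev, data)
    return True

chain = build_chain(["user:alice,file:a.txt", "user:bob,file:b.txt"])
ok = verify_chain(chain)                          # intact chain verifies
chain[0] = (chain[0][0], "user:eve,file:a.txt")   # tamper with block 0
tampered_ok = verify_chain(chain)                 # block 1's link breaks
```

This is the property the abstract relies on for data integrity: a verifier only needs the chain itself to detect modification of any earlier block.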
{"title":"Blockchain-Based Piecewise Regressive Kupyna Cryptography for Secure Cloud Services","authors":"Selvakumar Shanmugam, Rajesh Natarajan, Gururaj H. L., Francesco Flammini, Badria Sulaiman Alfurhood, Anitha Premkumar","doi":"10.1049/2024/6863755","DOIUrl":"10.1049/2024/6863755","url":null,"abstract":"<p>Cloud computing (CC) is a network-based concept where users access data at a specific time and place. The CC comprises servers, storage, databases, networking, software, analytics, and intelligence. Cloud security is the cybersecurity authority dedicated to securing cloud computing systems. It includes keeping data private and safe across online-based infrastructure, applications, and platforms. Securing these systems involves the efforts of cloud providers and the clients that use them, whether an individual, small-to-medium business, or enterprise uses. Security is essential for protecting data and cloud resources from malicious activity. A cloud service provider is utilized to provide secure data storage services. Data integrity is a critical issue in cloud computing. However, using data storage services securely and ensuring data integrity in these cloud servers remain an issue for cloud users. We introduce a unique piecewise regressive Kupyna cryptographic hash blockchain (PRKCHB) technique to secure cloud services with higher data integrity to solve these issues. The proposed PRKCHB method involves user registration, cryptographic hash blockchain, and regression analysis. Initially, the registration process for each cloud user is performed. After registering user particulars, Davies–Meyer Kupyna’s cryptographic hash blockchain generates the hash value of data in each block. When a user requests data from the server, a piecewise regression function is used to validate their identity. Furthermore, the Gaussian kernel function recognizes authorized or unauthorized users for secure cloud information transmission. 
The regression function then recovers the original data with enhanced integrity in the cloud. The proposed PRKCHB technique is analyzed against different existing methods, all implemented in Python. The evaluation covers several metrics: data confidentiality rate, data integrity rate, authentication time, storage overhead, and execution time. Compared with conventional techniques, the findings corroborate that the proposed PRKCHB technique improves the data confidentiality and data integrity rates by up to 9% each while lowering storage overhead, authentication time, and execution time by 10%, 12%, and 12%, respectively.</p>","PeriodicalId":50380,"journal":{"name":"IET Information Security","volume":"2024 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/6863755","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141967665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
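As a rough illustration of the hash-chained block structure this abstract describes, the sketch below links each block to its predecessor's digest so that tampering with any stored payload invalidates every later link. SHA-256 stands in for the Davies–Meyer Kupyna hash (which has no Python standard-library implementation), and all class and field names here are hypothetical, not the paper's:

```python
import hashlib
import json

def block_hash(prev_hash: str, payload: dict) -> str:
    """Chain a block to its predecessor by hashing the previous digest
    together with a canonical serialization of the block payload."""
    body = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

class HashChain:
    """Minimal append-only chain: verify() recomputes every link, so
    any mutation of a stored payload breaks all subsequent hashes."""

    GENESIS_PREV = "0" * 64

    def __init__(self):
        genesis = {"genesis": True}
        self.blocks = [{"payload": genesis,
                        "hash": block_hash(self.GENESIS_PREV, genesis)}]

    def append(self, payload: dict) -> None:
        prev = self.blocks[-1]["hash"]
        self.blocks.append({"payload": payload,
                            "hash": block_hash(prev, payload)})

    def verify(self) -> bool:
        prev = self.GENESIS_PREV
        for block in self.blocks:
            if block_hash(prev, block["payload"]) != block["hash"]:
                return False
            prev = block["hash"]
        return True
```

Appending user records and then silently editing an old payload makes `verify()` fail, which is the integrity property the blockchain layer contributes.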
Artificial intelligence algorithms and big data analysis methods are commonly employed in network intrusion detection systems. However, challenges such as unbalanced data and unknown network intrusion modes can influence the effectiveness of these methods. Moreover, the information personnel of most enterprises lack specialized knowledge of information security. Thus, a simple and effective model for detecting abnormal behaviors may be more practical for information personnel than attempting to identify network intrusion modes. This study develops a network intrusion detection model by integrating weighted principal component analysis into an exponentially weighted moving average control chart. The proposed method assists information personnel in easily determining whether a network intrusion event has occurred. The effectiveness of the proposed method was validated using simulated examples.
{"title":"Using WPCA and EWMA Control Chart to Construct a Network Intrusion Detection Model","authors":"Ying-Ti Tsai, Chung-Ho Wang, Yung-Chia Chang, Lee-Ing Tong","doi":"10.1049/2024/3948341","DOIUrl":"10.1049/2024/3948341","url":null,"abstract":"<p>Artificial intelligence algorithms and big data analysis methods are commonly employed in network intrusion detection systems. However, challenges such as unbalanced data and unknown network intrusion modes can influence the effectiveness of these methods. Moreover, the information personnel of most enterprises lack specialized knowledge of information security. Thus, a simple and effective model for detecting abnormal behaviors may be more practical for information personnel than attempting to identify network intrusion modes. This study develops a network intrusion detection model by integrating weighted principal component analysis into an exponentially weighted moving average control chart. The proposed method assists information personnel in easily determining whether a network intrusion event has occurred. The effectiveness of the proposed method was validated using simulated examples.</p>","PeriodicalId":50380,"journal":{"name":"IET Information Security","volume":"2024 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/3948341","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141967577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
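A minimal sketch of the pipeline this abstract outlines: project standardized traffic features onto (optionally weighted) principal components, reduce each observation to one monitoring statistic, and flag anomalies with an EWMA control chart. The weighting rule, the smoothing constant `lam`, and the limit width `L` are conventional illustrative choices, not the paper's tuned values:

```python
import numpy as np

def wpca_score(X, weights=None, k=2):
    """Squared norm of each observation's top-k principal-component
    scores, after standardization and optional per-feature weighting
    (the paper's exact weighting rule is not reproduced here)."""
    X = np.asarray(X, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    if weights is not None:
        X = X * np.asarray(weights)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k components
    scores = X @ top
    return (scores ** 2).sum(axis=1)           # one statistic per row

def ewma_chart(stat, lam=0.2, L=3.0):
    """EWMA z_t = lam*x_t + (1-lam)*z_{t-1} over a 1-D statistic,
    with steady-state control limits mu +/- L*sigma*sqrt(lam/(2-lam)).
    Returns the EWMA series and a boolean alarm mask."""
    stat = np.asarray(stat, dtype=float)
    mu, sigma = stat.mean(), stat.std(ddof=1)
    z = np.empty_like(stat)
    z_prev = mu
    for t, x in enumerate(stat):
        z_prev = lam * x + (1 - lam) * z_prev
        z[t] = z_prev
    halfwidth = L * sigma * np.sqrt(lam / (2 - lam))
    return z, np.abs(z - mu) > halfwidth
```

On simulated traffic, a mean shift in the final segment drives the EWMA outside its limits, which is the "simple signal for non-specialist personnel" the abstract argues for.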
The high speed and wide reach of information dissemination on social media also enable false information and rumors to spread rapidly across public platforms. Attackers can use false information to trigger public panic and disrupt social stability. Traditional multimodal sentiment analysis methods suffer from suboptimal fusion of multimodal features and a consequent loss of classification accuracy. To address these issues, this study introduces a novel emotion classification model. The model captures the intermodal interactions neglected by direct fusion of multimodal features and improves the ability to understand and generalize emotion semantics. The Transformer’s encoding layer is applied to extract sophisticated sentiment semantic encodings from audio and textual sequences. Subsequently, a bimodal feature interaction fusion attention mechanism is deployed to examine intramodal and intermodal correlations and capture contextual dependencies. This approach enhances the model’s capacity to comprehend and extrapolate sentiment semantics. The cross-modal fused features are fed into the classification layer to predict sentiment. Experimental testing on the IEMOCAP dataset demonstrates that the proposed model achieves an emotion recognition accuracy of 78.5% and an F1-score of 77.6%. Compared with other mainstream multimodal emotion recognition methods, the proposed model shows significant improvements on all metrics. The experimental results demonstrate that the proposed method, based on the Transformer and an interactive attention mechanism, more fully captures the emotion features of utterances. This research provides robust technical support for monitoring public sentiment on social networks.
{"title":"Social Media Public Opinion Detection Using Multimodal Natural Language Processing and Attention Mechanisms","authors":"Yanxia Dui, Hongchun Hu","doi":"10.1049/2024/8880804","DOIUrl":"10.1049/2024/8880804","url":null,"abstract":"<p>The high speed and wide reach of information dissemination on social media also enable false information and rumors to spread rapidly across public platforms. Attackers can use false information to trigger public panic and disrupt social stability. Traditional multimodal sentiment analysis methods suffer from suboptimal fusion of multimodal features and a consequent loss of classification accuracy. To address these issues, this study introduces a novel emotion classification model. The model captures the intermodal interactions neglected by direct fusion of multimodal features and improves the ability to understand and generalize emotion semantics. The Transformer’s encoding layer is applied to extract sophisticated sentiment semantic encodings from audio and textual sequences. Subsequently, a bimodal feature interaction fusion attention mechanism is deployed to examine intramodal and intermodal correlations and capture contextual dependencies. This approach enhances the model’s capacity to comprehend and extrapolate sentiment semantics. The cross-modal fused features are fed into the classification layer to predict sentiment. Experimental testing on the IEMOCAP dataset demonstrates that the proposed model achieves an emotion recognition accuracy of 78.5% and an F1-score of 77.6%. Compared with other mainstream multimodal emotion recognition methods, the proposed model shows significant improvements on all metrics. 
The experimental results demonstrate that the proposed method, based on the Transformer and an interactive attention mechanism, more fully captures the emotion features of utterances. This research provides robust technical support for monitoring public sentiment on social networks.</p>","PeriodicalId":50380,"journal":{"name":"IET Information Security","volume":"2024 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/8880804","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141631141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
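One simple way to realize the cross-modal interaction this abstract describes is scaled dot-product attention from text frames over audio frames, followed by fusion through concatenation. This NumPy sketch is illustrative only and does not reproduce the paper's Transformer model; all shapes and function names are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(text, audio):
    """Each text frame attends over all audio frames (scaled
    dot-product attention), producing an audio context vector per
    text position; fusion is by concatenation, one simple choice
    among many interaction schemes.

    text:  (T_text, d) encoded text sequence
    audio: (T_audio, d) encoded audio sequence
    returns: (T_text, 2*d) fused features
    """
    d = text.shape[-1]
    scores = text @ audio.T / np.sqrt(d)   # (T_text, T_audio)
    attn = softmax(scores, axis=-1)        # rows sum to 1
    audio_context = attn @ audio           # (T_text, d)
    return np.concatenate([text, audio_context], axis=-1)
```

The fused features would then feed a classification layer; in a full model the attention weights are learned jointly with the Transformer encoders rather than computed on raw encodings as here.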