Large language models and unsupervised feature learning: implications for log analysis
Pub Date: 2024-04-04 | DOI: 10.1007/s12243-024-01028-2
Abstract
Log file analysis is increasingly being addressed through the use of large language models (LLMs). LLMs provide a mechanism for discovering embeddings that distinguish between the different behaviors present in log files. In this work, we are interested in discriminating between normal and anomalous behaviors via an unsupervised learning approach. To this end, five recent LLM architectures are first evaluated over six different log files. Further research is then conducted to explicitly quantify the significance of performing self-supervised fine-tuning on the LLMs. Moreover, we show that the quality of an (unsupervised) feature map used to make the overall (normal/anomalous) predictions may also benefit from an AutoEncoder stage between the LLM and the feature map. Such an AutoEncoder significantly reduces the cost of training the feature map and typically improves the quality of the resulting predictions.
{"title":"Large language models and unsupervised feature learning: implications for log analysis","authors":"","doi":"10.1007/s12243-024-01028-2","DOIUrl":"https://doi.org/10.1007/s12243-024-01028-2","url":null,"abstract":"<h3>Abstract</h3> <p>Log file analysis is increasingly being addressed through the use of large language models (LLM). LLM provides the mechanism for discovering embeddings for distinguishing between different behaviors present in log files. In this work, we are interested in discriminating between normal and anomalous behaviors via an unsupervised learning approach. To this end, firstly five recent LLM architectures are evaluated over six different log files. Then, further research is conducted to explicitly quantify the significance of performing self-supervised fine-tuning on the LLMs. Moreover, we show that the quality of an (unsupervised) feature map used to make the overall (normal/anomalous) predictions may also benefit from an AutoEncoder stage between LLM and feature map. Such an AutoEncoder provides significant reductions in the cost of training the feature map and typically improves the quality of the resulting predictions.</p>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"93 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140573502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E-Watcher: insider threat monitoring and detection for enhanced security
Pub Date: 2024-04-04 | DOI: 10.1007/s12243-024-01023-7
Zhiyuan Wei, Usman Rauf, Fadi Mohsen
Insider threats refer to harmful actions carried out by authorized users within an organization and pose some of the most damaging risks. The increasing number of these threats has revealed the inadequacy of traditional methods for detecting and mitigating them. Existing approaches lack the ability to analyze activity-related information in detail, resulting in delayed detection of malicious intent. Additionally, current methods do not adequately address noisy datasets or unknown scenarios, leading to under-fitting or over-fitting of the models. To address these limitations, our paper presents a hybrid insider threat detection framework. We not only enhance prediction accuracy by incorporating a layer of statistical criteria on top of machine learning-based classification but also present optimal parameters to address over- and under-fitting of the models. We evaluate the performance of our framework using a real-life threat test dataset (CERT r4.2) and compare it to existing methods on the same dataset (Glasser and Lindauer 2013). Our initial evaluation demonstrates that our proposed framework achieves an accuracy of 98.48% in detecting insider threats, surpassing the performance of most existing methods. Additionally, our framework effectively handles the potential bias and data imbalance issues that can arise in real-life scenarios.
{"title":"E-Watcher: insider threat monitoring and detection for enhanced security","authors":"Zhiyuan Wei, Usman Rauf, Fadi Mohsen","doi":"10.1007/s12243-024-01023-7","DOIUrl":"https://doi.org/10.1007/s12243-024-01023-7","url":null,"abstract":"<p>Insider threats refer to harmful actions carried out by authorized users within an organization, posing the most damaging risks. The increasing number of these threats has revealed the inadequacy of traditional methods for detecting and mitigating insider threats. These existing approaches lack the ability to analyze activity-related information in detail, resulting in delayed detection of malicious intent. Additionally, current methods lack advancements in addressing noisy datasets or unknown scenarios, leading to under-fitting or over-fitting of the models. To address these, our paper presents a hybrid insider threat detection framework. We not only enhance prediction accuracy by incorporating a layer of statistical criteria on top of machine learning-based classification but also present optimal parameters to address over/under-fitting of models. We evaluate the performance of our framework using a real-life threat test dataset (CERT r4.2) and compare it to existing methods on the same dataset (Glasser and Lindauer 2013). Our initial evaluation demonstrates that our proposed framework achieves an accuracy of 98.48% in detecting insider threats, surpassing the performance of most of the existing methods. Additionally, our framework effectively handles potential bias and data imbalance issues that can arise in real-life scenarios.</p>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"2015 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140573836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ICIN 2023 special issue — Emergence of the data and intelligence networking across the edge-cloud continuum
Pub Date: 2024-04-02 | DOI: 10.1007/s12243-024-01026-4
Marie-José Montpetit, Walter Cerroni
{"title":"ICIN 2023 special issue — Emergence of the data and intelligence networking across the edge-cloud continuum","authors":"Marie-José Montpetit, Walter Cerroni","doi":"10.1007/s12243-024-01026-4","DOIUrl":"10.1007/s12243-024-01026-4","url":null,"abstract":"","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"79 3-4","pages":"131 - 133"},"PeriodicalIF":1.8,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140751719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating practical adversarial examples against learning-based network intrusion detection systems
Pub Date: 2024-03-27 | DOI: 10.1007/s12243-024-01021-9
Vivek Kumar, Kamal Kumar, Maheep Singh
There has been significant development in the design of intrusion detection systems (IDS) that use deep learning (DL) and machine learning (ML) methods to detect threats in a computer network. Unfortunately, these DL/ML-based IDS are vulnerable to adversarial examples, wherein a malicious data sample can be slightly perturbed to cause a misclassification by an IDS while retaining its malicious properties. Unlike the image recognition domain, the network domain has certain constraints, known as domain constraints, which are multifarious interrelationships and dependencies between features. To be considered practical and realizable, an adversary must ensure that the adversarial examples comply with these domain constraints. Recently, generative models like GANs and VAEs have been extensively used for generating adversarial examples against IDS. However, the majority of these techniques generate adversarial examples that do not satisfy all domain constraints. Also, current generative methods lack explicit restrictions on the amount of perturbation a malicious data sample undergoes during the crafting of adversarial examples, leading to the potential generation of invalid data samples. To address these limitations, this work presents a solution that utilizes a variational autoencoder to generate adversarial examples that not only result in misclassification by an IDS but also satisfy domain constraints. Instead of perturbing the data samples themselves, the adversarial examples are crafted by perturbing the latent space representation of the data sample. This allows the generation of adversarial examples under limited perturbation. This research has explored novel applications of generative networks for generating constraint-satisfying adversarial examples. The experimental results support the claims with an attack success rate of 64.8% against ML/DL-based IDS. The trained model can be further integrated into an operational IDS to strengthen its robustness against adversarial examples; however, this is out of the scope of this work.
{"title":"Generating practical adversarial examples against learning-based network intrusion detection systems","authors":"Vivek Kumar, Kamal Kumar, Maheep Singh","doi":"10.1007/s12243-024-01021-9","DOIUrl":"https://doi.org/10.1007/s12243-024-01021-9","url":null,"abstract":"<p>There has been a significant development in the design of intrusion detection systems (IDS) by using deep learning (DL)/machine learning (ML) methods for detecting threats in a computer network. Unfortunately, these DL/ML-based IDS are vulnerable to adversarial examples, wherein a malicious data sample can be slightly perturbed to cause a misclassification by an IDS while retaining its malicious properties. Unlike image recognition domain, the network domain has certain constraints known as <i>domain constraints</i> which are multifarious interrelationships and dependencies between features. To be considered as practical and realizable, an adversary must ensure that the adversarial examples comply with domain constraints. Recently, generative models like GANs and VAEs have been extensively used for generating adversarial examples against IDS. However, majority of these techniques generate adversarial examples which do not satisfy all domain constraints. Also, current generative methods lack explicit restrictions on the amount of perturbation which a malicious data sample undergoes during the crafting of adversarial examples, leading to the potential generation of invalid data samples. To address these limitations, a solution is presented in this work which utilize a variational autoencoder to generate adversarial examples that not only result in misclassification by an IDS, but also satisfy domain constraints. Instead of perturbing the data samples itself, the adversarial examples are crafted by perturbing the latent space representation of the data sample. It allows the generation of adversarial examples under limited perturbation. This research has explored the novel applications of generative networks for generating constraint satisfying adversarial examples. The experimental results support the claims with an attack success rate of 64.8<span>(%)</span> against ML/DL-based IDS. The trained model can be integrated further into an operational IDS to strengthen its robustness against adversarial examples; however, this is out of scope of this work.</p>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"33 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140313957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AutoRoC-DBSCAN: automatic tuning of DBSCAN to detect malicious DNS tunnels
Thi Quynh Nguyen, Romain Laborde, Abdelmalek Benzekri, Arnaud Oglaza, Mehdi Mounsif
Pub Date: 2024-03-22 | DOI: 10.1007/s12243-024-01025-5
Modern attacks, such as advanced persistent threats, hide command-and-control channels inside authorized network traffic like DNS or DNS over HTTPS to infiltrate the local network and exfiltrate sensitive data. Detecting such malicious traffic with traditional techniques is cumbersome, especially when the traffic is encrypted, as with DNS over HTTPS. Unsupervised machine learning techniques, and more specifically density-based spatial clustering of applications with noise (DBSCAN), can achieve good results in detecting malicious DNS tunnels. However, DBSCAN requires manually tuning two hyperparameters, whose optimal values can differ depending on the dataset. In this article, we propose an improved algorithm called AutoRoC-DBSCAN that can automatically find the best hyperparameters. We evaluated it and obtained good results on two different datasets: a dataset we created with malicious DNS tunnels and the CIRA-CIC-DoHBrw-2020 dataset with malicious DoH tunnels.
{"title":"AutoRoC-DBSCAN: automatic tuning of DBSCAN to detect malicious DNS tunnels","authors":"Thi Quynh Nguyen, Romain Laborde, Abdelmalek Benzekri, Arnaud Oglaza, Mehdi Mounsif","doi":"10.1007/s12243-024-01025-5","DOIUrl":"https://doi.org/10.1007/s12243-024-01025-5","url":null,"abstract":"<p>Modern attacks, such as advanced persistent threats, hide command-and-control channels inside authorized network traffic like DNS or DNS over HTTPS to infiltrate the local network and exfiltrate sensitive data. Detecting such malicious traffic using traditional techniques is cumbersome especially when the traffic encrypted like DNS over HTTPS. Unsupervised machine learning techniques, and more specifically density-based spatial clustering of applications with noise (DBSCAN), can achieve good results in detecting malicious DNS tunnels. However, DBSCAN requires manually tuning two hyperparameters, whose optimal values can differ depending on the dataset. In this article, we propose an improved algorithm called AutoRoC-DBSCAN that can automatically find the best hyperparameters. We evaluated and obtained good results on two different datasets: a dataset we created with malicious DNS tunnels and the CIRA-CIC-DoHBrw-2020 dataset with malicious DoH tunnels.</p>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"15 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140202859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A deeper look at Ariadne: a privacy-preserving network layer protocol
Pub Date: 2024-03-13 | DOI: 10.1007/s12243-024-01017-5
Antoine Fressancourt, Luigi Iannone, Mael Kerichard
We present a deeper analysis of Ariadne, a privacy-preserving network layer communication protocol that we introduced in Fressancourt and Iannone (2023). Ariadne uses a source routing approach to avoid relying on trusted third parties. In Ariadne, a source node willing to send anonymized network traffic to a destination uses a path consisting of nodes with which it has pre-shared symmetric keys. Temporary keys derived from those pre-shared keys are used to protect the communication's privacy using onion routing techniques, ensuring session unlinkability for packets following the same path. Ariadne enhances previous approaches to preserving communication privacy by introducing two novelties. First, the source route is encoded in a fixed-size, sequentially encrypted vector of routing information elements, in which the elements' positions in the vector are pseudo-randomly permuted. Second, the temporary keys used to process the packets on the path are referenced using mutually known encrypted patterns. This avoids the use of an explicit key reference that could be used to de-anonymize the communications. This article enriches our previous presentation of Ariadne in Fressancourt and Iannone (2023) with a set of formal proofs of its security properties. In addition, a performance evaluation of Ariadne's Rust implementation is presented to assess the ability of our protocol to protect privacy at the network layer in real-world use cases.
{"title":"A deeper look at Ariadne: a privacy-preserving network layer protocol","authors":"Antoine Fressancourt, Luigi Iannone, Mael Kerichard","doi":"10.1007/s12243-024-01017-5","DOIUrl":"https://doi.org/10.1007/s12243-024-01017-5","url":null,"abstract":"<p>We present a deeper analysis of Ariadne, a privacy-preserving network layer communication protocol that we introduced in Fressancourt and Iannone (2023). Ariadne uses a source routing approach to avoid relying on trusted third parties. In Ariadne, a source node willing to send anonymized network traffic to a destination uses a path consisting in nodes with which it has pre-shared symmetric keys. Temporary keys derived from those pre-shared keys are used to protect the communication’s privacy using onion routing techniques, ensuring <i>session unlinkability</i> for packets following the same path. Ariadne enhances previous approaches to preserve communication privacy by introducing two novelties. First, the source route is encoded in a fixed size, sequentially encrypted vector of routing information elements, in which the elements’ positions in the vector are pseudo-randomly permuted. Second, the temporary keys used to process the packets on the path are referenced using mutually known encrypted patterns. This avoids the use of an explicit key reference that could be used to de-anonymize the communications. This article enriches our previous presentation of Ariadne Fressancourt and Iannone (2023) with a set of formal proofs of its security properties. Besides, a performance evaluation of Ariadne’s Rust implementation is presented to assess the ability of our protocol to protect privacy at the network layer in real-world use cases.</p>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"24 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140116886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mirage: cyber deception against autonomous cyber attacks in emulation and simulation
Pub Date: 2024-03-13 | DOI: 10.1007/s12243-024-01018-4
Michael Kouremetis, Dean Lawrence, Ron Alford, Zoe Cheuvront, David Davila, Benjamin Geyer, Trevor Haigh, Ethan Michalak, Rachel Murphy, Gianpaolo Russo
As the capabilities of cyber adversaries continue to evolve, now in parallel with the explosion of maturing and publicly available artificial intelligence (AI) technologies, cyber defenders may reasonably wonder when cyber adversaries will begin to field these AI technologies as well. In this regard, some promising (read: scary) areas of AI for cyber attack capabilities are search, automated planning, and reinforcement learning. One possible defensive mechanism against future AI-enabled adversaries is cyber deception. To that end, in this work, we present and evaluate Mirage, an experimentation system, demonstrated in both emulation and simulation forms, that allows for the implementation and testing of novel cyber deceptions designed to counter cyber adversaries that use AI search and planning capabilities.
{"title":"Mirage: cyber deception against autonomous cyber attacks in emulation and simulation","authors":"Michael Kouremetis, Dean Lawrence, Ron Alford, Zoe Cheuvront, David Davila, Benjamin Geyer, Trevor Haigh, Ethan Michalak, Rachel Murphy, Gianpaolo Russo","doi":"10.1007/s12243-024-01018-4","DOIUrl":"https://doi.org/10.1007/s12243-024-01018-4","url":null,"abstract":"<p>As the capabilities of cyber adversaries continue to evolve, now in parallel to the explosion of maturing and publicly-available artificial intelligence (AI) technologies, cyber defenders may reasonably wonder when cyber adversaries will begin to also field these AI technologies. In this regard, some promising (read: scary) areas of AI for cyber attack capabilities are search, automated planning, and reinforcement learning. As such, one possible defensive mechanism against future AI-enabled adversaries is that of cyber deception. To that end, in this work, we present and evaluate Mirage, an experimentation system demonstrated in both emulation and simulation forms that allows for the implementation and testing of novel cyber deceptions designed to counter cyber adversaries that use AI search and planning capabilities.</p>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"8 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140116917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient representation of disoccluded regions in 3D video coding
Pub Date: 2024-03-12 | DOI: 10.1007/s12243-024-01019-3
Muhammad Shahid Farid, Badi uz Zaman Babar, Muhammad Hassan Khan
Three-dimensional (3D) video technology has gained immense popularity in recent times due to its numerous applications, particularly in the television and cinema industry. Three-dimensional television (3DTV) and free-viewpoint television (FTV) are two well-known applications that provide the end-user with a realistic, high-quality 3D display. In both applications, multiple views captured from different viewpoints are rendered simultaneously to offer depth sensation to the viewer. A large number of views are needed to enable FTV. However, transmitting this massive amount of data is challenging due to bandwidth limitations. Multiview video-plus-depth (MVD) is the most popular format, where, in addition to color images, corresponding depth information representing the scene geometry is also available. The MVD format, with the help of depth image-based rendering (DIBR), enables the generation of views at novel viewpoints. In this paper, we introduce a panorama-based representation of MVD data with an efficient keyframe-based disocclusion handling technique. The panorama view for a stereo pair with depth is constructed from the left view and the newly appearing region of the right view that is not visible from the left viewpoint. The disocclusions that appear in the right view when it is obtained from DIBR of the left view are collected in a special frame named the keyframe. On the decoder side, the left view is available with a simple crop of the panorama view. The right view is obtained through DIBR of the left view combined with the appearing region from the panorama view. The disocclusions in this warped view are filled from the keyframe. The panorama view with the additional keyframes and the corresponding depth map are compressed using the standard HEVC codec. Experimental evaluations performed on standard MVD sequences show that the proposed scheme achieves excellent video quality while saving considerable bit rate compared to HEVC simulcast.
{"title":"Efficient representation of disoccluded regions in 3D video coding","authors":"Muhammad Shahid Farid, Badi uz Zaman Babar, Muhammad Hassan Khan","doi":"10.1007/s12243-024-01019-3","DOIUrl":"https://doi.org/10.1007/s12243-024-01019-3","url":null,"abstract":"<p>Three-dimensional (3D) video technology has gained immense admiration in recent times due to its numerous applications, particularly in the television and cinema industry. Three-dimensional television (3DTV) and free-viewpoint television (FTV) are two well-known applications that provide the end-user with a real-world and high-quality 3D display. In both applications, multiple views captured from different viewpoints are rendered simultaneously to offer depth sensation to the viewer. A large number of views are needed to enable FTV. However, transmitting this massive amount of data is challenging due to bandwidth limitations. Multiview video-plus-depth (MVD) is the most popular format where in addition to color images, corresponding depth information is also available which represents the scene geometry. The MVD format with the help of depth image-based rendering (DIBR) enables the generation of views at novel viewpoints. In this paper, we introduce a panorama-based representation of MVD data with an efficient keyframe-based disocclusions handling technique. The panorama view for a stereo pair with depth is constructed from the left view and the novel appearing region of the right view which is not visible from the left viewpoint. The disocclusions that appear in the right view when obtained from the DIBR of the left view are collected in a special frame named as keyframe. On the decoder side, the left view is available with a simple crop of panorama view. The right view is obtained through DIBR of the left view combined with the appearing region from the panorama view. The disocclusions in this warped view are filled from the keyframe. The panorama view with additional keyframes and the corresponding depth map are compressed using the standard HEVC codec. The experimental evaluations performed on standard MVD sequences showed that the proposed scheme achieves excellent video quality while saving considerable bit rate compared to HEVC simulcast.</p>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"4 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140116948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating pending interest table performance under the collusive interest flooding attack in named data networks
Pub Date: 2024-02-29 | DOI: 10.1007/s12243-024-01016-6
Diego Canizio Lopes, André Nasserala, Ian Vilar Bastos, Igor Monteiro Moraes
In this article, we investigate the performance of the Pending Interest Table (PIT) of named data networking (NDN) routers in the presence of a collusive interest flooding attack (CIFA), which can overwhelm the PIT and cause delays in content retrieval. We simulate and evaluate the attack’s impact on the PIT occupancy rate and content retrieval delay. The results reveal that the CIFA is highly effective in compromising the performance of NDN routers, leading to high PIT occupancy rates, long content retrieval delays, and degraded overall network performance. The PIT occupancy rate can reach 95.83% during the attack, while the interest retrieval rate is less than 30%. The study highlights the need for effective countermeasures to mitigate the impact of such attacks.
{"title":"Evaluating pending interest table performance under the collusive interest flooding attack in named data networks","authors":"Diego Canizio Lopes, André Nasserala, Ian Vilar Bastos, Igor Monteiro Moraes","doi":"10.1007/s12243-024-01016-6","DOIUrl":"10.1007/s12243-024-01016-6","url":null,"abstract":"<div><p>In this article, we investigate the performance of the Pending Interest Table (PIT) of named data networking (NDN) routers in the presence of a collusive interest flooding attack (CIFA), which can overwhelm the PIT and cause delays in content retrieval. We simulate and evaluate the attack’s impact on the PIT occupancy rate and content retrieval delay. The results reveal that the CIFA is highly effective in compromising the performance of NDN routers, leading to high PIT occupancy rates, long content retrieval delays, and degraded overall network performance. The PIT occupancy rate can reach 95.83% during the attack, while the interest retrieval rate is less than 30%. The study highlights the need for effective countermeasures to mitigate the impact of such attacks.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"79 7-8","pages":"475 - 486"},"PeriodicalIF":1.8,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140007618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Orthogonal beamforming technique for massive MIMO systems
Marwa Abdelfatah, Abdelhalim Zekry, Shaimaa ElSayed
Pub Date: 2024-02-21 | DOI: 10.1007/s12243-024-01013-9
Beamforming is a pivotal technology in massive multiple-input multiple-output (MIMO) systems, as it facilitates the regulation of transmission and reception operations. Beamforming techniques are categorized based either on their hardware architecture or on their implementation strategy. This paper proposes an orthogonal beamforming technique founded on a specific implementation method that uses predetermined orthogonal beams to serve users. The suggested approach incorporates numerous orthogonal beams, relying on a substantial number of antennas at the base station. The primary objective of this approach is to enhance the performance of massive MIMO systems by increasing spectral efficiency and accommodating more users. The proposed beamforming approach is well suited for millimeter-wave frequency bands. The purpose of this paper is to explore the suggested orthogonal beamforming technique. The concept of the approach is described first, followed by an evaluation of its efficacy for a single user through the allocation of orthogonal beams. The suggested approach is also examined in the context of multiuser systems, and the results are compared with the adaptive zero-forcing (ZF) beamforming technique. Furthermore, the paper presents solutions to issues that may arise in multiuser systems, for example, ensuring that each orthogonal beam is assigned to only one user. The simulations conducted in this study demonstrate that the suggested approach outperforms the ZF technique in terms of both spectral efficiency and the number of serviced users. Specifically, the suggested approach can enhance spectral efficiency by approximately 40.6% over the ZF technique, and it can support up to double the number of users compared to the ZF approach.
{"title":"Orthogonal beamforming technique for massive MIMO systems","authors":"Marwa Abdelfatah, Abdelhalim Zekry, Shaimaa ElSayed","doi":"10.1007/s12243-024-01013-9","DOIUrl":"https://doi.org/10.1007/s12243-024-01013-9","url":null,"abstract":"<p>Beamforming represents a pivotal technology in massive multiple-input multiple-output (MIMO) systems, as it facilitates the regulation of transmission and reception operations. Beamforming techniques’ categorization is based either on their hardware architecture or implementation strategy. This paper proposes an orthogonal beamforming technology founded on a specific implementation method that utilizes predetermined orthogonal beams to serve users. The suggested approach incorporates numerous orthogonal beams relying on a substantial number of antennas at the base station. The primary objective of this approach is to enhance the performance of massive MIMO systems by augmenting spectral efficiency and accommodating more users. The proposed beamforming approach is well suited for millimeter frequency bands. The purpose of this paper is to explore the suggested orthogonal beamforming technology. The concept of this approach is described at first and then followed by an evaluation of its efficacy for a single user through the allocation of orthogonal beams. The suggested approach is also examined in the context of multiuser systems, and the results are compared with the adaptive ZF beamforming technique. Furthermore, the paper presents solutions to the issues that may arise in multiuser systems, for example, ensuring that each orthogonal beam is assigned to only one user. The simulations conducted in this study demonstrate that the suggested approach outperforms the ZF technique in terms of both the spectral efficiency and the number of serviced users. Specifically, the suggested approach can enhance SE by approximately 40.6% over the ZF technique, and it can support up to double the number of users when compared to the ZF approach.</p>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"105 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139920383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}