Pascal Wichmann, Matthias Marx, H. Federrath, Mathias Fischer
Network intrusion detection systems (NIDSs) can detect attacks in network traffic. However, the increasing share of encrypted connections on the Internet restricts their ability to observe such attacks. This paper proposes a completely passive method for detecting brute-force attacks in encrypted traffic without the need to decrypt it. To this end, we propose five novel metrics for attack detection that quantify metadata such as packet size and packet timing. We evaluate the performance of our method on synthetically generated but realistic traffic as well as on real-world traffic from a Tor exit node on the Internet. Our results indicate that the proposed metrics can reliably detect brute-force attacks in encrypted traffic for protocols such as HTTPS, FTPS, IMAPS, SMTPS, and SSH. At the same time, our approach causes only a few false positives, achieving an F-measure between 75% and 100%.
{"title":"Detection of Brute-Force Attacks in End-to-End Encrypted Network Traffic","authors":"Pascal Wichmann, Matthias Marx, H. Federrath, Mathias Fischer","doi":"10.1145/3465481.3470113","DOIUrl":"https://doi.org/10.1145/3465481.3470113","url":null,"abstract":"Network intrusion detection systems (NIDSs) can detect attacks in network traffic. However, the increasing ratio of encrypted connections on the Internet restricts their ability to observe such attacks. This paper proposes a completely passive method that allows to detect brute-force attacks in encrypted traffic without the need to decrypt it. For that, we propose five novel metrics for attack detection which quantify metadata like packet size or packet timing. We evaluate the performance of our method with synthetically generated but realistic traffic as well as on real-world traffic from a Tor exit node on the Internet. Our results indicate that the proposed metrics can reliably detect brute-force attacks in encrypted traffic in protocols like HTTPS, FTPS, IMAPS, SMTPS, and SSH. Simultaneously, our approach causes only a few false positives, achieving an F-measure between 75% and 100%.","PeriodicalId":417395,"journal":{"name":"Proceedings of the 16th International Conference on Availability, Reliability and Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130663471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Copstein, J. Schwartzentruber, N. Zincir-Heywood, M. Heywood
The collection of log messages regarding the operation of deployed services and applications is an integral component of the forensic analysis used to identify and understand security incidents. Approaches for parsing and abstracting such logs, despite widespread use and study, do not directly account for the particularities of the information security domain. This, in turn, limits their applicability in the field. In this work, we analyze state-of-the-art log parsing and abstraction algorithms from the perspective of information security. First, we reproduce and replicate previous analyses of these algorithms from the literature. Then, we evaluate their ability to parse and abstract log files for forensic analysis purposes. Our study demonstrates that while the state-of-the-art techniques are accurate in log parsing, improvements are necessary to achieve the holistic view needed to aid forensic analysis in identifying and understanding security incidents.
{"title":"Log Abstraction for Information Security: Heuristics and Reproducibility","authors":"R. Copstein, J. Schwartzentruber, N. Zincir-Heywood, M. Heywood","doi":"10.1145/3465481.3470083","DOIUrl":"https://doi.org/10.1145/3465481.3470083","url":null,"abstract":"The collection of log messages regarding the operation of deployed services and application is an integral component to the forensic analysis for the identification and understanding of security incidents. Approaches for parsing and abstraction of such logs, despite widespread use and study, do not directly account for the individualities of the domain of information security. This, in return, limits their applicability on the field. In this work, we analyze the state-of-the-art log parsing and abstraction algorithms from the perspective of information security. First, we reproduce/replicate previous analysis of such algorithms from the literature. Then, we evaluate their ability for parsing and abstraction of log files for forensic analysis purposes. Our study demonstrates that while the state-of-the-art techniques are accurate in log parsing, improvements are necessary in terms of achieving a holistic view to aid in forensic analysis for the identification and understanding of security incidents.","PeriodicalId":417395,"journal":{"name":"Proceedings of the 16th International Conference on Availability, Reliability and Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116792208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rule-learning-based intrusion detection systems (IDSs) regularly collect and process network traffic and then apply rule learning algorithms to the data to identify network communication behaviors represented as IF-THEN rules. Detection rules are inferred offline and can be periodically and automatically updated online for intrusion detection. In this context, we implement various attacks against MQTT in a carefully designed and realistic experimental environment for data generation, instead of the simulation programs commonly used in previous work. In addition, we investigate a Bayesian rule learning approach as a countermeasure that is able to detect various attack types. A Bayesian network is learned from training data and subsequently translated into a rule set for intrusion detection. The combination of prior knowledge (about the communication protocol and the target system) and data helps to learn the Bayesian network efficiently. The translation from the Bayesian network to a set of inherently interpretable rules can be regarded as a transformation from implicit knowledge to explicit knowledge. We show that our proposed method achieves not only good detection performance but also high interpretability.
{"title":"A Bayesian Rule Learning Based Intrusion Detection System for the MQTT Communication Protocol","authors":"Qi Liu, H. Keller, V. Hagenmeyer","doi":"10.1145/3465481.3470046","DOIUrl":"https://doi.org/10.1145/3465481.3470046","url":null,"abstract":"Rule learning based intrusion detection systems (IDS) regularly collect and process network traffic, and thereafter they apply rule learning algorithms to the data to identify network communication behaviors represented as IF-THEN rules. Detection rules are inferred offline and can be periodically automatically updated online for intrusion detection. In this context, we implement in the present paper various attacks against MQTT in a carefully designed and very realistic experiment environment, instead of a simulation program as commonly seen in previous works, for data generation. Besides, we investigate a Bayesian rule learning based approach as countermeasure, which is able to detect various attack types. A Bayesian network is learned from training data and subsequently translated into a rule set for intrusion detection. The combination of prior knowledge (about the communication protocol and target system) and data help to efficiently learn the Bayesian network. The translation from the Bayesian network to a set of inherently interpretable rules can be regarded as a transformation from implicit knowledge to explicit knowledge. 
We show that our proposed method can achieve not only good detection performance but also high interpretability.","PeriodicalId":417395,"journal":{"name":"Proceedings of the 16th International Conference on Availability, Reliability and Security","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125613510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
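The translation from a learned Bayesian network to IF-THEN rules can be sketched as follows: each high-confidence row of a node's conditional probability table becomes one explicit rule. This is a simplified illustration with hypothetical MQTT feature names; the paper's actual procedure is more involved.

```python
def cpt_to_rules(node, cpt, threshold=0.9):
    """Illustrative translation from a conditional probability table
    (parent assignments -> class distribution) to IF-THEN rules.
    Only rows whose most likely label exceeds `threshold` become rules."""
    rules = []
    for parents, dist in cpt.items():
        label, prob = max(dist.items(), key=lambda kv: kv[1])
        if prob >= threshold:
            cond = " AND ".join(f"{k}={v}" for k, v in parents)
            rules.append(f"IF {cond} THEN {node}={label} (p={prob:.2f})")
    return rules

# Hypothetical CPT for an 'alert' node with parents (msg_type, rate):
cpt = {
    (("msg_type", "CONNECT"), ("rate", "high")): {"attack": 0.97, "benign": 0.03},
    (("msg_type", "PUBLISH"), ("rate", "low")):  {"attack": 0.05, "benign": 0.95},
}
for rule in cpt_to_rules("alert", cpt):
    print(rule)
```

Each emitted rule is directly readable by an operator, which is the "implicit to explicit knowledge" transformation the abstract describes.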
Bahruz Jabiyev, Sinan Pehlivanoglu, Kaan Onarlioglu, E. Kirda
Internet-based media and social networks enable quick access to information; however, that has also made it easy to conduct disinformation campaigns. Fake news poses a serious threat to the functioning and safety of our society, as demonstrated by nation-state-sponsored campaigns to sway the 2016 US presidential election, and more recently COVID-19 pandemic hoaxes that promote false cures, putting lives at risk. FADE is a novel approach and service that helps Internet users detect fake news. FADE discovers multiple news sources covering the same story, analyzes their reputation, and checks the trustworthiness of cited sources. Our approach does not depend on any specific social media or news source, does not rely on costly textual content analysis, and does not require lengthy offline processing. Our experiments demonstrate above 85% detection accuracy with a practical implementation. FADE offers a path to empowering the Internet community with effective tools to identify fake news.
{"title":"FADE: Detecting Fake News Articles on the Web","authors":"Bahruz Jabiyev, Sinan Pehlivanoglu, Kaan Onarlioglu, E. Kirda","doi":"10.1145/3465481.3465751","DOIUrl":"https://doi.org/10.1145/3465481.3465751","url":null,"abstract":"Internet-based media and social networks enable quick access to information; however, that has also made it easy to conduct disinformation campaigns. Fake news poses a serious threat to the functioning and safety of our society, as demonstrated by nation-state-sponsored campaigns to sway the 2016 US presidential election, and more recently COVID-19 pandemic hoaxes that promote false cures, putting lives at risk. FADE is a novel approach and service that helps Internet users detect fake news. FADE discovers multiple news sources covering the same story, analyzes their reputation, and checks the trustworthiness of cited sources. Our approach does not depend on any specific social media or news source, does not rely on costly textual content analysis, and does not require lengthy offline processing. Our experiments demonstrate above 85% detection accuracy with a practical implementation. FADE offers a path to empowering the Internet community with effective tools to identify fake news.","PeriodicalId":417395,"journal":{"name":"Proceedings of the 16th International Conference on Availability, Reliability and Security","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131404256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Gasiba, Iosif Andrei-Cristian, U. Lechner, M. Pinto-Albuquerque
Improper deployment of software can have serious consequences, ranging from simple downtime to permanent data loss and data breaches. Infrastructure as Code tools streamline delivery by promising consistency and speed and by abstracting away the underlying actions. However, this simplicity may distract from architectural or configuration faults, potentially compromising the secure development lifecycle. One way to address this issue is awareness training. Sifu is a platform that provides security education through serious games, developed in the industry, for the industry. The presented work extends the Sifu platform with challenges addressing Terraform-aided cloud deployment on Amazon Web Services. This paper proposes an evaluation pipeline behind the challenges and provides details of the vulnerability detection and feedback mechanisms, as well as a novel technique for detecting undesired differences between a given architecture and a target result. Furthermore, this paper quantifies the challenges' perceived usefulness and impact by evaluating them with a total of twelve participants. Our preliminary results show that the challenges are suitable for education and industry use, with potential application in internal training. A key finding is that, although the participants understand the importance of secure coding, their answers indicate that universities leave them unprepared in this area. Finally, our results are compared with related industry works to extract and provide good practices and advice for practitioners.
{"title":"Raising Security Awareness of Cloud Deployments using Infrastructure as Code through CyberSecurity Challenges","authors":"T. Gasiba, Iosif Andrei-Cristian, U. Lechner, M. Pinto-Albuquerque","doi":"10.1145/3465481.3470030","DOIUrl":"https://doi.org/10.1145/3465481.3470030","url":null,"abstract":"Improper deployment of software can have serious consequences, ranging from simple downtime to permanent data loss and data breaches. Infrastructure as Code tools serve to streamline delivery by promising consistency and speed, by abstracting away from the underlying actions. However, this simplicity may distract from architectural or configuration faults, potentially compromising the secure development lifecycle. One way to address this issue involves awareness training. Sifu is a platform that provides education on security through serious games, developed in the industry, for the industry. The presented work extends the Sifu platform with challenges addressing Terraform-aided cloud deployment on Amazon Web Services. This paper proposes an evaluation pipeline behind the challenges, and provides details of the vulnerability detection and feedback mechanisms, as well as a novel technique for detecting undesired differences between a given architecture and a target result. Furthermore, this paper quantifies the challenges’ perceived usefulness and impact, by evaluating the challenges among a total of twelve participants. Our preliminary results show that the challenges are suitable for education and the industry, with potential usage in internal training. A key finding is that, although the participants understand the importance of secure coding, their answers indicate that universities leave them unprepared in this area. 
Finally, our results are compared with related industry works, to extract and provide good practices and advice for practitioners.","PeriodicalId":417395,"journal":{"name":"Proceedings of the 16th International Conference on Availability, Reliability and Security","volume":"147 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116617493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
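The "detecting undesired differences between a given architecture and a target result" can be pictured as a recursive comparison of two configuration trees. The sketch below is a generic, hypothetical illustration; the paper's technique operates on actual Terraform/AWS state, not plain dictionaries.

```python
def config_diff(target, actual, path=""):
    """Recursively compare a target architecture description against an
    actual deployment and report deviations (missing, unexpected, or
    mismatched settings). Field names below are hypothetical examples."""
    diffs = []
    for key in sorted(set(target) | set(actual)):
        p = f"{path}.{key}" if path else key
        if key not in actual:
            diffs.append(f"missing: {p}")
        elif key not in target:
            diffs.append(f"unexpected: {p}")
        elif isinstance(target[key], dict) and isinstance(actual[key], dict):
            diffs.extend(config_diff(target[key], actual[key], p))
        elif target[key] != actual[key]:
            diffs.append(f"mismatch: {p} = {actual[key]!r}, expected {target[key]!r}")
    return diffs

target = {"s3": {"encryption": "aws:kms", "public": False}}
actual = {"s3": {"encryption": "AES256", "public": False}, "ec2": {}}
for d in config_diff(target, actual):
    print(d)
```

Feedback derived from such a diff is what lets a serious-game platform tell the participant which part of their deployment deviates from the secure target.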
Jakub Drmola, František Kasl, Pavel Loutocký, M. Mareš, Tomás Pitner, Jakub Vostoupal
This paper focuses on challenges connected with the persisting imbalance between the supply of and demand for the cybersecurity expert workforce. We analyse the current situation in the Czech Republic, finding that although the shortage of experts affects both the private and public sectors, the public sector is constrained by a massive financial undervaluation of cybersecurity experts, along with other legal and systemic deficiencies and obstacles, and therefore has a much lower chance of attracting talent in this field. The inability of public institutions to find a relevant workforce causes, among other things, problems with formulating public procurements, assessing offers, and communicating their requirements to the supply side of the labour market. One solution to this crisis might be the systematic support of education programmes. However, the cybersecurity field is so dynamic and fragmented that aligning education programmes with market needs presents a significant challenge. Here, a unified qualifications framework could serve as a basis for finding common ground. We focus on the benefits of creating such a framework, especially those that a unified taxonomy can bring to the cybersecurity labour market by bolstering cybersecurity higher education. Finally, we summarise the key features of the cyber-qualifications framework being developed under our current project and highlight its potential use for labour market optimisation and the efficient development of new cybersecurity study programmes and further education.
{"title":"The Matter of Cybersecurity Expert Workforce Scarcity in the Czech Republic and Its Alleviation Through the Proposed Qualifications Framework","authors":"Jakub Drmola, František Kasl, Pavel Loutocký, M. Mareš, Tomás Pitner, Jakub Vostoupal","doi":"10.1145/3465481.3469186","DOIUrl":"https://doi.org/10.1145/3465481.3469186","url":null,"abstract":"This paper is focused on challenges connected with the persisting imbalance between the supply and demand of the cybersecurity expert workforce. We analyse the current situation in the Czech Republic, finding that although the shortage of experts affects the private and public sectors both, the public sector is constrained by a massive financial undervaluation of cybersecurity experts and other legal and systematic deficiencies and obstacles and therefore has a much lower chance of attracting talents in this field. The inability of public institutions to find relevant workforce causes, among other things, problems with formulating public procurements, assessing offers and communicating their requirements to the supply-side of the labour market. One of the solutions to this crisis might be in the systematic support of education programmes. However, the cybersecurity field is so dynamic and fragmented that the alignment of the education programme with the market needs presents a significant challenge. There, a unified qualifications framework could serve as a basis for finding common ground. We focus on the benefits of creating such a framework, especially the benefits that a united taxonomy can bring to the cybersecurity labour market by bolstering cybersecurity higher education. 
Finally, we summarise the key features of the cyber-qualifications framework that is being developed under our current project and highlight its potential use for labour market optimisation and efficient development of new cybersecurity study programs and further education.","PeriodicalId":417395,"journal":{"name":"Proceedings of the 16th International Conference on Availability, Reliability and Security","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129730379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we describe a system for the continuous collection of data for the needs of network security management. When a cybersecurity incident occurs in the network, contextual information on the involved assets facilitates estimating the severity and impact of the incident and selecting an appropriate response. We propose a system based on a combination of active and passive network measurements and the correlation of the data with third-party systems. The system enumerates devices and services in the network and their vulnerabilities via fingerprinting of operating systems and applications. Further, the system pairs hosts in the network with contact information for the responsible administrators and highlights critical infrastructure and its dependencies. The system concentrates all the information required for common incident handling procedures and aims to speed up incident response, reduce the time spent on manual investigation, and prevent errors caused by negligence or lack of information.
{"title":"System for Continuous Collection of Contextual Information for Network Security Management and Incident Handling","authors":"M. Husák, Martin Laštovička, Daniel Tovarnák","doi":"10.1145/3465481.3470037","DOIUrl":"https://doi.org/10.1145/3465481.3470037","url":null,"abstract":"In this paper, we describe a system for the continuous collection of data for the needs of network security management. When a cybersecurity incident occurs in the network, the contextual information on the involved assets facilitates estimating the severity and impact of the incident and selecting an appropriate incident response. We propose a system based on the combination of active and passive network measurements and the correlation of the data with third-party systems. The system enumerates devices and services in the network and their vulnerabilities via fingerprinting of operating systems and applications. Further, the system pairs the hosts in the network with contacts on responsible administrators and highlights critical infrastructure and its dependencies. The system concentrates all the information required for common incident handling procedures and aims to speed up incident response, reduce the time spent on the manual investigation, and prevent errors caused by negligence or lack of information.","PeriodicalId":417395,"journal":{"name":"Proceedings of the 16th International Conference on Availability, Reliability and Security","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130988375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cybersecurity is increasingly a concern for small and medium-sized enterprises (SMEs), and many awareness training programs and tools exist for them. The literature mainly treats SMEs as a unitary type of company and provides one-size-fits-all recommendations and solutions. However, SMEs are not homogeneous: they are diverse, with different vulnerabilities, cybersecurity needs, and competencies. Few studies have considered such differences in standards and certificates for security tool adoption and cybersecurity tailoring for these SMEs. This study proposes a classification framework with an outline of cybersecurity improvement needs for each class. The framework defines five SME types based on their characteristics and specific security needs: the cybersecurity-abandoned SME, the unskilled SME, the expert-connected SME, the capable SME, and the cybersecurity-provider SME. In addition to describing the five classes, the study explains the framework's usage on a sample of SMEs. The framework proposes solutions for each class to approach cybersecurity awareness and competence in a manner more consistent with SME needs.
{"title":"Classifying SMEs for Approaching Cybersecurity Competence and Awareness","authors":"Alireza Shojaifar, Heini-Marja Järvinen","doi":"10.1145/3465481.3469200","DOIUrl":"https://doi.org/10.1145/3465481.3469200","url":null,"abstract":"Cybersecurity is increasingly a concern for small and medium-sized enterprises (SMEs), and there exist many awareness training programs and tools for them. The literature mainly studies SMEs as a unitary type of company and provides one-size-fits-all recommendations and solutions. However, SMEs are not homogeneous. They are diverse with different vulnerabilities, cybersecurity needs, and competencies. Few studies considered such differences in standards and certificates for security tools adoption and cybersecurity tailoring for these SMEs. This study proposes a classification framework with an outline of cybersecurity improvement needs for each class. The framework suggests five SME types based on their characteristics and specific security needs: cybersecurity abandoned SME, unskilled SME, expert-connected SME, capable SME, and cybersecurity provider SME. In addition to describing the five classes, the study explains the framework's usage in sampled SMEs. The framework proposes solutions for each class to approach cybersecurity awareness and competence more consistent with SME needs.","PeriodicalId":417395,"journal":{"name":"Proceedings of the 16th International Conference on Availability, Reliability and Security","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134327497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose the concept of "entropy of a flow" to augment flow statistical features for identifying malicious behaviours in DNS tunnels, specifically DNS over HTTPS traffic. To achieve this, we explore the use of three flow exporters, namely Argus, DoHlyzer, and Tranalyzer2, to extract flow statistical features. We then augment these features using different ways of calculating the entropy of a flow. To this end, we investigate three entropy calculation approaches: entropy over all packets of a flow, entropy over the first 96 bytes of a flow, and entropy over the first n packets of a flow. We evaluate five machine learning classifiers, namely Decision Tree, Random Forest, Logistic Regression, Support Vector Machine, and Naive Bayes, using these features to identify malicious behaviours in different publicly available datasets. The evaluations show that the Decision Tree classifier achieves an F-measure of 99.7% when flow statistical features are augmented with the entropy of a flow calculated over the first four packets.
{"title":"Network Flow Entropy for Identifying Malicious Behaviours in DNS Tunnels","authors":"Yulduz Khodjaeva, Nur Zincir-Heywood","doi":"10.1145/3465481.3470089","DOIUrl":"https://doi.org/10.1145/3465481.3470089","url":null,"abstract":"In this paper, we propose the concept of ”entropy of a flow” to augment flow statistical features for identifying malicious behaviours in DNS tunnels, specifically DNS over HTTPS traffic. In order to achieve this, we explore the use of three flow exporters, namely Argus, DoHlyzer and Tranalyzer2 to extract flow statistical features. We then augment these features using different ways of calculating the entropy of a flow. To this end, we investigate three entropy calculation approaches: Entropy over all packets of a flow, Entropy over the first 96 bytes of a flow, and Entropy over the first n-packets of a flow. We evaluate five machine learning classifiers, namely Decision Tree, Random Forest, Logistic Regression, Support Vector Machine and Naive Bayes using these features in order to identify malicious behaviours in different publicly available datasets. The evaluations show that the Decision Tree classifier achieves an F-measure of 99.7% when flow statistical features are augmented with entropy of a flow calculated over the first 4 packets.","PeriodicalId":417395,"journal":{"name":"Proceedings of the 16th International Conference on Availability, Reliability and Security","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132086664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The advent of crucial areas such as smart healthcare and autonomous transportation brings new requirements for the computing infrastructure, including higher demand for real-time processing capability with minimized latency and maximized availability. The traditional cloud infrastructure has several deficiencies in meeting such requirements due to its centralization. Edge clouds seem to be the solution: resources are placed much closer to the edge devices, providing local computing power and high Quality of Service (QoS). However, there are still security issues that endanger the functionality of edge clouds. One recent type of such issue is the Very Short Intermittent Distributed Denial of Service (VSI-DDoS) attack, a new category of low-rate DDoS attacks that targets both small- and large-scale web services. This attack intermittently generates very short bursts of HTTP requests towards target services to cause unexpected degradation of QoS at edge clouds. In this paper, we formulate the problem with a sequence modeling approach to address short intermittent intervals of DDoS attacks during the rendering of services on edge clouds, using Long Short-Term Memory (LSTM) with local attention. The proposed approach improves detection performance by learning from the most important discernible patterns of the sequence data rather than considering the complete historical information, and hence achieves a more sophisticated model approximation. Experimental results confirm the feasibility of the proposed approach for VSI-DDoS detection on edge clouds; it achieves 2% higher accuracy than baseline methods.
{"title":"Detection of VSI-DDoS Attacks on the Edge: A Sequential Modeling Approach","authors":"Javad Forough, M. Bhuyan, E. Elmroth","doi":"10.1145/3465481.3465757","DOIUrl":"https://doi.org/10.1145/3465481.3465757","url":null,"abstract":"The advent of crucial areas such as smart healthcare and autonomous transportation, bring in new requirements on the computing infrastructure, including higher demand for real-time processing capability with minimized latency and maximized availability. The traditional cloud infrastructure has several deficiencies when meeting such requirements due to its centralization. Edge clouds seems to be the solution for the aforementioned requirements, in which the resources are much closer to the edge devices and provides local computing power and high Quality of Service (QoS). However, there are still security issues that endanger the functionality of edge clouds. One of the recent types of such issues is Very Short Intermittent Distributed Denial of Service (VSI-DDoS) which is a new category of low-rate DDoS attacks that targets both small and large-scale web services. This attack generates very short bursts of HTTP request intermittently towards target services to encounter unexpected degradation of QoS at edge clouds. In this paper, we formulate the problem with a sequence modeling approach to address short intermittent intervals of DDoS attacks during the rendering of services on edge clouds using Long Short-Term Memory (LSTM) with local attention. The proposed approach ameliorates the detection performance by learning from the most important discernible patterns of the sequence data rather than considering complete historical information and hence achieves a more sophisticated model approximation. 
Experimental results confirm the feasibility of the proposed approach for VSI-DDoS detection on edge clouds and it achieves 2% more accuracy when compared with baseline methods.","PeriodicalId":417395,"journal":{"name":"Proceedings of the 16th International Conference on Availability, Reliability and Security","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132122283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
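To make the attack pattern concrete: VSI-DDoS hides very short request bursts inside otherwise low-rate traffic. A hypothetical pre-processing step (window size and threshold are illustrative, not the paper's values) turns a timestamp stream into the kind of binary burst sequence a sequence model such as an LSTM could consume:

```python
def burst_windows(timestamps, window=0.05, threshold=20):
    """Partition a sorted stream of request timestamps (seconds) into
    consecutive windows of `window` length and flag each window 1 if its
    request count reaches `threshold` (a candidate VSI burst), else 0."""
    if not timestamps:
        return []
    flags = []
    start, count = min(timestamps), 0
    for t in sorted(timestamps):
        if t - start < window:
            count += 1
        else:
            flags.append(1 if count >= threshold else 0)
            start, count = t, 1
    flags.append(1 if count >= threshold else 0)
    return flags

# 30 requests packed into ~9 ms (a burst), then 3 requests spread out:
times = [i * 0.0003 for i in range(30)] + [0.3, 0.6, 0.9]
print(burst_windows(times))
# -> [1, 0, 0, 0]
```

The paper's contribution lies in the model applied to such sequences (LSTM with local attention), not in this thresholding, which merely illustrates why the attack is "very short" and "intermittent".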