Title: Your Smart Contracts Are Not Secure: Investigating Arbitrageurs and Oracle Manipulators in Ethereum
Authors: Kevin Tjiam, Rui Wang, H. Chen, K. Liang
DOI: https://doi.org/10.1145/3474374.3486916
Published: Proceedings of the 3rd Workshop on Cyber-Security Arms Race, 2021-11-15
Abstract: Smart contracts on Ethereum enable billions of dollars to be transacted in a decentralized, transparent and trustless environment. However, adversaries lie in wait in the Dark Forest, ready to exploit any and all smart contract vulnerabilities to extract profits from unsuspecting victims in this new financial system. As the blockchain space moves at a breakneck pace, exploits of smart contract vulnerabilities evolve rapidly, and existing research quickly becomes obsolete. It is imperative that smart contract developers stay up to date on the currently most damaging vulnerabilities and countermeasures, both to ensure the security of users' funds and to collectively ensure the future of Ethereum as a financial settlement layer. This work focuses on two smart contract vulnerabilities: transaction-ordering dependency and oracle manipulation. Combined, these two vulnerabilities have been exploited to extract hundreds of millions of dollars from smart contracts in the past year (2020-2021). For each of them, this paper presents: (1) a literature survey of recent (as of 2021) formal and informal sources; (2) a reproducible experiment, as code, demonstrating the vulnerability and, where applicable, countermeasures to mitigate it; and (3) analysis and discussion of proposed countermeasures. To conclude, the strengths, weaknesses and trade-offs of these countermeasures are summarised, suggesting directions for future research.
Title: Regulation TL;DR: Adversarial Text Summarization of Federal Register Articles
Authors: Filipo Sharevski, Peter Jachim, Emma Pieroni
DOI: https://doi.org/10.1145/3474374.3486917
Published: Proceedings of the 3rd Workshop on Cyber-Security Arms Race, 2021-11-15
Abstract: Short on time and with reduced attention spans, people disengage from reading long texts with a "too long, didn't read" justification. While a useful heuristic for managing reading resources, we believe that "tl;dr" is prone to adversarial manipulation. In a seemingly noble effort to produce bite-sized segments of information fitting social media posts, an adversary could reduce a long text to a short but polarizing summary. In this paper we demonstrate adversarial text summarization that reduces long Federal Register texts to summaries with obvious liberal or conservative leanings. Contextualizing summaries to a political agenda is hardly new, but a barrage of polarizing "tl;dr" social media posts could derail the public debate about important public policy matters with an unprecedented lack of effort. We show and elaborate on such example "tl;dr" posts to showcase a new and relatively unexplored avenue for information operations on social media.
Title: The More, the Better: A Study on Collaborative Machine Learning for DGA Detection
Authors: Arthur Drichel, Benedikt Holmes, Justus von Brandt, U. Meyer
DOI: https://doi.org/10.1145/3474374.3486915
Published: Proceedings of the 3rd Workshop on Cyber-Security Arms Race, 2021-09-24
Abstract: Domain generation algorithms (DGAs) prevent the connection between a botnet and its master from being blocked by generating a large number of domain names. Promising single-data-source approaches have been proposed for separating benign from DGA-generated domains. Collaborative machine learning (ML) can be used to enhance a classifier's detection rate, reduce its false positive rate (FPR), and improve its generalization capability across different networks. In this paper, we complement the research area of DGA detection by conducting a comprehensive collaborative learning study comprising a total of 13,440 evaluation runs. In two real-world scenarios we evaluate a total of eleven different variations of collaborative learning using three different state-of-the-art classifiers. We show that collaborative ML can reduce the FPR by up to 51.7%. However, while collaborative ML is beneficial for DGA detection, not all approaches and classifier types benefit equally. We round off our comprehensive study with a thorough discussion of the privacy threats posed by the different collaborative ML approaches.
Title: Multi-Stage Attack Detection via Kill Chain State Machines
Authors: Florian Wilkens, Felix Ortmann, Steffen Haas, Matthias Vallentin, Mathias Fischer
DOI: https://doi.org/10.1145/3474374.3486918
Published: Proceedings of the 3rd Workshop on Cyber-Security Arms Race, 2021-03-26
Abstract: Today, human security analysts need to sift through and triage large volumes of alerts during investigations. The resulting alert fatigue leads to failures in detecting complex attacks, such as advanced persistent threats (APTs), because they manifest over long time frames and attackers tread carefully to evade detection mechanisms. In this paper, we contribute a new method to synthesize scenario graphs from state machines. We use the network direction to derive potential attack stages from single alerts and meta-alerts and model the resulting attack scenarios in a kill chain state machine (KCSM). Our algorithm yields a graphical summary of the attack, called an APT scenario graph, in which nodes represent the hosts involved and edges represent infection activity. We evaluate the feasibility of our approach by injecting an APT campaign into a network traffic data set containing both benign and malicious activity. Our approach then generates a set of APT scenario graphs that contain our injected campaign while reducing the overall alert set by up to three orders of magnitude. This reduction makes it feasible for human analysts to effectively triage potential incidents.