A systematic literature review on advanced persistent threat behaviors and its detection strategy
Nur Ilzam Che Mat, Norziana Jamil, Yunus Yusoff, Miss Laiha Mat Kiah
Advanced persistent threats (APTs) pose significant security-related challenges to organizations owing to their sophisticated and persistent nature, and are inimical to the confidentiality, integrity, and availability of organizational information and services. This study systematically reviews the literature on methods of detecting APTs by comprehensively surveying research in the area, identifying gaps in the relevant studies, and proposing directions for future work. We provide a detailed analysis of current methods of APT detection that are based on multi-stage attack-related behaviors. We adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and conducted an extensive search of a variety of databases. A total of 45 studies, encompassing sources from both academia and industry, were considered in the final analysis. The findings reveal that APTs are able to propagate laterally and achieve their objectives by identifying and exploiting existing systemic vulnerabilities. Having identified shortcomings in prevalent methods of APT detection, we propose integrating the multi-stage attack-related behaviors of APTs with an assessment of the vulnerabilities present in the network and their susceptibility to exploitation, in order to improve detection accuracy. Such an approach uses vulnerability scores and probability metrics to determine the probable sequence of targeted nodes and to visualize the path of APT attacks. This advanced detection technique enables the early identification of the most likely targets, which, in turn, allows proactive measures to be taken to prevent the network from being further compromised. This research contributes to the literature by highlighting the importance of integrating multi-stage attack-related behaviors, vulnerability assessment, and visualization techniques for APT detection to enhance the overall security of organizations.
{"title":"A systematic literature review on advanced persistent threat behaviors and its detection strategy","authors":"Nur Ilzam Che Mat, Norziana Jamil, Yunus Yusoff, Miss Laiha Mat Kiah","doi":"10.1093/cybsec/tyad023","DOIUrl":"https://doi.org/10.1093/cybsec/tyad023","url":null,"abstract":"Advanced persistent threats (APTs) pose significant security-related challenges to organizations owing to their sophisticated and persistent nature, and are inimical to the confidentiality, integrity, and availability of organizational information and services. This study systematically reviews the literature on methods of detecting APTs by comprehensively surveying research in the area, identifying gaps in the relevant studies, and proposing directions for future work. The authors provide a detailed analysis of current methods of APT detection that are based on multi-stage attack-related behaviors. We adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and conducted an extensive search of a variety of databases. A total of 45 studies, encompassing sources from both academia and the industry, were considered in the final analysis. The findings reveal that APTs have the capability to laterally propagate and achieve their objectives by identifying and exploiting existing systemic vulnerabilities. By identifying shortcomings in prevalent methods of APT detection, we propose integrating the multi-stage attack-related behaviors of APTs with the assessment of the presence of vulnerabilities in the network and their susceptibility to being exploited in order to improve the accuracy of their identification. Such an improved approach uses vulnerability scores and probability metrics to determine the probable sequence of targeted nodes, and visualizes the path of APT attacks. This technique of advanced detection enables the early identification of the most likely targets, which, in turn, allows for the implementation of proactive measures to prevent the network from being further compromised. The research here contributes to the literature by highlighting the importance of integrating multi-stage attack-related behaviors, vulnerability assessment, and techniques of visualization for APT detection to enhance the overall security of organizations.","PeriodicalId":44310,"journal":{"name":"Journal of Cybersecurity","volume":"105 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139373999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A close look at a systematic method for analyzing sets of security advice
David Barrera, Christopher Bellman, Paul C van Oorschot
We carry out a detailed analysis of the security advice coding method (SAcoding) of Barrera et al., which is designed to analyze security advice in the sense of measuring actionability and categorizing advice items as practices, policies, principles, or outcomes. The main part of our analysis explores the extent to which a second coder’s assignment of codes to advice items agrees with that of a first, for a dataset of 1013 security advice items nominally addressing Internet of Things devices. More broadly, we seek a deeper understanding of the soundness and utility of the SAcoding method, and the degree to which it meets the design goal of reducing subjectivity in assigning codes to security advice items. Our analysis results in suggestions for modifications to the coding tree methodology, and some recommendations. We believe the coding tree approach may be of interest for analysis of qualitative data beyond security advice datasets alone.
{"title":"A close look at a systematic method for analyzing sets of security advice","authors":"David Barrera, Christopher Bellman, Paul C van Oorschot","doi":"10.1093/cybsec/tyad013","DOIUrl":"https://doi.org/10.1093/cybsec/tyad013","url":null,"abstract":"We carry out a detailed analysis of the security advice coding method (SAcoding) of Barrera et al., which is designed to analyze security advice in the sense of measuring actionability and categorizing advice items as practices, policies, principles, or outcomes. The main part of our analysis explores the extent to which a second coder’s assignment of codes to advice items agrees with that of a first, for a dataset of 1013 security advice items nominally addressing Internet of Things devices. More broadly, we seek a deeper understanding of the soundness and utility of the SAcoding method, and the degree to which it meets the design goal of reducing subjectivity in assigning codes to security advice items. Our analysis results in suggestions for modifications to the coding tree methodology, and some recommendations. We believe the coding tree approach may be of interest for analysis of qualitative data beyond security advice datasets alone.","PeriodicalId":44310,"journal":{"name":"Journal of Cybersecurity","volume":"973 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138505420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Juror interpretations of metadata and content information: implications for the going dark debate
Anne E Boustead, Matthew B Kugler
The rise of consumer encryption has led to a fierce debate over whether the loss of potential evidence due to encryption will be offset by the increase in evidence available from electronic metadata. One major question raised by this debate is how jurors will interpret and value metadata as opposed to content information. Though there are plausible arguments in favor of the persuasive power of each type of evidence, to date no empirical study has examined how ordinary people, potential jurors, view each of these sorts of evidence. We address this issue through a series of survey experiments that present respondents with hypothetical criminal trials, randomly assigning them to descriptions featuring either metadata or content information. These studies show that the relative power of content and metadata information is highly contextual. Content information and metadata can be equally useful when conveying logically equivalent information. However, content information may be more persuasive where the defendant’s state of mind is critical, while metadata can more convincingly establish a pattern of behavior. This suggests that the rise of encryption will have a heterogeneous effect on criminal cases, with the direction of the effect depending on the facts that the prosecution must prove.
Journal of Cybersecurity, 21 February 2023. DOI: https://doi.org/10.1093/cybsec/tyad002
Learning about simulated adversaries from human defenders using interactive cyber-defense games
Baptiste Prebot, Yinuo Du, Cleotilde Gonzalez
Given the increase in cybercrime, cybersecurity analysts (i.e. defenders) are in high demand. Defenders must monitor an organization’s network to evaluate threats and potential breaches into the network. Adversary simulation is commonly used to test defenders’ performance against known threats to organizations. However, it is unclear how effective this training process is in preparing defenders for this highly demanding job. In this paper, we demonstrate how to use adversarial algorithms to investigate defenders’ learning using interactive cyber-defense games. We created an Interactive Defense Game (IDG) that represents a cyber-defense scenario, which requires monitoring of incoming network alerts and allows a defender to analyze, remove, and restore services based on the events observed in a network. The participants in our study faced one of two types of simulated adversaries: a Beeline adversary, a fast, targeted, and informed attacker; and a Meander adversary, a slow attacker that wanders the network until it finds the right target to exploit. Our results suggest that although human defenders initially had more difficulty stopping the Beeline adversary, they were able to learn to stop it by taking advantage of its attack strategy. Participants who played against the Beeline adversary learned to anticipate the adversary’s actions and took more proactive actions, while decreasing their reactive actions. These findings have implications for understanding how to help cybersecurity analysts speed up their training.
Journal of Cybersecurity, 2023. DOI: https://doi.org/10.1093/cybsec/tyad022
From Russia with fear: fear appeals and the patterns of cyber-enabled influence operations
Ugochukwu Etudo, Christopher Whyte, Victoria Yoon, Niam Yaraghi
Much research on influence operations (IO) and cyber-enabled influence operations (CEIO) rests on the assumption that state-backed digital interference attempts to generically produce sociopolitical division favorable to the perpetrator’s own interests. And yet, the empirical record of malicious IO during the 2010s shows that social media manipulation and messaging take a number of forms. In this article, we survey arguments regarding the targeting tactics and techniques associated with digital-age IO and suggest that existing accounts tend to ignore the strategic context of foreign interference. We propose that state-sponsored IO are not unlike conventional political messaging campaigns in that they are an evolving flow of information rooted in several key objectives and assumptions. However, the strategic position of foreign actors as an outside force constrains opportunities for effective manipulation and imposes operational constraints that shape practice. These outside actors, generally unable to create sensation from nothing without being unveiled, rely on domestic events tied to a broad macrosocial division (e.g. an act of race violence or protest activity) to create the conditions wherein social media manipulation can be leveraged to strategic gain. Once an event occurs, belligerents tailor the steps they take to embed themselves in relevant social networks with the goal of turning that influence toward some action. We illustrate and validate this framework using the content of the Russian Federation’s coordinated trolling campaign against the USA between 2015 and 2016. We deploy an empirical testing approach centered on fear appeals as a likely method for engaging foreign populations relative to some domestic triggering event and find support for our framework. Specifically, we show that while strong associations exist between Russian ad emissions on Facebook and societal unrest in the period, those relationships are not statistically causal. We find a temporal ordering of social media content that is highly suggestive of a fear-appeals strategy responsive to macrosocial dividing events. Of particular interest, we also see that malware is targeted at social media populations at later stages of the fear-appeal threat lifecycle, implying lessons for those specifically interested in the relationship between CEIO and disinformation tactics.
{"title":"From Russia with fear: fear appeals and the patterns of cyber-enabled influence operations","authors":"Ugochukwu Etudo, Christopher Whyte, Victoria Yoon, Niam Yaraghi","doi":"10.1093/cybsec/tyad016","DOIUrl":"https://doi.org/10.1093/cybsec/tyad016","url":null,"abstract":"Abstract Much research on influence operations (IO) and cyber-enabled influence operations (CEIO) rests on the assumption that state-backed digital interference attempts to generically produce sociopolitical division favorable to the perpetrator’s own interests. And yet, the empirical record of malicious IO during the 2010s show that social media manipulation and messaging takes a number of forms. In this article, we survey arguments regarding the targeting tactics and techniques associated with digital age IO and suggest that existing accounts tend to ignore the strategic context of foreign interference. We propose that state-sponsored IO are not unlike conventional political messaging campaigns in that they are an evolving flow of information rooted in several key objectives and assumptions. However, the strategic position of foreign actors as an outside force constrains opportunities for effective manipulation and forces certain operational constraints that shape practice. These outside actors, generally unable to create sensation from nothing without being unveiled, rely on domestic events tied to a broad macrosocial division (e.g. an act of race violence or protest activity) to create the conditions wherein social media manipulation can be leveraged to strategic gain. Once an event occurs, belligerents tailor steps being taken to embed themselves in relevant social networks with the goal of turning that influence toward some action. We illustrate and validate this framework using the content of the Russian Federation’s coordinated trolling campaign against the USA between 2015 and 2016. We deploy an empirical testing approach centered on fear appeals as a likely method for engaging foreign populations relative to some domestic triggering event and find support of our framework. Specifically, we show that while strong associations exist between Russian ad emissions on Facebook and societal unrest in the period, those relationships are not statistically causal. We find a temporal ordering of social media content that is highly suggestive of a fear appeals strategy responsive to macrosocial dividing events. Of unique interest, we also see that malware is targeted to social media populations at later stages of the fear appeal threat lifecycle, implying lessons for those specifically interested in the relationship between CEIO and disinformation tactics.","PeriodicalId":44310,"journal":{"name":"Journal of Cybersecurity","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136298190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cybersecurity in UK Universities: mapping (or managing) threat intelligence sharing within the higher education sector
Anna Piazza, Srinidhi Vasudevan, Madeline Carr
Higher education has recently been identified as a sector of concern by the UK National Cyber Security Centre (NCSC). In 2021, the NCSC reported that universities and higher education institutions (HEI) had been targeted by cyber-criminals at an exponentially increasing rate. Existing challenges were amplified or highlighted over the course of the global pandemic, when universities struggled to continue to function through hybrid and remote teaching provision that relied heavily on their digital estate and services. Despite the value of the sector and the vulnerabilities within it, higher education has received relatively little attention from the cybersecurity research community. Over two years, we carried out numerous interventions and engagements with the UK higher education sector. Through interviews with cybersecurity practitioners working in the sector, as well as roundtables and questionnaires, we conducted a qualitative and quantitative analysis of threat intelligence sharing, which we use as a proxy for measuring and analysing collaboration. In a unique approach to studying collaboration in cybersecurity, we utilized social network analysis. This paper presents the study and our findings about the state of cybersecurity in UK universities. It also presents some recommendations for future steps that we argue will be necessary to equip the higher education sector to continue to support UK national interests going forward. Key findings include the positive inclination of those working in university cybersecurity to collaborate, as well as the factors that impede that collaboration. These include management and insurance constraints, concerns about individual and institutional reputational damage, a lack of trusted relationships, and the lack of effective mechanisms or channels for sectoral collaboration. In terms of the network itself, we found that it is highly fragmented, with only a small fraction of the possible connections active; that none of the organizations we might expect to facilitate collaboration in the network are playing a significant role; and that some universities are currently acting as key information bridges. For these reasons, any changes that might be led by sectoral bodies such as Jisc or UCISA, or government bodies such as the NCSC, would need to go through these information brokers.
Journal of Cybersecurity, 2023. DOI: https://doi.org/10.1093/cybsec/tyad019
Post-quantum cryptographic assemblages and the governance of the quantum threat
Kristen Csenkey, Nina Bindel
Threats to security on the Internet often have a wide reach and can have serious impacts within society. Large quantum computers will be able to break the cryptographic algorithms used to ensure security today, a prospect known as the quantum threat. Quantum threats are multi-faceted and very complex cybersecurity issues. We use assemblage theory to explore the complexities associated with these threats, including how they are understood within policy and strategy. It is in this way that we explore how the governance of the quantum threat is made visible. Generally, the private and academic sectors have been the primary drivers in this field, but other actors (especially states) have begun to grapple with the threat, to understand its relation to defence challenges, and to identify pathways to cooperation in order to prepare against it. This may pose challenges for traditional avenues of defence cooperation as states attempt to understand and manage the associated technologies and perceived threats. We examine how traditionally cooperating allies attempt to govern the quantum threat by focusing on Australia, Canada, the European Union, New Zealand, the UK, and the USA. We explore the linkages within post-quantum cryptographic assemblages and identify several governmental interventions as attempts to understand and manage the threat and associated technologies. In examining over 40 policy and strategy-related documents from traditionally defence-cooperating allies, we identify six main linkages: Infrastructure, Standardization, Education, Partnerships, Economy, and Defence. These linkages highlight governmental interventions to govern through standardization and regulation as a way to define the contours of the quantum threat.
Journal of Cybersecurity, 2023. DOI: https://doi.org/10.1093/cybsec/tyad001
Testing human ability to detect ‘deepfake’ images of human faces
Sergi D Bray, Shane D Johnson, Bennett Kleinberg
‘Deepfakes’ are computationally created entities that falsely represent reality. They can take image, video, and audio modalities, and pose a threat to many areas of systems and societies, making them a topic of interest to various aspects of cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (uncurated output from the StyleGAN2 algorithm as trained on the FFHQ dataset) among non-deepfake images (a random selection of images from the FFHQ dataset), and to assess the effectiveness of some simple interventions intended to improve detection accuracy. Using an online survey, participants (N = 280) were randomly allocated to one of four groups: a control group and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake images of human faces and 50 images of real human faces. Participants were asked whether each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance, and none of the interventions significantly improved it. Of equal concern was the fact that participants’ confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images easy to label correctly and certain images difficult, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, accuracy across images ranged quite evenly between 30% and 85%, falling below 50% for one in every five images. We interpret the findings as an urgent call to action to address this threat.
Journal of Cybersecurity, 2023. DOI: https://doi.org/10.1093/cybsec/tyad011
Efficient collective action for tackling time-critical cybersecurity threats
Sébastien Gillard, Dimitri Percia David, Alain Mermoud, Thomas Maillart
The shrinking latency between the discovery of vulnerabilities and the build-up and dissemination of cyberattacks has put significant pressure on cybersecurity professionals. In response, security researchers have increasingly resorted to collective action in order to reduce the time needed to characterize and tame outstanding threats. Here, we investigate how joining and contribution dynamics on the Malware Information Sharing Platform (MISP), an open-source threat intelligence sharing platform, influence the time needed to collectively complete threat descriptions. We find that performance, defined as the capacity to quickly characterize a threat event, is influenced by (i) the event’s own complexity (negatively), (ii) collective action (positively), and (iii) learning, information integration, and modularity (positively). Our results shed light on how collective action can be organized at scale, and in a modular way, to tackle large numbers of time-critical tasks such as cybersecurity threats.
{"title":"Efficient collective action for tackling time-critical cybersecurity threats","authors":"Sébastien Gillard, Dimitri Percia David, Alain Mermoud, Thomas Maillart","doi":"10.1093/cybsec/tyad021","DOIUrl":"https://doi.org/10.1093/cybsec/tyad021","url":null,"abstract":"Abstract The latency reduction between the discovery of vulnerabilities, the build-up, and the dissemination of cyberattacks has put significant pressure on cybersecurity professionals. For that, security researchers have increasingly resorted to collective action in order to reduce the time needed to characterize and tame outstanding threats. Here, we investigate how joining and contribution dynamics on Malware Information Sharing Platform (MISP), an open-source threat intelligence sharing platform, influence the time needed to collectively complete threat descriptions. We find that performance, defined as the capacity to characterize quickly a threat event, is influenced by (i) its own complexity (negatively), by (ii) collective action (positively), and by (iii) learning, information integration, and modularity (positively). Our results inform on how collective action can be organized at scale and in a modular way to overcome a large number of time-critical tasks, such as cybersecurity threats.","PeriodicalId":44310,"journal":{"name":"Journal of Cybersecurity","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135604439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SoK: cross-border criminal investigations and digital evidence
Fran Casino, Claudia Pina, Pablo López-Aguilar, Edgar Batista, Agusti Solanas, Constantinos Patsakis
Digital evidence underpins the majority of crimes, as its analysis is an integral part of almost every criminal investigation. Even if we temporarily disregard the numerous challenges in the collection and analysis of digital evidence, the exchange of evidence among the different stakeholders presents many thorny issues. Of specific interest are cross-border criminal investigations, where complexity is especially high due to the heterogeneity of legal frameworks, which, beyond creating time bottlenecks, can also become prohibitive. The aim of this article is to analyse the current state of practice of cross-border investigations, considering the efficacy of current collaboration protocols along with the challenges and drawbacks to be overcome. Going beyond a legally oriented research treatise, we recall the challenges raised in the literature and discuss them from a more practical yet global perspective. Thus, this article paves the way for practitioners and stakeholders to leverage horizontal strategies to fill the identified gaps in a timely and accurate manner.
{"title":"SoK: cross-border criminal investigations and digital evidence","authors":"Fran Casino, Claudia Pina, Pablo López-Aguilar, Edgar Batista, Agusti Solanas, Constantinos Patsakis","doi":"10.1093/cybsec/tyac014","DOIUrl":"https://doi.org/10.1093/cybsec/tyac014","url":null,"abstract":"Digital evidence underpin the majority of crimes as their analysis is an integral part of almost every criminal investigation. Even if we temporarily disregard the numerous challenges in the collection and analysis of digital evidence, the exchange of the evidence among the different stakeholders has many thorny issues. Of specific interest are cross-border criminal investigations as the complexity is significantly high due to the heterogeneity of legal frameworks, which beyond time bottlenecks can also become prohibiting. The aim of this article is to analyse the current state of practice of cross-border investigations considering the efficacy of current collaboration protocols along with the challenges and drawbacks to be overcome. Further to performing a legally oriented research treatise, we recall all the challenges raised in the literature and discuss them from a more practical yet global perspective. Thus, this article paves the way to enabling practitioners and stakeholders to leverage horizontal strategies to fill in the identified gaps timely and accurately.","PeriodicalId":44310,"journal":{"name":"Journal of Cybersecurity","volume":"972 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2022-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138505421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}