Design considerations for a case-based reasoning engine for scenario-based cyber incident notification
Stephen M. Woskov, M. Grimaila, R. Mills, M. Haas
2011 IEEE Symposium on Computational Intelligence in Cyber Security (CICS) | Pub Date: 2011-04-11 | DOI: 10.1109/CICYBS.2011.5949397
Virtually all modern organizations have embedded information systems into their core business processes to increase operational efficiency, improve decision-making quality, and minimize costs. Unfortunately, this dependence can place an organization's mission at risk if the confidentiality, integrity, or availability of a critical information resource is lost or degraded. Within the military, such an incident could ultimately have serious consequences, including physical destruction and loss of life. To reduce the likelihood of this outcome, personnel must be informed about cyber incidents, and their potential consequences, in a timely and relevant manner so that appropriate contingency actions can be taken. In this paper, we identify criteria for improving the relevance of incident notification, propose the use of case-based reasoning (CBR) for contingency decision support, and identify key design considerations for implementing a CBR system that delivers relevant notification following a cyber incident.
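The retrieval step of CBR can be sketched as nearest-neighbour matching over incident features. The case structure, attribute names, and similarity measure below are illustrative assumptions, not the authors' design:

```python
def similarity(a, b):
    """Fraction of attributes on which two incident descriptions agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(case_base, incident):
    """Return the stored case whose features best match the new incident."""
    return max(case_base, key=lambda c: similarity(c["features"], incident))

# Hypothetical case base: past incidents paired with the contingency
# action that was taken for them.
case_base = [
    {"features": {"asset": "email", "loss": "availability"},
     "action": "switch to backup relay"},
    {"features": {"asset": "gps", "loss": "integrity"},
     "action": "cross-check with inertial nav"},
]
best = retrieve(case_base, {"asset": "gps", "loss": "integrity"})
```

The retrieved case's action is then reused (and possibly revised) as the notification's suggested contingency.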

A Hybrid of the prefix algorithm and the q-hidden algorithm for generating single negative databases
Ran Liu, Wenjian Luo, Xufa Wang
Pub Date: 2011-04-11 | DOI: 10.1109/CICYBS.2011.5949400
A negative database (NDB) is the complement of a corresponding database. An NDB can protect the privacy of the data, but to do so it should be both complete and hard to reverse. However, existing techniques cannot generate an NDB that is both complete and hard to reverse. In this paper, a hybrid method is proposed for generating single negative databases. The proposed hybrid method has two phases. First, a small complete negative database is generated by the transformation of the prefix algorithm. Second, a hard-to-reverse negative database, generated with the q-hidden method, is added to the small complete negative database. The resulting hybrid negative database is therefore both complete and hard to reverse. Experimental results show that the NDB generated by the hybrid method is better than the NDB generated by the typical q-hidden method: NDBs generated by the q-hidden method can, on average, be reversed at a string length of 300, whereas NDBs generated by the hybrid method cannot, on average, be reversed even at a string length of 150.
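The completeness property can be made concrete with a toy example. The patterns, the tiny positive database, and the helper names below are illustrative; they are not the paper's prefix or q-hidden algorithms:

```python
def matches(pattern, s):
    """A pattern over {0, 1, *} matches a bit string; '*' matches either bit."""
    return all(p == '*' or p == c for p, c in zip(pattern, s))

def in_positive_db(ndb, s):
    """A string belongs to the hidden positive DB iff no NDB entry matches it."""
    return not any(matches(p, s) for p in ndb)

# Positive DB = {"000"}; the three wildcard entries below together match
# every 3-bit string except "000", i.e. exactly the complement.
ndb = ["1**", "*1*", "**1"]
```

Reversing an NDB means recovering the positive database from such patterns; the hybrid method aims to make that recovery computationally hard while keeping the cover complete.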

Modeling cyber conflicts using an extended Petri Net formalism
A. Zakrzewska, Erik M. Ferragut
Pub Date: 2011-04-11 | DOI: 10.1109/CICYBS.2011.5949385
When threatened by automated attacks, critical systems that require human-controlled responses have difficulty making optimal responses and adapting protections in real time, and may therefore be overwhelmed. Consequently, experts have called for the development of automatic real-time reaction capabilities. However, a technical gap exists in the modeling and analysis of cyber conflicts to automatically understand the repercussions of responses. There is a need for modeling cyber assets that accounts for concurrent behavior, incomplete information, and payoff functions.
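The basic token-game semantics that any Petri net formalism builds on can be sketched as follows; the two-transition attacker/defender net is a made-up example, not the paper's extended formalism:

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Consume input tokens, produce output tokens, return the new marking."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Two-move cyber exchange: the attacker compromises a host, after which
# the defender may quarantine it.
compromise = {"in": {"host_up": 1}, "out": {"host_owned": 1}}
isolate = {"in": {"host_owned": 1}, "out": {"host_quarantined": 1}}
m0 = {"host_up": 1}
m1 = fire(m0, compromise)
```

Concurrency appears naturally here as independently enabled transitions; extensions like the paper's add information and payoff structure on top of this firing rule.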

Addressing the need for independence in the CSE model
R. Abercrombie, Erik M. Ferragut, Frederick T. Sheldon, M. Grimaila
Pub Date: 2011-04-11 | DOI: 10.1109/CICYBS.2011.5949395
Information system security risk, defined as the product of the monetary losses associated with security incidents and the probability that they occur, is a suitable decision criterion when considering different information system architectures. Risk assessment is the widely accepted process used to understand, quantify, and document the effects of undesirable events on organizational objectives so that risk management, continuity-of-operations planning, and contingency planning can be performed. One technique, the Cyberspace Security Econometrics System (CSES), is a methodology for estimating security costs to stakeholders as a function of possible risk postures. In earlier work, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss each stakeholder stands to sustain as a result of security breakdowns. Additional work has applied CSES to specific business cases. The current state of the art of CSES addresses independent events: in typical usage, analysts create matrices that capture their expert opinion and then use those matrices to quantify costs to stakeholders. This paper generalizes CSES to the common real-world case where events may be dependent.
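The opening definition of risk can be written out as a small computation. The probabilities and loss figures below are hypothetical, and treating the events as independent mirrors exactly the assumption this paper sets out to relax:

```python
# Hypothetical event probabilities and per-stakeholder losses.
event_prob = {"breach": 0.10, "outage": 0.30}
loss = {
    "operator": {"breach": 50000, "outage": 20000},
    "customer": {"breach": 80000, "outage": 5000},
}

def expected_loss(stakeholder):
    """Risk as expected monetary loss, summing over independent events."""
    return sum(event_prob[e] * loss[stakeholder][e] for e in event_prob)

risk = {s: expected_loss(s) for s in loss}  # operator: 11000, customer: 9500
```

When events are dependent, the per-event terms no longer simply add, which is what the generalization must account for.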

Trust optimization in task-oriented social networks
J. Zhan, Xing Fang, P. Killion
Pub Date: 2011-04-11 | DOI: 10.1109/CICYBS.2011.5949408
Trust is a human-related phenomenon in social networks. Trust research on social networks has largely focused on trust's usefulness and on modeling its propagation; little attention has been paid to finding maximum trust, which is particularly important when a social network is oriented toward certain tasks. In this paper, we first propose a trust maximization algorithm for task-oriented social networks. We then take communication cost into account and introduce four different trust optimization algorithms. We also conduct extensive experiments to evaluate the proposed algorithms and test their performance. To the best of our knowledge, this is pioneering work on trust optimization in task-oriented social networks.
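One plausible reading of trust maximization, not necessarily the paper's formulation, is to treat trust as attenuating multiplicatively along a path and to search for the most-trusted path with a Dijkstra variant that maximizes the product of edge trusts:

```python
import heapq

def max_trust(graph, src, dst):
    """Most-trusted path value from src to dst; graph maps
    node -> {neighbor: edge trust in (0, 1]}."""
    best = {src: 1.0}
    heap = [(-1.0, src)]  # max-heap via negated trust
    while heap:
        neg_t, u = heapq.heappop(heap)
        t = -neg_t
        if u == dst:
            return t
        if t < best.get(u, 0.0):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nt = t * w
            if nt > best.get(v, 0.0):
                best[v] = nt
                heapq.heappush(heap, (-nt, v))
    return 0.0  # dst unreachable

# a trusts b strongly and c weakly; the a-b-d path wins (0.9 * 0.8 = 0.72).
g = {"a": {"b": 0.9, "c": 0.5}, "b": {"d": 0.8}, "c": {"d": 0.9}}
```

Communication cost, which the paper's other four algorithms weigh in, would enter as a second objective alongside the path's trust value.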

Automatic construction of anomaly detectors from graphical models
Erik M. Ferragut, David M. Darmon, Craig A. Shue, Stephen Kelley
Pub Date: 2011-04-11 | DOI: 10.1109/CICYBS.2011.5949386
Detection of rare or previously unseen attacks in cyber security presents a central challenge: how does one search for a sufficiently wide variety of types of anomalies and yet allow the process to scale to increasingly complex data? In particular, creating each anomaly detector manually and training each one separately presents untenable strains on both human and computer resources. In this paper we propose a systematic method for constructing a potentially very large number of complementary anomaly detectors from a single probabilistic model of the data. Only one model needs to be trained, but numerous detectors can then be implemented. This approach promises to scale better than manual methods to the complex heterogeneity of real-life data. As an example, we develop a Latent Dirichlet Allocation probability model of TCP connections entering Oak Ridge National Laboratory. We show that several detectors can be automatically constructed from the model and will provide anomaly detection at flow, sub-flow, and host (both server and client) levels. This demonstrates how the fundamental connection between anomaly detection and probabilistic modeling can be exploited to develop more robust operational solutions.
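The idea that one trained model yields many detectors can be shown with a toy categorical model in place of the paper's LDA model: each detector scores a record by the negative log-likelihood of a different conditional derived from the same counts.

```python
import math
from collections import Counter

# Toy training data: (service, destination port) pairs for observed flows.
training = [("web", 80), ("web", 80), ("web", 443), ("dns", 53), ("dns", 53)]

joint = Counter(training)              # counts of (service, port)
svc = Counter(s for s, _ in training)  # marginal counts per service
n = len(training)

def flow_detector(record):
    """Score a whole (service, port) record by -log P(service, port)."""
    return -math.log(joint[record] / n) if joint[record] else float("inf")

def port_detector(record):
    """Score the port given the service by -log P(port | service)."""
    s, _ = record
    return -math.log(joint[record] / svc[s]) if joint[record] else float("inf")
```

Both detectors fall out of one fitted model; higher scores (rarer records under the chosen conditional) are flagged as anomalies.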

An effective network-based Intrusion Detection using Conserved Self Pattern Recognition Algorithm augmented with near-deterministic detector generation
Senhua Yu, D. Dasgupta
Pub Date: 2011-04-11 | DOI: 10.1109/CICYBS.2011.5949393
The Human Immune System (HIS) employs multilevel defense against harmful and unseen pathogens through innate and adaptive immunity. Innate immunity protects the body from known invaders, whereas adaptive immunity develops a memory of past encounters and can learn about previously unknown pathogens. These salient features of the HIS have inspired researchers in the area of intrusion detection to develop automated and adaptive defensive tools. This paper presents a new variant of the Conserved Self Pattern Recognition Algorithm (CSPRA) called CSPRA-ID (CSPRA for Intrusion Detection). CSPRA-ID can effectively identify known intrusions by using knowledge of well-known attacks to build a conserved self pattern (APC detector), while retaining the ability to detect novel intrusions thanks to the one-class classification nature of its T detectors. Furthermore, the T detectors in CSPRA-ID are generated with a novel near-deterministic scheme proposed in this paper. The near-deterministic generation scheme places each detector using a brute-force method that guarantees the next detector is very foreign to the existing detectors. Moreover, the placement of each variable-sized detector is determined online during a Monte Carlo estimate of detector coverage, so detectors with an optimal distribution are generated without any additional optimization step. A comparative study between CSPRA-ID and a one-class SVM shows that CSPRA-ID is promising on DARPA network intrusion data in terms of detection accuracy and computational efficiency.
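The Monte Carlo estimate of detector coverage mentioned above can be sketched as follows; the hypersphere detectors over a unit square are an illustrative assumption, not the CSPRA-ID representation:

```python
import random

def covered(detectors, point):
    """True if the point falls inside any detector hypersphere."""
    return any(sum((x - c) ** 2 for x, c in zip(point, centre)) <= r * r
               for centre, r in detectors)

def coverage(detectors, dim=2, samples=10000, seed=1):
    """Monte Carlo estimate of the fraction of the unit cube covered."""
    rng = random.Random(seed)
    hits = sum(covered(detectors, [rng.random() for _ in range(dim)])
               for _ in range(samples))
    return hits / samples

# A single detector of radius 0.5 centred in the unit square covers
# about pi / 4 of it, so the estimate should land near 0.785.
est = coverage([((0.5, 0.5), 0.5)])
```

A generator can track this estimate while placing detectors, stopping once the non-self region is covered to the desired level.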

Discrimination prevention in data mining for intrusion and crime detection
S. Hajian, J. Domingo-Ferrer, A. Martínez-Ballesté
Pub Date: 2011-04-11 | DOI: 10.1109/CICYBS.2011.5949405
Automated data collection has fostered the use of data mining for intrusion and crime detection. Indeed, banks, large corporations, insurance companies, casinos, and others are increasingly mining data about their customers or employees to detect potential intrusion, fraud, or even crime. Mining algorithms are trained on datasets that may be biased with regard to gender, race, religion, or other attributes. Furthermore, mining is often outsourced or carried out cooperatively by several entities. For these reasons, discrimination concerns arise. Potential intrusion, fraud, or crime should be inferred from objective misbehavior rather than from sensitive attributes like gender, race, or religion. This paper discusses how to clean training datasets and outsourced datasets in such a way that legitimate classification rules can still be extracted but discriminating rules based on sensitive attributes cannot.
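For intuition, here is the effect the cleaning aims at, shown from the rule side rather than the data side: the paper transforms the training data so that rules like the second one below are never extracted in the first place. The rule format and attribute names are hypothetical:

```python
SENSITIVE = {"gender", "race", "religion"}

def is_discriminatory(rule):
    """A rule is {'if': [(attribute, value), ...], 'then': label}."""
    return any(attr in SENSITIVE for attr, _ in rule["if"])

def clean(rules):
    """Keep only rules that decide on objective behaviour."""
    return [r for r in rules if not is_discriminatory(r)]

rules = [
    {"if": [("late_night_transfers", "high")], "then": "fraud"},
    {"if": [("gender", "female"), ("amount", "high")], "then": "fraud"},
]
kept = clean(rules)
```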

A systems engineering approach for crown jewels estimation and mission assurance decision making
S. Musman, Mike Tanner, A. Temin, E. Elsaesser, Lewis Loren
Pub Date: 2011-04-11 | DOI: 10.1109/CICYBS.2011.5949403
Understanding the context of how IT contributes to making missions more or less successful is a cornerstone of mission assurance. This paper describes a continuation of our previous work that used process modeling to estimate the impact of cyber incidents on missions. In our previous work we focused on developing a capability that could work as an online process to estimate the impacts of incidents as they are discovered and reported. In this paper we focus instead on how our mission-modeling techniques, and the assessments computed with the model, can be used offline to support mission assurance engineering. The heart of our approach is a process model of the system that can be run as an executable simulation to estimate mission outcomes. These models contain not only information about the mission activities but also attributes of the process itself and the context in which the system operates; they serve as a probabilistic model and stochastic simulation of the system itself. Our contributions to this process modeling approach are the addition of IT activity models, which document how various mission activities depend on IT-supported processes, and the ability to relate how the capabilities of the IT affect mission outcomes. Here we demonstrate how the mission model can be evaluated offline to compute characteristics of the system that reflect its mission assurance properties. Using the models, it is possible to identify the crown jewels, expose the system's susceptibility to different attack effects, and evaluate how well different mitigation techniques would likely work. Being based on an executable model of the system itself, our approach is much more powerful than a static assessment. And because it is based on business process modeling, which is becoming popular as a systems engineering tool, we hope our approach will move mission assurance analysis into a framework where it becomes a standard systems engineering practice rather than the "off to the side" activity it currently is.
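The offline use of an executable mission model can be sketched as a small stochastic simulation. The mission, its IT dependencies, and all probabilities below are invented for illustration; comparing runs with each service degraded is what surfaces the crown jewels:

```python
import random

# Hypothetical mission: each activity depends on one IT service.
ACTIVITIES = [("plan", "email"), ("navigate", "gps"), ("strike", "gps")]
BASE_P = 0.95      # activity success probability with healthy IT
DEGRADED_P = 0.40  # success probability when the supporting service is degraded

def mission_success_rate(degraded_service, trials=20000, seed=7):
    """Monte Carlo estimate of end-to-end mission success."""
    rng = random.Random(seed)
    ok = sum(
        all(rng.random() < (DEGRADED_P if svc == degraded_service else BASE_P)
            for _, svc in ACTIVITIES)
        for _ in range(trials)
    )
    return ok / trials

baseline = mission_success_rate(None)   # roughly 0.95 ** 3
gps_down = mission_success_rate("gps")  # gps degradation hits two activities
```

Here gps emerges as the crown jewel: degrading it collapses mission success far more than degrading email would, which is the kind of offline finding the models support.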

Hierarchical traceability of multimedia documents
A. B. Hamida, M. Koubàa, C. Amar, H. Nicolas
Pub Date: 2011-04-11 | DOI: 10.1109/CICYBS.2011.5949389
Illegal copying of multimedia files has become very common practice. Indeed, with the rapid development of means of communication, sharing, copying, and illegal downloading have become easy actions within everybody's reach. The magnitude of this continuously increasing phenomenon can have a significant economic impact, since it causes a marked loss of revenue. To cope with this problem, it becomes necessary to control video traffic and ensure traceability. Thus, each user receives a personalized media release containing a personal identifier inserted through a robust watermarking technique. If this copy is redistributed illegally, the dishonest user can be traced and prosecuted. This creates an urgent need for a reliable, high-performance fingerprinting scheme. In this context, we present a hierarchical fingerprinting system based on the Tardos code that reduces the computational cost of detecting pirates. Both theoretical analyses and experimental results are provided to show the performance of the proposed system.
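A minimal sketch of the underlying Tardos scheme (the paper's hierarchical layer on top of it is omitted, and parameters here are illustrative): each code position gets a bias drawn from the arcsine-shaped Tardos density, each user's mark is Bernoulli in that bias, and accusation scores single out users whose marks correlate with the pirated copy:

```python
import math
import random

def biases(length, rng, t=0.05):
    """Per-position biases p_i from the arcsine-shaped Tardos density,
    kept away from 0 and 1 by the cutoff t."""
    return [math.sin(rng.uniform(t, math.pi / 2 - t)) ** 2
            for _ in range(length)]

def codewords(n_users, ps, rng):
    """Each user's codeword: bit i is Bernoulli(p_i)."""
    return [[1 if rng.random() < p else 0 for p in ps] for _ in range(n_users)]

def score(word, pirate_copy, ps):
    """Tardos accusation score of one user against the pirated copy."""
    s = 0.0
    for x, y, p in zip(word, pirate_copy, ps):
        if y == 1:
            s += math.sqrt((1 - p) / p) if x == 1 else -math.sqrt(p / (1 - p))
    return s

rng = random.Random(3)
ps = biases(400, rng)
users = codewords(10, ps, rng)
pirate = list(users[0])  # a copy leaked verbatim by user 0
guilty = max(range(10), key=lambda u: score(users[u], pirate, ps))
```

A hierarchical variant first scores groups of users to narrow the search, then scores only the suspect group's members, which is how detection cost can be reduced.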