Pub Date: 2006-06-21. DOI: 10.1109/IAW.2006.1652074
N. Hallberg, J. Hallberg
This paper presents an approach for extracting security requirements from early design specifications. An increasing part of the communication and sharing of information in our society utilizes electronic media. Many organizations, especially distributed and Net-centric ones, are entirely dependent on well-functioning information systems. Thus, IT security is becoming central to the ability to fulfill business goals, build trustworthy systems, and protect assets. In order to develop systems with adequate security features, it is essential to capture the corresponding security needs and requirements. The main objective of this paper is to present and illustrate the use of a method for extracting security needs from textual descriptions of general requirements of information systems, and to transform these needs into security requirements and security techniques. The consequences of selected security techniques are described as design implications. The method utilizes quality tools, such as the voice-of-the-customer table and affinity and hierarchy diagrams. To illustrate the method, known as the usage-centric security requirements engineering (USeR) method, it is demonstrated in a case study. The USeR method enables the identification of security needs from statements about information systems, and the transformation of those needs into security techniques. Although the method needs to be used with complementary approaches, e.g. misuse cases to detect security requirements originating from the functional requirements, it provides a coherent approach and a holistic view that, even in the early stages, can guide system evolution toward information systems more resistant to security threats.
{"title":"The Usage-Centric Security Requirements Engineering (USeR) Method","authors":"N. Hallberg, J. Hallberg","doi":"10.1109/IAW.2006.1652074","DOIUrl":"https://doi.org/10.1109/IAW.2006.1652074","url":null,"abstract":"This paper presents an approach for extracting security requirements from early design specifications. An increasing part of the communication and sharing of information in our society utilizes electronic media. Many organizations, especially distributed and Net-centric, are entirely dependent on well functioning information systems. Thus, IT security is becoming central to the ability to fulfill business goals, build trustworthy systems, and protect assets. In order to develop systems with adequate security features, it is essential to capture the corresponding security needs and requirements. The main objective of this paper is to present and illustrate the use of a method for extracting security needs from textual descriptions of general requirements of information systems, and to transform these needs into security requirements and security techniques. The consequences of selected security techniques are described as design implications. The method utilizes quality tools, such as voice of the customer table and affinity and hierarchy diagrams. To illustrate the method, known as the usage-centric security requirements engineering (USeR) method, it is demonstrated in a case study. The USeR method enables the identification of security needs from statements about information systems, and the transformation of those needs into security techniques. Although the method needs to be used with complementary approaches, e.g. misuse cases to detect security requirements originating from the functional requirements, it provides a coherent approach and holistic view that even in the early stages can guide the system evolution to achieve information systems more resistant to security threats","PeriodicalId":326306,"journal":{"name":"2006 IEEE Information Assurance Workshop","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124122555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2006-06-21. DOI: 10.1109/IAW.2006.1652120
K. G. Labbe, N. Rowe, J. D. Fulp
Host-based intrusion-prevention systems are currently popular technologies that try to prevent exploits from succeeding on a host. They are like host-based intrusion-detection systems (P. E. Proctor, 2001) but include means to take actions automatically once malicious activities or code are discovered. This can include terminating connections, services, or ports; refusing commands; blocking packets from specific Internet addresses; initiating tracing of packets; and sending modified packets back to a user. Automated responses to exploits can be quick, without human intervention. Around ten commercial vendors currently offer intrusion-prevention products (N. Desai, May 2006), and Snort-Inline is a popular open-source tool. Total intrusion prevention is a difficult goal to achieve, since it takes time to recognize an exploit, and by then the damage may be done. So it is important to have a way to test the often-broad claims of intrusion-prevention products. The testing we propose is not as comprehensive as that offered by attack-traffic simulators like Skaion's TGS (www.skaion.com) or by the DETER testbed (www.deterlab.net). But attack-traffic simulators, even when up to date, only model broad characteristics of attacks and not their context-dependent behavior, so they can produce significant numbers of false negatives. DETER emulates rather than executes malicious software to provide added safety, which is not quite the same. DETER also imposes several bureaucratic obstacles for getting approval for experiments and obtaining time on its hardware to run them; this bureaucracy requires motivation and time to navigate. For quick, in-depth testing of a new product that has not been evaluated in DETER, or for finding reasons to rule out a product, a simpler approach that is easier to set up is required.
{"title":"A Methodology for Evaluation of Host-Based Intrusion Prevention Systems and Its Application","authors":"K. G. Labbe, N. Rowe, J. D. Fulp","doi":"10.1109/IAW.2006.1652120","DOIUrl":"https://doi.org/10.1109/IAW.2006.1652120","url":null,"abstract":"Host-based intrusion-prevention systems are currently popular technologies which try to prevent exploits from succeeding on a host. They are like host-based intrusion-detection systems (P. E. Proctor, 2001) but include means to automatically take actions once malicious activities or code are discovered. This can include terminating connections, services, or ports; refusing commands; blocking packets from specific Internet addresses; initiating tracing of packets; and sending modified packets back to a user. Automated responses to exploits can be quick without human intervention. Around ten commercial vendors are currently offering intrusion-prevention products (N. Desai, May 2006), and Snort-Inline is a popular open-source tool. Total intrusion prevention is a difficult goal to achieve, since it takes time to recognize an exploit and by then the damage may be done. So it is important to have a way to test the often-broad claims of intrusion-prevention products. The testing we propose is not as comprehensive as that offered by attack-traffic simulators like Skaion's TGS (www.skaion.com) or by the DETER testbed (www.deterlab.net). But attack-traffic simulators, even when up-to-date, only model broad characteristics of attacks and not their context-dependent behavior, so they can produce significant numbers of false negatives. DETER emulates rather than executes malicious software to provide added safety, which is not quite the same. DETER also imposes several bureaucratic obstacles for getting approval for experiments and obtaining time on their hardware to run them; this bureaucracy requires motivation and time to navigate. For quick testing in depth of a new product that has not been evaluated in DETER, or for finding reasons to rule out a product, a simpler approach that is easier to set up is required","PeriodicalId":326306,"journal":{"name":"2006 IEEE Information Assurance Workshop","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133275773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2006-06-21. DOI: 10.1109/IAW.2006.1652127
M. Savastano, A. Luciano, A. Pagano, B. Peticone, L. Riccardi
In the context of countermeasures against criminal or terrorist acts, the attribution of identity to an unknown speaker (for example, an individual talking on a phone line) may play a primary role. Speaker identification (SI) may be performed with or without human support and, according to this distinction, SI systems are divided into "semi-automatic" and "automatic" (J. P. Campbell, Sept. 1997). In semi-automatic protocols, the identification process is carried out by means of electronic instruments with the support of a technician who generally has a linguistic background. Automatic systems do not need human support and may operate in quasi-real time, which may be a particularly appealing feature in some operative scenarios. The complexity of automatic systems is considerable, however, and complex architectures are generally required. In the present paper the authors propose a four-classifier methodology that exhibits some innovative solutions in the context of similar approaches. In particular, a new robust approach to pitch extraction makes it possible to overcome a set of problems generally associated with this task.
{"title":"A Multi-step Method for Speaker Identification","authors":"M. Savastano, A. Luciano, A. Pagano, B. Peticone, L. Riccardi","doi":"10.1109/IAW.2006.1652127","DOIUrl":"https://doi.org/10.1109/IAW.2006.1652127","url":null,"abstract":"In the context of the countermeasures against criminal or terrorist acts, the attribution of identity to a unknown speaker, (for example to an individual talking on a phone line), may play a primary role. Speaker identification (SI) may be performed with or without the human support and, according to this distinction, SI systems are divided in \"semi-automatic\" and \"automatic\" (J. P. Campbell, Sept. 1997). In semi-automatic protocols, the process of identification is carried out by means of electronic instruments with the support of a technician who generally has a linguistic background. Automatic systems do not need human support and may operate in quasi-real-time, and this may represent a feature particularly appealing in some operative scenarios. Obviously, the complexity of automatic systems is relevant and then, generally, complex architectures are required. In the present paper the authors propose a four-classifiers methodology which exhibits some innovative solutions in the context of similar approaches. In particular, a new robust approach to pitch extraction allows to overcome a set of problems generally associated with this task","PeriodicalId":326306,"journal":{"name":"2006 IEEE Information Assurance Workshop","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129322612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2006-06-21. DOI: 10.1109/IAW.2006.1652099
Neil C. Rowe, B. Duong, E. J. Custy
Cyber-attackers are becoming more aware of honeypots. They generally want to avoid honeypots, since it is hard to spread attacks from them, attacks on them are thoroughly monitored, and some honeypots contain planted false information. This suggests that it could be useful for a computer system to pretend it is a honeypot, to scare away smarter attackers. We examine here, from a number of perspectives, how this could be accomplished as a kind of "vaccination" of systems to reduce the number of attacks and their severity. We develop a mathematical model of what would make an attacker go away. We report experiments with deliberate distortions of text to see at what point people can detect deception, and find that they can respond to subtle clues. We also report experiments with real attackers against a honeypot of increasing obviousness. Results show that attacks on it decreased over time, which may indicate that attackers are being scared away. We conclude with some speculation about the escalation of honeypot and anti-honeypot techniques.
{"title":"Fake Honeypots: A Defensive Tactic for Cyberspace","authors":"Neil C. Rowe, B. Duong, E. J. Custy","doi":"10.1109/IAW.2006.1652099","DOIUrl":"https://doi.org/10.1109/IAW.2006.1652099","url":null,"abstract":"Cyber-attackers are becoming more aware of honeypots. They generally want to avoid honeypots since it is hard to spread attacks from them, attacks are thoroughly monitored on them, and some honeypots contain planted false information. This suggests that it could be useful for a computer system to pretend it is a honeypot, to scare away smarter attackers. We examine here from a number of perspectives how this could be accomplished as a kind of \"vaccination\" of systems to reduce numbers of attacks and their severity. We develop a mathematical model of what would make an attacker go away. We report experiments with deliberate distortions on text to see at what point people could detect deception, and discover they can respond to subtle clues. We also report experiments with real attackers against a honeypot of increasing obviousness. Results show that attacks on it decreased over time which may indicate that attackers are being scared away. We conclude with some speculation about the escalation of honeypot-antihoneypot techniques","PeriodicalId":326306,"journal":{"name":"2006 IEEE Information Assurance Workshop","volume":"54 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116269116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2006-06-21. DOI: 10.1109/IAW.2006.1652126
C. Fox, D. Wilson
The Interrogator infrastructure comprises a number of networks, each consisting of many thousands of nodes. The data produced by the sensors in this infrastructure is collected and stored in three distinct formats: relational databases, data files containing packet traffic or network flow information, and other report files, usually in extensible markup language (XML) format. In a network infrastructure of this size, it becomes very difficult to keep abreast of the complex relationships that exist within it. Additionally, due to the sheer volume of data produced in these formats, a method to aid in extracting the security-relevant content from the data becomes essential. We propose the use of network graphs to address these limitations in the current Interrogator architecture. Generation of the graphs required the development of methods to extract, from the available data sources, the needed connectivity and data-transfer information. This information was then passed to a graphing utility, Graphviz, which was used to generate the network graphs. Using the capabilities provided by Graphviz, we were able to quickly obtain information about any node in the network, including the connectivity of the node, the data transferred, and any alerts generated that involved the node. These graphs serve as another analysis source to aid an analyst in identifying suspicious network behavior.
{"title":"Visualization in Interrogator using Graphviz","authors":"C. Fox, D. Wilson","doi":"10.1109/IAW.2006.1652126","DOIUrl":"https://doi.org/10.1109/IAW.2006.1652126","url":null,"abstract":"The Interrogator infrastructure is comprised of a number of networks each consisting of multiple thousands of nodes. The data produced by the sensors in this infrastructure is collected and stored in three distinct formats: relational databases, data files containing packet traffic or network flow information, and other report files - usually in extensible markup language (XML) format. In a network infrastructure of this size, it becomes very difficult to keep abreast of the complex relationships that exist within. Additionally, due to the sheer volume of data produced in the previously mentioned formats, a method to aid in extracting the security relevant content from the data becomes highly essential. We propose the use of network graphs to address these limitations in the current Interrogator architecture. Generation of the graphs required the development of methods to extract - from the data sources available - the needed connectivity and data transfer information. This information was then passed to a graphing utility, Graphviz, which was used to generate the network graphs. Using the capabilities provided in Graphviz, we were able to quickly obtain information about any node in the network including: the connectivity of the node, the data transferred, and any alerts generated that included these nodes. These graphs are used as another analysis source for an analyst to aid in the identification of suspicious network behavior","PeriodicalId":326306,"journal":{"name":"2006 IEEE Information Assurance Workshop","volume":"109 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115686599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2006-06-21. DOI: 10.1109/IAW.2006.1652121
S. Price, S. Price
The information assurance (IA) model, an extension of the McCumber information security model, specifies security services for information when it is at rest, in transit, or being processed. According to the IA model, the processing information state is protected by technology, operations, and people security countermeasures. However, what has not been considered is the power wielded by an ordinary user over the processes in their environment. The authors consider people to be the principal countermeasure in the model. Unfortunately, this becomes problematic when users introduce unknown or unauthorized processes into a system, which may affect information and the security services of the system. Indeed, such processes run with the rights and privileges of the user. The intentional or accidental execution of unauthorized applications epitomizes the insider threat. Therefore, system and data security is at the mercy of executing processes and in the hands of the authorized user. Another way to represent this situation is to say that unknown and unauthorized processes, whether or not under the control of the user, change the secure state processing (SSP) of a system.
{"title":"Secure State Processing","authors":"S. Price, S. Price","doi":"10.1109/IAW.2006.1652121","DOIUrl":"https://doi.org/10.1109/IAW.2006.1652121","url":null,"abstract":"The information assurance (IA) model, an extension of the McCumber information security model, specifies security services for information when it is at rest, in transit, or being processed. According to the IA model, the processing information state is protected by technology, operations, and people security countermeasures. However, what has not been considered is the power wielded by an ordinary user over the processes in their environment. The authors consider people to be the principle countermeasure in the model. Unfortunately, this becomes problematic when users introduce unknown or unauthorized processes into a system which may affect information and the security services of the system. Indeed, such processes run with the rights and privileges of the user. The intentional or accidental execution of unauthorized applications epitomizes the insider threat. Therefore, system and data security is at the mercy of executing processes and the hands of the authorized user. Another way to represent this situation is to say that unknown and unauthorized processes, whether or not under the control of the user, change the secure state processing (SSP) of a system","PeriodicalId":326306,"journal":{"name":"2006 IEEE Information Assurance Workshop","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115107771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2006-06-21. DOI: 10.1109/IAW.2006.1652103
S. Ramaswami, S. Upadhyaya
Owing to their open medium, dynamic topology, and infrastructureless characteristics, MANETs and sensor networks have found widespread military applications. However, the nature of these networks and the limited processing capabilities of the nodes make them vulnerable to malicious attacks. In this paper we address the problem of colluding and coordinated black hole attacks, one of the major security issues in MANET-based defense applications. These attacks are caused by malicious nodes that advertise the availability of the shortest route to the intended destination, thereby exploiting the functioning of the AODV protocol and retaining the data packets. This leads to the loss of critical and sensitive information being relayed across the network. We propose a technique that overcomes the shortcomings of this protocol and makes it less vulnerable to such attacks by identifying the malicious nodes and isolating them from the network. We have developed a lightweight acknowledgement scheme with multipath routing for securing the protocol. The proposed technique can be extended to similar routing protocols and scenarios in MANETs.
{"title":"Smart Handling of Colluding Black Hole Attacks in MANETs and Wireless Sensor Networks using Multipath Routing","authors":"S. Ramaswami, S. Upadhyaya","doi":"10.1109/IAW.2006.1652103","DOIUrl":"https://doi.org/10.1109/IAW.2006.1652103","url":null,"abstract":"The open medium, dynamic topology and infrastructureless characteristics of MANETs and sensor networks, have found widespread military applications. However, the nature of these networks and the limited processing capabilities of the nodes make them vulnerable to malicious attacks. In this paper we address the problem of colluding and coordinated black hole attacks, one of the major security issues in MANET based defense applications. These attacks are caused by malicious nodes that advertise the availability of the shortest route to the intended destination, thereby exploiting the functioning of the AODV protocol and retaining the data packets. This leads to loss of critical and sensitive information being relayed across the network. We propose a technique that overcomes the shortcomings of this protocol, and makes it less vulnerable to such attacks by identifying the malicious nodes and isolating them from the network. We have developed a lightweight acknowledgement scheme with multipath routing for securing the protocol. The proposed technique can be extended to similar routing protocols and scenarios in MANETs","PeriodicalId":326306,"journal":{"name":"2006 IEEE Information Assurance Workshop","volume":"1994 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125549595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2006-06-21. DOI: 10.1109/IAW.2006.1652083
A. El-Semary, J. Edmonds, J. González-Pino, M. Papa
This paper describes the use of fuzzy logic in the implementation of an intelligent intrusion detection system. The system uses a data miner that integrates the Apriori and Kuok algorithms to produce fuzzy logic rules that capture features of interest in network traffic. Using an inference engine, implemented with FuzzyJess, the intrusion detection system evaluates these rules and gives network administrators indications of the firing strength of the ruleset. The resulting system is capable of adapting to changes in attack signatures. In addition, by identifying relevant network traffic attributes, the system has the inherent ability to provide abstract views that support network security analysis. Examples and experimental results using intrusion detection datasets from MIT Lincoln Laboratory demonstrate the potential of the approach.
{"title":"Applying Data Mining of Fuzzy Association Rules to Network Intrusion Detection","authors":"A. El-Semary, J. Edmonds, J. González-Pino, M. Papa","doi":"10.1109/IAW.2006.1652083","DOIUrl":"https://doi.org/10.1109/IAW.2006.1652083","url":null,"abstract":"This paper describes the use of fuzzy logic in the implementation of an intelligent intrusion detection system. The system uses a data miner that integrates Apriori and Kuok's algorithms to produce fuzzy logic rules that capture features of interest in network traffic. Using an inference engine, implemented using FuzzyJess, the intrusion detection system evaluates these rules and gives network administrators indications of the firing strength of the ruleset. The resulting system is capable of adapting to changes in attack signatures. In addition, by identifying relevant network traffic attributes, the system has the inherent ability to provide abstract views that support network security analysis. Examples and experimental results using intrusion detection datasets from MIT Lincoln Laboratory demonstrate the potential of the approach","PeriodicalId":326306,"journal":{"name":"2006 IEEE Information Assurance Workshop","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127584428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2006-06-21. DOI: 10.1109/IAW.2006.1652125
L. Laribee, D.S. Barnes, N. Rowe, C.H. Martell
The weakest link in an information-security chain is often the user, because people can be manipulated. Attacking computer systems with information gained from social interactions is one form of social engineering (K. Mitnick, et al., 2002). It can be much easier than targeting the complex technological protections of systems (J. McDermott, Social engineering - the weakest link in information security). In an effort to formalize social engineering for cyberspace, we are building models of trust and attack. Models help in understanding the bewildering number of different tactics that can be employed. Social engineering attacks can be complex, with multiple ploys and targets; our models function as subroutines that are called multiple times to accomplish attack goals in a coordinated plan. Models enable us to infer good countermeasures to social engineering.
{"title":"Analysis and Defensive Tools for Social-Engineering Attacks on Computer Systems","authors":"L. Laribee, D.S. Barnes, N. Rowe, C.H. Martell","doi":"10.1109/IAW.2006.1652125","DOIUrl":"https://doi.org/10.1109/IAW.2006.1652125","url":null,"abstract":"The weakest link in an information-security chain is often the user because people can be manipulated. Attacking computer systems with information gained from social interactions is one form of social engineering (K. Mitnick, et al. 2002). It can be much easier to do than targeting the complex technological protections of systems (J. McDermott, Social engineering - the weakest link in information security). In an effort to formalize social engineering for cyberspace, we are building models of trust and attack. Models help in understanding the bewildering number of different tactics that can be employed. Social engineering attacks can be complex with multiple ploys and targets; our models function as subroutines that are called multiple times to accomplish attack goals in a coordinated plan. Models enable us to infer good countermeasures to social engineering","PeriodicalId":326306,"journal":{"name":"2006 IEEE Information Assurance Workshop","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132383175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2006-06-21. DOI: 10.1109/IAW.2006.1652107
P. Defibaugh-Chavez, R. Veeraghattam, M. Kannappa, S. Mukkamala, A. Sung
To detect and deflect attempts at unauthorized use of information systems, network resources called honeypots are deployed. Honeypots are an efficient way to gather information and are being increasingly used for information security purposes. This paper focuses on the network-level detection of honeypots, taking into consideration the feature set of the systems as well as network-level activity. Earlier work in the area has been based on system-level detection. The results aim at bringing out the limitations in current honeypot technology.
{"title":"Network Based Detection of Virtual Environments and Low Interaction Honeypots","authors":"P. Defibaugh-Chavez, R. Veeraghattam, M. Kannappa, S. Mukkamala, A. Sung","doi":"10.1109/IAW.2006.1652107","DOIUrl":"https://doi.org/10.1109/IAW.2006.1652107","url":null,"abstract":"To detect and deflect attempts at unauthorized use of information systems, network resources called honeypots are deployed. Honeypots are an efficient way to gather information and are being increasingly used for information security purposes. This paper focuses on the network level detection of honeypots by taking the feature set of the systems and also the network level activity into consideration. Earlier work in the area has been based on the system level detection. The results aim at bringing out the limitations in the current honeypot technology","PeriodicalId":326306,"journal":{"name":"2006 IEEE Information Assurance Workshop","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127309046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}