A Methodology for Evaluation of Host-Based Intrusion Prevention Systems and Its Application
K. G. Labbe, N. Rowe, J. D. Fulp
2006 IEEE Information Assurance Workshop, 2006-06-21. DOI: 10.1109/IAW.2006.1652120
Citations: 6
Abstract
Host-based intrusion-prevention systems are currently popular technologies that attempt to prevent exploits from succeeding on a host. They are similar to host-based intrusion-detection systems (P. E. Proctor, 2001) but include means to take action automatically once malicious activity or code is discovered. Such actions can include terminating connections, services, or ports; refusing commands; blocking packets from specific Internet addresses; initiating tracing of packets; and sending modified packets back to a user. Automated responses to exploits can be fast because they require no human intervention. Around ten commercial vendors currently offer intrusion-prevention products (N. Desai, May 2006), and Snort-Inline is a popular open-source tool. Total intrusion prevention is a difficult goal to achieve, since it takes time to recognize an exploit, and by then the damage may already be done. It is therefore important to have a way to test the often-broad claims of intrusion-prevention products. The testing we propose is not as comprehensive as that offered by attack-traffic simulators such as Skaion's TGS (www.skaion.com) or by the DETER testbed (www.deterlab.net). But attack-traffic simulators, even when up to date, model only the broad characteristics of attacks and not their context-dependent behavior, so they can produce significant numbers of false negatives. DETER emulates rather than executes malicious software to provide added safety, which is not quite the same. DETER also imposes several bureaucratic obstacles to getting approval for experiments and obtaining time on its hardware to run them; navigating this bureaucracy requires motivation and time. For quick, in-depth testing of a new product that has not been evaluated in DETER, or for finding reasons to rule out a product, a simpler approach that is easier to set up is required.
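To make the distinction between detection and prevention concrete, the automated responses the abstract lists (dropping an offending packet, then blocking further packets from that source address) can be sketched in a few lines. This is a toy illustration, not the paper's methodology: the signature list, packet representation, and class name are all hypothetical.

```python
# Toy sketch of a host-based intrusion-prevention response loop.
# Signatures and packet format are invented for illustration only.
SIGNATURES = [b"/etc/passwd", b"cmd.exe"]  # hypothetical exploit indicators

class MiniHIPS:
    def __init__(self):
        self.blocked = set()  # source addresses blocked by automated response

    def inspect(self, src_addr, payload):
        """Return True if the packet is allowed through, False if dropped."""
        if src_addr in self.blocked:
            return False  # refuse all traffic from a previously blocked source
        if any(sig in payload for sig in SIGNATURES):
            self.blocked.add(src_addr)  # automated response: block the source
            return False  # drop the offending packet itself (prevention)
        return True  # benign traffic passes

hips = MiniHIPS()
print(hips.inspect("10.0.0.5", b"GET /index.html"))  # benign: allowed
print(hips.inspect("10.0.0.9", b"GET /etc/passwd"))  # exploit: dropped, source blocked
print(hips.inspect("10.0.0.9", b"GET /index.html"))  # blocked source: dropped
```

A pure intrusion-detection system would stop after logging the match; the blocking step is what makes this "prevention", and the window between the exploit arriving and the match firing is exactly the timing problem the abstract notes.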