Spam detection in voice-over-IP calls through semi-supervised clustering
Yu-Sung Wu, S. Bagchi, Navjot Singh, Ratsameetip Wita
Pub Date: 2009-09-29, DOI: 10.1109/DSN.2009.5270323
In this paper, we present an approach for detecting spam over IP telephony (SPIT) in VoIP systems. SPIT detection differs from email spam detection in three ways: the process has to be soft real-time; fewer features are available for examination, because mining voice traffic at runtime is difficult; and the signaling traffic of legitimate and malicious callers is similar. Our approach differs from existing work in its adaptability to new environments without laborious and error-prone manual parameter configuration. We cluster calls based on their call parameters, using optional user feedback on some calls, which users mark as SPIT or non-SPIT. We improve on a popular semi-supervised learning algorithm, MPCK-Means, to make it scale to a large number of calls and operate at runtime. Our evaluation on captured call traces shows a fifteen-fold reduction in computation time, together with improved detection accuracy.
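As an illustration of the kind of semi-supervised clustering the paper builds on, the sketch below implements a toy constraint-based k-means, in the spirit of (but far simpler than) MPCK-Means: user feedback is encoded as must-link and cannot-link pairs that cluster assignments must respect. The function names and the distance-ordered assignment heuristic are our own illustrative choices, not the authors' implementation.

```python
import random

def violates(i, c, labels, must_link, cannot_link):
    """True if assigning point i to cluster c breaks a pairwise constraint."""
    for a, b in must_link:
        j = b if a == i else a if b == i else None
        if j is not None and labels[j] is not None and labels[j] != c:
            return True
    for a, b in cannot_link:
        j = b if a == i else a if b == i else None
        if j is not None and labels[j] == c:
            return True
    return False

def constrained_kmeans(points, k, must_link, cannot_link, iters=20, seed=0):
    """Toy constrained k-means: each point goes to the nearest centroid
    that honors the constraints (falling back to the nearest overall)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [None] * len(points)
    for _ in range(iters):
        labels = [None] * len(points)
        for i, p in enumerate(points):
            order = sorted(range(k), key=lambda c: sum(
                (x - y) ** 2 for x, y in zip(p, centroids[c])))
            labels[i] = next((c for c in order
                              if not violates(i, c, labels, must_link, cannot_link)),
                             order[0])
        for c in range(k):  # recompute each centroid from its members
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels
```

For example, marking two calls as SPIT (a must-link pair) and a SPIT/legitimate pair as cannot-link steers the clustering even when the raw call parameters are ambiguous.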
Evaluating the impact of Undetected Disk Errors in RAID systems
Eric Rozier, W. Belluomini, Veera Deenadhayalan, J. Hafner, K. K. Rao, Pin Zhou
Pub Date: 2009-09-29, DOI: 10.1109/DSN.2009.5270353
Despite the reliability of modern disks, recent studies have made it clear that a new class of faults, Undetected Disk Errors (UDEs), also known as silent data corruption events, becomes a real challenge as storage capacity scales. While RAID systems have proven effective in protecting data from traditional disk failures, silent data corruption remains a significant problem unaddressed by RAID. We present a fault model for UDEs and a hybrid framework for simulating UDEs in large-scale systems. The framework combines a multi-resolution discrete-event simulator with numerical solvers. Our implementation enables us to model arbitrary storage systems and workloads and to estimate the rate of undetected data corruption. We present results for several systems and workloads, from gigascale to petascale. These results indicate that corruption from UDEs is a significant problem in the absence of protection schemes and that such schemes dramatically decrease the rate of undetected data corruption.
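A back-of-the-envelope version of the question the framework answers can be sketched as a Monte Carlo loop: how many I/Os suffer a silent corruption that a given protection scheme fails to catch. The per-I/O rate and detection probability below are made-up illustrative parameters, and this toy loop has none of the multi-resolution simulation or numerical solvers of the actual framework.

```python
import random

def count_undetected(n_ios, ude_rate, detect_prob, seed=0):
    """Toy Monte Carlo: each I/O suffers a UDE with probability
    ude_rate; a protection scheme catches it with probability
    detect_prob. Returns how many corruptions slip through."""
    rng = random.Random(seed)
    undetected = 0
    for _ in range(n_ios):
        if rng.random() < ude_rate and rng.random() >= detect_prob:
            undetected += 1
    return undetected
```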
Fast memory state synchronization for virtualization-based fault tolerance
Maohua Lu, T. Chiueh
Pub Date: 2009-09-29, DOI: 10.1109/DSN.2009.5270295
Virtualization supports whole-machine migration and thus enables a new form of fault tolerance that is completely transparent to applications and operating systems. While initial prototypes show promise, virtualization-based fault-tolerant architectures still incur substantial performance overhead, especially for data-intensive workloads. The main performance challenge of virtualization-based fault tolerance is how to synchronize the memory states of the Master and Slave in a way that minimizes the end-to-end impact on application performance. This paper describes three optimization techniques for memory state synchronization: fine-grained dirty region identification, speculative state transfer, and synchronization traffic reduction using an active slave. It presents a comprehensive performance study of these techniques under three realistic workloads: the TPC-E benchmark, the SPECsfs 2008 CIFS benchmark, and a Microsoft Exchange workload. We show that these three techniques can reduce the amount of end-of-epoch synchronization traffic by factors of up to 7, 15, and 5, respectively.
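The first of the three techniques, fine-grained dirty region identification, can be illustrated with a toy snapshot diff: instead of shipping a whole dirty page to the Slave, only the sub-page chunks that actually changed are transferred. The chunk size and byte-comparison approach below are illustrative assumptions, not the paper's mechanism, which operates at the hypervisor level.

```python
def dirty_regions(old, new, chunk=64):
    """Compare two equal-size memory snapshots chunk by chunk and
    return (offset, data) for the chunks that changed, so only those
    need to be synchronized to the backup."""
    assert len(old) == len(new)
    return [(off, new[off:off + chunk])
            for off in range(0, len(old), chunk)
            if old[off:off + chunk] != new[off:off + chunk]]
```

On a 4 KiB page where a single byte changed, this ships one 64-byte chunk instead of the full page.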
Fault isolation for device drivers
J. Herder, H. Bos, Ben Gras, P. Homburg, A. Tanenbaum
Pub Date: 2009-09-29, DOI: 10.1109/DSN.2009.5270357
This work explores the principles and practice of isolating low-level device drivers in order to improve OS dependability. In particular, we explore the operations drivers can perform and how fault propagation can be prevented in the event that a bug is triggered. We have prototyped our ideas in an open-source multiserver OS (MINIX 3) that isolates drivers by strictly enforcing least authority, and we iteratively refined our isolation techniques using a pragmatic approach based on extensive software-implemented fault injection (SWIFI) testing. In the end, out of 3,400,000 common faults injected randomly into 4 different Ethernet drivers using both programmed I/O and DMA, no fault was able to break our protection mechanisms and crash the OS. In total, we experienced only one hang, but this appears to have been caused by buggy hardware.
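The SWIFI campaign described above injects faults such as bit flips into driver code. A minimal sketch of that injection primitive, as our own simplification (the paper's fault types and injection points are richer), might look like:

```python
import random

def inject_bit_flip(code, seed=0):
    """SWIFI-style injection sketch: flip one randomly chosen bit in a
    copy of a byte buffer (e.g., a driver's text-segment image)."""
    rng = random.Random(seed)
    buf = bytearray(code)
    pos = rng.randrange(len(buf))
    bit = rng.randrange(8)
    buf[pos] ^= 1 << bit
    return bytes(buf), pos, bit
```

A test harness would load the mutated driver, exercise it, and check that the rest of the OS survives; flipping the same bit again restores the original image.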
Blue-Watchdog: Detecting Bluetooth worm propagation in public areas
Guanhua Yan, L. Cuéllar, S. Eidenbenz, N. Hengartner
Pub Date: 2009-09-29, DOI: 10.1109/DSN.2009.5270319
The rising popularity of mobile devices, such as cellular phones and PDAs, has made them a lucrative playground for mobile malware. One common infection vector exploited by such malware is Bluetooth. In this paper, we propose an architecture called Blue-Watchdog that detects Bluetooth worm propagation in public areas using statistical methods. To achieve fast and accurate Bluetooth worm detection, Blue-Watchdog monitors abrupt changes in the average paging rate per Bluetooth device from both temporal and temporal-spatial perspectives. The temporal scheme relies on the CUSUM (cumulative sum) sequential test together with the generalized likelihood ratio (GLR), and the temporal-spatial scheme identifies spatial regions with abnormally frequent paging attempts. Experimental results show that Blue-Watchdog not only has low false alarm rates but also effectively detects both Bluetooth worms that spread quickly in areas where Bluetooth devices mix heavily due to high mobility and those that propagate relatively slowly in a spatially constrained fashion.
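The core of the temporal scheme, a CUSUM test on the paging rate, can be sketched as follows. We assume a Gaussian mean-shift model with unit variance for the log-likelihood ratio; the paper's actual statistic uses the GLR, and the normal and worm-like rates would be estimated rather than fixed as below.

```python
def cusum_alarm(samples, mean0, mean1, threshold):
    """One-sided CUSUM: accumulate the log-likelihood ratio of
    N(mean1, 1) vs N(mean0, 1) per sample, clamped at zero, and
    return the index of the first alarm (or None if no alarm)."""
    s = 0.0
    for t, x in enumerate(samples):
        s = max(0.0, s + (mean1 - mean0) * (x - (mean0 + mean1) / 2.0))
        if s > threshold:
            return t
    return None
```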
Xprobe2++: Low volume remote network information gathering tool
Fedor V. Yarochkin, Ofir Arkin, Meder Kydyraliev, Shih-Yao Dai, Yennun Huang, S. Kuo
Pub Date: 2009-09-29, DOI: 10.1109/DSN.2009.5270338
Active operating system fingerprinting is the process of actively determining a target network system's underlying operating system type and characteristics by probing the target system's network stack with specially crafted packets and analyzing the responses received. The underlying operating system of a network host is an important characteristic that can be used to complement network inventory processes, intrusion detection system discovery mechanisms, security network scanners, vulnerability analysis systems, and other security tools that need to evaluate vulnerabilities on remote network systems.
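The signature matching at the heart of such a fingerprinting tool can be sketched as a simple scoring loop: each OS signature earns a point for every probe response it predicts correctly, and the highest-scoring signature wins. The probe names and signature values below are invented for illustration and are not Xprobe2++'s actual signature database or matching engine.

```python
def best_os_match(observed, signatures):
    """Score each candidate OS by how many observed probe responses
    its signature matches, and return the best-scoring name."""
    def score(name):
        sig = signatures[name]
        return sum(1 for probe, value in observed.items()
                   if sig.get(probe) == value)
    return max(signatures, key=score)
```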
An efficient XOR-scheduling algorithm for erasure codes encoding
Jianqiang Luo, Lihao Xu, J. Plank
Pub Date: 2009-09-29, DOI: 10.1109/DSN.2009.5270300
In large storage systems, it is crucial to protect data from loss due to failures. Erasure codes lay the foundation of this protection, enabling systems to reconstruct lost data when components fail. Erasure codes can, however, impose significant performance overhead in two core operations: encoding, where coding information is calculated from newly written data, and decoding, where data is reconstructed after failures. This paper focuses on improving the performance of encoding, the more frequent operation. It does so by scheduling the operations of XOR-based erasure codes to optimize their use of cache memory. We call the technique XOR-scheduling and demonstrate how it applies to a wide variety of existing erasure codes. We evaluate the performance of scheduling these codes on a variety of processors and show that XOR-scheduling significantly improves upon the traditional approach. We therefore believe that XOR-scheduling has great potential for wide impact in practical storage systems.
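The XOR primitive these codes are built from is easy to show in miniature: a single parity block is the XOR of all data blocks, and any one lost block is recovered by XORing the parity with the survivors. This is RAID-4/5-style single-fault tolerance; the codes in the paper tolerate more failures, and the paper's contribution is scheduling the XOR operations for cache locality, which this sketch does not attempt.

```python
def xor_blocks(a, b):
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_parity(data_blocks):
    """Parity block = XOR of all data blocks."""
    parity = bytes(len(data_blocks[0]))
    for block in data_blocks:
        parity = xor_blocks(parity, block)
    return parity

def reconstruct(data_blocks, lost, parity):
    """Recover the block at index `lost` from the parity and survivors."""
    out = parity
    for i, block in enumerate(data_blocks):
        if i != lost:
            out = xor_blocks(out, block)
    return out
```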
Deterministic high-speed simulation of complex systems including fault-injection
Matthias Sand, Stefan Potyra, V. Sieh
Pub Date: 2009-09-29, DOI: 10.1109/DSN.2009.5270335
FAUmachine is a virtual machine for highly detailed simulation of standard PC hardware together with its environment. FAUmachine provides fault-injection capabilities and an automatic experiment controller. Thanks to its use of just-in-time compilation techniques, it offers good performance. This tool description introduces FAUmachine's new ability to simulate systems deterministically. This enables developers to design and test complex systems for fault tolerance by running identically reproducible automated tests in reasonable time, and thus even allows testing for real-time constraints.
System log pre-processing to improve failure prediction
Ziming Zheng, Z. Lan, Byung-Hoon Park, A. Geist
Pub Date: 2009-09-29, DOI: 10.1109/DSN.2009.5270289
Log preprocessing, a process applied to the raw log before a predictive method is applied, is of paramount importance to failure prediction and diagnosis. While existing filtering methods have demonstrated good compression rates, they fail to preserve important failure patterns that are crucial for failure analysis. To address this problem, we present a log preprocessing method consisting of three integrated steps: (1) event categorization, to uniformly classify system events and identify fatal events; (2) event filtering, to remove temporally and spatially redundant records while preserving the failure patterns needed for failure analysis; and (3) causality-related filtering, to combine correlated events using Apriori association rule mining. We demonstrate the effectiveness of our preprocessing method using real failure logs collected from the Cray XT4 at ORNL and the Blue Gene/L system at SDSC. Experiments show that our method preserves more failure patterns for failure analysis, thereby improving failure prediction by up to 174%.
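Step (2), temporal filtering, can be sketched as sliding-window de-duplication: an event is dropped if the same (location, type) pair fired within the window, while events from other locations survive. The tuple layout and window semantics are our simplification of the paper's filtering.

```python
def temporal_filter(events, window):
    """Keep an event only if the same (location, type) pair has not
    fired within `window` seconds; the window slides with every
    occurrence, so a long burst collapses to its first record.
    events: iterable of (timestamp, location, event_type)."""
    last_seen = {}
    kept = []
    for ts, loc, etype in sorted(events):
        key = (loc, etype)
        if key not in last_seen or ts - last_seen[key] > window:
            kept.append((ts, loc, etype))
        last_seen[key] = ts
    return kept
```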
Vulnerability & attack injection for web applications
J. Fonseca, M. Vieira, H. Madeira
Pub Date: 2009-09-29, DOI: 10.1109/DSN.2009.5270349
In this paper we propose a methodology for injecting realistic attacks into web applications. The methodology is based on the idea that by injecting realistic vulnerabilities into a web application and attacking them automatically, we can assess existing security mechanisms. To provide true-to-life results, the methodology relies on field studies of a large number of vulnerabilities in web applications. The paper also describes a set of tools implementing the proposed methodology; they automate the entire process, including gathering and analyzing the results. We used these tools to conduct a set of experiments demonstrating the feasibility and effectiveness of the methodology. The experiments include evaluating the coverage and false positives of an intrusion detection system for SQL injection and assessing the effectiveness of two web application vulnerability scanners. The results show that injecting vulnerabilities and attacks is an effective way to evaluate security mechanisms and tools.
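Once the injected attacks provide ground truth, the metrics used to evaluate the IDS can be computed directly; a minimal sketch (names and data are illustrative):

```python
def coverage_and_fpr(alerts, injected_attacks, n_benign_requests):
    """Coverage = fraction of injected attacks the IDS flagged;
    false positive rate = flagged benign requests per benign request.
    alerts and injected_attacks are sets of request identifiers."""
    coverage = len(alerts & injected_attacks) / len(injected_attacks)
    fpr = len(alerts - injected_attacks) / n_benign_requests
    return coverage, fpr
```

This is exactly why vulnerability and attack injection is useful for evaluation: without injected ground truth, neither the coverage nor the false positive rate of the IDS can be measured.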