Forget the Myth of the Air Gap: Machine Learning for Reliable Intrusion Detection in SCADA Systems
Pub Date: 2019-01-29. DOI: 10.4108/EAI.25-1-2019.159348
R. L. Perez, Florian Adamsky, R. Soua, T. Engel
Since Critical Infrastructures (CIs) use systems and equipment that are separated by long distances, Supervisory Control And Data Acquisition (SCADA) systems are used to monitor their behaviour and to send commands remotely. For a long time, operators of CIs applied the air gap principle, a security strategy that physically isolates the control network from other communication channels. True isolation, however, is difficult nowadays due to the massive spread of connectivity: open protocols and increased connectivity expose CIs to new network attacks. To cope with this dilemma, sophisticated security measures are needed to address malicious intrusions, which are steadily increasing in number and variety. However, traditional Intrusion Detection Systems (IDSs) cannot detect attacks that are not already present in their databases. To this end, we assess in this paper Machine Learning (ML) techniques for anomaly detection in SCADA systems using a real data set collected from a gas pipeline system and provided by Mississippi State University (MSU). The contribution of this paper is two-fold: 1) the evaluation of four techniques for missing data estimation and two techniques for data normalization, and 2) the assessment of Support Vector Machine (SVM), Random Forest (RF), and Bidirectional Long Short-Term Memory (BLSTM) classifiers in terms of accuracy, precision, recall and F1 score for intrusion detection. Two cases are differentiated: binary and categorical classification. Our experiments reveal that RF and BLSTM detect intrusions effectively, with F1 scores above 99% and 96%, respectively.
{"title":"Forget the Myth of the Air Gap: Machine Learning for Reliable Intrusion Detection in SCADA Systems","authors":"R. L. Perez, Florian Adamsky, R. Soua, T. Engel","doi":"10.4108/EAI.25-1-2019.159348","DOIUrl":"https://doi.org/10.4108/EAI.25-1-2019.159348","url":null,"abstract":"Since Critical Infrastructures (CIs) use systems and equipment that are separated by long distances, Supervisory Control And Data Acquisition (SCADA) systems are used to monitor their behaviour and to send commands remotely. For a long time, operator of CIs applied the air gap principle, a security strategy that physically isolates the control network from other communication channels. True isolation, however, is di ffi cult nowadays due to the massive spread of connectivity: using open protocols and more connectivity opens new network attacks against CIs. To cope with this dilemma, sophisticated security measures are needed to address malicious intrusions, which are steadily increasing in number and variety. However, traditional Intrusion Detection Systems (IDSs) cannot detect attacks that are not already present in their databases. To this end, we assess in this paper Machine Learning (ML) techniques for anomaly detection in SCADA systems using a real data set collected from a gas pipeline system and provided by the Mississippi State University (MSU). The contribution of this paper is two-fold: 1) The evaluation of four techniques for missing data estimation and two techniques for data normalization, 2) The performances of Support Vector Machine (SVM), Random Forest (RF), Bidirectional Long Short Term Memory (BLSTM) are assessed in terms of accuracy, precision, recall and F 1 score for intrusion detection. Two cases are di ff erentiated: binary and categorical classifications. Our experiments reveal that RF and BLSTM detect intrusions e ff ectively, with an F 1 score of respectively > 99% and > 96%.","PeriodicalId":335727,"journal":{"name":"EAI Endorsed Trans. Security Safety","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132118444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monitoring and Improving Managed Security Services inside a Security Operation Center
Pub Date: 2019-01-25. DOI: 10.4108/EAI.8-4-2019.157413
Mina Khalili, Mengyuan Zhang, D. Borbor, Lingyu Wang, Nicandro Scarabeo, M. Zamor
Nowadays, small to medium-sized companies, which usually cannot afford to hire dedicated security experts, are interested in benefiting from Managed Security Services (MSS) provided by third-party Security Operation Centers (SOCs) to tackle network-wide threats. Accordingly, the performance of the SOC is becoming more and more important to service providers seeking to optimize their resources and compete in the global market. Security specialists in a SOC, called analysts, play an important role in analyzing suspicious machine-generated alerts to determine whether they represent real attacks. How to monitor and improve the performance of analysts inside a SOC is a critical issue that most service providers need to address. In this paper, by observing the workflows of a real-world SOC, we design a tool consisting of three modules for monitoring analysts' activities, measuring analysis performance, and running simulation scenarios. The tool enables managers to evaluate the SOC's performance, which helps them conform to Service-Level Agreements (SLAs) regarding required response times to security incidents and identify the need for improvement. Moreover, the tool is strengthened by a background service module that provides feedback about anomalies or informative issues to security analysts in the SOC. Three case studies have been conducted based on real data collected from the operational SOC, and simulation results demonstrate the effectiveness of the different modules in improving SOC performance.
{"title":"Monitoring and Improving Managed Security Services inside a Security Operation Center","authors":"Mina Khalili, Mengyuan Zhang, D. Borbor, Lingyu Wang, Nicandro Scarabeo, M. Zamor","doi":"10.4108/EAI.8-4-2019.157413","DOIUrl":"https://doi.org/10.4108/EAI.8-4-2019.157413","url":null,"abstract":"Nowadays, small to medium sized companies, which usually cannot afford hiring dedicated security experts, are interested in benefiting from Managed Security Services (MSS) provided by third party Security Operation Centers (SOC) to tackle network-wide threats. Accordingly, the performance of the SOC is becoming more and more important to the service providers in order to optimize their resources and compete in the global market. Security specialists in a SOC, called analysts, have an important role to analyze suspicious machine-generated alerts to see whether they are real attacks. How to monitor and improve the performance of analysts inside a SOC is a critical issue that most service providers need to address. In this paper, by observing workflows of a real-world SOC, a tool consisting of three different modules is designed for monitoring analysts' activities, analysis performance measurement, and performing simulation scenarios. The tool empowers managers to evaluate the SOC's performance which helps them to conform to Service-Level Agreement (SLA) regarding required response time to security incidents, and see the need for improvement. Moreover, the designed tool is strengthened by a background service module to provide feedback about anomalies or informative issues for security analysts in the SOC. Three case studies have been conducted based on real data collected from the operational SOC, and simulation results have demonstrated the effectiveness of the different modules of the designed tool in improving the SOC performance.","PeriodicalId":335727,"journal":{"name":"EAI Endorsed Trans. Security Safety","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130845345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the Privacy Bound for Differential Privacy: From Theory to Practice
Pub Date: 2019-01-25. DOI: 10.4108/eai.8-4-2019.157414
Xianmang He, Yuan Hong, Yindong Chen
Data privacy has attracted significant interest in both the database theory and security communities over the past few decades. Differential privacy has emerged as a new paradigm for rigorous privacy protection regardless of an adversary's prior knowledge. However, the meaning of the privacy bound ε and how to select an appropriate ε may still be unclear to general data owners. More recently, some approaches have been proposed to derive upper bounds on ε for specified privacy risks. Unfortunately, these upper bounds suffer from some deficiencies (e.g., the bound relies on the data size, or might be too large), which greatly limits their applicability. To remedy this problem, we propose a novel approach that converts the privacy bound ε in differential privacy to privacy risks understandable to generic users, and present an in-depth theoretical analysis of it. Finally, we have conducted experiments to demonstrate the effectiveness of our model. Received on 19 December 2018; accepted on 21 January 2019; published on 25 January 2019.
{"title":"Exploring the Privacy Bound for Differential Privacy: From Theory to Practice","authors":"Xianmang He, Yuan Hong, Yindong Chen","doi":"10.4108/eai.8-4-2019.157414","DOIUrl":"https://doi.org/10.4108/eai.8-4-2019.157414","url":null,"abstract":"Data privacy has attracted significant interests in both database theory and security communities in the past few decades. Differential privacy has emerged as a new paradigm for rigorous privacy protection regardless of adversaries prior knowledge. However, the meaning of privacy bound and how to select an appropriate may still be unclear to the general data owners. More recently, some approaches have been proposed to derive the upper bounds of for specified privacy risks. Unfortunately, these upper bounds suffer from some deficiencies (e.g., the bound relies on the data size, or might be too large), which greatly limits their applicability. To remedy this problem, we propose a novel approach that converts the privacy bound in differential privacy to privacy risks understandable to generic users, and present an in-depth theoretical analysis for it. Finally, we have conducted experiments to demonstrate the effectiveness of our model. Received on 19 December 2018; accepted on 21 January 2019; published on 25 January 2019","PeriodicalId":335727,"journal":{"name":"EAI Endorsed Trans. Security Safety","volume":"31 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114012771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Machine Learning Based Approach for Mobile App Rating Manipulation Detection
Pub Date: 2019-01-25. DOI: 10.4108/eai.8-4-2019.157415
Yang Song, Chen Wu, Sencun Zhu, Haining Wang
Manipulating an app's average rating is a popular and feasible way for malicious developers and users to promote apps in mobile app stores. In this work, we propose a two-phase machine learning approach to detecting app rating manipulation attacks. In the first learning phase, we generate feature rankings for different app stores and find that the top features match the characteristics of abused apps and malicious users. In the second learning phase, we choose the top N features and train our models for each app store. With cross-validation, our models achieve an 85% F-score. We also use our models to discover new suspicious apps from our data set and evaluate them with two criteria. Finally, we analyze the suspicious apps classified by our models and report several interesting findings. Received on 09 January 2019; accepted on 20 January 2019; published on 25 January 2019.
{"title":"A Machine Learning Based Approach for Mobile App Rating Manipulation Detection","authors":"Yang Song, Chen Wu, Sencun Zhu, Haining Wang","doi":"10.4108/eai.8-4-2019.157415","DOIUrl":"https://doi.org/10.4108/eai.8-4-2019.157415","url":null,"abstract":"In order to promote apps in mobile app stores, for malicious developers and users, manipulating average rating is a popular and feasible way. In this work, we propose a two-phase machine learning approach to detecting app rating manipulation attacks. In the first learning phase, we generate feature ranks for different app stores and find that top features match the characteristics of abused apps and malicious users. In the second learning phase, we choose top N features and train our models for each app store. With cross-validation, our training models achieve 85% f-score. We also use our training models to discover new suspicious apps from our data set and evaluate them with two criteria. Finally, we conduct some analysis based on the suspicious apps classified by our training models and some interesting results are discovered. Received on 09 January 2019; accepted on 20 January 2019; published on 25 January 2019","PeriodicalId":335727,"journal":{"name":"EAI Endorsed Trans. Security Safety","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115947645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure Communication in VANET Broadcasting
Pub Date: 2019-01-10. DOI: 10.4108/eai.10-1-2019.156243
Muhammad Jafer, M. A. Khan, S. Rehman, T. Zia
{"title":"Secure Communication in VANET Broadcasting","authors":"Muhammad Jafer, M. A. Khan, S. Rehman, T. Zia","doi":"10.4108/eai.10-1-2019.156243","DOIUrl":"https://doi.org/10.4108/eai.10-1-2019.156243","url":null,"abstract":"","PeriodicalId":335727,"journal":{"name":"EAI Endorsed Trans. Security Safety","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114440949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Threat Modeling for Cloud Infrastructures
Pub Date: 2019-01-10. DOI: 10.4108/eai.10-1-2019.156246
Nawaf Alhebaishi, Lingyu Wang, A. Singhal
Today's businesses are increasingly relying on the cloud as an alternative IT solution due to its flexibility and lower cost. Compared to traditional enterprise networks, a cloud infrastructure is typically much larger and more complex. Understanding the potential security threats in such infrastructures is naturally more challenging than in traditional networks. This is evidenced by the fact that there are limited efforts on threat modeling for cloud infrastructures. In this paper, we conduct comprehensive threat modeling exercises based on two representative cloud infrastructures using several popular threat modeling methods, including attack surface, attack trees, attack graphs, and security metrics based on attack trees and attack graphs, respectively. Those threat modeling efforts may provide cloud providers useful lessons toward better understanding and improving the security of their cloud infrastructures. In addition, we show how hardening solutions can be applied based on the threat models and security metrics through extended exercises. Such results may not only benefit the cloud provider but also instill more confidence in cloud tenants by providing them a clearer picture of the potential threats and mitigation solutions.
{"title":"Threat Modeling for Cloud Infrastructures","authors":"Nawaf Alhebaishi, Lingyu Wang, A. Singhal","doi":"10.4108/eai.10-1-2019.156246","DOIUrl":"https://doi.org/10.4108/eai.10-1-2019.156246","url":null,"abstract":"Today’s businesses are increasingly relying on the cloud as an alternative IT solution due to its fexibility and lower cost. Compared to traditional enterprise networks, a cloud infrastructure is typically much larger and more complex. Understanding the potential security threats in such infrastructures is naturally more challenging than in traditional networks. This is evidenced by the fact that there are limited efforts on threat modeling for cloud infrastructures. In this paper, we conduct comprehensive threat modeling exercises based on two representative cloud infrastructures using several popular threat modeling methods, including attack surface, attack trees, attack graphs, and security metrics based on attack trees and attack graphs, respectively. Those threat modeling efforts may provide cloud providers useful lessons toward better understanding and improving the security of their cloud infrastructures. In addition, we show how hardening solution can be applied based on the threat models and security metrics through extended exercises. Such results may not only beneft the cloud provider but also embed more confdence in cloud tenants by providing them a clearer picture of the potential threats and mitigation solutions.","PeriodicalId":335727,"journal":{"name":"EAI Endorsed Trans. Security Safety","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133931671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Opportunistic Diversity-Based Detection of Injection Attacks in Web Applications
Pub Date: 2018-12-11. DOI: 10.4108/eai.11-12-2018.156032
W. Qu, Wei Huo, Lingyu Wang
Web-based applications delivered using clouds are becoming increasingly popular due to their lower demand on client-side resources and easier maintenance compared to desktop counterparts. At the same time, larger attack surfaces and developers' lack of security proficiency or awareness leave Web applications particularly vulnerable to security attacks. On the other hand, diversity has long been considered a viable approach to detecting security attacks, since functionally similar but internally different variants of an application will likely respond to the same attack in different ways. However, most diversity-by-design approaches have met difficulties in practice due to the prohibitive cost in terms of both development and maintenance. In this work, we propose to employ the opportunistic diversity inherent to Web applications and their database backends to detect injection attacks. We first conduct a case study of common vulnerabilities to confirm the potential of opportunistic diversity for detecting such attacks. We then devise a multi-stage approach that examines features extracted from the database queries, their effect on the database, the query results, as well as the user-end results. Next, we combine the partial results obtained from the different stages using a learning-based approach to further improve detection accuracy. Finally, we evaluate our approach using a real-world Web application.
{"title":"Opportunistic Diversity-Based Detection of Injection Attacks in Web Applications","authors":"W. Qu, Wei Huo, Lingyu Wang","doi":"10.4108/eai.11-12-2018.156032","DOIUrl":"https://doi.org/10.4108/eai.11-12-2018.156032","url":null,"abstract":"Web-based applications delivered using clouds are becoming increasingly popular due to less demand of client-side resources and easier maintenance than desktop counterparts. At the same time, larger attack surfaces and developers’ lack of security proficiency or awareness leave Web applications particularly vulnerable to security attacks. On the other hand, diversity has long been considered as a viable approach to detecting security attacks since functionally similar but internally di ff erent variants of an application will likely respond to the same attack in di ff erent ways. However, most diversity-by-design approaches have met di ffi culties in practice due to the prohibitive cost in terms of both development and maintenance. In this work, we propose to employ opportunistic diversity inherent to Web applications and their database backends to detect injection attacks. We first conduct a case study of common vulnerabilities to confirm the potential of opportunistic diversity for detecting potential attacks. We then devise a multi-stage approach to examine features extracted from the database queries, their e ff ect on the database, the query results, as well as the user-end results. Next, we combine the partial results obtained from di ff erent stages using a learning-based approach to further improve the detection accuracy. Finally, we evaluate our approach using a real world Web application.","PeriodicalId":335727,"journal":{"name":"EAI Endorsed Trans. Security Safety","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134279526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}