Meeting the availability service level objective while minimizing the costs of IT service provision is a major challenge for IT service designers. In order to optimize component choices and redundancy mechanisms, the redundancy allocation problem (RAP) was defined. RAP solution algorithms support decision makers with (sub)optimal design configurations that trade off availability and costs. However, the existing RAP definitions are not suitable for IT service design since they do not include inter-component dependencies such as common-mode failures. Therefore, this paper provides a RAP definition that integrates the characteristics of modern IT systems such as standby mechanisms, performance degradation, and generic dependencies. The RAP definition and an adapted genetic algorithm are applied to optimize the costs of an excerpt of an application service provider's IT system landscape. The results demonstrate that the developed approach is applicable and suitable for minimizing IT service costs while fulfilling the availability guarantees that are documented in service level agreements.
{"title":"Optimizing IT Service Costs with Respect to the Availability Service Level Objective","authors":"Sascha Bosse, Matthias Splieth, K. Turowski","doi":"10.1109/ARES.2015.11","DOIUrl":"https://doi.org/10.1109/ARES.2015.11","url":null,"abstract":"Meeting the availability service level objective while minimizing the costs of the IT service provision is a major challenge for IT service designers. In order to optimize component choices and redundancy mechanisms, the redundancy allocation problem (RAP) was defined. RAP solution algorithms support decision makers with (sub)optimal design configurations that trade-off availability and costs. However, the existing RAP definitions are not suitable for IT service design since they do not include inter-component dependencies such as common mode failures. Therefore, a RAP definition is provided in this paper in which the characteristics of modern IT systems such as standby mechanisms, performance degradation and generic dependencies are integrated. The RAP definition and an adapted genetic algorithm are applied to optimize the costs of an excerpt of an application service provider's IT system landscape. The results demonstrate that the developed approach is applicable and suitable to minimize IT service costs while fulfilling the availability guarantees that are documented in service level agreements.","PeriodicalId":331539,"journal":{"name":"2015 10th International Conference on Availability, Reliability and Security","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131765142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Matthias Hummer, Michael Kunz, M. Netter, L. Fuchs, G. Pernul
Due to compliance and IT security requirements, company-wide Identity and Access Management within organizations has gained significant importance in research and practice over the last years. Companies aim at standardizing user management policies in order to reduce administrative overhead and strengthen IT security. Despite its relevance, hardly any supportive means for the automated detection, refinement, and management of policies are available. As a result, policies become outdated over time, leading to security vulnerabilities and inefficiencies. Existing research mainly focuses on policy detection without providing the required guidance for policy management. This paper closes the existing gap by proposing a Dynamic Policy Management Process which structures the activities required for policy management in Identity and Access Management environments. In contrast to current approaches, it fosters the consideration of contextual user management data for policy detection and refinement, and it offers result visualization techniques that foster human understanding. In order to underline its applicability, this paper provides a naturalistic evaluation based on real-life data from a large industrial company.
{"title":"Advanced Identity and Access Policy Management Using Contextual Data","authors":"Matthias Hummer, Michael Kunz, M. Netter, L. Fuchs, G. Pernul","doi":"10.1109/ARES.2015.40","DOIUrl":"https://doi.org/10.1109/ARES.2015.40","url":null,"abstract":"Due to compliance and IT security requirements, company-wide Identity and Access Management within organizations has gained significant importance in research and practice over the last years. Companies aim at standardizing user management policies in order to reduce administrative overhead and strengthen IT security. Despite of its relevance, hardly any supportive means for the automated detection and refinement as well as management of policies are available. As a result, policies outdate over time, leading to security vulnerabilities and inefficiencies. Existing research mainly focuses on policy detection without providing the required guidance for policy management. This paper closes the existing gap by proposing a Dynamic Policy Management Process which structures the activities required for policy management in Identity and Access Management environments. In contrast to current approaches it fosters the consideration of contextual user management data for policy detection and refinement and offers result visualization techniques that foster human understanding. In order to underline its applicability, this paper provides a naturalistic evaluation based on real-life data from a large industrial company.","PeriodicalId":331539,"journal":{"name":"2015 10th International Conference on Availability, Reliability and Security","volume":"61 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131874273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cold boot attacks provide a means to obtain a dump of a computer's volatile memory even if the machine is locked. Such a dump can be used to reconstruct hard disk encryption keys and gain access to the content of BitLocker or TrueCrypt encrypted drives. This is possible even if the obtained dump contains errors. Cold boot attacks have been demonstrated successfully on DDR1 and DDR2 SDRAM. They have also been tried on DDR3 SDRAM using various types of equipment, but all attempts have failed so far. In this paper we describe a different hardware setup which turns out to work for DDR3 SDRAM as well. Using this setup, digital forensic investigators will be able to recover keys from newer machines that use DDR3 SDRAM.
{"title":"Cold Boot Attacks on DDR2 and DDR3 SDRAM","authors":"Simon Lindenlauf, Hans Höfken, Marko Schuba","doi":"10.1109/ARES.2015.28","DOIUrl":"https://doi.org/10.1109/ARES.2015.28","url":null,"abstract":"Cold boot attacks provide a means to obtain a dump of a computer's volatile memory even if the machine is locked. Such a dump can be used to reconstruct hard disk encryption keys and get access to the content of Bit locker or True crypt encrypted drives. This is even possible, if the obtained dump contains errors. Cold boot attacks have been demonstrated successfully on DDR1 and DDR2 SDRAM. They have also been tried on DDR3 SDRAM using various types of equipment but all attempts have failed so far. In this paper we describe a different hardware setup which turns out to work for DDR3 SDRAM as well. Using this setup it will be possible for digital forensic investigators to recover keys from newer machines that use DDR3 SDRAM.","PeriodicalId":331539,"journal":{"name":"2015 10th International Conference on Availability, Reliability and Security","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134318091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H. Tian, Yanpeng Wu, Yongfeng Huang, Jin Liu, Yonghong Chen, Tian Wang, Yiqiao Cai
Steganography in low bit-rate speech streams is an important branch of Voice-over-IP steganography. From the perspective of preventing cybercrime, designing effective steganalysis methods is important. In this paper, we present a support-vector-machine-based steganalysis of low bit-rate speech exploiting statistical characteristics of pulse positions. Specifically, we utilize the probability distribution of pulse positions as a long-time distribution feature, extract Markov transition probabilities of pulse positions according to the short-time invariance characteristic of speech signals, and employ joint probability matrices to characterize the pulse-to-pulse correlation. We evaluate the performance of the proposed method with a large number of G.729a-encoded samples and compare it with the state-of-the-art methods. The experimental results demonstrate that our method significantly outperforms the previous ones in detection accuracy at any given embedding rate or sample length. In particular, this method can successfully detect steganography that employs only one or a few of the potential cover bits, which is hard to detect effectively with existing methods.
{"title":"Steganalysis of Low Bit-Rate Speech Based on Statistic Characteristics of Pulse Positions","authors":"H. Tian, Yanpeng Wu, Yongfeng Huang, Jin Liu, Yonghong Chen, Tian Wang, Yiqiao Cai","doi":"10.1109/ARES.2015.21","DOIUrl":"https://doi.org/10.1109/ARES.2015.21","url":null,"abstract":"Steganography in low bit-rare speech streams is an important branch of Voice-over-IP steganography. From the point of preventing cybercrimes, it is significant to design effective steganalysis methods. In this paper, we present a support-vector-machine based steganalysis of low bit-rate speech exploiting statistic characteristics of pulse positions. Specifically, we utilize the probability distribution of pulse positions as a long-time distribution feature, extract Markov transition probabilities of pulse positions according to the short-time invariance characteristic of speech signals, and employ joint probability matrices to characterize the pulse-to-pulse correlation. We evaluate the performance of the proposed method with a large number of G.729a encoded samples, and compare it with the state-of-the-art methods. The experimental results demonstrate that our method significantly outperforms the previous ones on detection accuracy at any given embedding rates or with any sample lengths. Particularly, this method can successfully detect steganography employing only one or a few of the potential cover bits, which is hard to be effectively detected by the existing methods.","PeriodicalId":331539,"journal":{"name":"2015 10th International Conference on Availability, Reliability and Security","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116954557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jaemin Park, Eunchan Kim, Sungjin Park, Cheoloh Kang
Mobile cloud computing requires an efficient approach to accessing the outsourced data in public clouds due to the resource scarcity of mobile devices. To this end, the outsourced data should be protected efficiently from being accessed in plaintext by unauthorized users and public clouds. User revocation should be appropriately managed to guarantee backward secrecy, collusion resistance, and key freshness. In this paper, we present AKMD (Advanced Attribute-based Key Management for Mobile Devices in Hybrid Clouds), an improved key management scheme for hybrid clouds that uses ciphertext-policy attribute-based encryption to allow only authorized users to access the outsourced data stored in public clouds, while guaranteeing efficiency by delegating key management tasks to private clouds. We introduce two new procedures to handle user revocation, re-keying of data encryption keys and policy renewal, to support backward secrecy and key freshness. Our implementation and analysis show that AKMD improves efficiency in security computations and key storage space for mobile devices and guarantees improved security.
{"title":"Advanced Attribute-Based Key Management for Mobile Devices in Hybrid Clouds","authors":"Jaemin Park, Eunchan Kim, Sungjin Park, Cheoloh Kang","doi":"10.1109/ARES.2015.27","DOIUrl":"https://doi.org/10.1109/ARES.2015.27","url":null,"abstract":"Mobile cloud computing requires the efficient approach to access the outsourced data in public clouds due to resource scarceness of mobile devices. To this end, the outsourced data should be protected efficiently from being accessed in plaintext by unauthorized users and public clouds. User revocation should be appropriately managed to guarantee backward secrecy, collusion resistance, and key freshness. In this paper, we present AKMD (Advanced Attribute-based Key Management for Mobile Devices in Hybrid Clouds), an improved key management in hybrid clouds using cipher text-policy attribute-based encryption to allow only authorized users to access the outsourced data stored in public clouds while guaranteeing the efficiency by delegating the key management tasks to private clouds. We introduce new two procedures to handle user revocations, rekey of data encryption keys and policy renewal to support the backward secrecy and key freshness. Our implementation and analysis show that AKMD improves efficiency in security computations and key storage space for mobile devices and guarantees the improved security.","PeriodicalId":331539,"journal":{"name":"2015 10th International Conference on Availability, Reliability and Security","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115152623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Existing access control solutions that apply the Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme usually rely on static enforcement of the access control policy. In real-world scenarios, this static pattern may not be sufficient to respond effectively to security problems or to advanced access control requirements. In this paper, we enhance our collaborative access control model, C-CP-ARBE, so that it supports more rigorous access control with security constraints and a preventive access policy (PAP) enforcement feature. To this end, we design a constraint specification model and a PAP enforcement scheme for multi-authority cloud storage systems. We employ a Multi-Agent System (MAS) to automate the authentication and authorization functions as well as to increase the performance of the overall cryptographic processes. Through the MAS concept, the scalability and the separation of security functions of our access control system are enhanced. Finally, we present experiments that demonstrate the improved efficiency and practicality of our proposed scheme.
{"title":"Enabling Constraints and Dynamic Preventive Access Control Policy Enforcement in the Cloud","authors":"S. Fugkeaw, Hiroyuki Sato","doi":"10.1109/ARES.2015.33","DOIUrl":"https://doi.org/10.1109/ARES.2015.33","url":null,"abstract":"Existing access control solutions applying Cipher text Policy Attribute based Encryption (CP-ABE) scheme usually rely on the static access enforcement based on the access control policy. In real-world scenario, the static pattern of access control policy may not be sufficient to effectively respond the security problems or advanced access control requirements. In this paper, we enhance our collaborative access control model: C-CP-ARBE, to be capable to support a more rigorous access control with security constraints and preventive access policy (PAP) enforcement feature. To this end, we design constraints specification model and PAP enforcement scheme in multi-authority cloud storage systems. We employ Multi-Agent System (MAS) to automate the authentication and authorization function as well as to increase the performance of overall cryptographic processes. As of MAS concept, the scalability and separation of security functions of our access control system are enhanced. Finally, we present the experiments to demonstrate the improved efficiency and practicality of our proposed scheme.","PeriodicalId":331539,"journal":{"name":"2015 10th International Conference on Availability, Reliability and Security","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116342348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In a networking context, Access Control Lists (ACLs) refer to security rules associated with network equipment, such as routers, switches and firewalls. Methods and tools to automate the management of ACLs distributed among several devices must verify whether the corresponding ACLs are functionally equivalent. In this paper, we address such a verification process. We present a formal method to verify whether two ACLs are isofunctional and illustrate our proposal on a practical example.
{"title":"On the Isofunctionality of Network Access Control Lists","authors":"Malek Belhaouane, Joaquín García, Hervé Debar","doi":"10.1109/ARES.2015.78","DOIUrl":"https://doi.org/10.1109/ARES.2015.78","url":null,"abstract":"In a networking context, Access Control Lists (ACLs) refer to security rules associated to network equipment, such as routers, switches and firewalls. Methods and tools to automate the management of ACLs distributed among several equipment shall verify if the corresponding ACLs are functionally equivalent. In this paper, we address such a verification process. We present a formal method to verify when two ACLs are iso functional and illustrate our proposal over a practical example.","PeriodicalId":331539,"journal":{"name":"2015 10th International Conference on Availability, Reliability and Security","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115320128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The emerging technologies of Smart Camera Sensor Networks (SCSN) are being driven by the social need for security assurance and analytical information. SCSN are deployed for protection and for surveillance tracking of potential criminals. A smart camera sensor does not just capture visual and audio information but covers the whole electromagnetic spectrum. It comprises an intelligent onboard processor, autonomous communication interfaces, and memory, and it is able to execute algorithms. The rapid deployment of smart camera sensors with ubiquitous imaging access raises security and privacy issues for the captured data and its metadata, as well as the need for trust and cooperation between the smart camera sensors. The growing intelligence of this technology requires adequate information security, with capable privacy and trust protocols to prevent malicious content attacks. This paper first presents a clear definition of SCSN. It then reviews current methodologies from the perspectives of privacy and trust protection, and proposes a multi-layer security approach. The proposed approach highlights the need for a public key infrastructure layer in association with a Reputation-Based Cooperation mechanism.
{"title":"Privacy and Trust in Smart Camera Sensor Networks","authors":"M. Loughlin, A. Adnane","doi":"10.1109/ARES.2015.31","DOIUrl":"https://doi.org/10.1109/ARES.2015.31","url":null,"abstract":"The emerging technologies of Smart Camera Sensor Networks (SCSN) are being driven by the social need for security assurance and analytical information. SCSN are deployed for protection and for surveillance tracking of potential criminals. A smart camera sensor does not just capture visual and audio information but covers the whole electromagnetic spectrum. It constitutes of intelligent onboard processor, autonomous communication interfaces, memory and has the ability to execute algorithms. The rapid deployment of smart camera sensors with ubiquitous imaging access causes security and privacy issues for the captured data and its metadata, as well as the need for trust and cooperation between the smart camera sensors. The intelligence growth in this technology requires adequate information security with capable privacy and trust protocols to prevent malicious content attacks. This paper presents, first, a clear definition of SCSN. It addresses current methodologies with perspectives in privacy and trust protection, and proposes a multi-layer security approach. The proposed approach highlights the need for a public key infrastructure layer in association with a Reputation-Based Cooperation mechanism.","PeriodicalId":331539,"journal":{"name":"2015 10th International Conference on Availability, Reliability and Security","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128341625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Authorities like the Federal Financial Institutions Examination Council in the US and the European Central Bank in Europe have stepped up their expected minimum security requirements for financial institutions, including the requirements for risk analysis. In a previous article, we introduced a visual tool and a systematic way to estimate the probability of a successful incident response process, which we called an incident response tree (IRT). In this article, we present several scenarios in which the IRT can be used in a risk analysis of online financial services with respect to fraud prevention. By minimizing the problem of underreporting, we are able to calculate the conditional probabilities of prevention, detection, and response in the incident response process of a financial institution. We also introduce a quantitative model for estimating the expected loss from fraud and the conditional fraud value at risk, which enables a direct comparison of risk among online banking channels in a multi-channel environment.
{"title":"Modeling Fraud Prevention of Online Services Using Incident Response Trees and Value at Risk","authors":"D. Gorton","doi":"10.1109/ARES.2015.17","DOIUrl":"https://doi.org/10.1109/ARES.2015.17","url":null,"abstract":"Authorities like the Federal Financial Institutions Examination Council in the US and the European Central Bank in Europe have stepped up their expected minimum security requirements for financial institutions, including the requirements for risk analysis. In a previous article, we introduced a visual tool and a systematic way to estimate the probability of a successful incident response process, which we called an incident response tree (IRT). In this article, we present several scenarios using the IRT which could be used in a risk analysis of online financial services concerning fraud prevention. By minimizing the problem of underreporting, we are able to calculate the conditional probabilities of prevention, detection, and response in the incident response process of a financial institution. We also introduce a quantitative model for estimating expected loss from fraud, and conditional fraud value at risk, which enables a direct comparison of risk among online banking channels in a multi-channel environment.","PeriodicalId":331539,"journal":{"name":"2015 10th International Conference on Availability, Reliability and Security","volume":"170 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128646193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Outsourcing computing and storage to the cloud does not eliminate the need for handling of information security incidents. However, the long provider chains and unclear responsibilities in the cloud make incident response difficult. In this paper we present results from interviews in critical infrastructure organisations that highlight incident handling needs that would apply to cloud customers, and suggest mechanisms that facilitate inter-provider collaboration in handling of incidents in the cloud, improving the accountability of the cloud service providers.
{"title":"How Much Cloud Can You Handle?","authors":"M. Jaatun, Inger Anne Tøndel","doi":"10.1109/ARES.2015.38","DOIUrl":"https://doi.org/10.1109/ARES.2015.38","url":null,"abstract":"Outsourcing computing and storage to the cloud does not eliminate the need for handling of information security incidents. However, the long provider chains and unclear responsibilities in the cloud make incident response difficult. In this paper we present results from interviews in critical infrastructure organisations that highlight incident handling needs that would apply to cloud customers, and suggest mechanisms that facilitate inter-provider collaboration in handling of incidents in the cloud, improving the accountability of the cloud service providers.","PeriodicalId":331539,"journal":{"name":"2015 10th International Conference on Availability, Reliability and Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130892700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}