Using Rules for Web Service Client Side Testing
Nabil El Ioini, A. Sillitti, G. Succi
Pub Date : 2013-06-28, DOI: 10.1109/SERVICES.2013.63
Web Services (WS) are software components accessible over the Internet through a well-defined set of standards. When consumers invoke a service, they expect to receive a valid response; the problem is determining the structure of a valid request [21]. WS specifications are the primary source of information for building service requests, but existing specifications either do not capture this type of information (e.g., WSDL) or offer little client-side support (e.g., OWL-S). In this paper we address this issue by implementing a technique that reduces the number of faulty requests. Specifically, we propose an approach that extends WSDL with input-parameter rules, helping consumers and integrators verify their calls on the client side.
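The client-side verification the abstract describes can be sketched roughly as follows. This is a minimal illustration only: the rule vocabulary (`type`, `pattern`, `min`, `max`) and the `validate_request` helper are assumptions made for the sketch, not the authors' actual WSDL extension.

```python
import re

# Hypothetical machine-readable rules, one per input parameter of an operation.
RULES = {
    "zipcode": {"type": str, "pattern": r"^\d{5}$"},
    "quantity": {"type": int, "min": 1, "max": 100},
}

def validate_request(params):
    """Return a list of rule violations; an empty list means the call may proceed."""
    errors = []
    for name, rule in RULES.items():
        if name not in params:
            errors.append(f"missing parameter: {name}")
            continue
        value = params[name]
        if not isinstance(value, rule["type"]):
            errors.append(f"{name}: expected {rule['type'].__name__}")
            continue
        if "pattern" in rule and not re.match(rule["pattern"], value):
            errors.append(f"{name}: does not match {rule['pattern']}")
        if "min" in rule and value < rule["min"]:
            errors.append(f"{name}: below minimum {rule['min']}")
        if "max" in rule and value > rule["max"]:
            errors.append(f"{name}: above maximum {rule['max']}")
    return errors
```

A client would run such a check before issuing the SOAP/HTTP call, rejecting faulty requests without a round trip to the service.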
QoS Auditing for Evaluation of SLA in Cloud-based Distributed Services
K. Ravindran
Pub Date : 2013-06-28, DOI: 10.1109/SERVICES.2013.74
Given a cloud-based realization of a distributed system S, QoS auditing enables risk analysis and accounting of SLA violations under the various security threats and resource depletions faced by S. QoS failures and security infringements arise from third-party control of the cloud resources and components used to realize the application-oriented service exported by S. The less-than-100% trust between the various sub-systems of S is a major issue that necessitates a probabilistic analysis of application behavior relative to the SLA negotiated with S. In this light, QoS auditing allows reasoning about how well S complies with the SLA in the face of hostile environment conditions. The paper describes case studies of a CDN and a replicated web service realized on a cloud.
EC2BargainHunter: It's Easy to Hunt for Cost Savings on Amazon EC2!
K. Rajaraman, Le Duy Ngan, Yuzhang Feng, Anitha Veeramani, Joel Koo Chong En, C. C. Keong, F. S. Tsai, A. Andrzejak
Pub Date : 2013-06-28, DOI: 10.1109/SERVICES.2013.52
Return on investment is a critical decision factor for end-users considering cloud deployments. However, major cloud vendors typically offer a myriad of interdependent cloud service options under a variety of purchasing models, which severely complicates cost estimation and optimization. In this paper, we propose a novel Amazon EC2 cost optimization system, called EC2 Bargain Hunter, that combines services and cloud computing principles with ideas from semantic technologies. The system supports the entire range of EC2 instance types and can be used to perform live cost optimization in real time. We demonstrate that unprecedented cost savings, by a factor of 30, can be found on Amazon EC2 offerings with this system in a few clicks. Furthermore, our approach can be adapted to other IaaS providers, enabling practical cloud cost optimization and marking a significant step towards making the cloud truly cost-effective for end-users.
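As a toy illustration of the kind of trade-off such a cost optimizer must evaluate, consider the break-even between on-demand and reserved pricing for a single instance type. All rates below are hypothetical placeholders, not actual EC2 prices, and the real system weighs many more interdependent options.

```python
def cheapest(hours_per_month, on_demand_rate, reserved_upfront, reserved_rate, months=12):
    """Compare total cost of on-demand vs. reserved pricing over a term.

    Returns ("on-demand" | "reserved", total_cost). All rates are
    illustrative placeholders, not real EC2 prices.
    """
    on_demand = hours_per_month * months * on_demand_rate
    reserved = reserved_upfront + hours_per_month * months * reserved_rate
    if on_demand <= reserved:
        return ("on-demand", on_demand)
    return ("reserved", reserved)
```

Light usage (e.g., 10 hours/month) favors on-demand; heavy usage (e.g., 500 hours/month) amortizes the upfront reservation fee and favors reserved capacity.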
An Energy-Efficient Online Parallel Scheduling Algorithm for Cloud Data Centers
Wenhong Tian, Ruini Xue, Jun Cao, Qin Xiong, Yunjun Hu
Pub Date : 2013-06-28, DOI: 10.1109/SERVICES.2013.57
This paper considers online energy-efficient scheduling of real-time virtual machines (VMs) for Cloud data centers. Each request is associated with a start time, an end time, a processing time, and a demand for Physical Machine (PM) capacity. The goal is to schedule all requests non-preemptively within their start-time/end-time windows, subject to PM capacity constraints, such that the total busy time of all used PMs is minimized (abbreviated MinTBT-ON). This is a fundamental scheduling problem for allocating parallel jobs on multiple machines, with important applications in power-aware scheduling in cloud computing, optical network design, customer service systems, and other related areas. Offline scheduling to minimize busy time is NP-hard already in the special case where all jobs have the same processing time and can be scheduled in a fixed time interval. The best-known result for the MinTBT-ON problem is a g-competitive First-Fit algorithm for unit-size jobs, where g is the total capacity of a PM. In this paper, a B-competitive algorithm, GRID, is proposed and proven for the general case, where B is a natural number and 1 < B < g. Further results are obtained and applied to Cloud computing to improve energy efficiency.
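The g-competitive First-Fit baseline mentioned in the abstract can be sketched for unit-size jobs: place each job on the first PM whose capacity is not exceeded anywhere in the job's interval. This is an illustrative reading of the baseline, not the authors' GRID algorithm, and the busy-time accounting below (merging each PM's assigned intervals) is our own simplification.

```python
def first_fit(jobs, g):
    """jobs: list of (start, end) unit-demand intervals; g: PM capacity.
    Returns a list of PMs, each a list of jobs assigned to it."""
    pms = []
    for s, e in sorted(jobs):
        placed = False
        for pm in pms:
            # number of jobs on this PM that overlap [s, e)
            load = sum(1 for (ps, pe) in pm if ps < e and s < pe)
            if load < g:
                pm.append((s, e))
                placed = True
                break
        if not placed:
            pms.append([(s, e)])  # open a new PM
    return pms

def busy_time(pms):
    """Total busy time: sum, over PMs, of the length of the union of its intervals."""
    total = 0
    for pm in pms:
        intervals = sorted(pm)
        cur_s, cur_e = intervals[0]
        for s, e in intervals[1:]:
            if s <= cur_e:          # overlapping or touching: extend
                cur_e = max(cur_e, e)
            else:                   # gap: close the current busy window
                total += cur_e - cur_s
                cur_s, cur_e = s, e
        total += cur_e - cur_s
    return total
```

With g = 2, the jobs (0,2), (1,3), (2,4) all fit on one PM for a busy time of 4; with g = 1 they need two PMs and a busy time of 6.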
A Hierarchical Cloud Pricing System
Zhijie Li, Ming Li
Pub Date : 2013-06-28, DOI: 10.1109/SERVICES.2013.78
Cloud computing is experiencing phenomenal growth, and many vendors now offer cloud services. In cloud computing, providers cooperate to offer their computing resources as a utility and software as a service to customers. The demand for and price of a cloud service should be negotiated between providers and users based on a Service Level Agreement (SLA). To help cloud providers reach an agreeable price for their services and to maximize the benefits of both providers and clients, this paper proposes a cloud pricing system consisting of a hierarchical system, an M/M/c queueing model, and a pricing model. Simulation results verify the efficiency of the proposed system.
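The M/M/c queueing component can be illustrated with the standard Erlang C formula for the probability that a request waits, and the resulting mean queueing delay; this is the textbook model only, not a reproduction of the paper's pricing scheme.

```python
from math import factorial

def erlang_c(arrival, service, c):
    """Probability that an arriving request must wait in an M/M/c queue.

    arrival: arrival rate (lambda), service: per-server service rate (mu),
    c: number of servers. Requires utilization lambda / (c * mu) < 1.
    """
    a = arrival / service          # offered load in Erlangs
    rho = a / c                    # per-server utilization
    assert rho < 1, "queue is unstable"
    summ = sum(a**k / factorial(k) for k in range(c))
    tail = a**c / (factorial(c) * (1 - rho))
    return tail / (summ + tail)

def mean_wait(arrival, service, c):
    """Expected time in queue (excluding service): Erlang C / (c*mu - lambda)."""
    return erlang_c(arrival, service, c) / (c * service - arrival)
```

A pricing model can then, for instance, penalize configurations whose `mean_wait` violates the SLA's delay target; that linkage is our own illustrative assumption.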
Cloud Based Architecture for Enabling Intuitive Decision Making
Brian Xu, S. Kumar, Manonmani Kumar
Pub Date : 2013-06-28, DOI: 10.1109/SERVICES.2013.11
To address the current need for technologies that blend the rapid data processing capabilities of computers with the intuitive decision making skills of humans, we have developed a prototype of Cloud Enabled Brain Computer Interface (CEB) decision making technologies. The implemented architecture integrates cloud-enabled big data analytics capabilities, networked BCI (Brain Computer Interface) devices, and a Decision Making Engine. The CEB technology comprises (1) cloud-enabled BCI headsets, developed and networked in a cloud to enable rapid decision making, and (2) a genetic-algorithm-based decision making engine that intelligently assists users in decision making. An advantage of our architecture is that when CEB loads data, it automatically recommends the best applicable Machine Learning (ML) algorithms, after evaluation, for solving a given problem; such automated machine learning significantly reduces the CEB users' workload. Our experiments on a DARPA dataset indicate that the CEB technologies performed 10 times faster, with about a 4 times lower false-negative rate, than current computational methods in seeking and understanding information. Our results demonstrate that these CEB technologies would enable humans to accurately and quickly detect meaningful information in massive amounts of data, and that reduced manpower does not result in reduced performance.
Towards a Goal Driven Task Personalization Specification Framework
George Chatzikonstantinou, Michael Athanasopoulos, K. Kontogiannis
Pub Date : 2013-06-28, DOI: 10.1109/SERVICES.2013.75
Since its inception, Service Orientation has allowed distributed clients to invoke remote operations using standardized protocols, programming paradigms, and architectures. The problem of compiling complex service compositions based on contextual information and user preferences has also been extensively investigated by the research community. However, these techniques are mostly used within a single service domain, or within coupled domains that utilize predefined orchestration and composition service flows. In this paper, we propose an approach whereby service providers can specify complex service tasks as collections of goal model templates that can be instantiated and customized by the invoking clients. A reasoning process evaluates whether instantiated goals can be fulfilled based on the clients' selections and consequently generates service flows that are compliant with the goal model and the clients' preferences. The major difference from existing context-aware service computing frameworks is the introduction of a reasoning process that allows evaluating various, possibly synergetic, client goals and the on-time initiation and enactment of goal-compliant service compositions. A proof-of-concept prototype has been implemented using SOA technologies for service invocation and flow control.
Storing, Indexing and Querying Large Provenance Data Sets as RDF Graphs in Apache HBase
Artem Chebotko, John Abraham, P. Brazier, Anthony Piazza, A. Kashlev, Shiyong Lu
Pub Date : 2013-06-28, DOI: 10.1109/SERVICES.2013.32
Provenance, which records the history of an in-silico experiment, has been identified as an important requirement for scientific workflows to support reproducibility of scientific discoveries, result interpretation, and problem diagnosis. Large provenance datasets are composed of many smaller provenance graphs, each corresponding to a single workflow execution. In this work, we explore and address the challenge of efficient and scalable storage and querying of large collections of provenance graphs serialized as RDF graphs in an Apache HBase database. Specifically, we propose: (i) novel storage and indexing techniques for RDF data in HBase that are better suited to provenance datasets than to generic RDF graphs, and (ii) novel SPARQL query evaluation algorithms that rely solely on indices to compute expensive join operations, use numeric values representing triple positions rather than actual triples, and eliminate the need for intermediate data transfers over a network. An empirical evaluation of our algorithms using the provenance datasets and queries of the University of Texas Provenance Benchmark confirms that our approach is efficient and scalable.
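The idea of computing joins over numeric triple positions rather than materialized triples can be shown in miniature: store each triple once, index terms to position sets, and evaluate a join as set intersection on positions. The index layout below is our own sketch, not the paper's HBase schema.

```python
# Toy provenance triples (subject, predicate, object).
triples = [
    ("run1", "used", "fileA"),
    ("run1", "wasAssociatedWith", "alice"),
    ("run2", "used", "fileA"),
]

def build_index(triples, slot):
    """Map each term in position `slot` (0=subject, 1=predicate, 2=object)
    to the set of numeric triple positions where it occurs."""
    index = {}
    for pos, t in enumerate(triples):
        index.setdefault(t[slot], set()).add(pos)
    return index

subj_idx = build_index(triples, 0)
pred_idx = build_index(triples, 1)
obj_idx = build_index(triples, 2)

# Join the patterns "?run used fileA" and "?run wasAssociatedWith alice"
# by intersecting position sets instead of comparing full triples.
used_fileA = pred_idx["used"] & obj_idx["fileA"]
runs = {triples[p][0] for p in used_fileA}
assoc_alice = pred_idx["wasAssociatedWith"] & obj_idx["alice"]
answer = runs & {triples[p][0] for p in assoc_alice}
print(answer)  # → {'run1'}
```

The integer positions stand in for the compact triple identifiers the paper uses, which keeps join intermediates small and local to the index.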
Parallel Matrix Multiplication Algorithm Based on Vector Linear Combination Using MapReduce
Jianhua Zheng, Liang-Jie Zhang, Rong Zhu, Ke Ning, Dong Liu
Pub Date : 2013-06-28, DOI: 10.1109/SERVICES.2013.67
Matrix multiplication is used in a variety of applications and requires substantial computation time, especially for large-scale matrices, so parallel processing is a natural choice for this operation. To overcome the inefficiencies of existing algorithms for parallel matrix multiplication, a matrix multiplication scheme based on vector linear combination (VLC) is presented. The VLC scheme splits the matrix multiplication procedure into two steps: the first obtains weighted vectors by scalar multiplication, and the second computes the final result as a linear combination of the weighted vectors with identical row numbers. We present parallel matrix multiplication implementations using MapReduce (MR) based on the VLC scheme and explain the MR job in detail. The map method receives the matrix input and generates intermediate (key, value) pairs according to the VLC scheme; the reduce method performs the scalar multiplications and vector summation, and outputs the result as row vectors. We then give a theoretical performance analysis and experimental results comparing our algorithm with others; the presented algorithm requires less computation time than the alternatives. Finally, we conclude the paper and propose future work.
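The two-step VLC idea rests on the identity that row i of C = A·B is the linear combination Σ_k A[i][k]·(row k of B), so each reduce task can assemble one output row independently. Below is a single-process imitation of that map/reduce flow; the exact (key, value) layout is assumed for illustration and is not the paper's job specification.

```python
from collections import defaultdict

def vlc_multiply(A, B):
    """Multiply matrices (lists of lists) via vector linear combination."""
    # "map": for each nonzero A[i][k], emit (key=i, value=(k, A[i][k]))
    grouped = defaultdict(list)
    for i, row in enumerate(A):
        for k, a_ik in enumerate(row):
            if a_ik:
                grouped[i].append((k, a_ik))
    # "reduce": row i of C is the weighted sum of the rows of B
    n_cols = len(B[0])
    C = [[0] * n_cols for _ in A]
    for i, pairs in grouped.items():
        for k, a_ik in pairs:           # scalar multiplication step
            for j in range(n_cols):     # linear combination step
                C[i][j] += a_ik * B[k][j]
    return C
```

Because each output row depends only on one key's group, the reduce phase parallelizes cleanly across rows, which is the property the MR implementation exploits.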
Towards a Forensic-Based Service Oriented Architecture Framework for Auditing of Cloud Logs
Sean S. E. Thorpe, Tyrone Grandison, Arnett Campbell, Janet Williams, K. Burrell, I. Ray
Pub Date : 2013-06-28, DOI: 10.1109/SERVICES.2013.76
Cloud computing log investigations concern the investigation of a potential crime using digital forensic evidence from a virtual machine (VM) host operating system, drawing on hypervisor event logs. In cloud digital log forensics, work on the forensic reconstruction of evidence on VM host systems is required; but given the heterogeneous complexity of an enterprise's private cloud, not to mention distributed public cloud environments, a Web Services-centric approach may be required for such log-supported investigations. A cloud log forensics service-oriented architecture (SOA) audit framework for this type of forensic examination needs to allow the reconstruction of transactions spanning multiple VM hosts, platforms, and applications. This paper explores the requirements of a cloud log forensics SOA framework for performing effective digital investigations in these abstract web services environments. Such a framework will be necessary for developing investigative and forensic auditing tools and techniques for use in cloud-based, log-centric SOAs.