Zhen Tang, Heng Wu, Lei Sun, Zhongshan Ren, Wei Wang, Wei Zhou, Liang Yang
Flash-based Solid State Disk (SSD) is widely used in the Internet-based virtual computing environment, usually as a cache for hard disk drive-based virtual machine (VM) storage. Existing SSD caching schemes mainly treat VMs as independent units and focus on critical performance metrics of a single VM, such as IO latency, throughput, or cache miss rate. However, in the Internet-based virtual computing environment, one transactional application usually consists of multiple VMs on different hypervisors. Transaction-aware SSD caching schemes may therefore better improve the end user-perceived quality of service. The key insight is to utilize the relationships among VMs inside a transactional application to better guide the allocation of the SSD cache, which helps learn the pattern of workload changes and build adaptive SSD caching schemes. To this end, we propose Transaction-Aware SSD caching (TA-SSD), which takes the characteristics of transactions into consideration, uses closed-loop adaptation to react to changing workloads, and introduces a genetic algorithm to enable nearly optimal planning. The evaluation shows that, compared to an equally partitioned cache, the allocation produced by TA-SSD can boost performance by up to 40% under dynamic changes in the intensity and type of the workload.
{"title":"Transaction-aware SSD Cache Allocation for the Virtualization Environment","authors":"Zhen Tang, Heng Wu, Lei Sun, Zhongshan Ren, Wei Wang, Wei Zhou, Liang Yang","doi":"10.1109/SOSE.2018.00029","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00029","url":null,"abstract":"Flash-based Solid State Disk (SSD) is widely used in the Internet-based virtual computing environment, usually as cache of the hard disk drive-based virtual machine (VM) storage. Existing SSD caching schemes mainly treat the VMs as independent units and focus on critical performance metrics concerning one single VM, such as the IO latency, throughput, or the cache miss rate. However, in the Internet-based virtual computing environment, one transactional application usually consists of multiple VMs on different hypervisors. Transaction-aware SSD caching schemes may potentially better improve the end user-perceived quality of service. The key insight here is to utilize the relationships among VMs inside the transactional application to better guide the allocation of the SSD cache, so as to help learn the pattern of workload changes and build adaptive SSD caching schemes. To this end, we propose the Transaction-Aware SSD caching (TA-SSD), which takes the characteristics of transactions into consideration, uses closed loop adaptation to react to changing workload, and introduces the genetic algorithm to enable nearly optimal planning. The evaluation shows that comparing to the equally partitioned cache, the allocation produced by the TA-SSD can boost the performance by up to 40%, with dynamic changes in the intensity and the type of the workload.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133861627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pablo Morales-Ferreira, Miguel Santiago-Duran, Cristopher Gaytan-Diaz, J. L. González, Víctor Jesús Sosa Sosa, I. Lopez-Arevalo
Information dispersal is a fault-tolerant technique where files of size |F| are split into n redundant pieces of size |F|/k that are dispersed to different servers, and any k pieces suffice for recovering the original file whenever k<n. This technique is a popular solution for service providers to withstand server failures and to improve storage utilization. However, the coding/decoding service time produced by this technique, as well as the management of pieces of heterogeneous size that belong to different files, both represent a challenge for the deployment of this technique on clouds and clusters. This paper presents the design and development of a data distribution service for fault-tolerant cloud/cluster storage. This service includes an information dispersal client for coding/decoding files in-memory, which improves the service experience of end-users when delivering/retrieving files to/from cloud storage services. It also includes a data placement method to allocate, locate, and manage redundant pieces of heterogeneous size in a uniform manner, which produces load balancing in the storage nodes. A prototype of this service was implemented in a private cloud and a containerized cluster. An experimental evaluation based on synthetic traces and a case study based on satellite images revealed that the service prototype preserved a balanced load even when managing pieces of heterogeneous size and that, when performing coding/decoding in-memory, the service experience of end-users was improved in comparison with the traditional solutions tested.
{"title":"A Data Distribution Service for Cloud and Containerized Storage Based on Information Dispersal","authors":"Pablo Morales-Ferreira, Miguel Santiago-Duran, Cristopher Gaytan-Diaz, J. L. González, Víctor Jesús Sosa Sosa, I. Lopez-Arevalo","doi":"10.1109/SOSE.2018.00020","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00020","url":null,"abstract":"Information dispersal is a fault-tolerant technique where files of size |F| are split into n redundant pieces of size |F|/k that are dispersed to different servers where k pieces suffice for recovering the original file whenever k<n. This technique is a popular solution for service providers to withstand server failures and to improve the storage utilization. However, the coding/decoding service time produced by this technique as well as the management of pieces of heterogeneous size, that belong to different files, represent both a challenge for the deployment of this technique on clouds and clusters. This paper presents the design and development of a data distribution service for fault-tolerant cloud/cluster storage. This service includes an information dispersal client for coding/decoding files in-memory, which improves the service experience of end-users when delivering/retrieving files to/from cloud storage services. It also includes a data placement method to allocate, locate and manage redundant pieces of heterogeneous size in a uniform manner, which produces load balancing in the storage nodes. A prototype of this service was implemented in a private cloud and containerized cluster. An experimental evaluation based on synthetic traces and a case study based on satellite images revealed that the service prototype preserved a balanced load even in scenarios when managing pieces of heterogeneous size and that, when performing coding/decoding in-memory, the service experience of end-users was improved in comparison with tested traditional solutions.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124073875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The key management service (KMS) has become a fundamental component of cloud computing. To enforce security, existing clouds usually deploy a centralized KMS protected by specialized hardware, i.e., a hardware security module (HSM), which is exclusively controlled by the cloud provider. Joint cloud computing (JointCloud) is a new architecture of cloud computing that makes the best use of the advantages of different clouds. However, in JointCloud, different cloud providers have their respective KMSs, so it is impossible for one user's applications in different clouds to share the same key across different KMSs. The key stored in a KMS becomes unreachable after the application is migrated to a new cloud, which makes the encrypted data unusable. To address these problems, we introduce TZ-KMS, which provides a trusted distributed key management service using ARM TrustZone technology. We locate a TZ-KMS instance in the secure world (a trusted execution environment provided by ARM TrustZone) of each machine, and the instance handles requests from the user application. A distributed key management method is further provided to synchronize user keys among different TZ-KMS instances. TZ-KMS allows one user's applications, located in different clouds, to share the same key management service securely. User keys remain reachable after the application is migrated to a new cloud. We have implemented a prototype of TZ-KMS, and the evaluation shows that our system achieves good performance and scalability.
{"title":"TZ-KMS: A Secure Key Management Service for Joint Cloud Computing with ARM TrustZone","authors":"Shiyu Luo, Zhichao Hua, Yubin Xia","doi":"10.1109/SOSE.2018.00030","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00030","url":null,"abstract":"The key management service (KMS) has become a fundamental component of cloud computing. For enforce security, existing clouds usually deploy a centralized KMS protected by specialized hardware, i.e., hardware security module (HSM), which is exclusively controlled by the cloud provider. Joint cloud computing (JointCloud) is a new architecture of cloud computing, which makes the best use of the advantage of different clouds. However, in JointCloud, different cloud providers have their respective KMS. Thus it is impossible for one user’s different applications in different clouds to share the same key in different KMS. The key stored in KMS will be unreachable after the application is migrated to a new cloud, which makes the encrypted data being unusable. To address these problems, we introduce TZ-KMS which provides a trusted distributed key management service with ARM TrustZone technology. We locate a TZ-KMS instance in the secure world (a trusted execution environment provided by ARM TrustZone) of each machine, and the instance handles requests from the user application. A distributed key management method is further provided to synchronize user keys among different TZ-KMS instances. TZ-KMS allows one user’s applications, located in different clouds, to share the same key management service securely. User keys are still reachable after the application is migrated to a new cloud. We have implemented a prototype of TZ-KMS, and the evaluation shows that our system has a good performance and scalability.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125535399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the development of cloud computing, more and more enterprises are building their own clusters to deploy various types of distributed systems on the public cloud to satisfy their growing business needs. In the process of migrating their business to the cloud, enterprises face two problems. The first is that renting virtual machines on the public cloud is a complicated process: users need to understand and select a variety of parameters, and the parameters of different public clouds are not the same. The second is that the deployment and scale-out of distributed systems remain complex for inexperienced users. To address these problems, this paper designs and implements a method for automatically deploying and scaling out Docklet, a typical distributed system, on the cloud from scratch. Finally, we present several examples to show the effectiveness of the method.
{"title":"A Case of Automatically Deploying and Scaling Out Distributed Systems on the Cloud from Scratch","authors":"Yehong Zhong, Junming Ma, Bo An, Donggang Cao","doi":"10.1109/SOSE.2018.00039","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00039","url":null,"abstract":"With the development of cloud computing, more and more enterprises are building their own cluster to deploy various types of distributed systems on the public cloud to satisfy their growing business need. In the process of migrating the business to the cloud, the enterprise faces two problems. The one is that the rental of virtual machines on the public cloud is a complicated process. Users need to understand and select a variety of parameters while the parameters of different public clouds are not the same. The other is that deployment and scale-out of distributed systems remain complex for inexperienced users. To address the above problem, this paper designs and implements a method for automatically deploying and scaling out Docklet, which is a typical distributed system, on the cloud from scratch. Finally, we present several examples to show the effectiveness.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132441069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
F. Meng, Jingmin Xu, Xiao Zhang, L. Yang, Pengfei Chen, Y. Wang, Xiaoxi Liu, Naga Ayachitula, K. Murthy, L. Shwartz, George M. Galambos, Zhuo Su, Jun Zheng
More and more industries are experiencing digital disruption triggered by new technologies such as cloud, mobile, Internet of Things, Big Data, and Artificial Intelligence. The majority of applications are predicted to provide cognitive capabilities that amplify human skills and expertise within the coming two years. The Information Technology (IT) services industry is also shifting from a people-led and technology-assisted model to a people-assisted and technology-led model. However, ever-changing IT technologies, increasingly complicated IT environments, and ever-shortening IT delivery cycles in the real world pose great challenges to existing IT Service Management (ITSM) technologies. This paper aims to discuss the trends, opportunities, and challenges in the transformation of real-world ITSM in the cognitive era and to trigger more practical research work in this area. It first reviews the evolution of ITSM and discusses the key technologies behind this evolution. It then summarizes the opportunities and challenges in transforming ITSM with cognitive capabilities in the real world. Further, we discuss key enabling technologies that drive the evolution of ITSM towards a cognitive one. Finally, we conclude the paper and envision real-world best practices in this area.
{"title":"Opportunities and Challenges Towards Cognitive IT Service Management in Real World","authors":"F. Meng, Jingmin Xu, Xiao Zhang, L. Yang, Pengfei Chen, Y. Wang, Xiaoxi Liu, Naga Ayachitula, K. Murthy, L. Shwartz, George M. Galambos, Zhuo Su, Jun Zheng","doi":"10.1109/SOSE.2018.00028","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00028","url":null,"abstract":"More and more industries are experiencing digital disruption triggered by new technologies for example cloud, mobile, Internet-of-Things, Big Data, and Artificial Intelligence. Majority of applications are predicted to provide cognitive capabilities to amplify human skills and expertise in coming two years. Information Technology (IT) services industry is also shifting from people-led and technology-assisted model into a people-assisted and technology-led model. However, the ever-changing IT technologies, increasingly complicated IT environments, and ever-shortening IT delivery cycles in real world pose great challenges to existing IT Service Management (ITSM) technologies. This paper aims to discuss the trends, opportunities, and challenges in transformation of real-world ITSM in cognitive era and to trigger more practical research work in this exploited area. It firstly reviews the evolution of ITSM and discusses key technologies behind the evolution. Then, it summarizes opportunities and challenges in transforming ITSM with cognitive capabilities in real word. Further, we discuss key enabling technologies to drive the evolution of ITSM towards Cognitive one. Finally, we conclude the paper and envision real-world best practices in this area.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133175306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thiago Garrett, S. Dustdar, L. C. E. Bona, E. P. Duarte
The Internet of Things (IoT) is expected to constitute a significant portion of the Internet in the future, both in terms of traffic and of market share. For it to achieve its full potential, innovative solutions are necessary to address several open challenges. In this context, we discuss Network Neutrality, which states that all traffic in the Internet must be treated equally, i.e., without traffic differentiation (TD). Unfair traffic management may result in a non-competitive market, selectively affecting the quality of experience of different IoT applications. This scenario might hinder innovation, threatening the success of the IoT. Monitoring TD on the IoT is thus important for a more competitive market. In this paper, we first study the impact of TD on common IoT traffic patterns, such as periodic updates and real-time notifications. We present simulation results and discuss which types of IoT applications are most affected by TD. We then discuss a solution for monitoring TD on the IoT. The solution takes advantage of the IoT itself to address several open challenges of TD detection; for instance, the large number of devices results in a prolific environment for making TD-related measurements. The solution can thus employ machine learning to continuously monitor TD as the numerous IoT devices and applications communicate.
{"title":"Traffic Differentiation on Internet of Things","authors":"Thiago Garrett, S. Dustdar, L. C. E. Bona, E. P. Duarte","doi":"10.1109/SOSE.2018.00026","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00026","url":null,"abstract":"The Internet of Things (IoT) is expected to constitute a significant portion of the Internet in the future, both in terms of traffic, and market share. For it to achieve its full potential, innovative solutions are necessary to address several open challenges. In this context we discuss Network Neutrality, which states that all traffic in the Internet must be treated equally, i.e., without traffic differentiation (TD). Unfair traffic management may result in a non-competitive market, affecting selectively the quality of experience of different IoT applications. This scenario might hinder innovation, threatening IoT success. Monitoring TD on the IoT is thus important for a more competitive market. In this paper, we first study the impact of TD on common IoT traffic patterns, such as periodic updates and real-time notifications. We present simulation results, and discuss which types of IoT applications are most affected by TD. We then discuss a solution for monitoring TD on IoT. The solution takes advantage of the IoT to address several open challenges of TD detection. For instance, the large amount of devices results in a prolific environment for making TD-related measurements. The solution can thus employ machine learning for continuously monitoring TD as the numerous IoT devices and applications communicate.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124675841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the growing popularity of the Service-Oriented Computing (SOC) architecture, the number of Web services on the Internet is increasing rapidly. When faced with a large number of candidate services with similar functionalities, personalized Web service recommendation becomes an important issue. Quality-of-Service (QoS) is usually used to characterize the non-functional properties of Web services, so accurate QoS prediction is an important step in service recommendation. In this paper, we propose a Cluster Feature based Latent Factor Model (CFLFM) for QoS prediction. First, we cluster users and services into several groups based on historical records. We assume that users or services in the same cluster share some latent features, and by incorporating this kind of information, we design an integrated latent factor model. Finally, we conduct comprehensive experiments on a real-world Web service dataset. The experimental results show that our approach achieves higher QoS prediction accuracy than competing approaches.
{"title":"A Cluster Feature Based Approach for QoS Prediction in Web Service Recommendation","authors":"Shuhong Chen, Yuxing Peng, Haibo Mi, Changjian Wang, Zhen Huang","doi":"10.1109/SOSE.2018.00041","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00041","url":null,"abstract":"With the growing popularity of Service-Oriented-Computing (SOC) architecture, the number of Web services on the internet is increasing rapidly. When faced with a large number of candidate services with similar functionalities, personalized Web service recommendation is becoming an important issue. Quality-of-Service (QoS) is usually used to characterize the non-functional properties of Web services. Thus accurate QoS prediction is an important step in the service recommendation. In this paper, we propose a Cluster Feature based Latent Factor Model (CFLFM) for QoS prediction. First, we cluster users and services into several groups based on history records, respectively. We assume that users or services in the same cluster share some latent features. By incorporating this kind of information, we design an integrated latent factor model. Finally, we conduct comprehensive experiments on a real-world Web service dataset. The experimental results show that our approach can achieve higher QoS prediction accuracy than other competing approaches.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125470550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software aging is a chronic process that stays hidden from system monitoring until a system failure occurs. Aging-related failures (ARFs) are the result of a variety of complex factors, so precisely predicting ARFs for a running software system is a challenging problem. Previous studies typically predict ARFs by predicting the time to resource exhaustion (TTE), adopting resource data as aging indicators and predicting when the resource data will reach a preset threshold. However, the practical effect of prior approaches is far from satisfactory due to the lack of effective aging indicators and the difficulty of setting accurate thresholds. In this paper, we propose a hybrid approach that combines models and measurements to construct a probabilistic aging indicator. This indicator is multifactorial and more effective than traditional ones. Moreover, the hybrid approach is threshold-free in ARF prediction. We evaluate the hybrid approach on a data caching system and a media streaming system; the results show that it achieves high precision and recall for ARF prediction. Compared to previous approaches, our approach increases prediction precision and recall significantly.
{"title":"A Hybrid Approach for Predicting Aging-Related Failures of Software Systems","authors":"Jingwei Li, Yong Qi, Lin Cai","doi":"10.1109/SOSE.2018.00021","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00021","url":null,"abstract":"Software aging is a chronic process that is hidden under system monitoring until a system failure occurs. Aging related failures (ARFs) are the result of a variety of complex factors. Therefore, how to precisely predict the ARFs for a running software system is a challenge problem. Previous studies typically predict ARFs by means of predicting the time to resource exhaustion (TTE), which adopts resource data as aging indicators to predict when the resource data achieve the preset threshold. However, the practical effect of prior approaches are far from satisfactory due to lack of effective aging indicators and difficult to set accurate threshold. In this paper, we propose a hybrid approach, which combines model and measurements to construct a probabilistic aging indicator. The aging indicator is a multifactorial aging indicator that is more effective than traditional ones. Moreover, the hybrid approach is threshold-free in ARFs prediction. We evaluate the hybrid approach in Data caching system and Media streaming system, the results show that the hybrid approach can achieve high precision and recall for ARFs prediction. Compared to previous approaches, our approach increases the prediction precision and recall significantly.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132018347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet of Vehicles (IoV) has recently attracted substantial research attention, and IoV services that collect and process data and further provision services are increasingly becoming the mainstream. For processing efficiency, geo-distributed data is typically collected and exploited on different clouds, making it essential for an IoV application to be deployed on multiple clouds while its system components still function well and work jointly. In this paper, we provide a scalable IoV system deployment in the joint cloud environment, where cloud vendors cooperate as an alliance. In particular, system components are independently deployed in accordance with data placement, resource capacities, and other constraints. A multi-replication mechanism is utilized to achieve cross-cloud parallel processing, thereby effectively handling the scalability issues of massive-scale vehicle data processing. Furthermore, we adopt multi-source data fusion to improve the accuracy of IoV data analytics. We demonstrate the effectiveness of the proposed approaches through real-world use cases, including fleet distribution management and passenger demand prediction.
{"title":"A Scalable lnternet-of-Vehicles Service over Joint Clouds","authors":"Yong Zhang, Mingming Zhang, Tianyu Wo, Xuelian Lin, Renyu Yang, Jie Xu","doi":"10.1109/SOSE.2018.00035","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00035","url":null,"abstract":"Since the Internet of Vehicles (IoV) technology has recently attracted huge research attention, IoV services that can collect, process data and further provision services are increasingly becoming the mainstream. Considering the process efficiency, geo-distributed data is typically collected and exploited on different Clouds, making it significantly essential for IoV application to be deployed on multiple Clouds whilst system components still function well and jointly work. In this paper, we provide a scalable IoV system deployment in the joint Cloud environment where cloud vendors collaboratively cooperate as an alliance. In particular, system components are independently deployed in accordance with the data placement and resource capacities etc. A multi-replication mechanism is utilized to achieve the cross-cloud parallel processing, thereby effectively handling the scalability issues in the massive-scale vehicle data processing. Furthermore, we adopt the multi-source data fusion to facilitate the accuracy of IoV data analytics. We demonstrate the effectiveness of the proposed approaches through real-world use cases including fleet distribution management and passenger demands prediction.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127347897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the hybrid cloud, multiple private and public clouds usually communicate through Wide Area Networks (WANs), suffering from high latency and low bandwidth for inter-cloud data transmission. While existing distributed file systems (DFSs) are widely used within a single cloud, they may introduce significant I/O delay in the hybrid cloud where, based on our investigation, applications are sensitive to latency but rarely rely on strictly consistent storage. However, existing DFSs are mainly designed with strong consistency semantics. To address this problem, we implement HCFS2 (Hybrid Cloud File Storage Service). We reuse some components of MooseFS while weakening its consistency semantics. HCFS2 has three main features: (1) it generates file update digests by intercepting and parsing client I/O operations in userspace, and leverages a gossip protocol to distribute file update digests among geographically distributed servers; (2) it maintains weak consistency among storage servers in the hybrid cloud, using a two-level consistency setup to ensure local consistency and global consistency, respectively; (3) it maintains three log queues to simplify log management and uses different task queues to parallelize different processing stages of synchronization, thus improving concurrent operation performance. Experimental results show that HCFS2 achieves good performance in the hybrid cloud.
{"title":"HCFS2: A File Storage Service with Weak Consistency in the Hybrid Cloud","authors":"Jie Sun, Chunming Hu, Tianyu Wo, L. Du, Song Yang","doi":"10.1109/SOSE.2018.00038","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00038","url":null,"abstract":"In the hybrid cloud, multiple private and public clouds usually communicate through Wide Area Networks (WAN), suffering from high latency and low bandwidth for inter-cloud data transmission. While existing DFSs are widely used in a single cloud, they may bring significant I/O delay in the hybrid cloud where, based on our investigation, applications are sensitive to latency but barely rely on strictly consistent storage. However, existing DFSs are mainly designed with strong consistency semantics. To address this problem, we implement HCFS2 (Hybrid Cloud File Storage Service). We reuse some components of MooseFS while weakening its consistency semantics. HCFS2 holds three main features: (1) It generates file update digests through intercepting and parsing client I/O operations in userspace, and leverages gossip protocol to distribute file update digests among geographically distributed servers. (2) It maintains weak consistency among storage servers in the hybrid cloud, and it uses a two-level consistency setup to ensure local consistency and global consistency, respectively. (3) It maintains three Log queues to simplify the Log management and uses different task queues to parallelize different processing stages for synchronization, thus improving its concurrent operation performance. Experiment results show that HCFS2 achieves good performance in the hybrid cloud.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123956770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}