
Latest publications from the 2016 IEEE International Conference on Services Computing (SCC)

A Hybrid Process-Data Model to Avoid Data Conflicting in BPMN
Pub Date : 2016-06-01 DOI: 10.1109/SCC.2016.119
Rongheng Lin, Budan Wu, Hua Zou, Naiwang Guo
Typically, a BPMN designer only needs to consider the business process without knowing the details of the invoked services, which simplifies the design procedure. However, in some data-centric workflow scenarios, if the designer does not know the data model of an invoked service, BPMN workflow execution will be inefficient due to data conflicts. BPMN lacks dynamic data-modeling capability, which means data conflicts may occur in the designed workflow. To solve this problem, this paper introduces a hybrid model combining process and data, called the process-data (PD) model. The PD model defines several data-conflict scenarios and transforms the conflict problem into a parallel-collection construction problem. A novel collection-generating method is introduced for parallel collection creation. Based on the method's output, users can find ways to resolve data conflicts and improve workflow performance.
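To make the conflict idea above concrete, here is a minimal, hypothetical sketch (not the paper's PD model) in which each parallel task declares the data objects it reads and writes, conflicts are flagged when parallel tasks touch the same object, and non-conflicting tasks are grouped into parallel collections; the task names and the grouping heuristic are illustrative assumptions:

# Hypothetical sketch of write/read conflict detection between parallel BPMN tasks.
from itertools import combinations

def find_conflicts(tasks):
    """tasks: list of (name, reads, writes) tuples under one parallel gateway."""
    conflicts = []
    for (a, ra, wa), (b, rb, wb) in combinations(tasks, 2):
        # two parallel tasks conflict if one writes data the other reads or writes
        shared = (wa & (rb | wb)) | (wb & ra)
        if shared:
            conflicts.append((a, b, sorted(shared)))
    return conflicts

def build_parallel_collections(tasks):
    """Greedily place tasks into collections whose members do not conflict."""
    collections = []
    for task in tasks:
        for group in collections:
            if not find_conflicts(group + [task]):
                group.append(task)
                break
        else:
            collections.append([task])
    return collections

tasks = [("UpdateOrder", {"order"}, {"order"}),
         ("NotifyCustomer", {"order"}, set()),
         ("LogAudit", set(), {"audit"})]
print(find_conflicts(tasks))
print([[name for name, _, _ in group] for group in build_parallel_collections(tasks)])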
Citations: 0
Description and Evaluation of Elasticity Strategies for Business Processes in the Cloud
Pub Date : 2016-06-01 DOI: 10.1109/SCC.2016.34
A. Jrad, Sami Bhiri, S. Tata
More and more companies are currently migrating business processes to the Cloud in order to handle customer service in an efficient and cost-effective way. Cloud Computing's elasticity and flexibility in service delivery make it an ideal solution for companies facing highly variable service demands and an uncertain financial environment, allowing them to ensure the required QoS while reducing resource usage and expenses. Elasticity management is receiving a lot of attention from the IT community as a pivotal issue, and novel methods and mechanisms are being developed to find the right tradeoffs between QoS levels and operational costs. However, controlling business process elasticity and defining non-trivial elasticity strategies remain challenging. In this paper, we propose an elasticity strategy description language called Strat. It is defined as an extensible Domain-Specific Language that allows business process holders to describe elasticity strategies, which are then evaluated using our formal evaluation framework. Given a usage behavior and a business process, the evaluation produces a set of plots that allow the analysis and comparison of strategies. Our contributions and developments give Cloud tenants the facilities to choose elasticity strategies that fit their business processes and usage behaviors.
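Since the abstract does not reproduce Strat's syntax, the following is only a loose, assumed illustration of what an elasticity strategy and its evaluation could look like: threshold rules replayed against a simulated load trace, producing the per-step instance counts that such comparison plots would show (the metric names, thresholds, and trace are made up):

# Assumed illustration of evaluating a threshold-based elasticity strategy.
from dataclasses import dataclass

@dataclass
class Rule:
    metric: str       # e.g. "queue_length"
    threshold: float
    action: str       # "scale_out" or "scale_in"

def evaluate(strategy, trace, instances=1):
    """Replay a load trace and record the instance count after each step."""
    history = []
    for point in trace:
        for rule in strategy:
            load_per_instance = point[rule.metric] / max(instances, 1)
            if rule.action == "scale_out" and load_per_instance > rule.threshold:
                instances += 1
            elif rule.action == "scale_in" and load_per_instance < rule.threshold:
                instances = max(1, instances - 1)
        history.append(instances)
    return history

strategy = [Rule("queue_length", 50, "scale_out"), Rule("queue_length", 10, "scale_in")]
trace = [{"queue_length": q} for q in (20, 80, 120, 90, 30, 5)]
print(evaluate(strategy, trace))   # [1, 2, 3, 3, 3, 2]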
Citations: 6
How to Distribute the Detection Load among Virtual Machines to Maximize the Detection of Distributed Attacks in the Cloud?
Pub Date : 2016-06-01 DOI: 10.1109/SCC.2016.48
O. A. Wahab, J. Bentahar, H. Otrok, A. Mourad
Security has been identified as the principal stumbling block preventing users and enterprises from moving their businesses to the cloud. The reason is that cloud systems, besides inheriting all the vulnerabilities of traditional computing systems, are exposed to new types of threats engendered mainly by the virtualization concept, which allows multiple users' virtual machines (VMs) to share a common computing platform. This broadens the attack space of malicious users and increases their ability to attack both the cloud system and other co-resident VMs. Motivated by the absence of any approach that addresses the problem of optimal detection load distribution in the domain of cloud computing, we develop a resource-aware maxmin game-theoretic model that guides the hypervisor on how the detection load should be optimally distributed among its guest VMs in real time. The objective is to maximize the hypervisor's probability of detection, knowing that the attacker divides the attack over several VMs to minimize this probability. Experimental results on the Amazon EC2 pricing dataset reveal that our model increases the probability of detecting distributed attacks, reduces false positives, and minimizes the resources wasted during the detection process.
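As a rough, simplified sketch of the allocation intuition (not the paper's game formulation or its equilibrium computation), the following greedily spreads a fixed detection budget so that the least-covered VM's coverage is maximized, assuming an attacker targets the least-monitored VMs; the loads, budget, and step size are illustrative:

# Simplified maxmin-style allocation of a detection budget across guest VMs.
def maxmin_allocate(vm_loads, budget, step=1.0):
    """vm_loads: traffic volume per VM; budget: total detection capacity."""
    alloc = {vm: 0.0 for vm in vm_loads}
    remaining = budget
    while remaining >= step:
        # give the next slice of capacity to the VM with the lowest coverage ratio
        worst = min(alloc, key=lambda vm: alloc[vm] / vm_loads[vm])
        alloc[worst] += step
        remaining -= step
    return alloc

vm_loads = {"vm1": 100.0, "vm2": 50.0, "vm3": 25.0}
allocation = maxmin_allocate(vm_loads, budget=70.0, step=5.0)
print(allocation)
print({vm: round(allocation[vm] / vm_loads[vm], 2) for vm in allocation})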
Citations: 16
An Optimization Approach to Services Sales Forecasting in a Multi-staged Sales Pipeline
Pub Date : 2016-06-01 DOI: 10.1109/SCC.2016.98
Aly Megahed, Peifeng Yin, H. M. Nezhad
Services organizations manage a pipeline of sales opportunities with variable enterprise sales engagement lifespans, maturity levels (belonging to progressive sales stages), and contract values at any given point in time. Accurate forecasting of contract signings by the end of a time period (e.g., a quarter) is a goal for many services organizations, as it provides an accurate projection of incoming revenues and supports delivery planning, resource allocation, budgeting, and effective sales opportunity management. While the problem of sales forecasting has been investigated in its generic context, sales forecasting for services organizations entails additional complexities that have not been thoroughly investigated: (i) considering opportunities in a multi-staged sales pipeline, which means providing stage-specific treatment of the sales opportunities in each group, and (ii) using information about the current pipeline build-up, as well as a projection of pipeline growth over the time remaining before the end of the target period, in order to make predictions. In this paper, we formulate this problem, considering the service-specific context, as a machine learning problem over a set of historical services sales data. We introduce a novel optimization approach for finding the optimized weights of a sales forecasting function. The objective of our optimization model minimizes the average error rate of sales predictions over historical data, based on two factors, conversion rates and growth factors, for any given point in time in a sales period. Our model also optimally determines the number of historical periods that should be used in the machine learning framework to predict future revenue. We have evaluated the presented method, and the results demonstrate superior performance (in terms of absolute and relative errors) compared to a baseline state-of-the-art method.
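A hedged sketch of the forecasting idea (not the authors' optimization model): expected end-of-period signings combine the current pipeline, weighted by per-stage conversion rates, with a growth factor for deals that have not yet entered the pipeline; the stage names, rates, and values below are purely illustrative:

# Illustrative stage-weighted signings forecast with a pipeline-growth term.
def forecast_signings(pipeline, conversion_rates, growth_factor):
    """pipeline: list of (stage, contract_value); conversion_rates: stage -> win probability."""
    weighted_pipeline = sum(conversion_rates[stage] * value for stage, value in pipeline)
    # growth_factor models signings from opportunities not yet in the pipeline,
    # expressed as a fraction of the current weighted pipeline
    return weighted_pipeline * (1.0 + growth_factor)

pipeline = [("qualify", 200_000), ("propose", 150_000), ("negotiate", 80_000)]
conversion_rates = {"qualify": 0.10, "propose": 0.35, "negotiate": 0.70}
print(forecast_signings(pipeline, conversion_rates, growth_factor=0.15))  # 147775.0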
Citations: 20
Service-Oriented Resource Management of Cloud Platforms
Pub Date : 2016-06-01 DOI: 10.1109/SCC.2016.63
Xing Hu, Rui Zhang, Qianxiang Wang
How to deploy more services while maintaining Quality of Service is one of the key challenges faced by the resource management of cloud platforms, especially for PaaS. Existing approaches focus mainly on cloud platforms that host a small number of applications and consider few features of the different applications. In this paper, we present SORM, a Service-Oriented Resource Management mechanism for cloud platforms. The core of SORM is a service feature model that captures the resource consumption and request variance of services. For each server, SORM deploys service instances with complementary resource consumption so as to improve resource utilization. SORM also divides servers into three pools and deploys service instances onto different pools, mainly based on their request-variance features, so as to reduce the computational overhead of resource management and keep cloud platforms stable. We evaluate the effectiveness and efficiency of SORM by simulation experiments and find that, compared with one existing approach, SORM can deploy 3.6 times more services at nearly 74.1% of the time cost.
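The complementary-placement idea can be pictured with a small, assumed sketch (not SORM's actual algorithm or its three-pool division): pair CPU-heavy instances with memory-heavy ones on the same server so that neither resource dimension saturates first; the service names and demands are made up:

# Assumed sketch of placing services with complementary CPU/memory demands.
def place_complementary(services, cpu_cap=1.0, mem_cap=1.0):
    """services: list of (name, cpu, mem) demands; returns per-server placements."""
    servers = []
    # sort so CPU-heavy services are placed first, leaving room for memory-heavy ones
    for name, cpu, mem in sorted(services, key=lambda s: s[1] - s[2], reverse=True):
        for server in servers:
            used_cpu = sum(s[1] for s in server)
            used_mem = sum(s[2] for s in server)
            if used_cpu + cpu <= cpu_cap and used_mem + mem <= mem_cap:
                server.append((name, cpu, mem))
                break
        else:
            servers.append([(name, cpu, mem)])
    return servers

services = [("search", 0.7, 0.2), ("cache", 0.2, 0.7),
            ("report", 0.5, 0.3), ("index", 0.3, 0.6)]
for i, server in enumerate(place_complementary(services)):
    print(i, [name for name, _, _ in server])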
Citations: 2
Enhancing User Control on Personal Data Usage in Internet of Things Ecosystems
Pub Date : 2016-06-01 DOI: 10.1109/SCC.2016.45
B. Carminati, Pietro Colombo, E. Ferrari, Gokhan Sagirlar
Internet of Things (IoT) services are improving our lives, supporting people in a variety of situations. However, due to the high volume of personal data they manage, they can be a serious threat to individuals' privacy. Users' data are commonly gathered by devices scattered across the IoT, each of which sees a portion of them. Combining different data may make it possible to infer users' sensitive information. The distributed nature and the complexity of the IoT scenario cause users to lose control over how their data are handled. In this paper, we start addressing this issue with a framework that empowers users to better control data management within IoT ecosystems. A novel privacy reference model allows users to state how their data can be processed and what cannot be inferred from them, and a dedicated mechanism allows enforcing them. Experimental results show the efficiency of the enforcement.
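As a loose illustration of such user statements (not the paper's reference model or its enforcement mechanism), a policy could list the purposes each data category may be processed for and the combinations of categories that must never be joined, with a request allowed only if it violates neither rule; the categories and purposes below are invented:

# Illustrative policy check: allowed purposes plus forbidden category combinations.
def is_allowed(policy, requested_categories, purpose):
    for category in requested_categories:
        if purpose not in policy["allowed_purposes"].get(category, set()):
            return False
    for forbidden_combo in policy["forbidden_combinations"]:
        # block requests that join categories whose combination enables unwanted inference
        if forbidden_combo <= set(requested_categories):
            return False
    return True

policy = {
    "allowed_purposes": {"heart_rate": {"health_alerts"}, "location": {"navigation"}},
    "forbidden_combinations": [{"heart_rate", "location"}],
}
print(is_allowed(policy, ["heart_rate"], "health_alerts"))           # True
print(is_allowed(policy, ["heart_rate", "location"], "navigation"))  # False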
Citations: 19
Architectural Models to Simplify Administration of Service-Oriented Applications
Pub Date : 2016-06-01 DOI: 10.1109/SCC.2016.41
P. Lalanda, Stéphanie Chollet, Catherine Hamon, V. Lestideau
Pervasive applications are often executed in fluctuating conditions and need frequent adaptations to meet their requirements. Autonomic computing techniques are frequently used to automate adaptation to changing execution conditions. However, some administration tasks still have to be performed by human administrators. Such tasks are very complex because of a lack of understanding of how the system evolves. In this paper, we propose to build and link runtime models of supervised applications in order to simplify the administrators' job. Our approach is illustrated on a health application called actimetrics, developed with Orange Labs.
Citations: 0
Cost-Effective Request Scheduling for Greening Cloud Data Centers
Pub Date : 2016-06-01 DOI: 10.1109/SCC.2016.14
Ying Chen, Chuang Lin, Jiwei Huang, Xuemin Shen
With the popularity of cloud computing, many cloud service providers deploy regional data centers to offer services and applications. These large-scale data centers have drawn extensive attention because of their huge energy demand and carbon emissions. Thus, how to make use of their spatial diversity to green data centers and reduce cloud providers' costs is an important concern. In this paper, we integrate service reward, electricity cost, carbon taxes, and service performance to study cost-effective request scheduling for cloud data centers. We propose an online and distributed scheduling algorithm, CESA, to achieve a flexible tradeoff between these conflicting objectives. The time complexity of CESA is polynomial, and it can be implemented in a parallel way. CESA requires no prior knowledge of the statistics of request arrivals or future electricity prices, yet it provably approximates the optimal system profit while bounding the queue length. Real-trace-based simulations verify the effectiveness of our CESA algorithm.
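Purely as a hedged, simplified illustration of an online, queue-aware decision of this flavor (the actual CESA algorithm, its profit model, and its guarantees are in the paper), a data center could serve its backlog in a slot only when queue pressure outweighs the current electricity-plus-carbon cost scaled by a tradeoff parameter V; every number below is invented:

# Simplified online decision: serve backlog when queue pressure beats the weighted cost.
def schedule_slot(queue_len, price, carbon_tax, reward, V, capacity):
    """Return how many queued requests to serve in this time slot."""
    cost_per_request = price + carbon_tax            # energy-plus-carbon cost proxy
    penalty = V * (cost_per_request - reward)        # weighted net cost of serving now
    return min(queue_len, capacity) if queue_len > penalty else 0

queue = 0
for arrivals, price in [(40, 0.9), (35, 0.4), (50, 1.2), (20, 0.3)]:
    queue += arrivals
    served = schedule_slot(queue, price, carbon_tax=0.1, reward=0.5, V=100, capacity=60)
    queue -= served
    print(f"price={price} served={served} backlog={queue}")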
Citations: 5
Can HTTP/2 Really Help Web Performance on Smartphones?
Pub Date : 2016-06-01 DOI: 10.1109/SCC.2016.36
Yi Liu, Yun Ma, Xuanzhe Liu, Gang Huang
HTTP/2 is the next-generation Web protocol based on Google's SPDY protocol, and it attempts to solve the shortcomings and inflexibilities of HTTP/1.x. As smartphones become the main access channel for Web services, we are curious whether HTTP/2 can really help Web browsing performance. In this paper, we conduct a measurement study on the performance of HTTP/2 and HTTPS to reveal the mystery of HTTP/2. We clone the Alexa top 200 websites onto our own server and revisit them through an HTTP/2-enabled proxy and an HTTPS-enabled proxy, respectively. We compare HTTP/2 and HTTPS as transport protocols for transferring Web objects to identify the factors that may affect HTTP/2, including round-trip time (RTT), bandwidth, loss rate, the number of objects on a page, and object sizes. We find that HTTP/2 hurts performance under high packet loss but helps pages with many small objects. The computation and dependencies involved in fetching Web objects reduce the performance improvement of HTTP/2 and can sometimes even hurt page-loading performance. Finally, we test the server push feature of HTTP/2 to further leverage performance.
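A rough sketch of a comparable (though much simpler) measurement, assuming the httpx library with its optional HTTP/2 support (pip install "httpx[http2]") rather than the authors' proxy-and-clone setup; the URL list is a placeholder for a page's object URLs:

# Fetch the same objects over HTTP/1.1 and HTTP/2 and compare wall-clock time.
import time
import httpx

URLS = ["https://example.com/"] * 20   # placeholder object URLs

def timed_fetch(http2_enabled):
    with httpx.Client(http2=http2_enabled) as client:
        start = time.perf_counter()
        versions = {client.get(url).http_version for url in URLS}
        return time.perf_counter() - start, versions

for enabled in (False, True):
    elapsed, versions = timed_fetch(enabled)
    print(f"http2={enabled} negotiated={versions} elapsed={elapsed:.2f}s")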
Citations: 12
CoCOA: A Framework for Comparing Aggregate Client Operations in BPO Services
Pub Date : 2016-06-01 DOI: 10.1109/SCC.2016.76
R. Ghosh, Avantika Gupta, S. Chattopadhyay, A. Banerjee, K. Dasgupta
Operational efficiency is a major indicator by which the profitability of a business process outsourcing (BPO) service is evaluated. To measure such operational efficiency, BPO service providers define and monitor a set of key performance indicators (KPIs), e.g., employee productivity and turn-around time. While a pair of clients can be directly compared using a single KPI, comparing aggregate client operations across multiple KPIs is non-trivial. This is primarily because KPIs are disparate in nature (e.g., cost is measured in dollars while turn-around time is measured in minutes). In this paper, we present CoCOA, a framework that compares the aggregate operations of clients in BPO services so that they can be viewed in a single pane of glass. The two key modules of CoCOA are: (a) a client rank aggregator and (b) a KPI importance classifier. For a given time period, the rank aggregator module determines an aggregate ranking of clients using a variety of inputs (e.g., individual KPI rank, priority of a KPI). When the aggregate rank of a client deteriorates over successive time periods, the KPI importance classifier identifies the KPIs responsible for the deterioration. Thus, CoCOA not only helps in comparing the aggregate operations of clients, but also provides prescriptive analytics for improving organizational performance for a given client. We evaluate our approach using an anonymized data set collected from a real BPO business and show how responsible KPIs can be identified when there is a deterioration in aggregate client rank.
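One way to picture the rank-aggregation step (CoCOA's actual aggregator may differ) is a weighted Borda count over per-KPI client rankings, with KPI weights encoding priority; the KPI names, weights, and rankings below are illustrative assumptions:

# Weighted Borda-count aggregation of per-KPI client rankings.
def aggregate_ranks(kpi_rankings, kpi_weights):
    """kpi_rankings: kpi -> list of clients ordered best to worst."""
    scores = {}
    for kpi, ranking in kpi_rankings.items():
        n = len(ranking)
        for position, client in enumerate(ranking):
            # the best position earns the most Borda points, scaled by KPI priority
            scores[client] = scores.get(client, 0.0) + kpi_weights[kpi] * (n - position)
    return sorted(scores, key=scores.get, reverse=True)

kpi_rankings = {
    "productivity":     ["clientA", "clientB", "clientC"],
    "turn_around_time": ["clientC", "clientA", "clientB"],
    "cost":             ["clientB", "clientC", "clientA"],
}
kpi_weights = {"productivity": 0.5, "turn_around_time": 0.3, "cost": 0.2}
print(aggregate_ranks(kpi_rankings, kpi_weights))   # ['clientA', 'clientB', 'clientC']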
Citations: 4