Typically, a BPMN designer only needs to consider the business process, without knowing the details of the invoked services, which simplifies the design procedure. However, in some data-centric workflow scenarios, if the designer does not know the data model of an invoked service, execution of the BPMN workflow can be inefficient due to data conflicts. BPMN lacks a dynamic data-modeling capability, which means data conflicts may arise in the designed workflow. To solve this problem, this paper introduces a hybrid model combining process and data, called the process-data (PD) model. The PD model defines several data-conflict scenarios, transforming the conflict problem into a parallel-collection construction problem. A novel collection-generating method is introduced for parallel-collection creation. Based on the method's output, users can resolve data conflicts and improve workflow performance.
{"title":"A Hybrid Process-Data Model to Avoid Data Conflicting in BPMN","authors":"Rongheng Lin, Budan Wu, Hua Zou, Naiwang Guo","doi":"10.1109/SCC.2016.119","DOIUrl":"https://doi.org/10.1109/SCC.2016.119","url":null,"abstract":"Typically, BPMN Designer only needs to consider the business process without knowing the detail of invoked service, which helps them to simplify the design procedure. However, in some data centric workflow scenario, if designer didn't know about the data model of the invoked service, the BPMN workflow execution will be inefficient due to data conflict. There is lack of dynamically data modeling capability in BPMN, which means some data conflicts might happen in the designed workflow. To solve the problem, this paper introduced a hybrid model combining process and data, which is called process-data (PD) model. PD model defined several data conflict scenarios, which transformed the conflicting problem into parallel collection constructing problem. A novel collection generating method is introduced for the parallel collection creation. Based on the output of method, user can find a way to optimize the data conflict and increase the performance of the workflow.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115433307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
More and more companies are migrating business processes to the Cloud in order to handle customer service efficiently and cost-effectively. Cloud computing's elasticity and flexibility in service delivery make it an ideal solution for companies facing highly variable service demand and an uncertain financial environment, allowing them to ensure the required QoS while reducing expenses. Elasticity management is receiving much attention from the IT community as a pivotal issue: finding the right trade-offs between QoS levels and operational costs calls for novel methods and mechanisms. However, controlling business-process elasticity and defining non-trivial elasticity strategies are challenging. In this paper, we propose an elasticity-strategy description language called Strat. It is defined as an extensible domain-specific language that allows business-process holders to describe elasticity strategies, which are then evaluated using our formal evaluation framework. Given a usage behavior and a business process, the evaluation produces a set of plots that allows analyzing and comparing strategies. Our contributions and developments provide Cloud tenants with facilities to choose elasticity strategies that fit their business processes and usage behaviors.
{"title":"Description and Evaluation of Elasticity Strategies for Business Processes in the Cloud","authors":"A. Jrad, Sami Bhiri, S. Tata","doi":"10.1109/SCC.2016.34","DOIUrl":"https://doi.org/10.1109/SCC.2016.34","url":null,"abstract":"More and more companies are currently migrating business processes to the Cloud in order to handle customer service in an efficient and cost effective way. Cloud Computing's elasticity and flexibility in service delivery makes it an ideal solution for companies to deal with highly variable service demands and uncertain financial environment to ensure the required QoS while using resources and reduce their expenses. Elasticity management is witnessing a lot of attention from IT community as a pivotal issue for finding the right tradeoffs between QoS levels and operational costs by working on developing novel methods and mechanisms. However, controlling business process elasticity and defining non-trivial elasticity strategies are challenging issues. In this paper, we propose an elasticity strategy description language, called Strat. It is defined as an extensible Domain-Specific Language to allow business process holders to describe elasticity strategies that are evaluated using our formal evaluation framework. Given a usage behavior and a business process, the evaluation consists in providing a set of plots that allows the analysis and the comparison of strategies. Our contributions and developments provide Cloud tenants with facilities to choose elasticity strategies that fit to their business processes and usage behaviors.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124269622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
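Strat's concrete syntax is not given in the abstract, so the sketch below only illustrates, under invented names and thresholds, what it means to evaluate an elasticity strategy against a usage behavior: replay a request-load trace under a threshold-based scale-out/scale-in rule and record the VM count at each step, the quantity such evaluation plots would compare.

```python
def evaluate_strategy(trace, scale_up_at, scale_down_at, capacity_per_vm):
    """Replay a request-load trace and record the VM count at each step."""
    vms, history = 1, []
    for load in trace:
        if load > scale_up_at * vms * capacity_per_vm:
            vms += 1                       # elasticity action: scale out
        elif vms > 1 and load < scale_down_at * vms * capacity_per_vm:
            vms -= 1                       # elasticity action: scale in
        history.append(vms)
    return history

# Comparing two hypothetical strategies on the same usage behavior:
trace = [10, 40, 80, 120, 90, 30, 10]
aggressive = evaluate_strategy(trace, scale_up_at=0.6, scale_down_at=0.2, capacity_per_vm=50)
lazy = evaluate_strategy(trace, scale_up_at=0.9, scale_down_at=0.1, capacity_per_vm=50)
```

The aggressive strategy provisions more VMs (better QoS, higher cost) than the lazy one on the same trace, which is exactly the kind of trade-off the comparison is meant to expose.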
Security has been identified as the principal stumbling block preventing users and enterprises from moving their businesses to the cloud. The reason is that cloud systems, besides inheriting all the vulnerabilities of traditional computing systems, are exposed to new types of threats engendered mainly by virtualization, which allows multiple users' virtual machines (VMs) to share a common computing platform. This broadens the attack surface for malicious users and increases their ability to attack both the cloud system and co-resident VMs. Motivated by the absence of any approach addressing optimal detection-load distribution in the domain of cloud computing, we develop a resource-aware maxmin game-theoretic model that guides the hypervisor on how the detection load should be optimally distributed among its guest VMs in real time. The objective is to maximize the hypervisor's detection probability, knowing that the attacker divides the attack over several VMs to minimize this probability. Experimental results on the Amazon EC2 pricing dataset reveal that our model increases the probability of detecting distributed attacks, reduces false positives, and minimizes the resources wasted during detection.
{"title":"How to Distribute the Detection Load among Virtual Machines to Maximize the Detection of Distributed Attacks in the Cloud?","authors":"O. A. Wahab, J. Bentahar, H. Otrok, A. Mourad","doi":"10.1109/SCC.2016.48","DOIUrl":"https://doi.org/10.1109/SCC.2016.48","url":null,"abstract":"Security has been identified to be the principal stumbling-block preventing users and enterprises from moving their businesses to the cloud. The reason is that cloud systems, besides inheriting all the vulnerabilities of the traditional computing systems, appeal to new types of threats engendered mainly by the virtualization concept that allows multiple users' virtual machines (VMs) to share a common computing platform. This broadens the attack space of the malicious users and increases their ability to attack both the cloud system and other co-resident VMs. Motivated by the absence of any approach that addresses the problem of optimal detection load distribution in the domain of cloud computing, we develop a resource-aware maxmin game theoretical model that guides the hypervisor on how the detection load should be optimally distributed among its guest VMs in the real-time. The objective is to maximize the hypervisor's probability of detection, knowing that the attacker is dividing the attack over several VMs to minimize this probability. Experimental results on Amazon EC2 pricing dataset reveal that our model increases the probability of detecting distributed attacks, reduces the false positives, and minimizes the resources wasted during the detection process.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"166 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115004039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
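The paper's game model is not reproduced in the abstract, but the maxmin intuition can be sketched under a simplifying assumption: if the attacker splits the attack over VMs to hit the least-monitored one, the hypervisor's maxmin allocation equalizes detection effort per unit of attack across VMs, leaving the attacker nothing to exploit by splitting. The weights and budget below are purely illustrative.

```python
def maxmin_detection_load(budget, vm_weights):
    """Split a detection budget so weighted detection is equal across VMs.
    vm_weights[i] > 1 means VM i needs more effort per unit of detection
    (e.g., it is larger or noisier), so it receives a larger absolute share."""
    total = sum(vm_weights)
    return [budget * w / total for w in vm_weights]

def attacker_best_response(loads, vm_weights):
    """Against a fixed allocation, the attacker targets the VM with the lowest
    detection effort per unit of attack; under the maxmin allocation all these
    ratios tie, so concentrating the attack anywhere gains nothing."""
    return min(range(len(loads)), key=lambda i: loads[i] / vm_weights[i])
```

For example, with a budget of 10 and weights `[1, 2, 2]`, the allocation `[2, 4, 4]` gives every VM the same effort-per-unit-attack ratio of 2.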
Services organizations manage a pipeline of sales opportunities with variable enterprise sales-engagement lifespans, maturity levels (belonging to progressive sales stages), and contract values at any given point in time. Accurately forecasting contract signings by the end of a time period (e.g., a quarter) is desirable for many services organizations: it yields an accurate projection of incoming revenue and supports delivery planning, resource allocation, budgeting, and effective sales-opportunity management. While the problem of sales forecasting has been investigated in its generic context, sales forecasting for services organizations entails additional complexities that have not been thoroughly investigated: (i) considering opportunities in a multi-staged sales pipeline, which means providing stage-specific treatment of the sales opportunities in each group, and (ii) using information on the current pipeline build-up, as well as a projection of pipeline growth over the time remaining before the end of the target period, to make predictions. In this paper, we formulate this problem, in its service-specific context, as a machine learning problem over a set of historical services sales data. We introduce a novel optimization approach for finding the optimized weights of a sales forecasting function. The objective of our optimization model minimizes the average error rate of sales predictions based on two factors, conversion rates and growth factors, for any given point in time in a sales period over the historical data. Our model also optimally determines the number of historical periods that should be used in the machine learning framework to predict future revenue. We have evaluated the presented method, and the results demonstrate superior performance (in terms of absolute and relative errors) compared to a baseline state-of-the-art method.
{"title":"An Optimization Approach to Services Sales Forecasting in a Multi-staged Sales Pipeline","authors":"Aly Megahed, Peifeng Yin, H. M. Nezhad","doi":"10.1109/SCC.2016.98","DOIUrl":"https://doi.org/10.1109/SCC.2016.98","url":null,"abstract":"Services organization manage a pipeline of sales opportunities with variable enterprise sales engagement lifespan, maturity levels (belonging to progressive sales stages), and contract values at any given point in time. Accurate forecasting of contract signings by the end of a time period (e.g., a quarter) is a desire for many services organizations in order to get an accurate projection of incoming revenues, and to provide support for delivery planning, resource allocation, budgeting, and effective sales opportunity management. While the problem of sales forecasting has been investigated in its generic context, sales forecasting for services organizations entails the consideration of additional complexities, which has not been thoroughly investigated: (i) considering opportunities in multi-staged sales pipeline, which means providing stage-specific treatment of sales opportunities in each group, and (ii) using the information of the current pipeline build-up, as well as the projection of the pipeline growth over the remaining time period before the end of the target time period in order to make predictions. In this paper, we formulate this problem, considering the service-specific context, as a machine learning problem over the set of historical services sales data. We introduce a novel optimization approach for finding the optimized weights of a sales forecasting function. The objective value of our optimization model minimizes the average error rates for predicting sales based on two factors of conversion rates and growth factors for any given point in time in a sales period over historical data. Our model also optimally determines the number of historical periods that should be used in the machine learning framework to predict the future revenue. We have evaluated the presented method, and the results demonstrate superior performance (in terms of absolute and relative errors) compared to a baseline state of the art method.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127305614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
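The paper's learned weights and optimization are not reproduced here, but the two-factor forecasting idea can be sketched: the forecast combines what the current per-stage pipeline is expected to convert with expected conversions from pipeline still to be built before period end. The stage names, conversion rates, and growth factors below are all made up for illustration.

```python
def forecast_signings(pipeline_by_stage, conversion_rate, growth_factor):
    """pipeline_by_stage: current contract value per sales stage.
    conversion_rate[s]: historical fraction of stage-s value that signs by period end.
    growth_factor[s]: expected ratio of yet-to-arrive stage-s value to current value."""
    total = 0.0
    for stage, value in pipeline_by_stage.items():
        expected_value = value * (1.0 + growth_factor[stage])  # current + projected build-up
        total += conversion_rate[stage] * expected_value       # stage-specific treatment
    return total

# Hypothetical quarter-to-date pipeline:
pipeline = {"qualified": 100.0, "proposal": 50.0, "negotiation": 20.0}
conv = {"qualified": 0.1, "proposal": 0.3, "negotiation": 0.7}
growth = {"qualified": 0.5, "proposal": 0.2, "negotiation": 0.0}
```

Late-stage opportunities contribute through high conversion rates; early-stage ones mostly through projected growth, which is why stage-specific treatment matters.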
How to deploy more services while maintaining quality of service is one of the key challenges faced by the resource management of cloud platforms, especially PaaS. Existing approaches focus mainly on cloud platforms that host a small number of applications and consider few features of the different applications. In this paper, we present SORM, a Service-Oriented Resource Management mechanism for cloud platforms. The core of SORM is a service feature model that captures the resource consumption and request variance of services. For each server, SORM deploys service instances with complementary resource consumption, so as to improve resource utilization. SORM also divides servers into three pools and deploys service instances onto different pools, mainly based on their request-variance features, so as to reduce the computational overhead of resource management and keep cloud platforms stable. We evaluate the effectiveness and efficiency of SORM through simulation experiments and find that, compared with one existing approach, SORM can deploy 3.6 times more services at nearly 74.1% of the time cost.
{"title":"Service-Oriented Resource Management of Cloud Platforms","authors":"Xing Hu, Rui Zhang, Qianxiang Wang","doi":"10.1109/SCC.2016.63","DOIUrl":"https://doi.org/10.1109/SCC.2016.63","url":null,"abstract":"How to deploy more services while keeping the Quality of Services is one of the key challenges faced by the resource management of cloud platforms, especially for PaaS. Existing approaches focus mainly on cloud platforms which mainly host small number of applications, and consider few features of different applications. In this paper, we present SORM, a Service-Oriented Resource Management mechanism on cloud platforms. The core of SORM is a service feature model which involves resource consumption and request variance of services. For each server, SORM deploys service instances with complementary resource consumption, so as to improve resource utilization. SORM also divides servers into three pools and deploys service instances onto different pools, mainly based on their request variance features, so as to reduce computational over-head of resource management and keep cloud platforms stable. We evaluate the effectiveness and efficiency of SORM by simulation experiments and find that: compared with one exiting approach. SORM can deploy 3.6 times more services with nearly 74.1% time cost.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132978248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
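SORM's actual placement algorithm is not described in the abstract; the greedy heuristic below is only a sketch of the complementary-resource-consumption idea, namely co-locating services whose dominant demands differ so that both CPU and memory on each server are well used. It assumes each service fits on an empty server.

```python
def place_complementary(services, cpu_cap=1.0, mem_cap=1.0):
    """Greedy placement: open a server, then repeatedly add the feasible service
    that keeps the server's CPU and memory usage most balanced, so CPU-heavy
    and memory-heavy instances end up co-located."""
    remaining = list(services)          # each service: (name, cpu_demand, mem_demand)
    servers = []
    while remaining:
        used_cpu = used_mem = 0.0
        contents = []
        while True:
            feasible = [s for s in remaining
                        if used_cpu + s[1] <= cpu_cap and used_mem + s[2] <= mem_cap]
            if not feasible:
                break
            # complementary choice: minimize the CPU/memory usage imbalance
            best = min(feasible,
                       key=lambda s: abs((used_cpu + s[1]) - (used_mem + s[2])))
            contents.append(best[0])
            used_cpu += best[1]
            used_mem += best[2]
            remaining.remove(best)
        servers.append(contents)
    return servers
```

With two CPU-heavy and two memory-heavy services, this pairs one of each per server instead of stranding memory on CPU-bound servers.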
B. Carminati, Pietro Colombo, E. Ferrari, Gokhan Sagirlar
Internet of Things (IoT) services are improving our lives, supporting people in a variety of situations. However, due to the high volume of personal data they manage, they can pose a serious threat to individuals' privacy. Users' data are commonly gathered by devices scattered across the IoT, each of which sees only a portion of them, and combining different data may allow users' sensitive information to be inferred. The distributed nature and complexity of the IoT scenario cause users to lose control over how their data are handled. In this paper, we start addressing this issue with a framework that empowers users to better control data management within IoT ecosystems. A novel privacy reference model allows users to state how their data can be processed and what cannot be inferred from them, and a dedicated mechanism enforces these statements. Experimental results show the efficiency of the enforcement.
{"title":"Enhancing User Control on Personal Data Usage in Internet of Things Ecosystems","authors":"B. Carminati, Pietro Colombo, E. Ferrari, Gokhan Sagirlar","doi":"10.1109/SCC.2016.45","DOIUrl":"https://doi.org/10.1109/SCC.2016.45","url":null,"abstract":"Internet of Things (IoT) services are improving our life, supporting people in a variety of situations. However, due to the high volume of managed personal data, they can be a serious threat for individuals privacy. Users data are commonly gathered by devices scattered in the IoT, each of which sees a portion of them. The combination of different data may lead to infer users sensitive information. The distributed nature and the complexity of the IoT scenario cause users to lose the control on how their data are handled. In this paper, we start addressing this issue with a framework that empowers users to better control data management within IoT ecosystems. A novel privacy reference model allows users to state how their data can be processed and what cannot be inferred from them, and a dedicated mechanism allows enforcing the stated references. Experimental results show the efficiency of the enforcement.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"236 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123297885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Lalanda, Stéphanie Chollet, Catherine Hamon, V. Lestideau
Pervasive applications are often executed under fluctuating conditions and need frequent adaptation to meet their requirements. Autonomic computing techniques are frequently used to automate adaptation to changing execution conditions. However, some administration tasks still have to be performed by human administrators, and such tasks are very complex because of a lack of understanding of how the system evolves. In this paper, we propose building and linking runtime models of supervised applications in order to simplify the administrators' job. Our approach is illustrated on a health application called actimetrics, developed with Orange Labs.
{"title":"Architectural Models to Simplify Administration of Service-Oriented Applications","authors":"P. Lalanda, Stéphanie Chollet, Catherine Hamon, V. Lestideau","doi":"10.1109/SCC.2016.41","DOIUrl":"https://doi.org/10.1109/SCC.2016.41","url":null,"abstract":"Pervasive applications are often executed in fluctuating conditions and need frequent adaptations to meet requirements. Autonomic computing techniques are frequently used to automate adaptations to changing execution conditions. However, some administration tasks still have to be performed by human administrators. Such tasks are very complex because of a lack of understanding of the system evolutions. In this paper, we propose to build and link models at runtime of supervised applications in order to simplify the administrators' job. Our approach is illustrated on a health application called actimetrics, developed with the Orange Labs.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123468675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the popularity of cloud computing, many cloud service providers deploy regional data centers to offer services and applications. These large-scale data centers have drawn extensive attention because of their huge energy demand and carbon emissions. Thus, how to exploit their spatial diversity to green data centers and reduce cloud providers' costs is an important concern. In this paper, we integrate service reward, electricity cost, carbon taxes, and service performance to study cost-effective request scheduling for cloud data centers. We propose an online, distributed scheduling algorithm, CESA, to achieve a flexible trade-off between these conflicting objectives. The time complexity of CESA is polynomial, and it can be implemented in a parallel way. CESA requires no prior knowledge of the statistics of request arrivals or of future electricity prices, yet it provably approximates the optimal system profit while bounding the queue length. Real-trace-based simulations are conducted that verify the effectiveness of our CESA algorithm.
{"title":"Cost-Effective Request Scheduling for Greening Cloud Data Centers","authors":"Ying Chen, Chuang Lin, Jiwei Huang, Xuemin Shen","doi":"10.1109/SCC.2016.14","DOIUrl":"https://doi.org/10.1109/SCC.2016.14","url":null,"abstract":"With the popularity of cloud computing, many cloud service providers deploy regional data centers to offer services and pplications. These large-scale data centers have drawn extensive attention in terms of the huge energy demand and carbon emission. Thus, how to make use of their spatial diversities to green data centers and reduce cloud provider's costs is an important concern. In this paper, we integrate service reward, electricity cost, carbon taxes and service performance to study cost-effective request scheduling for cloud data centers. We propose an online and distributed scheduling algorithm CESA to chieve the flexible tradeoff between these conflicting objectives. The time complexity of CESA is polynomial, and it can be implemented in a parallel way. CESA requires no prior knowledge of the statistics of request arrivals or future electricity prices, yet it provably approximates the optimal system profit while bounding the queue length. Real-trace based simulations are conducted which verify the effectiveness of our CESA algorithm.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"8 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116674348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
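CESA's exact rule is not given in the abstract; the sketch below only illustrates the general drift-plus-penalty flavor such online algorithms take: route each request to the data center minimizing a tunable mix of monetary cost (electricity price plus carbon tax times energy) and current queue backlog. All parameters are hypothetical.

```python
def choose_datacenter(queues, prices, carbon_tax, energy_per_request, V=1.0):
    """Drift-plus-penalty style routing (illustrative, not CESA's exact form):
    pick the data center minimizing V * monetary cost + current queue backlog.
    Larger V favors cheap/green data centers; smaller V favors short queues
    (i.e., service performance), giving the flexible trade-off knob."""
    def score(i):
        cost = (prices[i] + carbon_tax[i]) * energy_per_request[i]
        return V * cost + queues[i]
    return min(range(len(queues)), key=score)
```

With a long queue at the cheap data center, a small `V` routes to the idle expensive one (protecting latency), while a large `V` routes to the cheap one (protecting profit).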
HTTP/2 is the next-generation Web protocol based on Google's SPDY protocol, and it attempts to solve the shortcomings and inflexibilities of HTTP/1.x. As smartphones become the main access channel for Web services, we are curious whether HTTP/2 can really help the performance of Web browsing. In this paper, we conduct a measurement study on the performance of HTTP/2 and HTTPS to demystify HTTP/2. We clone the Alexa top 200 websites onto our own server and revisit them through an HTTP/2-enabled proxy and an HTTPS-enabled proxy, respectively. We compare HTTP/2 and HTTPS as transport protocols for Web objects to identify the factors that may affect HTTP/2, including round-trip time (RTT), bandwidth, loss rate, the number of objects on a page, and object sizes. We find that HTTP/2 hurts under high packet loss but helps with many small objects. The computation and dependencies involved in fetching Web objects reduce HTTP/2's performance improvement and can sometimes even hurt page-load performance. Finally, we test HTTP/2's server-push feature as a lever for performance.
{"title":"Can HTTP/2 Really Help Web Performance on Smartphones?","authors":"Yi Liu, Yun Ma, Xuanzhe Liu, Gang Huang","doi":"10.1109/SCC.2016.36","DOIUrl":"https://doi.org/10.1109/SCC.2016.36","url":null,"abstract":"HTTP/2 is the next-generation Web protocol based on Google's SPDY protocol, and attempts to solve the shortcomings and inflexibilities of HTTP/1.x. As smartphones become the main access channel for Web services, we are curious if HTTP/2 can really help the performance of Web browsing. In this paper, we conduct a measurement study on the performance of HTTP/2 and HTTPS to reveal the mystery of HTTP/2. We clone the Alexa top 200 websites into our own server, and revisit them through HTTP/2-enabled proxy, and HTTPS-enabled proxy, respectively. We compare HTTP/2 and HTTPS as a transport protocol to transfer Web objects to identify the factors that may affect HTTP/2, including Round-Trip Time (RTT), bandwidth, loss rate, number of objects on a page, and objects sizes. We find that HTTP/2 hurts with high packet loss, but helps many small objects. The computation and dependencies of fetching Web objects reduce the performance improvement of HTTP/2, and sometimes can even hurt the performance of page loading. At last, we test the server push feature of HTTP/2 to leverage the performance.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125152157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
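A deliberately naive latency model shows why HTTP/2 tends to help pages with many small objects: HTTP/1.x browsers typically open about six connections per host, while HTTP/2 multiplexes many streams over one connection, so the same objects need fewer request round trips. The model below ignores handshakes, bandwidth, and loss; under loss, HTTP/2's single TCP connection suffers head-of-line blocking, consistent with the finding above that it hurts.

```python
def page_load_rtts(num_objects, parallel):
    """Round trips to fetch all objects with `parallel` concurrent requests:
    one RTT per batch (handshakes, bandwidth, and packet loss are ignored)."""
    return -(-num_objects // parallel)   # ceiling division

# ~6 connections per host under HTTP/1.x vs. wide multiplexing under HTTP/2:
h1_rtts = page_load_rtts(60, parallel=6)
h2_rtts = page_load_rtts(60, parallel=60)
```

For a 60-object page the model gives 10 round trips versus 1, a best-case gap that object dependencies and client-side computation shrink in practice.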
R. Ghosh, Avantika Gupta, S. Chattopadhyay, A. Banerjee, K. Dasgupta
Operational efficiency is a major indicator by which the profitability of a business process outsourcing (BPO) service is evaluated. To measure operational efficiency, BPO service providers define and monitor a set of key performance indicators (KPIs) (e.g., employee productivity, turn-around time). While a pair of clients can be directly compared using a single KPI, comparing aggregate client operations across multiple KPIs is non-trivial, primarily because KPIs are disparate in nature (e.g., cost is measured in dollars while turn-around time is measured in minutes). In this paper, we present CoCOA, a framework that compares the aggregate operations of clients in BPO services so that they can be viewed in a single pane of glass. The two key modules of CoCOA are (a) a client rank aggregator and (b) a KPI importance classifier. For a given time period, the rank aggregator module determines an aggregate ranking of clients using a variety of inputs (e.g., individual KPI ranks, the priority of each KPI). When the aggregate rank of a client deteriorates over successive time periods, the KPI importance classifier identifies the KPIs responsible for the deterioration. Thus, CoCOA not only helps in comparing the aggregate operations of clients, but also provides prescriptive analytics for improving organizational performance for a given client. We evaluate our approach using an anonymized dataset collected from a real BPO business and show how the responsible KPIs can be identified when the aggregate client rank deteriorates.
{"title":"CoCOA: A Framework for Comparing Aggregate Client Operations in BPO Services","authors":"R. Ghosh, Avantika Gupta, S. Chattopadhyay, A. Banerjee, K. Dasgupta","doi":"10.1109/SCC.2016.76","DOIUrl":"https://doi.org/10.1109/SCC.2016.76","url":null,"abstract":"Operational efficiency is a major indicator by which the profitability of a business process outsourcing (BPO) service is evaluated. To measure such operational efficiency, BPO service providers define and monitor a set of key performance indicators (KPI) (e.g., productivity of employees, turn-around-time). While a pair of clients can be directly compared using a KPI, comparing the aggregate client operations across multiple KPIs is non-trivial. This is primarily because KPIs are disparate in nature (e.g., cost is measured in dollar while turn-around-time is measured in minutes). In this paper, we present CoCOA, a framework that compares aggregate operations of clients in BPO services so that they can be viewed in a single pane of glass. Two key modules of CoCOA are: (a) client rank aggregator and (b) KPI importance classifier. For a given time period, the rank aggregator module determines an aggregate ranking of clients using variety of inputs (e.g., individual KPI rank, priority of a KPI). When the aggregate rank of a client deteriorates over successive time periods, KPI importance classifier identifies the responsible KPIs for such deterioration. Thus, CoCOA not only helps in comparing the aggregate operation of clients, but also provides prescriptive analytics for improving organizational performance for a given client. We evaluate our approach using anonymized data set collected from a real BPO business and show how responsible KPIs can be identified when there is a deterioration in aggregate client rank.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122522386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
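CoCOA's actual aggregation rule is not specified in the abstract; a standard way to combine disparate per-KPI rankings with KPI priorities, sketched below, is a weighted Borda count: ranks are unitless positions, which sidesteps the dollars-versus-minutes problem. The KPI names and weights are hypothetical.

```python
def aggregate_ranks(kpi_ranks, kpi_weights):
    """Weighted Borda count: in each KPI's ranking (best client first), a client
    scores (n - position), scaled by that KPI's priority weight; clients are
    then ordered by total score to give the aggregate ranking."""
    scores = {}
    for kpi, ranking in kpi_ranks.items():
        n = len(ranking)
        for pos, client in enumerate(ranking):
            scores[client] = scores.get(client, 0.0) + kpi_weights[kpi] * (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Two KPI rankings with cost prioritized over turn-around time:
ranks = {"cost": ["A", "B", "C"], "turnaround": ["B", "A", "C"]}
weights = {"cost": 2.0, "turnaround": 1.0}
```

Because ranks rather than raw values are aggregated, tracking this output across periods also gives a natural hook for flagging which KPI's rank movement drove a client's deterioration.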