{"title":"Complexity Reduction: Local Activity Ranking by Resource Entropy for QoS-Aware Cloud Scheduling","authors":"Huankai Chen, Frank Z. Wang, Matteo Migliavacca, L. Chua, N. Helian","doi":"10.1109/SCC.2016.82","DOIUrl":"https://doi.org/10.1109/SCC.2016.82","url":null,"abstract":"The principle of local activity originated in electronic circuits but translates readily to other non-electrical homogeneous or heterogeneous media. A cloud resource is an example of a locally-active device, and it is the origin of complexity in a cloud scheduling system. However, most researchers implicitly assume cloud resources to be locally passive when constructing new scheduling strategies; as a result, their solutions perform poorly in complex cloud environments. In this paper, we first study several complexity factors caused by locally-active cloud resources. We then extend the “Local Activity Principle” with a quantitative measurement based on entropy theory. Furthermore, we classify the scheduling system into an “Order” or “Chaos” state by simulating complexity in the cloud. Finally, we propose a new approach to controlling chaos based on a resource's Local Activity Ranking for QoS-aware cloud scheduling, and we implement it in Spark. Experiments demonstrate that our approach outperforms the native Spark Fair Scheduler, reducing server cost by 23%, improving average response time by 15%–20%, and reducing the standard deviation of response time by 30%–45%.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129911631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
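The abstract does not spell out how the entropy-based Local Activity Ranking is computed; the following is a minimal sketch of one plausible formulation, assuming Shannon entropy over binned utilization samples. All function names and the binning scheme are hypothetical, not the paper's actual method.

```python
import math

def resource_entropy(samples, bins=10):
    """Shannon entropy (bits) of a resource's utilization samples in [0, 1].

    Higher entropy suggests erratic, "locally active" behaviour; lower
    entropy suggests steady, "ordered" behaviour.
    """
    counts = [0] * bins
    for s in samples:
        counts[min(int(s * bins), bins - 1)] += 1  # clamp 1.0 into last bin
    total = len(samples)
    entropy = 0.0
    for c in counts:
        if c:
            p = c / total
            entropy -= p * math.log2(p)
    return entropy

def rank_by_activity(nodes):
    """Rank node ids from most ordered to most chaotic (ascending entropy)."""
    return sorted(nodes, key=lambda n: resource_entropy(nodes[n]))
```

A scheduler could then prefer low-entropy resources when placing QoS-sensitive tasks.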
{"title":"An Approach for Modeling and Ranking Node-Level Stragglers in Cloud Datacenters","authors":"Xue Ouyang, P. Garraghan, Changjian Wang, P. Townend, Jie Xu","doi":"10.1109/SCC.2016.93","DOIUrl":"https://doi.org/10.1109/SCC.2016.93","url":null,"abstract":"The ability of servers to effectively execute tasks within Cloud datacenters varies due to heterogeneous CPU and memory capacities, resource contention, network configurations, and operational age. Unexpectedly slow server nodes (node-level stragglers) result in assigned tasks becoming task-level stragglers, which dramatically impede parallel job execution. However, it is currently unknown how slow nodes correlate with task straggler manifestation. To address this knowledge gap, we propose a method for node performance modeling and ranking in Cloud datacenters based on analyzing parallel job execution tracelog data. Using a production Cloud system as a case study, we demonstrate that node execution performance is driven by temporal changes in node operation rather than by node hardware capacity. Different sample sets have been filtered to evaluate the generality of our framework, and the analytic results demonstrate that node abilities for executing parallel tasks tend to follow a 3-parameter log-logistic distribution. Further statistical attributes such as confidence intervals, quantile values, and extreme-case probabilities can also be used for ranking and identifying potential straggler nodes within the cluster. We exploit a graph-based algorithm for partitioning server nodes into five levels, identifying 0.83% of nodes as node-level stragglers. Our work lays the foundation for enhancing scheduling algorithms by avoiding slow nodes, reducing task straggler occurrence, and improving parallel job performance.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134028839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
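As a rough illustration of how quantile values over per-node performance statistics might flag straggler nodes: the sketch below ranks nodes by mean task duration and flags those beyond a chosen quantile. The paper's actual 3-parameter log-logistic fit and graph-based five-level partitioning are not reproduced; the function names and the 95th-percentile cutoff are assumptions.

```python
def quantile(values, q):
    """Linear-interpolation quantile of a list of numbers, 0 <= q <= 1."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

def flag_stragglers(node_means, q=0.95):
    """Return ids of nodes whose mean task duration exceeds the q-quantile
    of all node means -- candidate node-level stragglers."""
    cut = quantile(list(node_means.values()), q)
    return {node for node, mean in node_means.items() if mean > cut}
```

In a real system the cutoff would come from the fitted distribution's `ppf` rather than the empirical quantile, so that extreme-case probabilities are also available.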
{"title":"Implementing the Required Degree of Multitenancy Isolation: A Case Study of Cloud-Hosted Bug Tracking System","authors":"L. Ochei, Andrei V. Petrovski, J. Bass","doi":"10.1109/SCC.2016.56","DOIUrl":"https://doi.org/10.1109/SCC.2016.56","url":null,"abstract":"Implementing the required degree of isolation between tenants is one of the significant challenges in deploying a multitenant application on the cloud. In this paper, we apply COMITRE (COmponent-based approach to Multitenancy Isolation Through request RE-routing) to empirically evaluate the degree of tenant isolation enabled by three multitenancy patterns (shared component, tenant-isolated component, and dedicated component) for a cloud-hosted bug tracking system built on Bugzilla. The study revealed, among other things, that a component deployed as a dedicated component offers the highest degree of isolation (especially for database transactions where locking is enabled). Tenant isolation based on performance (e.g., response time) favoured the shared component, whereas isolation based on resource consumption (e.g., CPU and memory) favoured the dedicated component. We also discuss key challenges and recommendations for implementing multitenancy for application components in cloud-hosted bug tracking systems with guarantees of isolation between multiple tenants.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121789333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Elasticity-Aware Governance Platform for Cloud Service Delivery","authors":"Carlos Müller, Hong Linh Truong, Pablo Fernández, G. Copil, Antonio Ruiz-Cortés, S. Dustdar","doi":"10.1109/SCC.2016.17","DOIUrl":"https://doi.org/10.1109/SCC.2016.17","url":null,"abstract":"In cloud service provisioning scenarios with changing consumer demand, it is appealing for cloud providers to use only the limited amount of virtualized resources actually required to provide the service. However, it is not easy to determine how many resources are required to satisfy consumers' expectations in terms of Quality of Service (QoS). Some existing frameworks provide mechanisms to adapt the cloud resources used in service delivery (a so-called elastic service), but only for consumers with identical QoS expectations. The problem arises when the service provider must deal with several consumers, each demanding a different QoS. In such a scenario, cloud resource provisioning must trade off between the different QoS levels while fulfilling all of them within the same service deployment. In this paper we propose an elasticity-aware governance platform for cloud service delivery that reacts to the dynamic service load introduced by consumer demand. The reaction consists of provisioning the amount of cloud resources required to satisfy the different QoS levels offered to consumers through several service level agreements. The proposed platform aims to keep the QoS experienced by multiple service consumers under control while maintaining a controlled cost.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"241 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132904186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
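A toy sketch of the kind of per-SLA capacity decision such a governance platform might make: size each SLA class so its servers stay below a utilization target, with stricter SLAs given more headroom. The utilization-based sizing rule and all names are illustrative assumptions, not the paper's mechanism.

```python
import math

def plan_capacity(slas, service_rate):
    """Instances needed per SLA class.

    slas:         {sla_name: (arrival_rate, target_utilization)}
                  where a stricter SLA uses a lower utilization target.
    service_rate: requests/second a single instance can serve.
    """
    return {
        name: max(1, math.ceil(rate / (service_rate * util)))
        for name, (rate, util) in slas.items()
    }
```

With two classes at the same load, the class with the tighter utilization target (e.g., a "gold" SLA) receives proportionally more instances, which is one simple way to keep per-SLA QoS under control at a bounded cost.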
{"title":"Dynamic Selection for Service Composition Based on Temporal and QoS Constraints","authors":"Ikbel Guidara, Imane Al Jaouhari, Nawal Guermouche","doi":"10.1109/SCC.2016.42","DOIUrl":"https://doi.org/10.1109/SCC.2016.42","url":null,"abstract":"To implement abstract business processes, a concrete service is selected for each abstract task. Because Quality of Service (QoS) values are uncertain during execution, services may become faulty and violate the expected end-to-end constraints. Additionally, due to the dynamic nature of service systems, several environment changes may occur at run-time: services can join or leave the system, or change their offerings. To deal with possible changes and maintain the feasibility of the selected solution, dynamic service selection during execution is essential. This is not a trivial task, especially in the presence of several constraints and dependencies between services, namely QoS and temporal constraints. Existing approaches do not consider the specificities of temporal properties and usually handle violations only after they have occurred. In this paper, a novel proactive dynamic service selection approach is proposed to deal with changes during execution while considering both QoS and temporal constraints. Experiments show that our approach handles faults in reasonable time while guaranteeing the overall constraints.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121713593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
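To make the interplay of QoS and temporal constraints concrete, here is a greedy selection sketch of my own devising (the paper's actual algorithm is not described in the abstract): pick the cheapest candidate for each task that still leaves enough latency budget for the remaining tasks, and signal infeasibility (triggering reselection) when no candidate fits.

```python
def select_services(tasks, deadline):
    """Greedy per-task selection under an end-to-end latency budget.

    tasks:    list of candidate lists; each candidate is a dict with
              "latency" and "cost" keys (hypothetical schema).
    deadline: end-to-end temporal constraint.
    Returns the chosen candidates, or None if the constraint is infeasible.
    """
    # Minimum achievable latency for each task, used to reserve budget
    # for the tasks that still have to run.
    min_lat = [min(c["latency"] for c in cands) for cands in tasks]
    chosen, used = [], 0.0
    for i, cands in enumerate(tasks):
        slack = deadline - used - sum(min_lat[i + 1:])
        feasible = [c for c in cands if c["latency"] <= slack]
        if not feasible:
            return None  # violation predicted before it occurs
        best = min(feasible, key=lambda c: c["cost"])
        chosen.append(best)
        used += best["latency"]
    return chosen
```

A proactive scheme would re-run this whenever monitored QoS drifts, rather than waiting for the deadline to be missed.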
{"title":"A Cost-Effective Certification-Based Service Composition for the Cloud","authors":"M. Anisetti, C. Ardagna, E. Damiani, Filippo Gaudenzi","doi":"10.1109/SCC.2016.15","DOIUrl":"https://doi.org/10.1109/SCC.2016.15","url":null,"abstract":"The cloud computing paradigm provides an environment where services can be composed and reused at high rates. Existing composition techniques focus on providing the desired functionality at a given deployment cost. In this paper, we focus on defining cloud service compositions driven by certified non-functional properties. We define a cost evaluation methodology aimed at finding the composition that minimizes the cloud provider's total cost, taking into account deployment, certification, and mismatch costs, and we evaluate it using three different cost profiles.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127291299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SRConfig: An Empirical Method of Interdependent Soft Configurations for Improving Performance in n-Tier Application","authors":"Yuliang Shi, Xudong Zhao, Shanqing Guo, Shijun Liu, Li-zhen Cui","doi":"10.1109/SCC.2016.84","DOIUrl":"https://doi.org/10.1109/SCC.2016.84","url":null,"abstract":"Efficient resource utilization and good system performance are two important objectives that service providers pursue to maximize profit. In this paper, through experimental measurements, we study the performance impact of interdependent soft resources on an n-tier application benchmark, the RUBiS system. Soft resources are vital factors that influence hardware resource usage and overall application performance. Improper soft configurations can result in correlated bottlenecks and degrade performance, so tuning the configuration of soft resources is imperative. Based on the experimental measurements, the SRConfig method predicts soft configurations by adopting a back-propagation neural network in the n-tier application. Experimental results validate the accuracy and efficacy of our method.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127482532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Maximizing the Availability of Process Services in Mobile Computing Environments","authors":"W. He, Hui Li, Li-zhen Cui, Shuoyan Lu","doi":"10.1109/SCC.2016.69","DOIUrl":"https://doi.org/10.1109/SCC.2016.69","url":null,"abstract":"Mobile Internet and cloud computing are changing the paradigm of conventional business processes in service provision and demand. With increasing user interaction and data transmission between mobile devices and workflow services, the conventional business-process pattern with a centralized workflow engine faces great challenges in service availability, reliability, and user experience due to uncertain and changeable user contexts in mobile environments. In this paper, we propose a new paradigm for process services based on dynamic multiple replicas of a process instance to improve the reliability and efficiency of process services for mobile users in dynamic and unstable environments. We also give replication and synchronization algorithms for process instances based on task-dependency paths annotated with vector clocks, aiming to maximize the number of available process replicas during replication and synchronization. Simulation experiments indicate that the proposed system provides much better availability and efficiency for process operations in mobile computing environments.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"1075 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116024933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
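The vector-clock bookkeeping that underpins such replica synchronization can be illustrated with two standard operations (a generic sketch of the classic technique, not the paper's replication algorithm; the dict-based clock representation is an assumption):

```python
def vc_merge(a, b):
    """Merge two vector clocks (dicts: replica id -> counter) by
    taking the element-wise maximum, as done when replicas sync."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in set(a) | set(b)}

def vc_happens_before(a, b):
    """True iff clock a causally precedes clock b (a < b).
    If neither precedes the other, the updates are concurrent and
    the synchronizer must reconcile them."""
    keys = set(a) | set(b)
    le = all(a.get(k, 0) <= b.get(k, 0) for k in keys)
    lt = any(a.get(k, 0) < b.get(k, 0) for k in keys)
    return le and lt
```

Annotating task-dependency paths with such clocks lets a replica decide whether an incoming process state is stale, newer, or concurrent with its own.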
{"title":"A Framework for Ensuring the Quality of a Big Data Service","authors":"Junhua Ding, Dongmei Zhang, Xin-Hua Hu","doi":"10.1109/SCC.2016.18","DOIUrl":"https://doi.org/10.1109/SCC.2016.18","url":null,"abstract":"Over the past several years, we have built an online big data service called CMA that includes a group of scientific modeling and analysis tools, machine learning algorithms, and a large-scale image database for biological cell classification and phenotyping. Due to the complexity and “non-testable” nature of scientific software and machine learning algorithms, adequately verifying and validating big data services is a grand challenge. In this paper, we introduce a framework for ensuring the quality of big data services. The framework includes an iterative metamorphic testing technique for testing “non-testable” scientific software, and an experiment-based approach with stratified 10-fold cross-validation for validating machine learning algorithms. The effectiveness of the framework is demonstrated by verifying and validating the software and algorithms in CMA.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114276699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
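The stratified 10-fold cross-validation mentioned above is a standard technique; a minimal sketch follows (the function name and round-robin assignment are illustrative, and this is the generic method rather than CMA's implementation):

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=10, seed=0):
    """Split sample indices into k folds that preserve class proportions.

    labels: class label per sample (parallel to the dataset).
    Returns a list of k lists of indices; in validation, each fold serves
    once as the test set while the rest train the model.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)                  # randomize within each class
        for j, i in enumerate(idxs):
            folds[j % k].append(i)         # deal indices round-robin
    return folds
```

Stratification matters for cell-classification data, where classes are often imbalanced: plain k-fold could leave a rare phenotype entirely out of some training folds.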
{"title":"An Ontology-Based Interoperability Solution for Electronic-Identity Systems","authors":"Walter Priesnitz Filho, Carlos Ribeiro, Thomas Zefferer","doi":"10.1109/SCC.2016.11","DOIUrl":"https://doi.org/10.1109/SCC.2016.11","url":null,"abstract":"Electronic identity (eID) systems enable electronic services and applications to identify users reliably. In an eID system, unique data, i.e. an eID, is assigned to each user; the eID unambiguously identifies the user within the system. In most cases, the user's eID is extended by additional attributes such as name, address, or date of birth. Electronic services and applications use the assigned eID and associated attributes to identify users unambiguously and to obtain required information about them. In practice, required user attributes may need to be exchanged between different eID systems. Unfortunately, each eID system uses its own ontology to represent and organize eIDs and associated attributes, and this diversity of ontology definitions prevents an easy exchange of eIDs and attributes between systems. To address this issue, we propose an ontology-alignment solution that provides interoperability between eID systems. We show the feasibility of the proposed solution through a Web service (WS) based implementation, which enables eID-based applications to retrieve eID attributes from different eID systems. Experiments show that the proposed solution and the resulting WS work with arbitrary ontologies and hence provide interoperability between eID systems.","PeriodicalId":115693,"journal":{"name":"2016 IEEE International Conference on Services Computing (SCC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114596066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}