Towards an Abstract Framework for Compliance
S. C. Tosatto, Guido Governatori, Pierre Kelsen
2013 17th IEEE International Enterprise Distributed Object Computing Conference Workshops, 2013-09-09. DOI: 10.1109/EDOCW.2013.16

This paper provides an abstract framework for defining the regulatory compliance problem. In particular, we show how the framework can be used to decide whether a structured process is compliant with a single regulation composed of a primary obligation and a chain of compensations.

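The regulation structure the abstract describes — a primary obligation backed by an ordered chain of compensations — can be sketched as a simple check over a finished execution trace. This is a toy illustration with assumed semantics (task names standing in for obligations), not the paper's formal framework:

```python
def is_compliant(trace, obligation, compensations):
    """Judge a finished trace against one regulation: a primary
    obligation plus an ordered chain of compensations.

    trace: ordered list of executed task names.
    obligation, compensations: task names (an illustrative
    simplification of the paper's abstract setting).
    """
    if obligation in trace:
        return "fully compliant"          # primary obligation fulfilled
    # Primary obligation violated: walk the compensation chain in order.
    for comp in compensations:
        if comp in trace:
            return "partially compliant"  # the violation was compensated
    return "non-compliant"                # nothing in the chain fired

# Hypothetical example: 'pay_fine' compensates a missed delivery deadline.
print(is_compliant(["order", "pay_fine"],
                   "deliver_on_time",
                   ["pay_fine", "cancel_order"]))  # partially compliant
```

The three-valued outcome mirrors the common distinction between full compliance, compliance achieved only via compensation, and outright violation.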
Strategic Alignment of Cloud-Based Architectures for Big Data
Rainer Schmidt, Michael Möhring
DOI: 10.1109/EDOCW.2013.22

Big Data is an increasingly significant topic for management and IT departments. Initially, Big Data applications were large on-premise installations; today, cloud services are increasingly used to implement them. This can be done in different ways, each supporting different strategic enterprise goals. We therefore develop a framework that enumerates the alternatives for implementing Big Data applications using cloud services and identifies the strategic goals supported by each alternative. The framework clarifies the options for Big Data initiatives based on cloud computing and thus improves the strategic alignment of Big Data applications.

Using Ontology Patterns for Building a Reference Software Testing Ontology
É. Souza, R. Falbo, N. Vijaykumar
DOI: 10.1109/EDOCW.2013.10

Software testing is a critical process for achieving product quality. Its importance is increasingly recognized, and there is growing concern with improving how this process is carried out. In this context, Knowledge Management emerges as an important supporting tool. However, managing relevant knowledge for reuse is difficult and requires means to represent, and associate semantics with, a large volume of test information. To address this problem, we have developed a Reference Ontology on Software Testing (ROoST). ROoST is built by reusing ontology patterns from the Software Process Ontology Pattern Language (SP-OPL). In this paper, we discuss how ROoST was developed and present the fragment of ROoST that concerns the software testing process, its activities, artifacts, and procedures.

Analyzing Task and Technology Characteristics for Enterprise Architecture Management Tool Support
M. Hauder, Max Fiedler, F. Matthes, Björn Wüst
DOI: 10.1109/EDOCW.2013.36

Adequate tool support for Enterprise Architecture (EA) and its management function is crucial for the discipline's success in practice. However, currently available tools focus on structured information, neglecting the collaborative effort required for developing and planning the EA. As a result, stakeholder utilization of these tools is often insufficient, and the availability of EA products in the organization is limited. We investigate the integration of existing EA tools with Enterprise Wikis to tackle these challenges and describe how EA initiatives can benefit from such an integration. The main goal of our research is to increase the utilization of EA tools and enhance the availability of EA products by incorporating unstructured information into the tools. For this purpose, we analyze task characteristics derived from the processes and task descriptions of the EA department of a German insurance organization and align them with technology characteristics of EA tools and Enterprise Wikis; in previous work, we empirically evaluated these technology characteristics in an online survey of 105 organizations. We apply the technology-to-performance chain model to derive the fit between task and technology characteristics for EA management (EAM) tool support in order to evaluate our hypotheses.

Using Ontologies for Enterprise Architecture Analysis
Gonçalo Antunes, Marzieh Bakhshandeh, Rudolf Mayer, J. Borbinha, A. Caetano
DOI: 10.1109/EDOCW.2013.47

Enterprise architecture (EA) aligns business and information technology through the management of different elements and domains. An architecture description encompasses a wide and heterogeneous spectrum of areas, such as business processes, metrics, application components, people, and technological infrastructure. Views express the elements and relationships of one or more domains from the perspective of specific system concerns relevant to one or more stakeholders. As a result, each view needs to be expressed in the description language that best suits its concerns. However, enterprise architecture languages tend to advocate a rigid "one model fits all" approach in which an all-encompassing description language covers several architectural domains. This approach hinders extensibility and adds complexity to the overall description language. On the other hand, integrating multiple models raises challenges of model coherence, consistency, and traceability. Moreover, EA models should be computable so that the effort involved in their analysis is manageable. This work advocates the use of ontologies and associated techniques in EA to address these issues. We propose an extensible architecture consisting of a core, domain-independent ontology that can be extended by integrating domain-specific ontologies focusing on specific concerns. The proposal is demonstrated through a real-world evaluation scenario involving the analysis of the models according to the requirements of the scenario stakeholders.

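The core-plus-extension idea in this abstract — a domain-independent ontology integrated with domain-specific ones so that the merged model stays computable — can be shown in miniature with plain triples. This is not OWL or description-logic machinery, and all element names are invented for illustration:

```python
def query(triples, s=None, p=None, o=None):
    """Return all (subject, predicate, object) triples matching a
    pattern; None acts as a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Core, domain-independent ontology: element types and relations.
core = {("Application", "isA", "Element"),
        ("Process", "isA", "Element"),
        ("BillingProcess", "isA", "Process"),
        ("CRM", "isA", "Application"),
        ("BillingProcess", "realizedBy", "CRM")}

# Domain-specific extension (here, a security concern), integrated
# simply by taking the union of the two triple sets.
security = {("Firewall", "isA", "Application"),
            ("Firewall", "protects", "BillingProcess")}

merged = core | security

# Cross-domain analysis: which elements relate to the billing process?
related = query(merged, o="BillingProcess")
```

Because the merged model is just data, analyses spanning the core and any extension reduce to pattern queries over one triple set.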
A Case Study on Textual Enterprise Architecture Modeling
Matthias Farwick, Thomas Trojer, M. Breu, Stefan Ginther, Johannes Kleinlercher, A. Doblander
DOI: 10.1109/EDOCW.2013.40

Today's Enterprise Architecture Management (EAM) tools are based on forms and graphical modeling capabilities delivered via web applications or desktop clients. However, recent developments in textual modeling tools have not yet been considered for EA modeling in research and practice. In this paper we present a novel EAM tool approach, called Txture, that consists of a textual modeling environment and a web application providing enterprise-wide architecture visualizations for different stakeholder groups. The tool is in production use at a major Austrian data center, where it proved to be intuitive and to provide efficient modeling capabilities compared to traditional approaches. We present lessons learned from the development and usage of the tool and report on its benefits and drawbacks.

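To make the textual-modeling idea concrete: an EA model can be authored as plain text and parsed into structured elements. The notation below is a hypothetical, minimal one invented for this sketch — Txture's actual syntax is richer and is not reproduced here:

```python
import re

def parse_model(text):
    """Parse a tiny, hypothetical textual EA notation of the form
    'Type name -> dependency' into a dictionary of elements. Lines
    without '->' declare an element with no dependency."""
    model = {}
    for line in text.strip().splitlines():
        m = re.match(r"(\w+)\s+(\w+)(?:\s*->\s*(\w+))?", line.strip())
        if m:
            etype, name, dep = m.groups()
            model[name] = {"type": etype, "depends_on": dep}
    return model

example = """
Server web01 -> db01
Server db01
Application crm -> web01
"""
```

A plain-text model like this is trivially diffable and versionable, which is one practical argument for textual over purely graphical EA modeling.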
Methodology and Tool for Business Process Compensation Design
A. Boubaker, H. Mili, Yasmine Charif, Abderrahmane Leshob
DOI: 10.1109/EDOCW.2013.23

A typical e-business transaction takes hours or days to complete, involves a number of partners, and comprises many failure points. With short-lived transactions, database systems ensure atomicity by either committing all of the elements of the transaction or canceling all of them in case of failure. With typical e-business transactions, strict atomicity is not practical, and we need a way of reversing the effects of those activities that cannot be rolled back: that is compensation. For a given business process, identifying the various failure points and designing the appropriate compensation processes represents the bulk of the process design effort. Yet business analysts have little or no guidance: for a given failure point, there appears to be an infinite variety of ways to compensate. We recognize that compensation is a business issue, but we argue that it can be explained in terms of a handful of parameters within the context of the REA ontology, including the type of activity, the type of resource, and organizational policies. We propose a three-step compensation design approach that 1) abstracts a business process to focus on those activities that create or modify value, 2) compensates for those activities individually, based on the values of the compensation parameters, and 3) composes those compensations using a Saga-like approach. We present our approach along with an implementation algorithm and propose a business ontology for compensation design.

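The Saga-like composition in step 3 has a well-known operational core: if an activity fails, the compensations of the already-completed activities run in reverse order. A minimal sketch of those semantics (the order-process example and all names are hypothetical, not from the paper):

```python
class SagaStep:
    """One activity together with the compensation that reverses it."""
    def __init__(self, name, action, compensation):
        self.name, self.action, self.compensation = name, action, compensation

def run_saga(steps):
    """Run activities in order; on failure, run the compensations of
    the completed activities in reverse order (Saga composition)."""
    completed = []
    for step in steps:
        try:
            step.action()
        except Exception:
            for done in reversed(completed):
                done.compensation()
            return False   # process ended compensated, not committed
        completed.append(step)
    return True            # all activities committed

# Hypothetical order process: payment succeeds, shipping fails,
# so the payment is refunded.
log = []
def ship():
    raise RuntimeError("no carrier available")

steps = [SagaStep("pay", lambda: log.append("paid"),
                  lambda: log.append("refunded")),
         SagaStep("ship", ship, lambda: log.append("unshipped"))]
```

What the paper adds on top of these generic semantics is guidance on *which* compensation to attach to each step, derived from the REA-based compensation parameters.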
Detection of Process Antipatterns: A BPEL Perspective
Francis Palma, Naouel Moha, Yann-Gaël Guéhéneuc
DOI: 10.1109/EDOCW.2013.26

With the increasing significance of the service-oriented paradigm for implementing business solutions, assessing and analyzing such solutions becomes an essential task for ensuring and improving their design quality. One way to develop such solutions, also known as service-based systems (SBSs), is to generate BPEL (Business Process Execution Language) processes by orchestrating Web services. The development of large business processes (BPs) involves design decisions, and improper design decisions in software engineering are commonly known as antipatterns: poor solutions that may degrade design quality. Detecting antipatterns is thus important for ensuring and improving the quality of BPs. However, although BP antipatterns have been defined in the literature, no effort has been made to detect them within BPEL processes. With the aim of improving the design and quality of BPEL processes, we propose the first rule-based approach to specify and detect BP antipatterns. We specify seven BP antipatterns from the literature and perform the detection for four of them in an initial experiment with three BPEL processes.

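A rule-based detector of this kind boils down to evaluating structural predicates over the BPEL process tree. The sketch below applies two illustrative rules; the rule names and thresholds are assumptions made for this example and are not the seven antipattern specifications used in the paper:

```python
import xml.etree.ElementTree as ET

# WS-BPEL 2.0 executable-process namespace, as used in BPEL documents.
BPEL = "{http://docs.oasis-open.org/wsbpel/2.0/process/executable}"

def detect(bpel_xml, max_invokes=10, max_depth=5):
    """Apply two illustrative structural rules to a BPEL process:
    too many partner invocations, and overly deep control-flow
    nesting. Returns the list of rule violations found."""
    root = ET.fromstring(bpel_xml)
    findings = []
    if sum(1 for _ in root.iter(BPEL + "invoke")) > max_invokes:
        findings.append("too many partner invocations")
    def depth(e):
        return 1 + max((depth(c) for c in e), default=0)
    if depth(root) > max_depth:
        findings.append("control flow nested too deeply")
    return findings

process = """<process
    xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <sequence>
    <invoke partnerLink="bank"/>
    <invoke partnerLink="shipper"/>
  </sequence>
</process>"""
```

Expressing each antipattern as an independent predicate keeps the rule set extensible: adding an eighth antipattern means adding one more function, not changing the detector.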
Interconnected Service Models -- Emergence of a Comprehensive Logistics Service Model
Christoph Augenstein, André Ludwig
DOI: 10.1109/EDOCW.2013.33

The logistics service industry is characterized by a high level of collaboration between logistics customers and providers. In recent years, sophisticated, knowledge-intensive business models such as fourth-party and lead logistics have evolved that are responsible for planning, coordinating, and monitoring entire supply chains across logistics companies. The Logistics Service Engineering and Management (LSEM) platform is a service-oriented infrastructure for the development and management of collaborative contract logistics, enabling fourth-party and lead logistics. The Service Modeling Framework (SMF) is a pivotal element of the LSEM platform: it allows users to define, manage, and combine logistics services from different providers and provides an integrated view of complex service setups. In doing so, the SMF enables fourth-party and lead logistics providers not only to work with logistics services but also to integrate the related service models, leading to the emergence of a comprehensive logistics service model. In this paper we show how to accomplish the bottom-up construction of a comprehensive service model at both the metamodel and model levels, and we present the resulting benefits of interconnected models in terms of information extraction and transformation, as well as the flexibility and robustness of the overall approach.

Explaining the Incorrect Temporal Events during Business Process Monitoring by Means of Compliance Rules and Model-Based Diagnosis
María Teresa Gómez López, R. M. Gasca, S. Rinderle-Ma
DOI: 10.1109/EDOCW.2013.25

Sometimes the business process model is not completely known, but a set of compliance rules can describe the ordering and temporal relations between activities, incompatibilities, and existence dependencies in the process. Analyzing these compliance rules together with the temporal events thrown during the execution of an instance makes it possible to detect and diagnose process behaviour that does not satisfy the expected behaviour. We propose to combine model-based diagnosis and constraint programming for compliance violation analysis. This combination facilitates the diagnosis of discrepancies between the compliance rules and the events the process generates, and enables us to propose correct event time intervals that satisfy the compliance rules.

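The diagnosis task this abstract describes — comparing event timestamps against temporal rules and proposing admissible time intervals — can be sketched with a single rule form. This toy check stands in for, but does not reproduce, the paper's constraint-programming formulation; the rule shape and activity names are assumptions:

```python
def diagnose(events, rules):
    """Check observed event timestamps against temporal compliance
    rules (a, b, max_gap): activity b must occur at most max_gap
    time units after activity a. For each violated rule, also return
    the admissible time interval for b."""
    findings = []
    for a, b, max_gap in rules:
        if a in events and b in events:
            gap = events[b] - events[a]
            if not 0 <= gap <= max_gap:
                # Diagnosis: when b *should* have occurred.
                findings.append((b, (events[a], events[a] + max_gap)))
    return findings

# 'approve' happened 15 units after 'submit', but the rule allows 10:
trace = {"submit": 0, "approve": 15}
print(diagnose(trace, [("submit", "approve", 10)]))  # [('approve', (0, 10))]
```

Returning the admissible interval, not just a boolean, is the point: it explains the violation and suggests how the instance could have satisfied the rule.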