Handling Service Level Agreements in IoT = Minding Rules + Log Analytics?
Trung-Viet Nguyen, Lam-Son Lê, Hong Linh Truong, Khuong Nguyen-An, P. Ha
With the rise of the Internet of Things, end-users expect to obtain data from well-connected smart devices and stations through data services provisioned in distributed architectures. Such services can be aggregated in a number of smart ways to provide end-users and third-party applications with sophisticated data (e.g., weather data coupled with soil pollution), resulting in a growing number of service offerings to be requested. Service offerings that have been shortlisted for a certain data request (e.g., rainfall in a particular farming site) need to be ranked according to the end-users' preferences. Service level agreements, i.e., the mutual responsibilities between the service provider and its consumers, address this sort of preference. Unfortunately, provisioning quality-aware services under such agreements still stays on the sidelines. In this paper, we propose a novel service architecture in which service level agreements are: (i) accumulated over time from IoT service transactions; (ii) compiled when aggregating IoT services; (iii) used as a ranking criterion for suggesting IoT service offerings. We demonstrate our new approach in the service provisioning of agricultural datasets taken from a farming site in the Mekong Delta in Vietnam.
{"title":"Handling Service Level Agreements in IoT = Minding Rules + Log Analytics?","authors":"Trung-Viet Nguyen, Lam-Son Lê, Hong Linh Truong, Khuong Nguyen-An, P. Ha","doi":"10.1109/EDOC.2018.00027","DOIUrl":"https://doi.org/10.1109/EDOC.2018.00027","url":null,"abstract":"With the rise of Internet of Things, end-users expect to obtain data from well-connected smart devices and stations through data services being provisioned in distributed architectures. Such services could be aggregated in a number of smart ways to provide the end-users and third-party applications with sophisticated data (e.g., weather data coupled with soil pollution), resulting in a growing number of service offerings to be requested. Service offerings that have been shortlisted for a certain data request (e.g., rainfall in a particular farming site) need to be ranked according to the end-users' preference. Service level agreements, i.e., the mutual responsibilities between the service provider and its consumers, address this sort of preference. Unfortunately, provisioning quality-aware services under this term still stays on the sidelines. In this paper, we propose a novel service architecture where the service level agreements shall be: (i) accumulated overtime on IoT service transactions; (ii) compiled when aggregating IoT services; (iii) used as a ranking criterion for suggesting IoT service offerings. We demonstrate our new approach in the service provisioning of agricultural datasets taken from a farming site of the Mekong Delta in Vietnam.","PeriodicalId":6544,"journal":{"name":"2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC)","volume":"23 1","pages":"145-153"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76535127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifying and Structuring Challenges in Large-Scale Agile Development Based on a Structured Literature Review
Ömer Uludağ, Martin Kleehaus, Christoph Caprano, F. Matthes
Over the last two decades, agile methods have transformed software development practice by strongly emphasizing team collaboration, customer involvement, and change tolerance. The success of agile methods for small, co-located teams has inspired organizations to increasingly apply agile practices to large-scale efforts. Since these methods were originally designed for small teams, unprecedented challenges occur when introducing them at a larger scale, such as inter-team coordination and communication, dependencies on other organizational units, or general resistance to change. Compared to the rich body of agile software development literature describing typical challenges, the recurring challenges of stakeholders and initiatives in large-scale agile development have not yet been sufficiently studied through secondary studies. With this paper, we aim to fill this gap by presenting a structured literature review on challenges in large-scale agile development. We identified 79 challenges grouped into eleven categories.
{"title":"Identifying and Structuring Challenges in Large-Scale Agile Development Based on a Structured Literature Review","authors":"Ömer Uludağ, Martin Kleehaus, Christoph Caprano, F. Matthes","doi":"10.1109/EDOC.2018.00032","DOIUrl":"https://doi.org/10.1109/EDOC.2018.00032","url":null,"abstract":"Over the last two decades, agile methods have transformed and brought unique changes to software development practice by strongly emphasizing team collaboration, customer involvement, and change tolerance. The success of agile methods for small, co-located teams has inspired organizations to increasingly apply agile practices to large-scale efforts. Since these methods are originally designed for small teams, unprecedented challenges occur when introducing them at larger scale, such as inter-team coordination and communication, dependencies with other organizational units or general resistances to changes. Compared to the rich body of agile software development literature describing typical challenges, recurring challenges of stakeholders and initiatives in large-scale agile development has not yet been studied through secondary studies sufficiently. With this paper, we aim to fill this gap by presenting a structured literature review on challenges in large-scale agile development. We identified 79 challenges grouped into eleven categories.","PeriodicalId":6544,"journal":{"name":"2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC)","volume":"52 1","pages":"191-197"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81575148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from the EDOC 2018 Program Chairs","authors":"","doi":"10.1109/edoc.2018.00006","DOIUrl":"https://doi.org/10.1109/edoc.2018.00006","url":null,"abstract":"","PeriodicalId":6544,"journal":{"name":"2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC)","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85445727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling and Automated Execution of Application Deployment Tests
Michael Wurster, Uwe Breitenbücher, Oliver Kopp, F. Leymann
In recent years, many deployment systems have been developed that process deployment models to automatically provision applications. The main objective of these systems is to shorten delivery times and to ensure a proper execution of the deployment process. However, these systems mainly focus on the correct technical execution of the deployment and do not check whether the deployed application is working properly. Especially in DevOps scenarios where applications are modified frequently, this can quickly lead to broken deployments, for example, if the deployment model specifies a wrong component version that has not been adapted to a new database schema. Ironically, even hardly noticeable errors in deployment models quickly result in technically successful deployments that do not work at all. In this paper, we tackle these issues. We present a modeling concept that enables developers to define deployment tests directly along with the deployment model. These tests are then automatically run by a runtime after deployment to verify that the application is working properly. To validate the technical feasibility of the approach, we applied the concept to TOSCA and extended an existing open-source TOSCA runtime.
{"title":"Modeling and Automated Execution of Application Deployment Tests","authors":"Michael Wurster, Uwe Breitenbücher, Oliver Kopp, F. Leymann","doi":"10.1109/EDOC.2018.00030","DOIUrl":"https://doi.org/10.1109/EDOC.2018.00030","url":null,"abstract":"In recent years, many deployment systems have been developed that process deployment models to automatically provision applications. The main objective of these systems is to shorten delivery times and to ensure a proper execution of the deployment process. However, these systems mainly focus on the correct technical execution of the deployment, but do not check whether the deployed application is working properly. Especially in DevOps scenarios where applications are modified frequently, this can quickly lead to broken deployments, for example, if a wrong component version was specified in the deployment model that has not been adapted to a new database schema. Ironically, even hardly noticeable errors in deployment models quickly result in technically successful deployments, which do not work at all. In this paper, we tackle these issues. We present a modeling concept that enables developers to define deployment tests directly along with the deployment model. These tests are then automatically run by a runtime after deployment to verify that the application is working properly. To validate the technical feasibility of the approach, we applied the concept to TOSCA and extended an existing open source TOSCA runtime.","PeriodicalId":6544,"journal":{"name":"2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC)","volume":"75 1","pages":"171-180"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85985227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling Ad-Hoc Changes to Object-Aware Processes
Kevin Andrews, S. Steinau, M. Reichert
Contemporary process management systems support users during the execution of repetitive, predefined business processes. However, when unforeseen situations occur, which are not part of the process model serving as the template for process execution, contemporary process management technology is often unable to offer adequate user support. One solution to this problem is to allow for ad-hoc changes to process models, i.e., changes that may be applied on the fly to a running process instance. As opposed to the widespread activity-centric process modeling paradigm, for which the support of instance-specific ad-hoc changes is well researched, albeit not supported by most commercial solutions, there is no corresponding support for ad-hoc changes in other process support paradigms, such as artifact-centric or object-aware process management. This paper presents concepts for supporting such ad-hoc changes in object-aware process management, and gives insights into the challenges we tackled when implementing this kind of process flexibility in the PHILharmonicFlows process execution engine. The development of such advanced features is highly relevant for data-centric BPM, as the research field is generally perceived as having low maturity when compared to activity-centric BPM.
{"title":"Enabling Ad-Hoc Changes to Object-Aware Processes","authors":"Kevin Andrews, S. Steinau, M. Reichert","doi":"10.1109/EDOC.2018.00021","DOIUrl":"https://doi.org/10.1109/EDOC.2018.00021","url":null,"abstract":"Contemporary process management systems support users during the execution of repetitive, predefined business processes. However, when unforeseen situations occur, which are not part of the process model serving as the template for process execution, contemporary process management technology is often unable to offer adequate user support. One solution to this problem is to allow for ad-hoc changes to process models, i.e., changes that may be applied on the fly to a running process instance. As opposed to the widespread activity-centric process modeling paradigm, for which the support of instance-specific ad-hoc changes is well researched, albeit not supported by most commercial solutions, there is no corresponding support for ad-hoc changes in other process support paradigms, such as artifact-centric or object-aware process management. This paper presents concepts for supporting such ad-hoc changes in object-aware process management, and gives insights into the challenges we tackled when implementing this kind of process flexibility in the PHILharmonicFlows process execution engine. The development of such advanced features is highly relevant for data-centric BPM, as the research field is generally perceived as having low maturity when compared to activity-centric BPM.","PeriodicalId":6544,"journal":{"name":"2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC)","volume":"18 1","pages":"85-94"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73900282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling and Analyzing Process Architecture for Context-Driven Adaptation: Designing Cognitively-Enhanced Business Processes for Enterprises
Zia Babar, Alexei Lapouchnian, E. Yu, A. Chan, Sebastian Carbajales
Organizations are increasingly looking to adopt and incorporate cognitive capabilities into key business processes to aid human decision-making activities. The availability of context data helps improve decision-making involving both human users and cognitive systems, while ensuring continuing satisfaction of enterprise objectives. Therefore, the ongoing monitoring, selection, and management of context data for redesigning sections of the overall business process structure, particularly where cognitive systems are integrated into business processes, is of great interest. This paper proposes a systematic model-based approach to visualize the detection of context changes in a business process, determine an appropriate response to a context change, and identify the corresponding reconfiguration of processes in another part of the enterprise. The approach not only handles context but also considers the processes that need to respond to changes in that context; together these processes constitute a business process architecture. This enables business process reconfiguration to better integrate cognitive systems in process activities requiring decision-making. The use of such modeling techniques facilitates the investigation of multiple process configurations while considering the satisfaction of functional and non-functional objectives and ongoing contextual changes.
{"title":"Modeling and Analyzing Process Architecture for Context-Driven Adaptation: Designing Cognitively-Enhanced Business Processes for Enterprises","authors":"Zia Babar, Alexei Lapouchnian, E. Yu, A. Chan, Sebastian Carbajales","doi":"10.1109/EDOC.2018.00018","DOIUrl":"https://doi.org/10.1109/EDOC.2018.00018","url":null,"abstract":"Organizations are increasingly looking to adopt and incorporate cognitive capabilities into key business processes to aid human decision-making activities. The availability of context data helps with improved decision-making involving both human users and cognitive systems, while ensuring continuing satisfaction of enterprise objectives. Therefore, the ongoing monitoring, selection and management of context data for redesigning sections of the overall business process structure, particularly where the cognitive systems are integrated in business processes, is of great inter-est. This paper proposes a systematic model-based approach to visualize the detection of context changes in a business process, determine an appropriate response to this context change, and identify the corresponding reconfiguration of processes in another part of the enterprise. This paper not only handles context, but also looks at the processes that need to respond to changes in that context. Together these processes constitute a business process architecture. This enables business process reconfiguration to better integrate cognitive systems in process activities requiring decision-making. The use of such modeling techniques facilitates the investigation of multiple process configurations while considering satisfaction of functional and non-functional objectives and ongoing contextual changes.","PeriodicalId":6544,"journal":{"name":"2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC)","volume":"107 1","pages":"58-67"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77421817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enterprise Architecture 4.0 – A Vision, an Approach and Software Tool Support
Adina Aldea, M. Iacob, A. Wombacher, M. Hiralal, T. Franck
Industry 4.0 has begun to shape the way organizations operate by emphasizing the need for a duality between physical machines and sensors and the (big) data they generate, exchange, and use. Manufacturing is one of several industries expected to be impacted by this technological revolution. Increasing the information flows and the integration of systems within organizations and along the supply chain is considered one of the main challenges these organizations need to address. One approach to addressing this challenge is to investigate how this abundance of (big) operational data can be used in combination with IT-driven design approaches such as Enterprise Architecture. Therefore, in this paper we propose our vision for Enterprise Architecture 4.0, i.e., an extended Enterprise Architecture approach for the context of Industry 4.0, and we give an account of our (work-in-progress) efforts to design a model management and analytics software platform supporting this vision. The usage of the software tool is exemplified through a case study of an organization that develops IT and automation systems for the husbandry sector.
{"title":"Enterprise Architecture 4.0 – A Vision, an Approach and Software Tool Support","authors":"Adina Aldea, M. Iacob, A. Wombacher, M. Hiralal, T. Franck","doi":"10.1109/EDOC.2018.00011","DOIUrl":"https://doi.org/10.1109/EDOC.2018.00011","url":null,"abstract":"Industry 4.0 has begun to shape the way organizations operate by emphasizing the need for a duality between physical machines and sensors, and the (big) data they generate, exchange and use. Manufacturing is one of several industries which is expected to be impacted by this technological revolution. Increasing the information flows and integration of systems within organizations, and along the supply chain is considered one of the main challenges that needs to be addressed by these organizations. One approach for addressing this challenge is to investigate how this abundance of (big) operational data can be used in combination with IT-driven design approaches, such as Enterprise Architecture. Therefore, in this paper we propose our vision for Enterprise Architecture 4.0, i.e. an extended Enterprise Architecture approach for the context of Industry 4.0, and we give an account of our (work-in-progress) efforts to design a model management and analytics software platform supporting this vision. The usage of the software tool is exemplified with the help of a case study with an organization that develops IT and automation systems for the husbandry sector.","PeriodicalId":6544,"journal":{"name":"2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC)","volume":"93 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82986308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Scrum for Implementing IT Governance with COBIT 5
Ana Claudia Amorim, M. Silva, R. Pereira, M. Gonçalves
COBIT 5 is a widely used framework for implementing sound governance of enterprise IT (GEIT). Despite the existence of official guidance, practitioners still encounter several challenges. Currently, ISACA's official implementation solution follows a sequentially ordered process corresponding to a traditional approach; however, organizations are increasingly embracing more agile approaches for managing projects where the solution is not clear from the beginning. Using the Design Science Research Methodology, the authors analyse the current state of the art and provide a Scrum-based methodology for managing a COBIT 5 programme at the team level. With a hybrid agile-traditional approach, the authors aim to eliminate some known challenges of COBIT 5, such as lack of support from top management and misaligned scopes and solutions. Additionally, the authors present the results obtained from applying the designed methodology in a COBIT 5 programme in the Portuguese Finance Ministry, together with two series of interviews: 10 with experts from both the Scrum and COBIT 5 areas to evaluate the solution, and 6 with the team involved in the demonstration programme to understand whether the objectives were achieved. The article ends with lessons learned, limitations, and future work.
{"title":"Using Scrum for Implementing IT Governance with COBIT 5","authors":"Ana Claudia Amorim, M. Silva, R. Pereira, M. Gonçalves","doi":"10.1109/EDOC.2018.00033","DOIUrl":"https://doi.org/10.1109/EDOC.2018.00033","url":null,"abstract":"COBIT 5 is a widely-used framework for implementing sound governance of enterprise IT (GEIT). Despite the existence of official guidance, there are still several challenges that we can encounter. Currently, the ISACA's official implementation solution follows a sequentially ordered process corresponding to a traditional approach, however, organizations are increasingly embracing more agile ones for managing projects where the solution is not clear from the beginning. Using the Design Science Research Methodology, the authors analyse the current state of art and provide a Scrum based methodology for managing at team level a COBIT 5 programme. With a hybrid agile-traditional approach, the authors aim to eliminate some known challenges of COBIT 5, such as lack of support from top management and misaligned scopes and solutions. Additionally, in this paper, the authors present the results obtained from applying the designed methodology in a COBIT 5 programme in the Portuguese Finance Ministry, as well as inspect two series of interviews: 10 performed with experts from both Scrum and COBIT 5 areas to evaluate the solution, and 6 others with the team involved in the demonstration programme, to understand if the objectives where achieved. The article ends with lessons learned, limitations and future work.","PeriodicalId":6544,"journal":{"name":"2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC)","volume":"11 1","pages":"198-207"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78659470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RESEDA: Declaring Live Event-Driven Computations as REactive SEmi-Structured DAta
J. Seco, S. Debois, Thomas T. Hildebrandt, Tijs Slaats
Enterprise computing applications generally consist of several inter-related business processes linked together via shared data objects and events. We address the open challenge of providing formal modelling and implementation techniques for such enterprise computing applications, introducing the declarative, data-centric and event-driven process language RESEDA for REactive SEmi-structured DAta. The language is inspired by the computational model of spreadsheets and recent advances in declarative business process modelling notations. The key idea is to associate either input events or reactive computation events with the individual elements of semi-structured data and to declare reactive behaviour as explicit reaction rules and constraints between these events. Moreover, RESEDA comes with a formal operational semantics given as rewrite rules, supporting both formal analysis and persistent execution of the application as sequences of rewrites of the data. The data, along with the set of constraints, thereby constitutes at the same time the specification of the data, its behaviour, and the run-time execution component. The key contribution of the paper is to introduce the RESEDA language and its formal execution semantics, and to give a sufficient condition for liveness of programs. We also establish Turing-equivalence of the language independently of the choice of underlying data expressions and exemplify the use of RESEDA with a running example of an online store. A prototype implementation of RESEDA and the examples of the paper are available online at http://dcr.tools/reseda.
{"title":"RESEDA: Declaring Live Event-Driven Computations as REactive SEmi-Structured DAta","authors":"J. Seco, S. Debois, Thomas T. Hildebrandt, Tijs Slaats","doi":"10.1109/EDOC.2018.00020","DOIUrl":"https://doi.org/10.1109/EDOC.2018.00020","url":null,"abstract":"Enterprise computing applications generally consists of several inter-related business processes linked together via shared data objects and events. We address the open challenge of providing formal modelling and implementation techniques for such enterprise computing applications, introducing the declarative, data-centric and event-driven process language RESEDA for REactive SEmi-structured DAta. The language is inspired by the computational model of spreadsheets and recent advances in declarative business process modelling notations. The key idea is to associate either input events or reactive computation events to the individual elements of semi-structured data and declare reactive behaviour as explicit reaction rules and constraints between these events. Moreover, RESEDA comes with a formal operational semantics given as rewrite rules supporting both formal analysis and persistent execution of the application as sequences of rewrites of the data. The data, along with the set of constraints, thereby at the same time constitutes the specification of the data, its behaviour and the run-time execution component. This key contribution of the paper is to introduce the RESEDA language, its formal execution semantics and give a sufficient condition for liveness of programs. We also establish Turing-equivalence of the language independently of the choice of underlying data expressions and exemplify the use of RESEDA by a running example of an online store. A prototype implementation of RESEDA and the examples of the paper are available on-line at http://dcr.tools/reseda.","PeriodicalId":6544,"journal":{"name":"2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC)","volume":"12 1","pages":"75-84"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86129274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enterprise Architecture Planning in the Context of Industry 4.0 Transformations
Emmanuel Nowakowski, Matthias Farwick, Thomas Trojer, M. Haeusler, Johannes Kessler, R. Breu
Industry 4.0 has become an increasingly important driver of change in the manufacturing industry. However, companies are struggling with the often risky and expensive IT transformation projects that are needed to reach full automation of the sales, production, and logistics cycle. We observed a lack of research on the practice of modeling and planning IT transformations towards Industry 4.0. To form the basis for research in this area, we conducted a series of expert interviews on the topic of enterprise architecture transformation planning in the context of Industry 4.0. As a result, we identified several pressing challenges that need to be addressed by organizations to successfully model, plan, and execute such IT transformations. This paper contributes to theory by identifying problems and potential design artifacts that are able to mitigate these problems.
{"title":"Enterprise Architecture Planning in the Context of Industry 4.0 Transformations","authors":"Emmanuel Nowakowski, Matthias Farwick, Thomas Trojer, M. Haeusler, Johannes Kessler, R. Breu","doi":"10.1109/EDOC.2018.00015","DOIUrl":"https://doi.org/10.1109/EDOC.2018.00015","url":null,"abstract":"Industry 4.0 has become an increasing driver of change in the manufacturing industry. However, companies are struggling with the often risky and expensive IT transformation projects that are needed to reach full automation of the sales, production and logistics cycle. We observed a lack of research on the practice of modeling and planning IT transformations towards Industry 4.0. To form the basis of research in this area, we conducted a series of expert interviews on the topic of enterprise architecture transformation planning in the context of Industry 4.0. As a result, we identified several pressing challenges that need to be addressed by organizations to successfully model, plan, and execute such IT transformations. This paper contributes to theory by identifying problems and potential design artifacts that are able to mitigate these problems.","PeriodicalId":6544,"journal":{"name":"2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC)","volume":"7 1","pages":"35-43"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77500399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}