Title: Towards a responsible early-warning system: Knowledge implications in decision support design
Authors: M. Arru, E. Negre, C. Rosenthal-Sabroux, M. Grundstein
Pub Date: 2016-06-01 | DOI: 10.1109/RCIS.2016.7549288
Warnings can help prevent damage and harm if they are issued in a timely manner and provide information that helps responders and the population adequately prepare for the disaster to come. Today, many indicator and sensor systems are designed to reduce disaster risks or issue early warnings. In a socially and environmentally responsible world, we need effective Early-Warning Systems (EWS). EWS are Information and Knowledge Systems dedicated to protecting people against disaster damage. Such systems are designed to integrate data, information and knowledge from various sources and actors who do not usually interact, in order to issue early warnings. This paper introduces knowledge implications in EWS decision support design in general, with a discussion of communication processes between data, information and knowledge. We propose a knowledge-oriented vision of EWS elements to examine existing systems and provide dynamic and flow-oriented models. From this perspective, we analyze knowledge integration processes in the design of the fire safety system of our university.
Title: Combining big data analytics with business process using reengineering
Authors: Meena Jha, Sanjay Jha, L. O'Brien
Pub Date: 2016-06-01 | DOI: 10.1109/RCIS.2016.7549307
With the rise of Big Data, a data-driven approach to business is transforming enterprises. Companies today are thinking about and using data in myriad new ways to drive business value: from reducing risk and fraud in the financial sector to bringing new pharmaceuticals to market more quickly and with higher efficacy. Retailers can track purchase patterns and consumer preferences more accurately to guide product and marketing strategies. Media companies can offer more accurate recommendations and create specialized promotions. Businesses of all kinds can identify new revenue opportunities and operational efficiencies. Big Data can mean different things to different organizations, but one theme remains constant: Big Data calls for a new way of thinking and for combining data analytics with business process workflows. Until now, businesses were limited to utilizing the customer and business information contained within in-house systems. Now they increasingly analyze external data too, gaining new insights into customers, markets, supply chains and operations. Organisational silos and a dearth of data specialists are the main obstacles to putting Big Data to work effectively for decision-making. Big data analytics needs to be combined with business processes to improve operations and offer innovative services to customers, and business processes need to be reengineered for big data analytics. In this paper we discuss how combining Big Data analytics with business processes through reengineering can deliver benefits to organizations and customers.
Title: FreGraPaD: Frequent RDF graph patterns detection for semantic data streams
Authors: Fethi Belghaouti, A. Bouzeghoub, Zakia Kazi-Aoul, Raja Chiky
Pub Date: 2016-06-01 | DOI: 10.1109/RCIS.2016.7549333
Nowadays, high volumes of data are generated and published at very high velocity by real-time systems such as social networks, e-commerce platforms, weather stations and sensors, producing heterogeneous data streams. Semantic Web technologies have been used to take advantage of linked data and offer interoperable solutions. To analyze these huge volumes of data, different stream mining algorithms exist, such as compression or load shedding. Nevertheless, most of them need many passes through the data and often store part of it on disk. To apply efficient compression to semantic data streams, we first need to detect frequent graph patterns in RDF streams. In this article, we present FreGraPaD, an algorithm that detects those patterns in a single pass, using exclusively internal memory and following a data-structure-oriented approach. Experimental results clearly confirm the good accuracy of FreGraPaD in detecting frequent graph patterns in semantic data streams.
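To make the single-pass idea concrete, here is a minimal sketch of frequent-pattern counting over an RDF triple stream. It simplifies a "graph pattern" to the set of predicates attached to a subject and uses only in-memory dictionaries; it is an illustration of the general technique, not the FreGraPaD algorithm itself (the function name and pattern definition are invented for this example).

```python
from collections import Counter

def frequent_patterns(triples, min_support):
    """Single-pass, in-memory detection of frequent subject patterns.

    A 'pattern' here is the sorted tuple of predicates attached to a
    subject -- a deliberate simplification of RDF graph patterns.
    """
    per_subject = {}
    for s, p, o in triples:          # one pass over the stream
        per_subject.setdefault(s, set()).add(p)
    counts = Counter(tuple(sorted(preds)) for preds in per_subject.values())
    return {pat: n for pat, n in counts.items() if n >= min_support}
```

For example, a stream in which two subjects carry both a `name` and an `age` predicate yields the frequent pattern `("age", "name")` with support 2, while a predicate set seen only once is filtered out.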
Title: A process tree-based algorithm for the detection of implicit dependencies
Authors: M. Chabrol, B. Dalmas, S. Norre, Sophie Rodier
Pub Date: 2016-06-01 | DOI: 10.1109/RCIS.2016.7549357
Process Mining aims to extract information from event logs to highlight the underlying business processes. It is useful in situations where there is no detailed and complete knowledge of how an overall system works, such as in a hospital where most processes are complex and ad hoc. Many Process Mining discovery techniques have been proposed so far, but many challenges remain. Implicit dependencies are one of them: a choice-related phenomenon, implicit dependencies are not taken into account by most algorithms and graphical representations. In this paper, we propose the Implicit Dependencies Miner, a process tree-based algorithm able to detect relevant dependencies.
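An implicit (long-distance) dependency means that the outcome of an earlier choice determines a later one, even though no direct edge connects them. The toy checker below captures that intuition over a list of traces; it is only an illustration of the phenomenon, not the paper's process tree-based miner (the function and activity names are invented).

```python
def implicit_dependency(traces, early, late):
    """Check whether the choice among `early` determines the choice among `late`.

    `early` and `late` are sets of mutually exclusive activities. Returns
    the observed mapping if each early activity always co-occurs with the
    same late activity, else None.
    """
    mapping = {}
    for trace in traces:
        e = next((a for a in trace if a in early), None)
        l = next((a for a in trace if a in late), None)
        if e is None or l is None:
            continue                     # trace does not exercise both choices
        if mapping.setdefault(e, l) != l:
            return None                  # dependency violated by this trace
    return mapping
```

On a log where every trace with activity A later contains X, and every trace with B later contains Y, the checker returns the mapping {A: X, B: Y}; a single counterexample trace makes it return None.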
Title: Enabling process mining on sensor data from smart products
Authors: M. L. V. Eck, N. Sidorova, Wil M.P. van der Aalst
Pub Date: 2016-06-01 | DOI: 10.1109/RCIS.2016.7549355
In this paper we address the challenge of applying process mining to discover models of human behaviour from sensor data. This challenge stems from a gap between sensor data and the event logs that process mining techniques take as input, so we provide a transformation approach to bridge this gap. As a result, besides the automatic discovery of process models, the transformed sensor data can also be used by various other process mining techniques, e.g. to identify differences between observed and expected behaviour. We discuss the transformation approach in the context of the design process of smart products and related services, using a case study performed at Philips where a smart baby bottle has been developed. This case study also demonstrates that the use of process mining can add value to the smart product design process.
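The gap the paper bridges can be illustrated with a minimal sketch: raw sensor readings have no notion of "events", so a transformation must segment them into start/complete events before a process miner can consume them. The threshold-based segmentation below is one naive way to do this, invented for illustration; the paper's transformation approach is richer.

```python
def sensor_to_events(readings, threshold):
    """Turn raw sensor readings into start/complete activity events.

    `readings` is a list of (timestamp, value) pairs; a value above
    `threshold` is taken to mean the monitored activity is ongoing.
    """
    events, active = [], False
    for ts, value in readings:
        if value > threshold and not active:
            events.append((ts, "activity", "start"))
            active = True
        elif value <= threshold and active:
            events.append((ts, "activity", "complete"))
            active = False
    return events
```

A reading series that rises above the threshold twice yields two start/complete event pairs, i.e. two occurrences of the activity in the resulting event log.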
Title: Reactive information processing
Authors: F. Barbier
Pub Date: 2016-06-01 | DOI: 10.1109/RCIS.2016.7549276
It is widely agreed that there are two major concerns in Internet computing: Big Data and the Internet of Things (IoT). Expected evolutions and progress in technology and science depend on the development of suitable paradigms (e.g., plug & play middleware for the IoT, or MapReduce for Big Data) to face this unprecedented nature of Internet computing. To that extent, information processing (from raw data to meaningful, i.e., semantically rich, information) entails building a considerable pool of Internet software in a truly different way. Such a paradigm shift is exposed in “The Reactive Manifesto” (www.reactivemanifesto.org). From an architectural perspective, this manifesto promotes the componentization of software applications along with the idea of reactiveness: event-driven/message-driven design, elasticity, responsiveness and resilience. In short, applications' components gain emerging (reactive) features through their ability to seamlessly cooperate via events and messages. Nowadays, successes like Node.js or WebSockets strongly confirm the benefit of reactiveness. Beyond that, this keynote tries to demystify and illustrate reactiveness through the State Chart XML (SCXML) W3C standard, and discusses methods to design reactive Internet software, from models to concrete implementation supports.
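The statechart idea that SCXML standardizes can be conveyed with a toy event-driven component: a machine that sits idle until an event arrives, then reacts by transitioning. This Python sketch (state and event names invented for illustration) shows the reactive core; a real SCXML engine adds hierarchy, parallelism and datamodel support on top.

```python
class StateMachine:
    """Minimal event-driven state machine in the spirit of SCXML statecharts.

    `transitions` maps (current_state, event) -> next_state; events with
    no matching transition are ignored, leaving the state unchanged.
    """
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def send(self, event):
        # React to an incoming event/message by firing the matching transition.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state
```

Wiring such components together through event queues, rather than through blocking calls, is precisely the message-driven coupling the Reactive Manifesto advocates.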
Title: Solving the next adaptation problem with prometheus
Authors: Konstantinos Angelopoulos, Fatma Başak Aydemir, P. Giorgini, J. Mylopoulos
Pub Date: 2016-06-01 | DOI: 10.1109/RCIS.2016.7549328
Dealing with multiple requirement failures is an essential capability for self-adaptive software systems, and this capability becomes more challenging in the presence of conflicting goals. This paper is concerned with the next adaptation problem: the problem of finding the best next adaptation in the presence of multiple failures. `Best' here means that the chosen adaptation optimizes a given set of objective functions, such as the cost of adaptation or the degree of failure for system requirements. The paper proposes a formal framework for defining the next adaptation problem, assuming that we can quantitatively specify the constraints that hold between control parameters and the indicators that measure the degree of failure of each requirement. These constraints, along with one or several objective functions, are translated into a constrained multi-objective optimization problem that can be solved using an OMT/SMT (Optimization Modulo Theories/Satisfiability Modulo Theories) solver, such as OptiMathSAT. The proposed framework is illustrated with the Meeting Scheduler exemplar and a second, e-shop case study.
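The shape of the optimization problem can be sketched without a solver: enumerate assignments to the control parameters, keep those satisfying all constraints, and pick the one minimizing the objective. The brute-force stand-in below (parameter names and constraints invented for illustration) conveys the encoding; the paper's framework instead hands the problem to an OMT/SMT solver such as OptiMathSAT, which scales far beyond enumeration.

```python
from itertools import product

def next_adaptation(param_domains, constraints, objective):
    """Find the feasible control-parameter assignment minimizing `objective`.

    `param_domains` maps parameter name -> iterable of values;
    `constraints` is a list of predicates over an assignment dict.
    """
    names = list(param_domains)
    best, best_cost = None, float("inf")
    for values in product(*(param_domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):   # feasibility check
            cost = objective(assignment)
            if cost < best_cost:
                best, best_cost = assignment, cost
    return best, best_cost
```

Swapping the single objective for a weighted sum of several objective functions turns this into the multi-objective variant the paper discusses.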
Title: Continuous Auditing & Continuous Monitoring: Continuous value?
Authors: Rutger van Hillo, H. Weigand
Pub Date: 2016-06-01 | DOI: 10.1109/RCIS.2016.7549279
Advancements in information technology, new laws and regulations, and rapidly changing business conditions have led to a need for more timely and ongoing assurance that controls work effectively. Continuous Auditing (CA) and Continuous Monitoring (CM) technologies have made this possible by obtaining real-time audit evidence and enabling organizations to review on an ongoing basis whether controls and systems function as intended. Although organizations understand the benefits of CA/CM, the current state of adoption is relatively low, not least because organizations find it difficult to quantify the value. This research used a design research approach to develop a framework that addresses the added value of redesigned internal controls within organizations' IT-supported business processes. The value of CA/CM is broken down into three distinct domains: Efficiency, Assurance and Quality. The Waterfall method is proposed as a visualization method to clearly indicate the possible cost savings and value increase of a redesigned control. The framework has been tested and evaluated within a case organization, and we conclude that it is applicable for providing more insight into the value of CA/CM.
Title: Crowdsourcing transparency requirements through structured feedback and social adaptation
Authors: M. Hosseini, A. Shahri, Keith Phalp, Raian Ali
Pub Date: 2016-06-01 | DOI: 10.1109/RCIS.2016.7549330
Transparency is one of the main requirements in business information systems; it has unique characteristics and requires dedicated engineering approaches. Despite that, transparency requirements have often been studied alongside other mainstream information-related requirements, such as privacy, and seldom as a first-class concept. In addition, the literature on transparency is mainly driven by the perspective of information providers, while the large number of stakeholders who receive or request information is usually neglected. To achieve a holistic and more efficient management of transparency requirements, we propose a conceptual framework which integrates three mechanisms: crowdsourcing, structured feedback and social adaptation. Crowdsourcing facilitates the involvement of a large, diverse group of stakeholders in transparency engineering. The use of structured feedback helps automate the process of feedback acquisition and analysis. Finally, as transparency requirements evolve over time, social adaptation can be applied to adapt the business information system to meet the stakeholders' emerging transparency requirements.
Title: Addressing inter-organisational process flexibility using versions: The VP2M approach
Authors: Fatma Ellouze, M. Chaâbane, R. Bouaziz, E. Andonoff
Pub Date: 2016-06-01 | DOI: 10.1109/RCIS.2016.7549280
Process flexibility has been investigated in depth in the context of intra-organisational processes, but it is still an open issue when processes cross the boundaries of companies. In this paper, we address the modelling of flexible inter-organisational processes using a version-based approach. Indeed, versions are known to be a powerful technique for dealing with the variability, evolution and adaptation of processes, which are the three main needs of process flexibility. More precisely, this paper presents VP2M (Version of Process Meta-Model), a meta-model supporting the modelling of versions of inter-organisational processes, and addresses both its static and dynamic aspects. It also illustrates process version modelling with the Subsea Pipeline process example.
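The core of a version-based approach is a derivation tree: each process version records its schema and the version it was derived from, so variability and evolution are captured without overwriting earlier models. The toy structure below illustrates that idea only; it is not the VP2M meta-model, and the class, attribute and activity names are invented for this sketch.

```python
class ProcessVersion:
    """A minimal version tree for process models: each version keeps its
    schema and a link to the parent version it was derived from."""
    def __init__(self, name, schema, parent=None):
        self.name, self.schema, self.parent = name, schema, parent

    def derive(self, name, **changes):
        # Create a child version whose schema overrides selected entries.
        return ProcessVersion(name, {**self.schema, **changes}, parent=self)

    def lineage(self):
        # Walk parent links back to the root, oldest version first.
        v, chain = self, []
        while v:
            chain.append(v.name)
            v = v.parent
        return chain[::-1]
```

Deriving a new version leaves the parent's schema intact, which is what lets a running inter-organisational process keep executing against an older version while partners adopt a newer one.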