Pub Date: 2024-09-03 | DOI: 10.1007/s10270-024-01204-x
Tong Li, Yiting Wang, Xiang Wei, Xueying Zhang, Yu Liu
Understanding and interpreting vast amounts of information is pivotal in the contemporary data-rich age, and data visualization has emerged as a key means of comprehending such data. Likewise, an appropriate visualization can enhance software modeling by providing straightforward and interactive representations. However, current data visualization methods predominantly require users to have visualization expertise, which is usually difficult to acquire in practice. It is therefore essential to bridge the gap between visualization requirements and visualization solutions for non-expert users, helping them operationalize their requirements automatically. This paper proposes a MUltilayer framework for analyzing and operationalizing visualization REQuirements (MUREQ) that automatically derives appropriate visualization solutions from users' requirements. Specifically, we systematically investigate the connections among visualization requirements, visual variable characteristics, visual variable attributes, and visualization solutions, and on that basis establish a conceptual framework that characterizes the relationships among the different layers. Our proposal not only operationalizes visualization requirements automatically but also provides meaningful explanations for the derived visualization solutions. To promote our proposal and pragmatically benefit real users, we have developed and deployed a prototype tool based on the framework, which is publicly available at https://reqdv.vmasks.fun. To evaluate the framework, we conducted an initial controlled experiment with 44 participants to test the performance of the evolved mappings within our framework. Based on expert feedback, we refined the mappings and incorporated a ranking system that tailors visualization solutions to specific requirements. To assess the refined method, we then carried out a second experiment with another group of 44 participants and a focused case study involving two new participants. The results show that users perceive the method as accelerating task completion, especially for complex tasks, by efficiently narrowing down and prioritizing options. The approach is particularly advantageous for users with limited data visualization experience. In addition, the multilayer framework can inspire the visualization of models in the software modeling community.
Title: MUREQ: a multilayer framework for analyzing and operationalizing visualization requirements
Journal: Software and Systems Modeling
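The layered derivation described in the abstract can be sketched roughly as follows. This is a toy illustration, not MUREQ's actual mappings: all requirement names, characteristics, and chart tables below are hypothetical.

```python
# Hypothetical multilayer lookup: a visualization requirement maps to the
# visual-variable characteristics it needs; candidate chart types are then
# ranked by how many of those characteristics they cover. The matched
# characteristics double as an explanation for the ranking.

# Layer 1: requirement -> required visual-variable characteristics
REQUIREMENT_TO_CHARACTERISTICS = {
    "compare categories": {"selective", "ordered"},
    "show trend over time": {"ordered", "quantitative"},
    "show part-to-whole": {"selective", "associative"},
}

# Layer 2: chart type -> characteristics its visual variables offer
CHART_CHARACTERISTICS = {
    "bar chart": {"selective", "ordered", "quantitative"},
    "line chart": {"ordered", "quantitative"},
    "pie chart": {"selective", "associative"},
    "scatter plot": {"quantitative", "selective"},
}

def rank_solutions(requirement):
    """Rank chart types by coverage of the required characteristics;
    each result carries the matched characteristics as an explanation."""
    needed = REQUIREMENT_TO_CHARACTERISTICS[requirement]
    scored = []
    for chart, offered in CHART_CHARACTERISTICS.items():
        matched = needed & offered
        if matched:
            scored.append((len(matched) / len(needed), chart, sorted(matched)))
    scored.sort(key=lambda t: (-t[0], t[1]))  # best coverage first, name tie-break
    return scored
```

A lookup like `rank_solutions("show trend over time")` then narrows the options and prioritizes them, which is the behavior participants reported as speeding up complex tasks.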
Pub Date: 2024-08-22 | DOI: 10.1007/s10270-024-01202-z
Tarek Skouti, Ronny Seiger, Frank J. Furrer, Susanne Strahringer
Business process modeling is essential for organizations to comprehend, analyze, and enhance their business operations. The Business Process Model and Notation (BPMN) is a widely adopted standard for illustrating business processes. However, it falls short when modeling the roles, interactions, and responsibilities within complex modern processes that involve digital, human, and non-human entities, as typically found in cyber-physical systems (CPS). In this paper, we introduce Role-based BPMN (RBPMN), a standard-compliant extension of BPMN 2.0 that distinctly depicts roles and their interactions within business processes. We underscore the value of RBPMN and of a role-based context modeling approach, which facilitates the representation of role-based variations in the process flow, through a CPS modeling example: a production process in a smart factory. Our findings suggest that RBPMN is a valuable BPMN extension that enhances the expressiveness, variability, and comprehensiveness of business process models, especially for complex and context-sensitive processes.
Title: RBPMN: the value of roles for business process modeling
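The core idea of binding process tasks to roles rather than to concrete performers, so that digital, human, and non-human entities can fill the same role, can be sketched minimally as follows. All names are hypothetical; this is not RBPMN's actual metamodel.

```python
# Minimal sketch: tasks reference a Role; a separate binding resolves which
# concrete entity (human or non-human) currently plays that role, enabling
# role-based variation in the process flow without changing the process model.

from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    name: str

@dataclass
class Task:
    name: str
    performed_by: Role

# Concrete entities currently playing each role in a smart-factory process.
ROLE_BINDINGS = {
    Role("Transporter"): "AGV-07",       # non-human performer
    Role("QualityInspector"): "Alice",   # human performer
}

def performer(task):
    """Resolve who actually performs a task via its role binding."""
    return ROLE_BINDINGS[task.performed_by]
```

Rebinding `Role("Transporter")` to a different entity changes the executed process variant while the task definitions stay untouched.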
Pub Date: 2024-08-16 | DOI: 10.1007/s10270-024-01201-0
Maxime Méré, Frédéric Jouault, Loïc Pallardy, Richard Perdriau
The formal verification of the properties of semi-formal models can make it easier to ensure their security and safety. However, this task is generally cumbersome for non-specialists in formal verification, particularly in an industrial context. This paper presents an evaluation of four formal verification tools on an industrial case: a Life Cycle Management System (LCMS). This LCMS makes it possible to deploy Product-Service Systems (PSSs) to customers using Systems-on-Chip (SoCs). A PSS is a business model in which products and services are tightly connected and whose objective is to optimize the use of products, with a positive environmental impact. An SoC can embed hardware security; however, an LCMS must be secure from end to end, which requires verifying not only the protocol used (in this case, a blockchain-based protocol) but also the whole architecture. For that purpose, semi-formal UML models of an LCMS were first specified and designed together with their associated properties, then improved so as to be formally verifiable. Despite being more complex, they remain processable by dedicated tools. Verifpal and ProVerif, two formal cryptographic protocol verifiers, are evaluated on the cryptographic protocol, while AnimUML (developed by one of the authors) and HugoRT, two verification tools for behavioral UML models, are evaluated on the architectural model. These tools are assessed and compared according to their coverage of properties and state spaces, their limitations, and their usability for non-specialists. Some limitations of the approach itself are also discussed.
Title: Evaluating formal model verification tools in an industrial context: the case of a smart device life cycle management system
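At their core, explicit-state verifiers of the kind evaluated here exhaustively explore a model's state space and check that no reachable state violates a property. The following toy sketch shows that mechanism only; it is unrelated to the paper's actual models, and real tools such as ProVerif or HugoRT use far richer formalisms (applied pi-calculus, timed UML state machines).

```python
# Toy explicit-state reachability check: explore all states reachable from
# the initial state and verify a secrecy property ("the attacker never
# learns the secret") as unreachability of a bad state.

from collections import deque

def reachable_states(initial, transitions):
    """Breadth-first exploration; `transitions` maps a state to successors."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical protocol: the secret is only ever sent encrypted, so the
# plaintext branch (and hence the attacker state) is never reached.
TRANSITIONS = {
    "init": ["key_agreed"],
    "key_agreed": ["secret_sent_encrypted"],
    "secret_sent_encrypted": ["done"],
    "secret_sent_plain": ["attacker_knows_secret"],  # unreachable branch
}

def secrecy_holds():
    return "attacker_knows_secret" not in reachable_states("init", TRANSITIONS)
```

The coverage-of-state-spaces criterion in the paper asks, in effect, how much of such a space each tool can explore and under which abstractions.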
Pub Date: 2024-08-16 | DOI: 10.1007/s10270-024-01198-6
Charlotte Verbruggen, Monique Snoeck
The domain of Enterprise Information Systems Engineering uses many different conceptual modelling languages and methods to specify the requirements of a system under development. The complexity of the systems under development may require addressing different perspectives with different models, such as the data and process perspectives. The modeller thus has to choose the appropriate (set of) modelling languages according to their specific modelling goal. Given that the different aspects relate to a single system, the models that capture the different perspectives should ideally be aligned and consistent to ensure their integration. Each candidate (set of) modelling languages comes with advantages and disadvantages. To make an informed choice, the modeller should select a number of criteria relevant to their problem domain and compare candidate modelling languages against these criteria. A comprehensive evaluation framework for integrated modelling approaches that, besides the ability to model the desired aspects, also considers more general aspects such as understandability, ease of use, and model quality does not yet exist; it is therefore the focus of this paper. In recent years, several combinations of modelling languages have been investigated. Among these combinations, data + process modelling has attracted a lot of interest and, interestingly, evaluation frameworks for this combination have been proposed as well. This paper therefore focuses primarily on the integrated multi-modelling of data and processes, including the process-related viewpoints of users and authorisations. The contribution of this paper is twofold: on a theoretical level, the paper provides an overview of existing evaluation frameworks in the literature, builds a more complete set of evaluation criteria, and proposes a unified taxonomy for classifying these criteria (TEC-MAP); on a practical level, it provides guidance and support to the modeller for selecting the evaluation criteria appropriate to their problem domain and presents three examples of applying TEC-MAP.
Title: TEC-MAP: a taxonomy of evaluation criteria and its application to the multi-modelling of data and processes
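The comparison the abstract calls for, scoring candidate modelling approaches against a selected, weighted set of criteria, can be sketched as below. The criteria, weights, and scores are hypothetical; TEC-MAP's actual taxonomy is in the paper.

```python
# Hypothetical weighted-criteria comparison: each candidate modelling
# approach gets a per-criterion score (0..5); the weights encode how
# relevant each criterion is to the modeller's problem domain.

def score(approach_scores, weights):
    """Weighted sum of per-criterion scores."""
    return sum(weights[c] * approach_scores[c] for c in weights)

WEIGHTS = {"understandability": 0.4, "ease of use": 0.3, "model quality": 0.3}

CANDIDATES = {
    "BPMN + UML class diagrams": {"understandability": 4, "ease of use": 3, "model quality": 3},
    "single integrated DSL":     {"understandability": 3, "ease of use": 4, "model quality": 4},
}

def best_candidate():
    """Pick the candidate with the highest weighted score."""
    return max(CANDIDATES, key=lambda name: score(CANDIDATES[name], WEIGHTS))
```

Changing the weights to reflect a different problem domain can flip the outcome, which is exactly why the paper argues the criteria selection itself needs guidance.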
Pub Date: 2024-08-07 | DOI: 10.1007/s10270-024-01199-5
Shaghayegh Tavassoli, Ramtin Khosravi
Family-based behavioral models capture the behavior of a software product line (SPL) in a single model, incorporating the variability among its products. A common technique for representing these models is to annotate well-known behavioral modeling notations with features, e.g., the featured finite state machine (FFSM) as an extension of the finite state machine notation. Family-based behavioral models are not always prepared before an SPL is developed, nor necessarily kept up to date during development and maintenance; model learning helps in such situations. By exploiting the commonality among the SPL products, the product models can be reused in learning the behavior of the entire SPL. In this paper, the process of constructing FFSM models for SPLs is enhanced. Model learning is performed using an adaptive learning algorithm called PL*. For the model learning step, we introduce a new heuristic for determining product learning orders with high learning efficiency. The proposed heuristic takes into account the complexity of the features each product adds and improves on previous learning-order heuristics. To construct the family-based behavioral model of the whole SPL, the behavioral models of individual products are iteratively merged into it. A similarity metric determines which states of the two models are merged with each other. By formalizing the existing FFSMDiff algorithm for this purpose, we prove that the choice of similarity metric does not affect the observable behavior of the FFSM the algorithm constructs. We study the efficiency of three similarity metrics, two of which are local, in the sense that they determine the similarity of two states only in terms of their adjacent transitions; a global similarity metric, in contrast, takes into account not only the adjacent transitions but also the similarity of the adjacent states. Experiments on two case studies show that local similarity metrics can yield FFSMs as concise as those resulting from the global similarity metric, and that they increase efficiency and scalability while maintaining the effectiveness of the FFSM construction.
Title: Efficient construction of family-based behavioral models from adaptively learned models
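A local similarity metric in the spirit the abstract describes, judging two states only by their adjacent transitions, can be sketched as a Jaccard index over transition labels. The exact definition and threshold here are hypothetical, not the metrics evaluated in the paper.

```python
# Illustrative local state-similarity metric: two states are similar to the
# degree that the labels on their adjacent (outgoing) transitions overlap.
# A merge decision during model merging then just compares against a threshold.

def local_similarity(out_a, out_b):
    """Jaccard similarity of two states' outgoing transition labels.
    out_a / out_b: sets of (input, output) labels on adjacent transitions."""
    if not out_a and not out_b:
        return 1.0  # two sink states are trivially similar
    return len(out_a & out_b) / len(out_a | out_b)

# States from two product models of a vending-machine SPL (toy example).
s1 = {("coin", "ok"), ("button", "coffee")}
s2 = {("coin", "ok"), ("button", "tea")}

MERGE_THRESHOLD = 0.3  # hypothetical

def should_merge(a, b, threshold=MERGE_THRESHOLD):
    return local_similarity(a, b) >= threshold
```

A global metric would additionally recurse into the similarity of the states those transitions lead to; the paper's result is that the cheaper local variant often yields equally concise merged FFSMs.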
Pub Date: 2024-08-05 | DOI: 10.1007/s10270-024-01193-x
Jörg Holtmann, Jennifer Horkoff, Rebekka Wohlrab, Victoria Vu, Rashidah Kasauli, Salome Maro, Jan-Philipp Steghöfer, Eric Knauss
Large-scale systems development commonly faces the challenge of managing relevant knowledge between different organizational groups, particularly in increasingly agile contexts. Here, there is a conflict between coordination and group autonomy, and it is challenging to determine what coordination information must be shared by which teams or groups and what can be left to local team management. We introduce a way to manage this complexity using a modeling framework based on two core concepts: methodological islands (i.e., groups using different development methods than the surrounding organization) and boundary objects (i.e., artifacts that create a common understanding across team borders). We found that companies often lack a systematic way of assessing coordination issues and the use of boundary objects between methodological islands. As part of an iterative design science study, we have addressed this gap by producing a modeling framework, BOMI (Boundary Objects and Methodological Islands), to better capture and analyze coordination and knowledge management in practice. The framework includes a metamodel, a list of bad smells over this metamodel that can be leveraged to detect inter-team coordination issues, and a methodology with concrete modeling steps and broader guidelines for applying the approach successfully in practice. We have developed Eclipse-based tool support for the BOMI method, allowing both graphical and textual model creation and including views over BOMI instance models to manage model complexity. We have evaluated these artifacts iteratively with five large-scale companies developing complex systems. In this work, we describe the BOMI framework and its iterative evaluation in several real cases, report lessons learned, and identify future work. The result is a mature and stable modeling framework that facilitates understanding of, and reflection over, complex organizational configurations, communication, governance, and coordination of knowledge artifacts in large-scale agile systems development.
Title: Using boundary objects and methodological island (BOMI) modeling in large-scale agile systems development
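The idea of bad smells over a metamodel instance can be sketched as a simple query: flag model elements that violate a structural expectation. The smell and the element shapes below are hypothetical; the actual BOMI metamodel and smell catalogue are defined in the paper.

```python
# Hypothetical bad-smell check over a BOMI-style instance model: a boundary
# object that no team is responsible for governing is a coordination risk.

from dataclasses import dataclass, field

@dataclass
class BoundaryObject:
    name: str
    governed_by: list = field(default_factory=list)  # responsible teams

def ungoverned_objects(objects):
    """Smell query: boundary objects with no governing team."""
    return [bo.name for bo in objects if not bo.governed_by]

model = [
    BoundaryObject("System Requirements", governed_by=["Platform Team"]),
    BoundaryObject("Architecture Model"),  # nobody governs this one
]
```

Tool support of the kind described would run many such queries over an instance model and surface the hits for discussion between the methodological islands involved.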
Pub Date: 2024-08-05 | DOI: 10.1007/s10270-024-01195-9
Hendrik Göttmann, Birte Caesar, Lasse Beers, Malte Lochau, Andy Schürr, Alexander Fay
In many recent application domains, software systems must repeatedly reconfigure themselves at runtime to satisfy changing contextual requirements. Deciding which configuration is presumably best suited next is very challenging, as it involves not only functional requirements but also non-functional properties (NFPs). NFPs include multiple, potentially contradicting criteria such as real-time constraints and cost measures like energy consumption. The effectiveness of context-aware reconfiguration decisions further depends on largely uncertain future contexts, which makes greedy one-step decision heuristics potentially misleading. Moreover, the computational runtime overhead of reconfiguration planning should not nullify its benefits. Nevertheless, entirely pre-planning reconfiguration decisions at design time is not feasible either, due to missing knowledge about runtime contexts. In this article, we propose a model-based technique for precomputing context-aware reconfiguration decisions under partially uncertain real-time constraints and cost measures. We employ a game-theoretic approach based on stochastic priced timed game automata as the reconfiguration model. This formal model allows us to automatically synthesize winning strategies for the first player (the system), which efficiently deliver presumably best-fitting reconfiguration decisions as reactions to moves of the second player (the context) at runtime. Our tool implementation copes with the high computational complexity of strategy synthesis by using the statistical model checker Uppaal Stratego to approximate near-optimal solutions. We applied our tool to a real-world example: a reconfigurable robot support system for the construction of aircraft fuselages. Our evaluation results show that Uppaal Stratego is indeed able to precompute effective reconfiguration strategies within a reasonable amount of time.
Cost-sensitive precomputation of real-time-aware reconfiguration strategies based on stochastic priced timed games
Hendrik Göttmann, Birte Caesar, Lasse Beers, Malte Lochau, Andy Schürr, Alexander Fay
Pub Date : 2024-08-05 DOI: 10.1007/s10270-024-01195-9
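The abstract above notes that greedy one-step decision heuristics can be misleading under uncertain future contexts. A minimal toy sketch (not the paper's stochastic priced timed game formalism; all configurations, costs, and probabilities are invented for illustration) shows how a configuration that is cheapest to switch to right now can lose to one with a better expected cost once likely future contexts are weighed in:

```python
# Toy illustration: greedy vs. one-step-lookahead reconfiguration choice.
# All values below are hypothetical, not taken from the paper.

# Immediate cost of switching to each configuration.
immediate_cost = {"A": 1.0, "B": 3.0}

# Probability distribution over the next context (the "second player").
context_prob = {"calm": 0.3, "turbulent": 0.7}

# Follow-up cost of running each configuration in each context.
future_cost = {
    ("A", "calm"): 1.0, ("A", "turbulent"): 10.0,
    ("B", "calm"): 2.0, ("B", "turbulent"): 2.0,
}

def greedy_choice():
    """Pick the configuration with the lowest immediate cost only."""
    return min(immediate_cost, key=immediate_cost.get)

def lookahead_choice():
    """Pick the configuration minimizing immediate + expected future cost."""
    def total(c):
        return immediate_cost[c] + sum(
            p * future_cost[(c, ctx)] for ctx, p in context_prob.items())
    return min(immediate_cost, key=total)

print(greedy_choice())     # "A": cheapest switch right now
print(lookahead_choice())  # "B": cheaper in expectation (5.0 vs. 8.3)
```

The gap between the two choices is exactly what strategy synthesis over the whole game addresses: the precomputed strategy accounts for the context player's future moves rather than only the next step.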
Pub Date : 2024-07-25 DOI: 10.1007/s10270-024-01197-7
Daniel Farias, Bruno Nogueira, Ivaldir Farias Júnior, Ermeson Andrade
Satellite constellations play critical roles across various sectors, encompassing communication, Earth observation and space exploration. Ensuring the dependable operation of these constellations is of utmost importance. This paper introduces a dependability modeling approach using stochastic Petri nets to analyze satellite constellations. The primary focus is on improving operational efficiency through the assessment of availability, reliability and maintainability. The approach helps satellite designers make informed decisions when selecting constellation configurations by assessing various dependability metrics. Using a global navigation satellite system as a case study, we conduct extensive numerical experiments to evaluate the feasibility of our approach. The results demonstrate quantitatively the significant impact of redundant components on both reliability and availability. They also illustrate how utilizing satellites in repair and operational orbits can influence these metrics and highlight the direct correlation between reliability and maintainability.
A modeling-based approach for dependability analysis of a constellation of satellites
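The abstract above highlights the impact of redundant components on reliability and availability. A minimal back-of-the-envelope sketch (standard steady-state availability formulas, not the paper's stochastic Petri net model; the MTBF/MTTR figures are assumed for illustration) shows why redundancy moves availability sharply toward 1:

```python
# Hypothetical numbers, not from the paper: steady-state availability of a
# single satellite and of a 1-out-of-n redundant constellation segment.

mtbf = 8760.0   # mean time between failures, hours (assumed: one year)
mttr = 240.0    # mean time to repair/replace, hours (assumed: ten days)

# Steady-state availability of one satellite: uptime / (uptime + downtime).
availability = mtbf / (mtbf + mttr)

def system_availability(n, a=availability):
    """1-out-of-n redundancy: the segment is up if any satellite is up."""
    return 1.0 - (1.0 - a) ** n

print(round(availability, 4))            # single satellite: ~0.9733
print(round(system_availability(2), 6))  # one redundant spare: ~0.999289
```

A full stochastic Petri net model goes further than this closed-form sketch by capturing shared repair resources, orbit transfers, and dependencies between failure and repair processes, which is where simulation or numerical solution of the underlying Markov chain becomes necessary.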
Pub Date : 2024-07-18 DOI: 10.1007/s10270-024-01196-8
Benoit Combemale, Jeff Gray, Bernhard Rumpe
Modeling for sustainability: Sustainable Development Goals (SDG) of the United Nations
Modeling is often associated with complex and heavy tooling, leading to a negative perception among practitioners. However, alternative paradigms, such as everything-as-code or low-code, are gaining acceptance due to their perceived ease of use. This paper explores the dichotomy between these perceptions through the lens of “modeler experience” (MX). MX includes factors such as user experience, motivation, integration, collaboration and versioning, and language complexity. We examine the relationships between these factors and their impact on different modeling usage scenarios. Our findings highlight the importance of considering MX when understanding how developers interact with modeling tools and the complexities of modeling and associated tooling.
Systematizing modeler experience (MX) in model-driven engineering success stories
Reyhaneh Kalantari, Julian Oertel, Joeri Exelmans, Satrio Adi Rukmono, Vasco Amaral, Matthias Tichy, Katharina Juhnke, Jan-Philipp Steghöfer, Silvia Abrahão
Pub Date : 2024-07-11 DOI: 10.1007/s10270-024-01194-w