Gamification of business process modeling education: an experimental analysis
Pub Date: 2024-04-18 | DOI: 10.1007/s10270-024-01171-3
Giacomo Garaccione, Riccardo Coppola, Luca Ardito, Marco Torchiano
Gamification, the practice of using game elements in non-recreational contexts to increase user participation and interest, has been applied increasingly often in software engineering over the years. Business process modeling is a skill considered fundamental for software engineers, with Business Process Modeling Notation (BPMN) being one of the most commonly used notations for this discipline. BPMN modeling is present in different curricula in specific Master's Degree courses related to software engineering but is usually seen by students as an unappealing or uninteresting activity. Gamification could potentially address this issue, though there have been no relevant attempts in research yet. This paper aims to collect preliminary insights on how gamification affects students' motivation in performing BPMN modeling tasks and, as a consequence, their productivity and learning outcomes. A web application for modeling BPMN diagrams, augmented with gamification mechanics such as feedback, rewards, progression, and penalization, was compared with a non-gamified version that provides more limited feedback in an experiment involving 200 students. The diagrams modeled by the students were collected and analyzed after the experiment, and students' opinions were gathered using a post-experiment questionnaire. Statistical analysis showed that gamification leads students to check the correctness of their solutions more often, increasing the semantic correctness of their diagrams and thus showing that it can improve students' modeling skills. The results, however, are mixed and call for additional experiments to fine-tune the tool for actual classroom use.
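As an aside on the reported analysis: a between-group comparison of semantic-correctness scores could, for instance, be run as sketched below. The scores, group sizes, and the choice of a Mann-Whitney U test are assumptions made for illustration, not details taken from the paper.

```python
# Illustrative sketch (not the authors' analysis): compare semantic-correctness
# scores of diagrams produced with the gamified vs. the non-gamified tool.
from scipy.stats import mannwhitneyu

# Hypothetical per-student semantic-correctness scores in [0, 1].
gamified_scores = [0.82, 0.75, 0.91, 0.68, 0.88]
control_scores = [0.70, 0.64, 0.79, 0.61, 0.73]

# A non-parametric test is a common choice when normality is not guaranteed.
statistic, p_value = mannwhitneyu(gamified_scores, control_scores,
                                  alternative="greater")
print(f"U = {statistic}, p = {p_value:.3f}")
```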
{"title":"Gamification of business process modeling education: an experimental analysis","authors":"Giacomo Garaccione, Riccardo Coppola, Luca Ardito, Marco Torchiano","doi":"10.1007/s10270-024-01171-3","DOIUrl":"https://doi.org/10.1007/s10270-024-01171-3","url":null,"abstract":"<p>Gamification, the practice of using game elements in non-recreational contexts to increase user participation and interest, has been applied more and more throughout the years in software engineering. Business process modeling is a skill considered fundamental for software engineers, with Business Process Modeling Notation (BPMN) being one of the most commonly used notations for this discipline. BPMN modeling is present in different curricula in specific Master’s Degree courses related to software engineering but is usually seen by students as an unappealing or uninteresting activity. Gamification could potentially solve this issue, though there have been no relevant attempts in research yet. This paper aims at collecting preliminary insights on how gamification affects students’ motivation in performing BPMN modeling tasks and—as a consequence—their productivity and learning outcomes. A web application for modeling BPMN diagrams augmented with gamification mechanics such as feedback, rewards, progression, and penalization has been compared with a non-gamified version that provides more limited feedback in an experiment involving 200 students. The diagrams modeled by the students are collected and analyzed after the experiment. Students’ opinions are gathered using a post-experiment questionnaire. Statistical analysis showed that gamification leads students to check more often for their solutions’ correctness, increasing the semantic correctness of their diagrams, thus showing that it can improve students’ modeling skills. The results, however, are mixed and require additional experiments in the future to fine-tune the tool for actual classroom use.</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"13 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140629103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving repair of semantic ATL errors using a social diversity metric
Zahra VaraminyBahnemiry, Jessie Galasso, Bentley Oakes, Houari Sahraoui
Pub Date: 2024-04-18 | DOI: 10.1007/s10270-024-01170-4
Model transformations play an essential role in the model-driven engineering paradigm. However, writing a correct transformation requires the user to understand both what the transformation should do and how to enact that change in the transformation. This easily leads to syntactic and semantic errors in transformations, which are time-consuming to locate and fix. In this article, we extend our evolutionary algorithm (EA) approach to automatically repair transformations containing multiple semantic errors. To avoid the fitness-plateau and single-fitness-peak limitations of our previous work, we include the notion of social diversity as an objective for our EA, promoting repair patches that tackle errors less covered by the other patches in the population. We evaluate our approach on four ATL transformations that have been mutated to contain up to five semantic errors simultaneously. Our evaluation shows that integrating social diversity when searching for repair patches improves the quality of those patches and speeds up convergence, even when up to five semantic errors are involved.
Empirically evaluating modeling language ontologies: the Peira framework
Pub Date: 2024-04-16 | DOI: 10.1007/s10270-023-01147-9
Sotirios Liaskos, Saba Zarbaf, John Mylopoulos, Shakil M. Khan
Conceptual modeling plays a central role in planning, designing, developing and maintaining software-intensive systems. One of the goals of conceptual modeling is to enable clear communication among stakeholders involved in said activities. To achieve effective communication, conceptual models must be understood by different people in the same way. To support such shared understanding, conceptual modeling languages are defined, which introduce rules and constraints on how individual models can be built and how they are to be understood. A key component of a modeling language is an ontology, i.e., a set of concepts that modelers must use to describe world phenomena. Once the concepts are chosen, a visual and/or textual vocabulary is adopted for representing the concepts. However, the choices both of the concepts and of the vocabulary used to represent them may affect the quality of the language under consideration: some choices may promote shared understanding better than other choices. To allow evaluation and comparison of alternative choices, we present Peira, a framework for empirically measuring the domain and comprehensibility appropriateness of conceptual modeling language ontologies. Given a language ontology to be evaluated, the framework is based on observing how prospective language users classify domain content under the concepts put forth by said ontology. A set of metrics is then used to analyze the observations and identify and characterize possible issues that the choice of concepts or the way they are represented may have. The metrics are abstract in that they can be operationalized into concrete implementations tailored to specific data collection instruments or study objectives. We evaluate the framework by applying it to compare an existing language against an artificial one that is manufactured to exhibit specific issues. We then test if the metrics indeed detect these issues. We find that the framework does offer the expected indications, but that it also requires good understanding of the metrics prior to committing to interpretations of the observations.
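As an illustration of how such an abstract metric might be operationalized, the sketch below computes a simple per-concept agreement rate from classification observations; it is a hypothetical implementation, not one of Peira's actual metrics.

```python
# Illustrative sketch: per-concept agreement between participants' classifications
# and an intended ("gold") classification of domain items.
from collections import defaultdict

# (participant, domain item, concept chosen) observations -- hypothetical data.
observations = [
    ("p1", "order arrives", "Event"), ("p2", "order arrives", "Event"),
    ("p1", "customer", "Actor"),      ("p2", "customer", "Event"),
]
intended = {"order arrives": "Event", "customer": "Actor"}

hits, totals = defaultdict(int), defaultdict(int)
for _, item, chosen in observations:
    concept = intended[item]
    totals[concept] += 1
    hits[concept] += int(chosen == concept)

for concept in totals:
    print(concept, hits[concept] / totals[concept])
```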
{"title":"Empirically evaluating modeling language ontologies: the Peira framework","authors":"Sotirios Liaskos, Saba Zarbaf, John Mylopoulos, Shakil M. Khan","doi":"10.1007/s10270-023-01147-9","DOIUrl":"https://doi.org/10.1007/s10270-023-01147-9","url":null,"abstract":"<p>Conceptual modeling plays a central role in planning, designing, developing and maintaining software-intensive systems. One of the goals of conceptual modeling is to enable clear communication among stakeholders involved in said activities. To achieve effective communication, conceptual models must be understood by different people in the same way. To support such shared understanding, conceptual modeling languages are defined, which introduce rules and constraints on how individual models can be built and how they are to be understood. A key component of a modeling language is an ontology, i.e., a set of concepts that modelers must use to describe world phenomena. Once the concepts are chosen, a visual and/or textual vocabulary is adopted for representing the concepts. However, the choices both of the concepts and of the vocabulary used to represent them may affect the quality of the language under consideration: some choices may promote shared understanding better than other choices. To allow evaluation and comparison of alternative choices, we present Peira, a framework for empirically measuring the domain and comprehensibility appropriateness of conceptual modeling language ontologies. Given a language ontology to be evaluated, the framework is based on observing how prospective language users classify domain content under the concepts put forth by said ontology. A set of metrics is then used to analyze the observations and identify and characterize possible issues that the choice of concepts or the way they are represented may have. The metrics are abstract in that they can be operationalized into concrete implementations tailored to specific data collection instruments or study objectives. We evaluate the framework by applying it to compare an existing language against an artificial one that is manufactured to exhibit specific issues. We then test if the metrics indeed detect these issues. We find that the framework does offer the expected indications, but that it also requires good understanding of the metrics prior to committing to interpretations of the observations.</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"102 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140597273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From enterprise models to low-code applications: mapping DEMO to Mendix; illustrated in the social housing domain
Pub Date: 2024-04-08 | DOI: 10.1007/s10270-024-01156-2
Marien R. Krouwel, Martin Op ’t Land, Henderik A. Proper
Due to hyper-competition, technological advancements, regulatory changes, etc., the conditions under which enterprises need to thrive become increasingly turbulent. Consequently, enterprise agility increasingly determines an enterprise's chances for success. As software development is often a limiting factor in achieving enterprise agility, enterprise agility and software adaptability become increasingly intertwined. As a consequence, decisions regarding flexibility should not be left to software developers alone. By taking a Model-driven Software Development (MDSD) approach, starting from DEMO ontological enterprise models and explicit (enterprise) implementation design decisions, this research aims to bridge the gap from enterprise agility to software adaptability, so that software development is no longer a limiting factor in achieving enterprise agility. Low-code technology is a growing market trend that builds on MDSD concepts and claims to offer a high degree of software adaptability. Therefore, as a first step toward showing the potential benefits of using DEMO ontological enterprise models as a basis for MDSD, this research presents the design of a mapping from DEMO models to Mendix for the (automated) creation of a low-code application that also intrinsically accommodates run-time implementation design decisions.
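The flavour of such a model-to-model mapping can be sketched as follows. The target structure is a deliberately simplified stand-in for a low-code application model and does not reflect the actual Mendix metamodel or the paper's mapping rules; only the standard DEMO transaction pattern (request, promise, state, accept) is taken from DEMO theory.

```python
# Illustrative sketch: map a DEMO transaction kind onto generic low-code
# application elements (an entity plus user actions). The target structure is a
# simplified stand-in and does not reflect the actual Mendix metamodel.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransactionKind:
    """DEMO transaction kind, e.g. 'rental contracting' producing a fact."""
    name: str
    product: str  # the fact created when the transaction completes

@dataclass
class LowCodeApp:
    entities: List[str] = field(default_factory=list)
    actions: List[str] = field(default_factory=list)

def map_transaction(tk: TransactionKind, app: LowCodeApp) -> None:
    # One entity to persist instances of the transaction and its product.
    app.entities.append(tk.name)
    # One user action per coordination act of the standard DEMO pattern.
    for step in ("request", "promise", "state", "accept"):
        app.actions.append(f"{step} {tk.name}")

app = LowCodeApp()
map_transaction(TransactionKind("rental contracting", "rental has started"), app)
print(app)
```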
{"title":"From enterprise models to low-code applications: mapping DEMO to Mendix; illustrated in the social housing domain","authors":"Marien R. Krouwel, Martin Op ’t Land, Henderik A. Proper","doi":"10.1007/s10270-024-01156-2","DOIUrl":"https://doi.org/10.1007/s10270-024-01156-2","url":null,"abstract":"<p>Due to hyper-competition, technological advancements, regulatory changes, etc, the conditions under which enterprises need to thrive become increasingly turbulent. Consequently, enterprise agility increasingly determines an enterprise’s chances for success. As software development often is a limiting factor in achieving enterprise agility, enterprise agility and software adaptability become increasingly intertwined. As a consequence, decisions that regard flexibility should not be left to software developers alone. By taking a Model-driven Software Development (MDSD) approach, starting from DEMO ontological enterprise models and explicit (enterprise) implementation design decisions, the aim of this research is to bridge the gap from enterprise agility to software adaptability, in such a way that software development is no longer a limiting factor in achieving enterprise agility. Low-code technology is a growing market trend that builds on MDSD concepts and claims to offer a high degree of software adaptability. Therefore, as a first step to show the potential benefits to use DEMO ontological enterprise models as a base for MDSD, this research shows the design of a mapping from DEMO models to Mendix for the (automated) creation of a low-code application that also intrinsically accommodates run-time implementation design decisions.\u0000</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"8 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140596955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Active model learning of stochastic reactive systems (extended version)
Pub Date: 2024-03-23 | DOI: 10.1007/s10270-024-01158-0
Edi Muškardin, Martin Tappler, Bernhard K. Aichernig, Ingo Pill
Black-box systems are inherently hard to verify. Many verification techniques, like model checking, require formal models as a basis. However, such models often do not exist, or they might be outdated. Active automata learning helps to address this issue by offering to automatically infer formal models from system interactions. Hence, automata learning has been receiving much attention in the verification community in recent years. This led to various efficiency improvements, paving the way toward industrial applications. Most research, however, has been focusing on deterministic systems. In this article, we present an approach to efficiently learn models of stochastic reactive systems. Our approach adapts L*-based learning for Markov decision processes, which we improve and extend to stochastic Mealy machines. When compared with previous work, our evaluation demonstrates that the proposed optimizations and adaptations to stochastic Mealy machines can reduce learning costs by an order of magnitude while improving the accuracy of learned models.
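A minimal sketch of the sampling-based bookkeeping underlying such learning is shown below: repeated queries to the black-box system yield frequency estimates of the output distribution for a state/input pair. The example system and the estimation routine are invented for illustration and are not the authors' algorithm.

```python
# Illustrative sketch: estimate the output distribution of a stochastic
# Mealy machine for a given state/input by sampling the black-box system.
import random
from collections import Counter

def noisy_system(state: str, inp: str) -> str:
    """Stand-in for the system under learning (unknown to the learner)."""
    if (state, inp) == ("s0", "coin"):
        return random.choices(["coffee", "tea"], weights=[0.8, 0.2])[0]
    return "error"

def estimate_distribution(state: str, inp: str, samples: int = 1000) -> dict:
    counts = Counter(noisy_system(state, inp) for _ in range(samples))
    return {out: n / samples for out, n in counts.items()}

print(estimate_distribution("s0", "coin"))
```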
{"title":"Active model learning of stochastic reactive systems (extended version)","authors":"Edi Muškardin, Martin Tappler, Bernhard K. Aichernig, Ingo Pill","doi":"10.1007/s10270-024-01158-0","DOIUrl":"https://doi.org/10.1007/s10270-024-01158-0","url":null,"abstract":"<p>Black-box systems are inherently hard to verify. Many verification techniques, like model checking, require formal models as a basis. However, such models often do not exist, or they might be outdated. Active automata learning helps to address this issue by offering to automatically infer formal models from system interactions. Hence, automata learning has been receiving much attention in the verification community in recent years. This led to various efficiency improvements, paving the way toward industrial applications. Most research, however, has been focusing on deterministic systems. In this article, we present an approach to efficiently learn models of stochastic reactive systems. Our approach adapts <span>(L^*)</span>-based learning for Markov decision processes, which we improve and extend to stochastic Mealy machines. When compared with previous work, our evaluation demonstrates that the proposed optimizations and adaptations to stochastic Mealy machines can reduce learning costs by an order of magnitude while improving the accuracy of learned models.</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"46 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140200814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning minimal automata with recurrent neural networks
Pub Date: 2024-03-21 | DOI: 10.1007/s10270-024-01160-6
Bernhard K. Aichernig, Sandra König, Cristinel Mateis, Andrea Pferscher, Martin Tappler
In this article, we present a novel approach to learning finite automata with the help of recurrent neural networks. Our goal is not only to train a neural network that predicts the observable behavior of an automaton but also to learn its structure, including the set of states and transitions. In contrast to previous work, we constrain the training with a specific regularization term. We iteratively adapt the architecture to learn the minimal automaton, in the case where the number of states is unknown. We evaluate our approach with standard examples from the automata learning literature, but also include a case study of learning the finite-state models of real Bluetooth Low Energy protocol implementations. The results show that we can find an appropriate architecture to learn the correct minimal automata in all considered cases.
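One way to picture a regularization term that pushes hidden states toward a small set of discrete automaton states is a quantization-style penalty, sketched below with NumPy. This is a schematic stand-in under assumed shapes, not the paper's actual term.

```python
# Illustrative sketch: a quantization-style penalty that is small when every
# RNN hidden state lies close to one of k candidate "automaton state" centers.
import numpy as np

def state_clustering_penalty(hidden_states: np.ndarray, centers: np.ndarray) -> float:
    """hidden_states: (n, d) array of RNN states; centers: (k, d) array."""
    # Squared distance of every hidden state to every center.
    dists = ((hidden_states[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    # Each hidden state is only charged for its nearest center.
    return float(dists.min(axis=1).mean())

rng = np.random.default_rng(0)
h = rng.normal(size=(100, 8))          # hypothetical hidden states
c = rng.normal(size=(4, 8))            # 4 candidate automaton states
print(state_clustering_penalty(h, c))  # would be added to the training loss
```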
{"title":"Learning minimal automata with recurrent neural networks","authors":"Bernhard K. Aichernig, Sandra König, Cristinel Mateis, Andrea Pferscher, Martin Tappler","doi":"10.1007/s10270-024-01160-6","DOIUrl":"https://doi.org/10.1007/s10270-024-01160-6","url":null,"abstract":"<p>In this article, we present a novel approach to learning finite automata with the help of recurrent neural networks. Our goal is not only to train a neural network that predicts the observable behavior of an automaton but also to learn its structure, including the set of states and transitions. In contrast to previous work, we constrain the training with a specific regularization term. We iteratively adapt the architecture to learn the minimal automaton, in the case where the number of states is unknown. We evaluate our approach with standard examples from the automata learning literature, but also include a case study of learning the finite-state models of real Bluetooth Low Energy protocol implementations. The results show that we can find an appropriate architecture to learn the correct minimal automata in all considered cases.</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"102 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140200667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lazy model checking for recursive state machines
Pub Date: 2024-03-20 | DOI: 10.1007/s10270-024-01159-z
Clemens Dubslaff, Patrick Wienhöft, Ansgar Fehnker
Recursive state machines (RSMs) are state-based models for procedural programs with wide-ranging applications in program verification and interprocedural analysis. Model-checking algorithms for RSMs and related formalisms have been intensively studied in the literature. In this article, we devise a new model-checking algorithm for RSMs and requirements in computation tree logic (CTL) that exploits the compositional structure of RSMs by ternary model checking in combination with a lazy evaluation scheme. Specifically, a procedural component is only analyzed in those cases in which it might influence the satisfaction of the CTL requirement. We implemented our model-checking algorithms and evaluated them on randomized scalability benchmarks and on an interprocedural data-flow analysis of Java programs, showing both practical applicability and significant speedups in comparison to state-of-the-art model-checking tools for procedural programs.
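The ternary aspect can be illustrated with Kleene's three-valued logic, where an "unknown" verdict signals that a procedural component still has to be analyzed. The encoding below is a generic illustration, not the paper's implementation.

```python
# Illustrative sketch: Kleene three-valued logic, the basis of ternary model
# checking. UNKNOWN marks results that depend on components not yet analyzed.
from enum import Enum

class TV(Enum):
    FALSE = 0
    UNKNOWN = 1
    TRUE = 2

def tv_and(a: TV, b: TV) -> TV:
    return TV(min(a.value, b.value))

def tv_or(a: TV, b: TV) -> TV:
    return TV(max(a.value, b.value))

# Lazy flavour: only if the overall verdict is UNKNOWN does the checker need to
# descend into the component that produced the UNKNOWN value.
print(tv_or(TV.TRUE, TV.UNKNOWN))   # TV.TRUE    -> no further analysis needed
print(tv_and(TV.TRUE, TV.UNKNOWN))  # TV.UNKNOWN -> analyze the component
```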
{"title":"Lazy model checking for recursive state machines","authors":"Clemens Dubslaff, Patrick Wienhöft, Ansgar Fehnker","doi":"10.1007/s10270-024-01159-z","DOIUrl":"https://doi.org/10.1007/s10270-024-01159-z","url":null,"abstract":"<p><i>Recursive state machines (RSMs)</i> are state-based models for procedural programs with wide-ranging applications in program verification and interprocedural analysis. Model-checking algorithms for RSMs and related formalisms have been intensively studied in the literature. In this article, we devise a new model-checking algorithm for RSMs and requirements in <i>computation tree logic (CTL)</i> that exploits the compositional structure of RSMs by ternary model checking in combination with a lazy evaluation scheme. Specifically, a procedural component is only analyzed in those cases in which it might influence the satisfaction of the CTL requirement. We implemented our model-checking algorithms and evaluate them on randomized scalability benchmarks and on an interprocedural data-flow analysis of <span>Java</span> programs, showing both practical applicability and significant speedups in comparison to state-of-the-art model-checking tools for procedural programs.</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140200745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exchanging information in cooperative software validation
Pub Date: 2024-03-19 | DOI: 10.1007/s10270-024-01155-3
Jan Haltermann, Heike Wehrheim
Cooperative software validation aims at having verification and/or testing tools cooperate on the task of correctness checking. Cooperation involves the exchange of information about currently achieved results in the form of (verification) artifacts. These artifacts are typically specialized to the type of analysis performed by the tool, e.g., bounded model checking, abstract interpretation, or symbolic execution, and hence require the definition of a new artifact for every new cooperation to be built. In this article, we introduce a unified artifact (called Generalized Information Exchange Automaton, short GIA) supporting the cooperation of over-approximating with under-approximating analyses. It provides information gathered by an analysis to its partner in a cooperation, independent of the type of analysis and of the usage context within software validation. We provide a formal definition of this artifact in the form of an automaton together with two operators on GIAs. The first operation reduces a program by excluding those parts that the GIA marks as already processed. The second operation combines partial results from two GIAs into a single one. We show that computed analysis results are never lost when connecting tools via these operations. To demonstrate feasibility experimentally, we have implemented two such cooperations: one for verification and one for testing. The obtained results show the feasibility of our novel artifact in different contexts of cooperative software validation, in particular how the new artifact is able to overcome some drawbacks of existing artifacts.
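The two operators can be pictured on a toy representation in which an artifact records, per program location, the verdict an earlier analysis produced. The structure below is a deliberately simplified stand-in for a GIA, intended only to convey the reduce/combine idea.

```python
# Illustrative sketch (simplified stand-in for a GIA): an artifact maps program
# locations to a verdict produced by an earlier analysis.
from typing import Dict, Set

Artifact = Dict[str, str]   # location -> "safe" | "reached" | "unknown"

def reduce_program(locations: Set[str], artifact: Artifact) -> Set[str]:
    """Drop locations a previous analysis already settled."""
    return {loc for loc in locations if artifact.get(loc, "unknown") == "unknown"}

def combine(a: Artifact, b: Artifact) -> Artifact:
    """Merge two partial results; settled verdicts win over 'unknown'."""
    merged = dict(a)
    for loc, verdict in b.items():
        if merged.get(loc, "unknown") == "unknown":
            merged[loc] = verdict
    return merged

verifier = {"l1": "safe", "l2": "unknown"}
tester = {"l2": "reached"}
print(reduce_program({"l1", "l2", "l3"}, verifier))  # {'l2', 'l3'}
print(combine(verifier, tester))                     # l2 settled by the tester
```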
Identifying and fixing ambiguities in, and semantically accurate formalisation of, behavioural requirements
Pub Date: 2024-03-16 | DOI: 10.1007/s10270-023-01142-0
Thuy Nguyen, Imen Sayar, Sophie Ebersold, Jean-Michel Bruel
<p>To correctly formalise requirements expressed in natural language, <i>ambiguities</i> must first be identified and then fixed. This paper focuses on <i>behavioural requirements</i> (i.e. requirements related to dynamic aspects and phenomena). Its first objective is to show, based on a practical, public case study, that the disambiguation process <i>cannot be fully automated</i>: even though natural language processing (NLP) tools and machine learning might help in the <i>identification</i> of ambiguities, <i>fixing</i> them often requires a deep, application-specific <i>understanding</i> of the reasons of being of the system of interest, of the characteristics of its environment, of which trade-offs between conflicting objectives are acceptable, and of what is achievable and what is not; it may also require arduous negotiations between stakeholders. Such an understanding and consensus-making ability is not in the reach of current tools and technologies, and will likely remain so for a long while. Beyond ambiguity, requirements are often marred by various other types of defects that could lead to wholly unacceptable consequences. In particular, operational experience shows that requirements <i>inadequacy</i> (whereby, in some of the situations the system could face, what is required is woefully inappropriate or what is necessary is left unspecified) is a significant cause for systems failing to meet expectations. The second objective of this paper is to propose a semantically accurate behavioural requirements formalisation format enabling <i>tool-supported requirements verification</i>, notably with <i>simulation</i>. Such support is necessary for the engineering of large and complex <i>cyber-physical</i> and <i>socio-technical</i> systems to ensure, first, that the specified requirements indeed reflect the true intentions of their authors and second, that they are adequate for all the situations the system could face. To that end, the paper presents an overview of the BASAALT (<i>Behaviour Analysis and Simulation All Along systems Life Time</i>) systems engineering method, and of FORM-L (<i>FOrmal Requirements Modelling Language</i>), its supporting language, which aims at representing as accurately and completely as possible the semantics expressed in the original, natural language behavioural requirements, and is markedly different from languages intended for software code generation. The paper shows that generally, semantically accurate formalisation is not a simple <i>paraphrasing</i> of the original natural language requirements: additional elements are often needed to fully and explicitly reflect all that is implied in natural language. To provide such complements for the case study presented in the paper, we had to follow different <i>formalisation patterns</i>, i.e. sequences of formalisation steps. For this paper, to avoid being skewed by what a particular automatic tool can and cannot do, BASAALT and FORM-L were applied manually. Sti
{"title":"Identifying and fixing ambiguities in, and semantically accurate formalisation of, behavioural requirements","authors":"Thuy Nguyen, Imen Sayar, Sophie Ebersold, Jean-Michel Bruel","doi":"10.1007/s10270-023-01142-0","DOIUrl":"https://doi.org/10.1007/s10270-023-01142-0","url":null,"abstract":"<p>To correctly formalise requirements expressed in natural language, <i>ambiguities</i> must first be identified and then fixed. This paper focuses on <i>behavioural requirements</i> (i.e. requirements related to dynamic aspects and phenomena). Its first objective is to show, based on a practical, public case study, that the disambiguation process <i>cannot be fully automated</i>: even though natural language processing (NLP) tools and machine learning might help in the <i>identification</i> of ambiguities, <i>fixing</i> them often requires a deep, application-specific <i>understanding</i> of the reasons of being of the system of interest, of the characteristics of its environment, of which trade-offs between conflicting objectives are acceptable, and of what is achievable and what is not; it may also require arduous negotiations between stakeholders. Such an understanding and consensus-making ability is not in the reach of current tools and technologies, and will likely remain so for a long while. Beyond ambiguity, requirements are often marred by various other types of defects that could lead to wholly unacceptable consequences. In particular, operational experience shows that requirements <i>inadequacy</i> (whereby, in some of the situations the system could face, what is required is woefully inappropriate or what is necessary is left unspecified) is a significant cause for systems failing to meet expectations. The second objective of this paper is to propose a semantically accurate behavioural requirements formalisation format enabling <i>tool-supported requirements verification</i>, notably with <i>simulation</i>. Such support is necessary for the engineering of large and complex <i>cyber-physical</i> and <i>socio-technical</i> systems to ensure, first, that the specified requirements indeed reflect the true intentions of their authors and second, that they are adequate for all the situations the system could face. To that end, the paper presents an overview of the BASAALT (<i>Behaviour Analysis and Simulation All Along systems Life Time</i>) systems engineering method, and of FORM-L (<i>FOrmal Requirements Modelling Language</i>), its supporting language, which aims at representing as accurately and completely as possible the semantics expressed in the original, natural language behavioural requirements, and is markedly different from languages intended for software code generation. The paper shows that generally, semantically accurate formalisation is not a simple <i>paraphrasing</i> of the original natural language requirements: additional elements are often needed to fully and explicitly reflect all that is implied in natural language. To provide such complements for the case study presented in the paper, we had to follow different <i>formalisation patterns</i>, i.e. sequences of formalisation steps. For this paper, to avoid being skewed by what a particular automatic tool can and cannot do, BASAALT and FORM-L were applied manually. 
Sti","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"23 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140152284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A model-based reference architecture for complex assistive systems and its application
Pub Date: 2024-03-16 | DOI: 10.1007/s10270-024-01157-1
Judith Michael, Volodymyr A. Shekhovtsov
Complex assistive systems providing human behavior support independent of the age or abilities of users are broadly used in a variety of domains including automotive, production, aviation, and medicine. Current research lacks a common understanding of which architectural components are needed to create assistive systems that use models at runtime. Existing descriptions of architectural components are focused on particular domains, consider only some parts of an assistive system, or do not consider models at runtime. We have analyzed common functional requirements for such systems in order to propose a set of reusable components that have to be considered when creating assistive systems that use models. These components constitute the reference architecture that we propose in this paper. To validate the proposed architecture, we have expressed the architectures of two assistive systems from different domains, namely assistance for elderly people and assistance for operators in smart manufacturing, in terms of their compliance with this reference architecture. The proposed reference architecture will facilitate the creation of future assistive systems.
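The spirit of such reusable components can be indicated with a few abstract interfaces; the component names and responsibilities below are hypothetical, chosen only to illustrate the idea of a component catalogue for assistive systems that use models at runtime, and do not reproduce the paper's reference architecture.

```python
# Illustrative sketch: abstract components an assistive system using models at
# runtime might expose. Names and responsibilities are hypothetical.
from abc import ABC, abstractmethod

class ModelRepository(ABC):
    @abstractmethod
    def current_model(self, user_id: str) -> dict: ...   # runtime model of user/context

class ContextMonitor(ABC):
    @abstractmethod
    def observe(self) -> dict: ...                        # sensor/application events

class AssistanceEngine(ABC):
    @abstractmethod
    def decide(self, model: dict, context: dict) -> str: ...  # next support action

class AssistanceRenderer(ABC):
    @abstractmethod
    def present(self, action: str) -> None: ...           # speech, UI hint, actuator call
```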
{"title":"A model-based reference architecture for complex assistive systems and its application","authors":"Judith Michael, Volodymyr A. Shekhovtsov","doi":"10.1007/s10270-024-01157-1","DOIUrl":"https://doi.org/10.1007/s10270-024-01157-1","url":null,"abstract":"<p>Complex assistive systems providing human behavior support independent of the age or abilities of users are broadly used in a variety of domains including automotive, production, aviation, or medicine. Current research lacks a common understanding of which architectural components are needed to create assistive systems that use models at runtime. Existing descriptions of architectural components are focused on particular domains, consider only some parts of an assistive system, or do not consider models at runtime. We have analyzed common functional requirements for such systems to be able to propose a set of reusable components, which have to be considered when creating assistive systems that use models. Such components constitute a reference architecture that we propose within this paper. To validate the proposed architecture, we have expressed the architectures of two assistive systems from different domains, namely assistance for elderly people and assistance for operators in smart manufacturing in terms of compliance with such architecture. The proposed reference architecture will facilitate the creation of future assistive systems.</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"15 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140152113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}