Pub Date: 2024-04-30
DOI: 10.1007/s10270-024-01169-x
Florian Cesal, Dominik Bork
Many powerful metamodeling platforms enabling model-driven software engineering (MDSE) exist, each with its strengths, weaknesses, functionalities, programming language(s), and developer community. Platform interoperability would enable users to exploit their mutual benefits. Such interoperability would allow the transformation of metamodels and models created in one platform into equivalent metamodels and models in other platforms. Language engineers could then freely choose the metamodeling platform without risking a lock-in effect. Two well-documented and publicly available metamodeling platforms are the Eclipse Modeling Framework (EMF) and the Modeling SDK for Visual Studio (MSDKVS). In this paper, we propose an M3-level-bridge (M3B) that establishes interoperability between EMF and MSDKVS on the abstract syntax level and on the graphical concrete syntax level. To establish such interoperability, we (i) compare the two platforms, (ii) present a conceptual mapping between them, and (iii) implement a bidirectional transformation bridge covering both the metamodel and model layers. We evaluate our approach by transforming a collection of publicly available metamodels and automatically generated or manually created models thereof. The transformation outcomes are then used to quantitatively and qualitatively evaluate the transformation’s validity, executability, and expressiveness.
Title: Establishing interoperability between EMF and MSDKVS: an M3-level-bridge to transform metamodels and models
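As a rough illustration of what one direction of such an M3-level mapping involves, the sketch below maps a simplified Ecore-style EClass description to an MSDKVS-style DomainClass description. All field names and the dictionary encoding are hypothetical simplifications for illustration; the actual M3B bridge covers far more constructs (references, enumerations, graphical notation) and is bidirectional.

```python
def eclass_to_domain_class(eclass):
    """Map a (simplified) Ecore EClass description to an MSDKVS-style
    DomainClass description. Field names here are illustrative only,
    not the paper's actual mapping."""
    return {
        "kind": "DomainClass",
        "name": eclass["name"],
        "properties": [
            {"name": attr["name"], "type": attr["type"]}
            for attr in eclass.get("attributes", [])
        ],
        "isAbstract": eclass.get("abstract", False),
    }
```

A class `Person` with an integer attribute `age` would, under this toy mapping, become a DomainClass with one property of type `EInt`.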
Pub Date: 2024-04-29
DOI: 10.1007/s10270-024-01177-x
Richard F. Paige, Jordi Cabot
The modeling field is rapidly evolving and expanding to address new research topics and to connect with new disciplines. As such, what constituted a good modeling research contribution ten years ago may not be the same today. We try to distill some insights into what we (and the community we aim to represent) consider today to be key elements of a good research paper in the field of software and systems modeling. Such insights—which will need to evolve and adapt with time—will be useful not just for authors of new papers, but also for reviewers and editors.
Title: What makes a good modeling research contribution?
Pub Date: 2024-04-29
DOI: 10.1007/s10270-024-01173-1
Xiao He, Yi Liu, Huihong He
Similarity-based model matching is the cornerstone of model versioning. It pairs model elements based on a distance metric (e.g., edit distance). However, calculating the distances between elements is computationally expensive. Consequently, a similarity-based matcher typically suffers from performance issues as the model size increases. Based on our observations, there are two main causes of the high computation cost: (1) when matching an element p, the matcher calculates the distance between p and every candidate element q, despite the obvious dissimilarity between p and q; (2) the matcher always calculates the distance between p and q′, even though q and q′ are very similar and the distance between p and q is already known. This paper proposes a dual-hash-based approach, which employs two entirely different hashing techniques—similarity-preserving hashing and integrity-based hashing—to accelerate similarity-based model matching. With similarity-preserving hashing, our approach can quickly filter out dissimilar candidate elements according to their similarity hashes, computed using our similarity-preserving hash function, which maps an element to a 64-bit binary hash. With integrity-based hashing, our approach can cache and reuse computed distance values by associating them with the checksums of model elements. We also propose an index structure to facilitate hash-based model matching. Our approach has been implemented and integrated into EMF Compare. We evaluate our approach using open-source Ecore and UML models. The results show that our hash function is effective in preserving the similarity between model elements, and our matching approach reduces time costs by 20–88% while keeping the matching results consistent with EMF Compare.
Title: Accelerating similarity-based model matching using dual hashing
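The dual-hashing idea can be sketched roughly as follows: a simhash-style similarity-preserving hash prefilters candidates by Hamming distance, and a checksum-keyed cache avoids recomputing distances for identical content. This is an illustrative approximation, not the paper's actual hash function or index structure; elements are modeled as tuples of name tokens, and the threshold value is made up.

```python
import hashlib

def simhash64(tokens):
    """Similarity-preserving 64-bit hash (simhash scheme): near-identical
    token sets yield hashes with a small Hamming distance."""
    weights = [0] * 64
    for tok in tokens:
        h = int.from_bytes(hashlib.md5(tok.encode()).digest()[:8], "big")
        for bit in range(64):
            weights[bit] += 1 if (h >> bit) & 1 else -1
    return sum(1 << bit for bit in range(64) if weights[bit] > 0)

def hamming(a, b):
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def checksum(element):
    """Integrity hash: identical content yields an identical cache key."""
    return hashlib.sha1(repr(element).encode()).hexdigest()

_distance_cache = {}

def cached_distance(p, q, distance_fn):
    """Reuse a previously computed distance if both contents were seen."""
    key = (checksum(p), checksum(q))
    if key not in _distance_cache:
        _distance_cache[key] = distance_fn(p, q)
    return _distance_cache[key]

def match(p, candidates, distance_fn, hamming_threshold=16):
    """Prefilter candidates by similarity hash, then compute (cached)
    exact distances only for the survivors."""
    hp = simhash64(p)
    best, best_d = None, float("inf")
    for q in candidates:
        if hamming(hp, simhash64(q)) > hamming_threshold:
            continue  # obviously dissimilar: skip the expensive distance
        d = cached_distance(p, q, distance_fn)
        if d < best_d:
            best, best_d = q, d
    return best
```

The prefilter is what avoids cause (1) above, and the checksum cache is what avoids cause (2).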
Pub Date: 2024-04-18
DOI: 10.1007/s10270-024-01167-z
Hossain Muhammad Muctadir, David A. Manrique Negrin, Raghavendran Gunasekaran, Loek Cleophas, Mark van den Brand, Boudewijn R. Haverkort
Digital twins (DTs) are often defined as a pairing of a physical entity and a corresponding virtual entity (VE), mimicking certain aspects of the former depending on the use-case. In recent years, this concept has facilitated numerous use-cases ranging from design to validation and predictive maintenance of large and small high-tech systems. Various heterogeneous cross-domain models are essential for such systems, and model-driven engineering plays a pivotal role in the design, development, and maintenance of these models. We believe models and model-driven engineering play a similarly crucial role in the context of a VE of a DT. Due to the rapidly growing popularity of DTs and their use in diverse domains and use-cases, the methodologies, tools, and practices for designing, developing, and maintaining the corresponding VEs differ vastly. To better understand these differences and similarities, we performed a semi-structured interview study with 19 professionals from industry and academia who are closely associated with different lifecycle stages of digital twins. In this paper, we present our analysis and findings from this study, which is based on seven research questions. In general, we identified an overall lack of uniformity in the understanding of digital twins and in the tools, techniques, and methodologies used for the development and maintenance of the corresponding VEs. Furthermore, considering that digital twins are software-intensive systems, we recognize significant growth potential for adopting more software engineering practices, processes, and expertise in various stages of a digital twin’s lifecycle.
Title: Current trends in digital twin development, maintenance, and operation: an interview study
Pub Date: 2024-04-18
DOI: 10.1007/s10270-024-01171-3
Giacomo Garaccione, Riccardo Coppola, Luca Ardito, Marco Torchiano
Gamification, the practice of using game elements in non-recreational contexts to increase user participation and interest, has been applied increasingly in software engineering over the years. Business process modeling is a skill considered fundamental for software engineers, with Business Process Modeling Notation (BPMN) being one of the most commonly used notations for this discipline. BPMN modeling is present in different curricula in specific Master’s Degree courses related to software engineering but is usually seen by students as an unappealing or uninteresting activity. Gamification could potentially solve this issue, though there have been no relevant research attempts yet. This paper aims to collect preliminary insights on how gamification affects students’ motivation in performing BPMN modeling tasks and—as a consequence—their productivity and learning outcomes. A web application for modeling BPMN diagrams, augmented with gamification mechanics such as feedback, rewards, progression, and penalization, was compared with a non-gamified version that provides more limited feedback in an experiment involving 200 students. The diagrams modeled by the students were collected and analyzed after the experiment. Students’ opinions were gathered using a post-experiment questionnaire. Statistical analysis showed that gamification leads students to check the correctness of their solutions more often, increasing the semantic correctness of their diagrams, thus showing that it can improve students’ modeling skills. The results, however, are mixed and require additional experiments in the future to fine-tune the tool for actual classroom use.
Title: Gamification of business process modeling education: an experimental analysis
Pub Date: 2024-04-18
DOI: 10.1007/s10270-024-01170-4
Zahra VaraminyBahnemiry, Jessie Galasso, Bentley Oakes, Houari Sahraoui
Model transformations play an essential role in the model-driven engineering paradigm. However, writing a correct transformation requires the user to understand both what the transformation should do and how to enact that change in the transformation. This easily leads to syntactic and semantic errors in transformations which are time-consuming to locate and fix. In this article, we extend our evolutionary algorithm (EA) approach to automatically repair transformations containing multiple semantic errors. To prevent the fitness plateaus and the single fitness peak limitations from our previous work, we include the notion of social diversity as an objective for our EA to promote repair patches tackling errors that are less covered by the other patches of the population. We evaluate our approach on four ATL transformations, which have been mutated to contain up to five semantic errors simultaneously. Our evaluation shows that integrating social diversity when searching for repair patches improves the quality of those patches and speeds up the convergence even when up to five semantic errors are involved.
Our evaluation shows that integrating social diversity when searching for repair patches improves the quality of those patches and speeds up convergence even when up to five semantic errors are involved.
Title: Improving repair of semantic ATL errors using a social diversity metric
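One plausible reading of a social-diversity objective is to reward patches that fix errors few other patches in the population cover. The sketch below captures that flavor only; it is a guess for illustration, not the paper's actual formula.

```python
def social_diversity(patch_fixes, population_fixes):
    """Score a repair patch by the rarity of the errors it fixes:
    an error covered by few other patches contributes more.

    patch_fixes: set of error ids this patch fixes.
    population_fixes: list of such sets, one per patch in the population
    (including this patch)."""
    score = 0.0
    for err in patch_fixes:
        coverage = sum(1 for fixes in population_fixes if err in fixes)
        score += 1.0 / coverage  # rarely covered errors weigh more
    return score
```

Used as an extra objective in the EA, such a score pushes the search toward patches that address errors the rest of the population neglects, counteracting fitness plateaus.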
Pub Date: 2024-04-16
DOI: 10.1007/s10270-023-01147-9
Sotirios Liaskos, Saba Zarbaf, John Mylopoulos, Shakil M. Khan
Conceptual modeling plays a central role in planning, designing, developing and maintaining software-intensive systems. One of the goals of conceptual modeling is to enable clear communication among stakeholders involved in said activities. To achieve effective communication, conceptual models must be understood by different people in the same way. To support such shared understanding, conceptual modeling languages are defined, which introduce rules and constraints on how individual models can be built and how they are to be understood. A key component of a modeling language is an ontology, i.e., a set of concepts that modelers must use to describe world phenomena. Once the concepts are chosen, a visual and/or textual vocabulary is adopted for representing the concepts. However, the choices both of the concepts and of the vocabulary used to represent them may affect the quality of the language under consideration: some choices may promote shared understanding better than other choices. To allow evaluation and comparison of alternative choices, we present Peira, a framework for empirically measuring the domain appropriateness and comprehensibility appropriateness of conceptual modeling language ontologies. Given a language ontology to be evaluated, the framework is based on observing how prospective language users classify domain content under the concepts put forth by said ontology. A set of metrics is then used to analyze the observations and identify and characterize possible issues that the choice of concepts or the way they are represented may have. The metrics are abstract in that they can be operationalized into concrete implementations tailored to specific data collection instruments or study objectives. We evaluate the framework by applying it to compare an existing language against an artificial one that is manufactured to exhibit specific issues. We then test if the metrics indeed detect these issues.
We find that the framework does offer the expected indications, but that it also requires good understanding of the metrics prior to committing to interpretations of the observations.
Title: Empirically evaluating modeling language ontologies: the Peira framework
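To make the measurement idea concrete: given participants' classifications of domain items under a language's concepts, one simple comprehensibility-style score is the share of participants who agree on each item's most popular concept. This is a hypothetical stand-in for illustration, not one of Peira's actual metrics.

```python
from collections import Counter

def shared_understanding(classifications):
    """For each domain item, the fraction of participants who chose the
    item's modal (most popular) concept - a crude agreement score of the
    kind a framework like Peira would formalize and refine.

    classifications: {item: [concept chosen by each participant]}."""
    scores = {}
    for item, choices in classifications.items():
        modal_count = Counter(choices).most_common(1)[0][1]
        scores[item] = modal_count / len(choices)
    return scores
```

Items with low scores would flag concepts (or their visual/textual representations) that participants do not interpret uniformly.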
Pub Date: 2024-04-08
DOI: 10.1007/s10270-024-01156-2
Marien R. Krouwel, Martin Op ’t Land, Henderik A. Proper
Due to hyper-competition, technological advancements, regulatory changes, etc., the conditions under which enterprises need to thrive become increasingly turbulent. Consequently, enterprise agility increasingly determines an enterprise’s chances for success. As software development is often a limiting factor in achieving enterprise agility, enterprise agility and software adaptability become increasingly intertwined. As a consequence, decisions regarding flexibility should not be left to software developers alone. By taking a Model-driven Software Development (MDSD) approach, starting from DEMO ontological enterprise models and explicit (enterprise) implementation design decisions, this research aims to bridge the gap from enterprise agility to software adaptability, such that software development is no longer a limiting factor in achieving enterprise agility. Low-code technology is a growing market trend that builds on MDSD concepts and claims to offer a high degree of software adaptability. Therefore, as a first step toward showing the potential benefits of using DEMO ontological enterprise models as a base for MDSD, this research presents the design of a mapping from DEMO models to Mendix for the (automated) creation of a low-code application that also intrinsically accommodates run-time implementation design decisions.
Title: From enterprise models to low-code applications: mapping DEMO to Mendix; illustrated in the social housing domain
Pub Date : 2024-03-23DOI: 10.1007/s10270-024-01158-0
Edi Muškardin, Martin Tappler, Bernhard K. Aichernig, Ingo Pill
Black-box systems are inherently hard to verify. Many verification techniques, like model checking, require formal models as a basis. However, such models often do not exist, or they might be outdated. Active automata learning helps to address this issue by offering to automatically infer formal models from system interactions. Hence, automata learning has been receiving much attention in the verification community in recent years. This led to various efficiency improvements, paving the way toward industrial applications. Most research, however, has been focusing on deterministic systems. In this article, we present an approach to efficiently learn models of stochastic reactive systems. Our approach adapts L*-based learning for Markov decision processes, which we improve and extend to stochastic Mealy machines. When compared with previous work, our evaluation demonstrates that the proposed optimizations and adaptations to stochastic Mealy machines can reduce learning costs by an order of magnitude while improving the accuracy of learned models.
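A key difference from deterministic L*-style learning is that a query to a stochastic Mealy machine does not return a single output but a sample from a distribution, so queries must be repeated and the distribution estimated. The sketch below illustrates only this sampling aspect under stated assumptions; `coin_flip_machine` is an invented toy system, not the paper's algorithm or case study.

```python
import random
from collections import Counter

# Minimal sketch (not the paper's algorithm): when the system under learning
# is stochastic, an output query must be repeated, because the output for an
# input word is a distribution. We estimate it from repeated samples.

def coin_flip_machine(word):
    """Hypothetical black box: a stochastic Mealy machine over input {'a'}.
    Each 'a' independently yields 'heads' or 'tails' with probability 0.5."""
    return tuple(random.choice(["heads", "tails"]) for _ in word)

def estimate_output_distribution(sul, word, n_samples=2000):
    """Repeat the query n_samples times; return empirical output frequencies."""
    counts = Counter(sul(word) for _ in range(n_samples))
    return {out: c / n_samples for out, c in counts.items()}

random.seed(0)
dist = estimate_output_distribution(coin_flip_machine, "a")
# Both outcomes should appear with frequency near 0.5.
assert abs(dist[("heads",)] - 0.5) < 0.1
```

A full learner would additionally maintain an observation table over such estimates and use statistical tests to decide when two rows represent the same state.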
{"title":"Active model learning of stochastic reactive systems (extended version)","authors":"Edi Muškardin, Martin Tappler, Bernhard K. Aichernig, Ingo Pill","doi":"10.1007/s10270-024-01158-0","DOIUrl":"https://doi.org/10.1007/s10270-024-01158-0","url":null,"abstract":"<p>Black-box systems are inherently hard to verify. Many verification techniques, like model checking, require formal models as a basis. However, such models often do not exist, or they might be outdated. Active automata learning helps to address this issue by offering to automatically infer formal models from system interactions. Hence, automata learning has been receiving much attention in the verification community in recent years. This led to various efficiency improvements, paving the way toward industrial applications. Most research, however, has been focusing on deterministic systems. In this article, we present an approach to efficiently learn models of stochastic reactive systems. Our approach adapts <span>(L^*)</span>-based learning for Markov decision processes, which we improve and extend to stochastic Mealy machines. When compared with previous work, our evaluation demonstrates that the proposed optimizations and adaptations to stochastic Mealy machines can reduce learning costs by an order of magnitude while improving the accuracy of learned models.</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"46 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140200814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-03-21DOI: 10.1007/s10270-024-01160-6
Bernhard K. Aichernig, Sandra König, Cristinel Mateis, Andrea Pferscher, Martin Tappler
In this article, we present a novel approach to learning finite automata with the help of recurrent neural networks. Our goal is not only to train a neural network that predicts the observable behavior of an automaton but also to learn its structure, including the set of states and transitions. In contrast to previous work, we constrain the training with a specific regularization term. In the case where the number of states is unknown, we iteratively adapt the architecture to learn the minimal automaton. We evaluate our approach with standard examples from the automata learning literature, but also include a case study of learning the finite-state models of real Bluetooth Low Energy protocol implementations. The results show that we can find an appropriate architecture to learn the correct minimal automata in all considered cases.
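The extraction half of this idea — reading an automaton's states and transitions off a recurrent network whose hidden states have collapsed into a few clusters — can be sketched as below. This is an assumption-laden illustration: `rnn_step` is a hand-written stand-in for a trained, regularized RNN (here its "hidden state" is already discrete), not the paper's network or extraction procedure.

```python
# Sketch of the extraction idea: if regularization forces a recurrent
# network's hidden states into tight clusters, each cluster can be read off
# as an automaton state, and transitions are recovered by observing
# cluster-to-cluster moves while replaying input words.

def rnn_step(h, symbol):
    """Toy 'recurrent' update tracking the parity of 'a' symbols; stands in
    for a trained RNN whose hidden states have collapsed to two clusters."""
    return (h + 1) % 2 if symbol == "a" else h

def extract_automaton(words):
    """Replay words through the network and record observed transitions."""
    transitions = {}
    for word in words:
        h = 0  # initial state / initial hidden-state cluster
        for sym in word:
            nxt = rnn_step(h, sym)
            transitions[(h, sym)] = nxt
            h = nxt
    return transitions

trans = extract_automaton(["aa", "ab", "ba"])
print(trans[(0, "a")])  # 1
print(trans[(1, "a")])  # 0
```

In the continuous case, the clustering step (e.g. rounding or k-means over hidden vectors) replaces the already-discrete `h` used here.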
{"title":"Learning minimal automata with recurrent neural networks","authors":"Bernhard K. Aichernig, Sandra König, Cristinel Mateis, Andrea Pferscher, Martin Tappler","doi":"10.1007/s10270-024-01160-6","DOIUrl":"https://doi.org/10.1007/s10270-024-01160-6","url":null,"abstract":"<p>In this article, we present a novel approach to learning finite automata with the help of recurrent neural networks. Our goal is not only to train a neural network that predicts the observable behavior of an automaton but also to learn its structure, including the set of states and transitions. In contrast to previous work, we constrain the training with a specific regularization term. We iteratively adapt the architecture to learn the minimal automaton, in the case where the number of states is unknown. We evaluate our approach with standard examples from the automata learning literature, but also include a case study of learning the finite-state models of real Bluetooth Low Energy protocol implementations. The results show that we can find an appropriate architecture to learn the correct minimal automata in all considered cases.</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"102 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140200667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}