We present algorithms for generating small random samples without replacement. We consider two cases: an algorithm for sampling a pair of distinct integers, and an algorithm for sampling a triple of distinct integers. The worst-case runtime of both algorithms is constant, while the worst-case runtimes of common algorithms for the general case of sampling k elements from a set of n increase with k. Java implementations of both algorithms are included in an open source library.
"Algorithms for generating small random samples" by Vincent A. Cicirello. Software: Practice and Experience, DOI 10.1002/spe.3379, published 2024-09-18.
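The paper's exact constructions are not reproduced in the abstract, but for the pair case a standard constant-time way to draw two distinct integers without a rejection loop can be sketched as follows (a generic sketch with illustrative names, not necessarily the paper's algorithm):

```python
import random

def sample_pair(n, rng=random):
    """Draw an ordered pair of distinct integers from range(n) in O(1) worst case.

    Draw i uniformly from the n values, then j uniformly from the remaining
    n - 1 values by skipping over i. No rejection loop is needed, so the
    worst-case time is constant.
    """
    if n < 2:
        raise ValueError("need n >= 2")
    i = rng.randrange(n)
    j = rng.randrange(n - 1)
    if j >= i:
        j += 1  # shift past i so j is uniform over the other n - 1 values
    return i, j
```

By contrast, a general k-out-of-n sampler such as Floyd's algorithm or a partial Fisher-Yates shuffle performs work that grows with k, which is the gap the paper targets for the fixed cases k = 2 and k = 3.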
Wenbo Zhou, Yujiao Zhao, Ye Zhang, Yiyuan Wang, Minghao Yin
UPPAAL is a formal modeling and verification tool based on timed automata, capable of effectively analyzing real‐time software and hardware systems. In this article, we investigate research on UPPAAL‐assisted formal modeling and verification. First, we propose four research questions considering tool characteristics, modeling methods, verification means and application domains. Then, the state‐of‐the‐art methods for model specification and verification in UPPAAL are discussed, involving model transformation, model repair, property specification, as well as verification and testing methods. Next, typical application cases of formal modeling and verification assisted by UPPAAL are analyzed, spanning across domains such as network protocol, multi‐agent system, cyber‐physical system, rail traffic and aerospace systems, cloud and edge computing systems, as well as biological and medical systems. Finally, we address the four proposed questions based on our survey and outline future research directions. By responding to these questions, we aim to provide summaries and insights into potential avenues for further exploration in this field.
"A comprehensive survey of UPPAAL‐assisted formal modeling and verification" by Wenbo Zhou, Yujiao Zhao, Ye Zhang, Yiyuan Wang, Minghao Yin. Software: Practice and Experience, DOI 10.1002/spe.3372, published 2024-09-17.
The growing number of software startups has prompted an open debate on the suitability of commonly used software development methodologies, including agile methodologies and practices. Startups, for example, tend to focus on producing a minimum viable product, which challenges the use of these methods and calls for bespoke adaptation of these practices to suit startups. Agile adoption is not easy for software startup teams due to the unreadiness, inadequate preparation, and weak structure of these teams, a focus on only a small subset of agile practices, and high uncertainty in essential requirements and appropriate technology. A review of the state of the art reports only a limited number of studies that have investigated adapting agile methods and practices to best suit the requirements of software startups. This study uses a design science research methodology to address this gap and develops a guideline for agile adaptation specifically for software startups. The guideline was validated and improved with the participation of 23 experts from 7 software startup teams through survey questionnaires and open discussion. It includes 13 recommendations, categorized into three sections: selection of agile methods and practices, preparation for adaptation, and the adaptation of agile methods and practices. Evaluation of the results shows that the guideline is easy to understand and useful, and that it supports the expected agility of the software development process.
"Empowering software startups with agile methods and practices: A design science research" by Taghi Javdani Gandomani, Hazura Zulzalil, Rami Bahsoon. Software: Practice and Experience, DOI 10.1002/spe.3371, published 2024-09-11.
Unai Arronategui, José Ángel Bañares, José Manuel Colom
The study of real discrete event systems requires models to cope with complexity and large scale. In practice, the only way to understand and analyse their behaviour prior to implementation is through distributed simulation. Although it is a widely studied discipline, developing efficient distributed simulation code remains a challenge. Model-driven engineering approaches provide a smooth path from informal specifications to executable code showing traces of system behaviour. Formal models make it possible to conduct the phases of this engineering process; in this work, the formalism is Petri nets. In the simulation literature, Petri nets have been shown to be particularly suitable for modelling and simulation of discrete event systems. This article reviews the role of Petri nets as the core formalism supporting a model-driven engineering approach for executing large-scale models using distributed simulation. It covers the Petri net-based languages used at different stages of the modelling and simulation process, from conceptual modelling of complex systems to generating code that executes simulations of Petri net-based models. After the review, the article proposes an efficient representation of Petri net-based models, analysed from the perspective of the essential properties required for distributed simulation; it was found to provide efficient execution, scalability, and dynamic configuration. The article highlights the importance of modelling constraints that guarantee good properties, such as liveness and structural boundedness of Petri net components, for the execution of large-scale Petri net models. The Petri net-based methodology is illustrated from the perspective of how the formalism helps develop well-formed models and efficient code for distributed simulation.
"Large scale system design aided by modelling and DES simulation: A Petri net approach" by Unai Arronategui, José Ángel Bañares, José Manuel Colom. Software: Practice and Experience, DOI 10.1002/spe.3374, published 2024-09-11.
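The abstract includes no code, but the execution semantics that any Petri net simulator implements — a transition is enabled when every input place holds enough tokens, and firing consumes input tokens and produces output tokens — can be sketched minimally (the dictionary representation and names are illustrative, not the paper's):

```python
def enabled(marking, transition):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in transition["in"].items())

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce output tokens."""
    assert enabled(marking, transition)
    m = dict(marking)
    for p, w in transition["in"].items():
        m[p] -= w
    for p, w in transition["out"].items():
        m[p] = m.get(p, 0) + w
    return m

# Example: a single handshake transition moving a token from "ready" to "done"
t = {"in": {"ready": 1}, "out": {"done": 1}}
m0 = {"ready": 1, "done": 0}
m1 = fire(m0, t)  # {"ready": 0, "done": 1}
```

A distributed simulator partitions the places and transitions across processes; the locality of this firing rule (only the input and output places of a transition are touched) is what makes such partitioning feasible.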
José Fuentes‐Sepúlveda, Diego Gatica, Gonzalo Navarro, M. Andrea Rodríguez, Diego Seco
Conventional database systems function as static data repositories, storing vast amounts of facts and offering efficient query processing capabilities. The sheer volume of data these systems store has a direct impact on their scalability, both in terms of storage space and query processing time. Deductive database systems, on the other hand, require far less storage space since they derive new knowledge by applying inference rules. The challenge is how to efficiently obtain the required derivations, compared to having them in explicit form. In this study, we concentrate on a set of predefined inference rules for subsumption and disjointness relations, including their negations. We use compact data structures to store the facts and provide algorithms to support each type of relation, minimizing even further the storage space requirements. Our experimental findings demonstrate the feasibility of this approach, which not only saves space but is often faster than a baseline that uses well‐known graph traversal algorithms implemented on top of a traditional adjacency list representation to derive the relations.
"Space‐efficient data structures for the inference of subsumption and disjointness relations" by José Fuentes‐Sepúlveda, Diego Gatica, Gonzalo Navarro, M. Andrea Rodríguez, Diego Seco. Software: Practice and Experience, DOI 10.1002/spe.3367, published 2024-09-03.
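The paper's compact structures are not reproduced here, but the baseline it compares against — deriving a subsumption by traversing an adjacency-list graph of directly stated is-a edges — can be sketched as follows (representation and names are illustrative):

```python
def subsumes(direct, a, b):
    """Check whether a subsumes b by DFS over direct superclass edges.

    direct[x] lists the immediate superclasses of x; a subsumes b if some
    chain of is-a edges leads from b up to a.
    """
    stack, seen = [b], set()
    while stack:
        x = stack.pop()
        if x == a:
            return True
        if x in seen:
            continue
        seen.add(x)
        stack.extend(direct.get(x, ()))
    return False

# Only direct edges are stored; "cat is-a animal" is derived, not stored.
direct = {"cat": ["mammal"], "mammal": ["animal"]}
```

Storing only the direct edges and deriving the rest on demand is what keeps the space small; the paper's contribution is making that derivation fast within compact data structures rather than a plain adjacency list.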
Carlos M. Aderaldo, Thiago M. Costa, Davi M. Vasconcelos, Nabor C. Mendonça, Javier Cámara, David Garlan
Microservice developers increasingly use resiliency patterns such as Retry and Circuit Breaker to cope with remote services that are likely to fail. However, there is still little research on how the invocation delays typically introduced by those resiliency patterns may impact application performance under varying workloads and failure scenarios. This article presents a novel approach and benchmark tool for experimentally evaluating the performance impact of existing resiliency patterns in a controlled setting. The main novelty of this approach resides in the ability to declaratively specify and automatically generate multiple testing scenarios involving different resiliency patterns, which one can implement using any programming language and resilience library. The article illustrates the benefits of the proposed approach and tool by reporting on an experimental study of the performance impact of the Retry and Circuit Breaker resiliency patterns in two mainstream programming languages (C# and Java) using two popular resilience libraries (Polly and Resilience4j), under multiple service workloads and failure rates. Our results show that, under low to moderate failure rates, both resiliency patterns effectively reduce the load over the application's target service with barely any impact on the application's performance. However, as the failure rate increases, both patterns significantly degrade the application's performance, with their effect varying depending on the service's workload and the patterns' programming language and resilience library.
"A declarative approach and benchmark tool for controlled evaluation of microservice resiliency patterns" by Carlos M. Aderaldo, Thiago M. Costa, Davi M. Vasconcelos, Nabor C. Mendonça, Javier Cámara, David Garlan. Software: Practice and Experience, DOI 10.1002/spe.3368, published 2024-08-29.
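Neither pattern's implementation is given in the abstract; a minimal retry-with-backoff wrapper of the general kind that libraries such as Polly and Resilience4j provide might look like this (a generic sketch, not either library's API):

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on exception with exponential backoff between attempts.

    The sleep between attempts is the invocation delay the article studies:
    harmless at low failure rates, costly when most calls fail.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * (2 ** i))

# Example: a flaky call that succeeds on its third attempt
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"
```

A circuit breaker adds state on top of this: after enough consecutive failures it "opens" and fails fast without calling the service at all, which is why the two patterns behave differently as the failure rate climbs.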
Pseudorandom values are often generated as 64‐bit binary words. These random words need to be converted into ranged values without statistical bias. We present an efficient algorithm to generate multiple independent uniformly‐random bounded integers from a single uniformly‐random binary word, without any bias. In the common case, our method uses one multiplication and no division operations per value produced. In practice, our algorithm can more than double the speed of unbiased random shuffling for small to moderately large arrays.
"Batched ranged random integer generation" by Nevin Brackett‐Rozinsky, Daniel Lemire. Software: Practice and Experience, DOI 10.1002/spe.3369, published 2024-08-26.
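The batched construction is in the paper itself; its single-value building block — mapping a uniform 64-bit word to [0, n) with one multiplication via the widely known multiply-shift reduction — can be sketched as follows. Note this simplified version still carries a tiny bias of at most n/2^64, which the full algorithm removes:

```python
def bounded(x, n):
    """Map a uniform 64-bit word x to [0, n) using one multiplication.

    (x * n) >> 64 takes the high 64 bits of the 128-bit product, i.e. the
    integer part of x * n / 2**64, so consecutive x values spread evenly
    across the n buckets. Bias is at most n / 2**64 (nearly uniform).
    """
    assert 0 <= x < 1 << 64 and 0 < n
    return (x * n) >> 64

assert bounded(0, 10) == 0
assert bounded(1 << 63, 10) == 5   # midpoint of the word range maps to n/2
assert bounded((1 << 64) - 1, 10) == 9
```

Because only the high bits select the bucket, the low-order bits of the product remain available, which is the opening the batched method exploits to extract several bounded values from one 64-bit word.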
The escalating threat of easily transmitted diseases poses a huge challenge to government institutions and health systems worldwide. Advancements in information and communication technology offer a promising approach to effectively controlling infectious diseases. This article introduces a comprehensive framework for predicting and preventing zoonotic virus infections by leveraging the capabilities of artificial intelligence and the Internet of Things. The proposed framework employs IoT‐enabled smart devices for data acquisition and applies a fog‐enabled model for user authentication at the fog layer. Further, the user classification is performed using the proposed ensemble model, with cloud computing enabling efficient information analysis and sharing. The novel aspect of the proposed system involves utilizing the temporal graph matrix method to illustrate dependencies among users infected with the zoonotic flu and provide a nuanced understanding of user interactions. The implemented system demonstrates a classification accuracy of around 91% for around 5000 instances and reliability of around 93%. The presented framework not only aids uninfected citizens in avoiding regional exposure but also empowers government agencies to address the problem more effectively. Moreover, temporal mining results also reveal the efficacy of the proposed system in dealing with zoonotic cases.
"An AIoT‐driven smart healthcare framework for zoonoses detection in integrated fog‐cloud computing environments" by Prabal Verma, Aditya Gupta, Vibha Jain, Kumar Shashvat, Mohit Kumar, Sukhpal Singh Gill. Software: Practice and Experience, DOI 10.1002/spe.3366, published 2024-07-27.
Abdul Qayum, Mengqi Zhang, Simon Colreavy, Muslim Chochlov, Jim Buckley, Dayi Lin, Ashish Rajendra Sai
Software architecture assists developers in addressing non‐functional requirements and in maintaining, debugging, and upgrading their software systems. Consequently, consistency between the designed architecture and the implemented software system itself is important; without this consistency the targeted non‐functional requirements may not be addressed and architectural documentation may misdirect maintenance efforts that target the associated code base. But often, when software is initially implemented or subsequently evolved, the designed and implemented architectures become inconsistent, with the implemented structure degraded due to issues like developer time pressures or ambiguous communication of the designed architecture. In such cases, Software Architecture Recovery (SAR) or consistency approaches can be applied to reconstruct the architecture of the software system and possibly to compare it to, or re‐align it with, the designed architecture. Many SAR approaches have been proposed in the research. However, choosing an appropriate architecture recovery approach for software systems is still an open issue. Consequently, this research conducts a tertiary‐mapping study based on available secondary studies of architecture recovery approaches, to uncover important characteristics that support the selection of appropriate SAR approaches. This research has aggregated 13 secondary studies and 10 primary studies beyond 2020 from 5 databases and, in doing so, identified 111 architecture recovery approaches. Based on these approaches, a taxonomy containing nine main SAR‐selection categories is proposed, and a framework (in the form of a supporting tool to help developers select an appropriate SAR approach) has been developed. Finally, this research identifies six potential open research gaps that could help guide future research.
"A Framework and Taxonomy for Characterizing the Applicability of Software Architecture Recovery Approaches: A Tertiary‐Mapping Study" by Abdul Qayum, Mengqi Zhang, Simon Colreavy, Muslim Chochlov, Jim Buckley, Dayi Lin, Ashish Rajendra Sai. Software: Practice and Experience, DOI 10.1002/spe.3364, published 2024-07-17.
Net primary productivity (NPP) is essential for sustainable resource management and conservation, and it serves as a primary monitoring target in smart forestry systems. The predominant method for NPP inversion involves data collection through terrestrial and satellite sensing systems, followed by parameter estimation using models such as the Carnegie‐Ames‐Stanford Approach (CASA). While this method benefits from low costs and extensive monitoring capabilities, the data derived from multisource sensing systems display varied spatial scale characteristics, and the NPP inversion models are not sufficiently sensitive to the impact of data heterogeneity on the outcomes, reducing the accuracy of fine‐grained NPP inversion. Therefore, this paper proposes a modular system for fine‐grained data processing and NPP inversion. Regarding data processing, a two‐stage spatial‐spectral fusion model based on non‐negative matrix factorization (NMF) is proposed to enhance the spatial resolution of remote sensing data. A spatial interpolation model based on stacking generalization with residual correction is introduced to obtain raster meteorological data compatible with remote sensing images. Furthermore, we optimize the CASA model with the kernel method to enhance model sensitivity and enrich the spatial details of the inversion results with high resolution. Validation on real datasets shows that the proposed fusion and interpolation models have significant advantages over mainstream methods. Furthermore, the correlation coefficient between the estimated NPP using our improved inversion model and the field‐measured NPP is 0.69, demonstrating the feasibility of this platform in detailed forest NPP monitoring tasks.
{"title":"Fine‐grained forest net primary productivity monitoring: Software system integrating multisource data and smart optimization","authors":"Weitao Zou, Long Luo, Fangyu Sun, Chao Li, Guangsheng Chen, Weipeng Jing","doi":"10.1002/spe.3365","DOIUrl":"https://doi.org/10.1002/spe.3365","url":null,"abstract":"Net primary productivity (NPP) is essential for sustainable resource management and conservation, and it serves as a primary monitoring target in smart forestry systems. The predominant method for NPP inversion involves data collection through terrestrial and satellite sensing systems, followed by parameter estimation using models such as the Carnegie‐Ames‐Stanford Approach (CASA). While this method benefits from low costs and extensive monitoring capabilities, the data derived from multisource sensing systems display varied spatial scale characteristics, and the NPP inversion models are not sufficiently sensitive to the impact of data heterogeneity on the outcomes, reducing the accuracy of fine‐grained NPP inversion. Therefore, this paper proposes a modular system for fine‐grained data processing and NPP inversion. Regarding data processing, a two‐stage spatial‐spectral fusion model based on non‐negative matrix factorization (NMF) is proposed to enhance the spatial resolution of remote sensing data. A spatial interpolation model based on stacking generalization with residual correction is introduced to obtain raster meteorological data compatible with remote sensing images. Furthermore, we optimize the CASA model with the kernel method to enhance model sensitivity and enrich the spatial details of the inversion results with high resolution. Validation on real datasets shows that the proposed fusion and interpolation models have significant advantages over mainstream methods.
Furthermore, the correlation coefficient between the estimated NPP using our improved inversion model and the field‐measured NPP is 0.69, demonstrating the feasibility of this platform in detailed forest NPP monitoring tasks.","PeriodicalId":21899,"journal":{"name":"Software: Practice and Experience","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141610412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
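The NMF‐based fusion described in the abstract above rests on factoring a non‐negative data matrix into low‐rank non‐negative factors. As a rough illustration only — the function name `nmf` and all parameters below are our own, and the paper's two‐stage coupled spatial‐spectral fusion is considerably more involved — the core building block, NMF via the classic multiplicative updates, can be sketched as:

```python
import numpy as np

def nmf(V, k, iters=300, eps=1e-9, seed=0):
    """Approximate V (m x n, non-negative) as W @ H with W (m x k), H (k x n)
    non-negative, using Lee-Seung multiplicative updates for Frobenius loss."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    # Random strictly positive initialization keeps the updates well-defined.
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        # Each update multiplies by a non-negative ratio, so W, H stay >= 0
        # and the Frobenius reconstruction error is non-increasing.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H
```

In a spatial‐spectral fusion setting, `V` would be a low‐resolution image unfolded to pixels × bands, with `H` playing the role of endmember spectra and `W` the per‐pixel abundances; high‐resolution spatial detail then enters through a second, coupled factorization, which this sketch does not attempt.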