
Latest publications in Software: Practice and Experience

Automated library mapping approach based on cross‐platform for mobile development programming languages
Pub Date : 2023-10-19 DOI: 10.1002/spe.3281
Ahmad Ahmad Muhammad, Abdelrahman Soliman, Hala Zayed, Ahmed H. Yousef, Sahar Selim
Abstract: Context: The most popular mobile platforms, Android and iOS, are traditionally developed using native programming languages—Java and Kotlin for Android, and Objective‐C followed by Swift for iOS. Due to their popularity, there is a constant demand to convert applications written for one of these two platforms to the other. Cross‐platform mobile development is widely used as a solution, in which an application is written once and deployed on multiple platforms that would otherwise each require their own programming language. One common cross‐platform approach used recently by several research groups is the trans‐compilation approach, which translates a program written for iOS into one for Android, or vice versa. The main problem with these solutions is that library function mapping is not generalized, and functions typically constitute most of any program. Objective: This study aims to introduce an automatic library mapping approach for mobile programming languages. Method: A library function of a source language is automatically mapped to a corresponding function of the destination language by using the function structure of the two languages. The function structure includes the library to which the function belongs, the return type, the parameter types, and the number of parameters. To test our approach, we map from Swift to Java. Results: Our experiments show that the automatic library mapping approach achieves an average accuracy of 83.6% when tested on the most used libraries and outperforms current state‐of‐the‐art mapping techniques in terms of mapping accuracy. Conclusion: These findings show that our automatic mapping approach is promising and can help to overcome the limitations of trans‐compilation approaches.
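The abstract's core idea—matching functions across languages by their structure—can be illustrated with a small sketch. The following Python snippet is an illustration of the general idea, not the authors' implementation: it matches a Swift library function to Java candidates by comparing library, return type, parameter count, and parameter types, and the toy type and library mapping tables are assumptions made for the example.

```python
# Sketch of structure-based library mapping (illustrative, not the paper's code).
from dataclasses import dataclass

@dataclass(frozen=True)
class FunctionSignature:
    library: str          # library the function belongs to
    name: str
    return_type: str
    param_types: tuple    # ordered parameter types

# Hypothetical cross-language equivalences used for comparison.
TYPE_MAP = {"String": "String", "Int": "int", "Double": "double"}
LIBRARY_MAP = {"Foundation": "java.lang"}

def matches(swift: FunctionSignature, java: FunctionSignature) -> bool:
    """Check whether a Java function has the same structure as a Swift one."""
    if LIBRARY_MAP.get(swift.library) != java.library:
        return False
    if TYPE_MAP.get(swift.return_type) != java.return_type:
        return False
    if len(swift.param_types) != len(java.param_types):
        return False
    return all(TYPE_MAP.get(s) == j
               for s, j in zip(swift.param_types, java.param_types))

swift_fn = FunctionSignature("Foundation", "uppercased", "String", ("String",))
java_candidates = [
    FunctionSignature("java.lang", "toUpperCase", "String", ("String",)),
    FunctionSignature("java.lang", "substring", "String", ("int", "int")),
]
print([f.name for f in java_candidates if matches(swift_fn, f)])
# -> ['toUpperCase']
```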
Citations: 0
Identifying metamorphic relations: A data mutation directed approach
Pub Date : 2023-10-18 DOI: 10.1002/spe.3280
Chang‐ai Sun, Hui Jin, SiYi Wu, An Fu, ZuoYi Wang, Wing Kwong Chan
Summary: Metamorphic testing (MT) is an effective technique to alleviate the test oracle problem. The principle of MT is to detect failures by checking whether some necessary properties of the software under test (SUT), commonly known as metamorphic relations (MRs), hold among multiple executions of source and follow‐up test cases. Since both the generation of follow‐up test cases and the verification of test results depend on MRs, the identification of MRs plays a key role in MT; it is an important yet difficult task requiring deep domain knowledge of the SUT. Accordingly, techniques that can direct a tester to identify MRs effectively are desirable. In this paper, we propose MT, a data mutation directed approach to identifying MRs. MT guides a tester to identify MRs by providing a set of data mutation operators and template‐style mapping rules, which not only alleviates the difficulties faced in the process of MR identification but also improves its effectiveness. We have further developed a tool that implements the proposed approach and conducted an empirical study to evaluate the MR identification effectiveness of the approach and the performance of the identified MRs with respect to fault detection capability and statement coverage. The empirical results show that the approach is able to identify MRs for numeric programs effectively, and that the identified MRs have high fault detection capability and statement coverage. The work presented in this paper advances the field of MT by providing a simple yet practical approach to the MR identification problem.
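For readers unfamiliar with MT, the principle the abstract builds on can be shown in a few lines. The sketch below is illustrative only, not the paper's tooling: a simple data mutation operator—permuting the input—derives a follow-up test case from a source test case, and an MR over the two outputs substitutes for a test oracle. The program under test and the MR are assumed examples.

```python
# Sketch of the metamorphic testing principle (illustrative only).
import random

def program_under_test(xs):
    return sum(xs)  # stands in for the numeric SUT

def mutate_permute(xs):
    """Data mutation operator: randomly permute the input."""
    ys = xs[:]
    random.shuffle(ys)
    return ys

def check_permutation_mr(source_input):
    """MR: permuting the input must not change the sum."""
    follow_up_input = mutate_permute(source_input)
    return program_under_test(source_input) == program_under_test(follow_up_input)

# No expected output value is needed; only the relation is checked.
assert check_permutation_mr([3, 1, 4, 1, 5, 9])
```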
Citations: 0
faas‐sim: A trace‐driven simulation framework for serverless edge computing platforms
Pub Date : 2023-10-18 DOI: 10.1002/spe.3277
Philipp Raith, Thomas Rausch, Alireza Furutanpey, Schahram Dustdar
Abstract: This paper presents faas‐sim, a simulation framework tailored to serverless edge computing platforms. In serverless computing, platform operators are tasked with efficiently managing distributed computing infrastructure that is completely abstracted away from application developers. To that end, platform operators and researchers need tools to design, build, and evaluate resource management techniques that make efficient use of infrastructure while optimizing application performance. This challenge is exacerbated in edge computing scenarios, where, compared to cloud computing, there is a lack of reference architectures, design tools, and standardized benchmarks. faas‐sim bridges this gap by providing (a) a generalized model of serverless systems that builds on the function‐as‐a‐service abstraction, (b) a simulator that uses trace data from real‐world edge computing testbeds and representative workloads, and (c) a network topology generator to model and simulate distributed and heterogeneous edge‐cloud systems. We present the conceptual design, implementation, and a thorough evaluation of faas‐sim. By running experiments on real‐world testbeds and replicating them using faas‐sim, we show that the simulator provides accurate results and reasonable simulation performance. We have profiled a wide range of edge computing infrastructure and workloads, focusing on typical edge computing scenarios such as edge AI inference and data processing. Moreover, we present several instances where we have successfully used faas‐sim to design, optimize, or evaluate serverless edge computing systems.
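The trace-driven core of such a simulator can be sketched compactly. The snippet below is not the faas-sim API; it is a minimal stand-in showing how recorded invocation traces might be replayed against modeled nodes to derive response times. The trace entries and the node name are assumptions made for the example.

```python
# Minimal stand-in for trace-driven simulation (NOT the faas-sim API):
# recorded invocation traces are replayed against modeled nodes, and
# per-node queueing determines each invocation's response time.
trace = [  # (arrival_time_s, function_name, service_time_s) from a testbed trace
    (0.0, "resnet-inference", 0.40),
    (0.1, "resnet-inference", 0.40),
    (0.5, "data-preprocess", 0.15),
]
node_free_at = {"edge-node-0": 0.0}  # assumed single-node topology

def simulate(trace, node_free_at):
    results = []
    for arrival, fn, service in sorted(trace):
        node = min(node_free_at, key=node_free_at.get)  # earliest-free node wins
        start = max(arrival, node_free_at[node])        # queue if the node is busy
        node_free_at[node] = start + service
        results.append((fn, node, start + service - arrival))  # response time
    return results

for fn, node, rt in simulate(trace, node_free_at):
    print(f"{fn} on {node}: response time {rt:.2f}s")
```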
Citations: 1
DOMICO: Checking conformance between domain models and implementations
Pub Date : 2023-10-15 DOI: 10.1002/spe.3272
Chenxing Zhong, He Zhang, Huang Huang, Zhikun Chen, Chao Li, Xiaodong Liu, Shanshan Li
Abstract: As a predominant design method for microservices architecture (MSA), domain‐driven design (DDD) utilizes a series of standard patterns in both models and implementations to effectively support the design of architectural elements. However, an implementation may deviate from its original domain model that uses certain patterns. The deviation between a domain model and its implementation is a type of architectural drift, which needs to be detected promptly. This paper proposes an approach, namely DOMICO, to check the conformance between a domain model and its implementation. The conformance is formalized by defining eight common structural patterns of domain modeling and their representations in both models and the corresponding source code. Based on this formalization, our approach can not only identify discrepancies (e.g., divergence, absence, and modification) with respect to pattern elements, but also detect possible violations of 24 compliance rules imposed by the patterns. To validate DOMICO, we performed a case study investigating its use in a supply chain project and its performance. The results show that DOMICO accurately identifies 100% of the inconsistency issues in the cases examined. As the first conformance checking approach for DDD, DOMICO can be integrated into the regular domain modeling process and help ensure the conformity of microservice implementations to their models.
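The discrepancy categories the abstract mentions—divergence, absence, and modification—can be illustrated with a toy conformance check. The following sketch is not DOMICO's implementation; it compares entities declared in a hypothetical domain model with entities recovered from code and classifies the mismatches.

```python
# Toy model-to-code conformance check (illustrative, not DOMICO itself).
model_entities = {"Order": {"id", "items", "total"},
                  "Shipment": {"id", "carrier"}}
code_entities = {"Order": {"id", "items", "discount"},  # 'total' dropped, 'discount' added
                 "Invoice": {"id", "amount"}}            # not in the model at all

def check_conformance(model, code):
    issues = []
    for name, fields in model.items():
        if name not in code:
            issues.append(("absence", name))        # modeled but not implemented
        elif code[name] != fields:
            issues.append(("modification", name))   # implemented, but drifted
    for name in code.keys() - model.keys():
        issues.append(("divergence", name))         # implemented but not modeled
    return issues

print(check_conformance(model_entities, code_entities))
# [('modification', 'Order'), ('absence', 'Shipment'), ('divergence', 'Invoice')]
```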
Citations: 1
Special Issue on benchmarking, experimentation tools, and reproducible practices for data‐intensive systems from edge to cloud
Pub Date : 2023-10-11 DOI: 10.1002/spe.3282
Lauritz Thamsen, David Bermbach, Demetris Trihinas
As data analytics and machine learning increasingly permeate our cities, factories, and homes, the computing infrastructure for data-intensive systems becomes more challenging. That is, the vision of pervasive, intelligent, and cyber-physical IoT systems will not be realized with centralized cloud resources alone. Such resources are simply too far away from sensor-equipped devices and users, resulting in high latency, bandwidth bottlenecks, and unnecessary energy consumption. In addition, there are often privacy and security requirements that mandate distributed architectures. As a result, new distributed computing paradigms are emerging that promise to bring computing and storage closer to data sources and users. The emerging distributed computing environments of edge and fog computing provide additional resources within mobile networks, ISP infrastructure, and even LEO satellites. These diverse and dynamic computing environments pose significant challenges to the performance, dependability, and efficiency of data-intensive systems running on such infrastructure. At the same time, it is far less clear how to properly benchmark, evaluate, and test the behavior of systems that span IoT devices, edge nodes, and cloud resources.

For example, IoT sensor data stream processing systems can be leveraged to continuously optimize the operation of urban infrastructures (such as public transportation systems, water networks, or medical infrastructures). The behavior of such systems must be thoroughly assessed before they can be deployed to edge and fog infrastructure. These systems must also be evaluated reproducibly under the expected computing environment conditions, including variations of those conditions, given the inherently unsteady nature of IoT environments. Moreover, there is growing concern about the energy consumption and greenhouse gas emissions of ICT (and especially distributed ML-based applications), which further warrants close examination of the behavior of new data-intensive applications. Despite significant research and development efforts to improve benchmarking, experimentation tools, and reproducible practices for data-intensive systems spanning from the edge to the cloud, more research is urgently needed. We therefore invited high-quality research papers on this topic for this special issue of Software: Practice and Experience, and with the help of our reviewers we were able to select two out of four submissions for this special issue.

The first accepted paper is titled "faas-sim: A Trace-Driven Simulation Framework for Serverless Edge Computing Platforms".1 It is co-authored by Philipp Raith, Thomas Rausch, Alireza Furutanpey, and Schahram Dustdar. The paper presents the design and implementation of a new simulation framework, "faas-sim," for modeling and evaluating serverless software architectures spanning the edge-cloud continuum based on a scenario description, a given network topology, and workload traces. The new simulator is validated in terms of performance evaluation, resource planning, co-simulation, and scientific evaluation. The authors also evaluate faas-sim's network simulation and resource utilization. Moreover, they highlight the traces that come with faas-sim and provide an overview of published research that has used faas-sim.

The second paper is titled "Software-in-the-Loop Simulation for Developing and Testing Carbon-Aware Applications".2 It is co-authored by Philipp Wiesner, Marvin Steinke, Henrik Nickel, Yazan Kitana, and Odej Kao. As an alternative to relying on purely simulated or purely real testbeds, the paper proposes software-in-the-loop simulation and hybrid testbeds for testing carbon-aware software applications in the context of energy simulations. The paper describes the design and implementation of a prototype, "Vessim," along with two experiments that demonstrate the capabilities and characteristics of the new tool. In this way, the paper shows how a message broker can reliably and realistically connect a running application under test to a real-time simulation, while the application's energy demand is continuously measured or modeled.

We thank Dr. Rajkumar Buyya, Editor-in-Chief of the journal, for inviting us to organize this special issue. We are also very grateful to the journal for its valuable support. Furthermore, we greatly appreciate the thorough and thoughtful reviews provided by our reviewers. Finally, we thank the authors who submitted to our special issue for their hard work and trust.
{"title":"Special Issue on benchmarking, experimentation tools, and reproducible practices for <scp>data‐intensive</scp> systems from edge to cloud","authors":"Lauritz Thamsen, David Bermbach, Demetris Trihinas","doi":"10.1002/spe.3282","DOIUrl":"https://doi.org/10.1002/spe.3282","url":null,"abstract":"As data analytics and machine learning increasingly permeate our cities, factories, and homes, the computing infrastructure for data-intensive systems becomes more challenging. That is, the vision of pervasive, intelligent, and cyber-physical IoT systems will not be realized with centralized cloud resources alone. Such resources are simply too far away from sensor-equipped devices and users, resulting in high latency, bandwidth bottlenecks, and unnecessary energy consumption. In addition, there are often privacy and security requirements that mandate distributed architectures. As a result, new distributed computing paradigms are emerging that promise to bring computing and storage closer to data sources and users. The emerging distributed computing environments of edge and fog computing provide additional resources within mobile networks, ISP infrastructure, and even LEO satellites. These diverse and dynamic computing environments pose significant challenges to the performance, dependability, and efficiency of data-intensive systems running on such infrastructure. At the same time, it is far less clear how to properly benchmark, evaluate, and test the behavior of systems that span IoT devices, edge nodes, and cloud resources. For example, IoT sensor data stream processing systems can be leveraged to continuously optimize the operation of urban infrastructures (such as public transportation systems, water networks, or medical infrastructures). The behavior of such systems must be thoroughly assessed before they can be deployed to edge and fog infrastructure. In addition, these systems must be evaluated reproducibly under the expected computing environment conditions, including variations of those conditions, given the inherently unsteady nature of IoT environments. In addition, there is growing concern about the energy consumption and greenhouse gas emissions of ICT (and especially distributed ML-based applications), which further warrants close examination of the behavior of new data-intensive applications. Despite significant research and development efforts to improve benchmarking, experimentation tools, and reproducible practices for data-intensive systems spanning from the edge to the cloud, more research is urgently needed. We therefore invited high-quality research papers on this topic for this special issue of Software: Practice and Experience, and we were able to select two out of four submissions for this special issue with the help of our reviewers. The first accepted paper is titled “faas-sim: A Trace-Driven Simulation Framework for Serverless Edge Computing Platforms”.1 It is co-authored by Philipp Raith, Thomas Rausch, Alireza Furutanpey, and Schahram Dustdar. The paper presents the design and implementation of a new simulation framework, “faas-sim,” for modeling and evaluating serverless software architectures spanning the edge-cloud continuum based on a scenario description, a given network topology, and workload traces. 
The new si","PeriodicalId":21899,"journal":{"name":"Software: Practice and Experience","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136210540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CDTC: Automatically establishing the trace links between class diagrams in design phase and source code
Pub Date : 2023-10-05 DOI: 10.1002/spe.3270
Fangwei Chen, Li Zhang, Xiaoli Lian
Abstract: Context: The UML class diagram is commonly used to model functional structures and software code structures in both the preliminary and detailed design stages, and the abstraction level of UML class diagrams is usually higher than that of source code. There is often a lack of trace links between these class diagrams and the source code, which can make the source code harder to understand and hamper software evolution and maintenance. Objective: The main goal of this article is to establish trace links between highly abstracted UML class diagrams from the design phase and source code, and thereby help practitioners better understand source code. Method: We propose an approach for automatically establishing trace links between UML class diagrams in the design phase and source code. To address the abstraction-level gap between them, we extend the UML class diagram by mining synonymous phrases of class names and deducing latent missing relationships between classes from multiple design documents. We then build the trace links with a two‐phase approach: initial construction via fuzzy matching, followed by optimization via class relationship inference. Results: Experiments on five open‐source projects show that the recalls of our approach are over 94% and the F2‐scores are over 88%, with gains of 30% to 60% over the four baselines. Conclusion: Our work can serve as a reference for establishing initial trace links between highly abstracted UML class diagrams and source code. To cope with the higher abstraction of design diagrams, we extend UML class diagrams through statistical analysis of multiple design documents. To guarantee the quality of trace links, we design a two‐phase approach that first obtains the "full but not good enough" trace links and then filters out the "probably wrong" links. Experiments show that the main techniques of our approach play an important role in tracing between high‐level class diagrams and source code.
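The first phase of the two-phase approach—initial link construction by fuzzy matching—can be sketched as follows. The snippet is an illustration rather than CDTC's actual algorithm: it links design-level class names to code-level class names by lexical similarity, with the class names and the 0.6 threshold chosen for the example rather than taken from the paper.

```python
# Sketch of fuzzy-matching trace-link construction (illustrative only).
from difflib import SequenceMatcher

diagram_classes = ["OrderManager", "PaymentService", "CustomerAccount"]
code_classes = ["OrderMgr", "PaymentServiceImpl", "AccountRepository"]

def similarity(a: str, b: str) -> float:
    """Lexical similarity between two class names, case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def initial_trace_links(diagram, code, threshold=0.6):
    links = []
    for d in diagram:
        best = max(code, key=lambda c: similarity(d, c))
        if similarity(d, best) >= threshold:
            links.append((d, best))  # candidate link, to be refined in phase two
    return links

print(initial_trace_links(diagram_classes, code_classes))
# [('OrderManager', 'OrderMgr'), ('PaymentService', 'PaymentServiceImpl')]
```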
Citations: 0
Energy‐aware resource management in fog computing for IoT applications: A review, taxonomy, and future directions
Pub Date : 2023-10-01 DOI: 10.1002/spe.3273
Sayed Mohsen Hashemi, Amir Sahafi, Amir Masoud Rahmani, Mahdi Bohlouli
Abstract: The energy demand of Internet of Things (IoT) applications is increasing with the rise in IoT devices, and rising costs and energy demands can cause serious problems. Fog computing (FC) has recently emerged as a model for location‐aware tasks, data processing, fast computing, and reduced energy consumption. The fog computing model assists cloud computing with fast processing at the network's edge and thus plays a vital role in cloud computing. Owing to the fast computing in fog servers, different quality of service (QoS) approaches have been proposed for various parts of the fog system, and several quality factors have been considered in this regard. Despite the significance of QoS in fog computing, no extensive study has focused on QoS and energy consumption methods in this area. Therefore, this article investigates previous research on the use and guarantee of QoS in fog computing. It reviews six general approaches, covering articles published between 2015 and late May 2023. The focal point of this paper is the evaluation of fog computing and energy consumption strategies. The article further presents the advantages, disadvantages, tools, types of evaluation, and quality factors of the selected approaches. Based on the reviewed studies, open issues and challenges in fog computing energy consumption management are suggested for further study.
Citations: 0
RADF: Architecture decomposition for function as a service
Pub Date : 2023-09-28 DOI: 10.1002/spe.3276
Lulai Zhu, Damian Andrew Tamburri, Giuliano Casale
Abstract: As the most successful realization of serverless computing, function as a service (FaaS) introduces a novel cloud computing paradigm that can save operating costs, reduce management effort, enable seamless scalability, and augment development productivity. Migrating an existing application to the serverless architecture is, however, an intricate task, as a great number of decisions need to be made along the way. In this paper we propose RADF, a semi‐automatic approach that decomposes a monolith into serverless functions by analyzing the business logic inherent in the interface of the application. The proposed approach adopts a two‐stage refactoring strategy in which a coarse‐grained decomposition is performed first, followed by a fine‐grained one. As such, the decomposition process is simplified into smaller steps and can generate a solution at either the microservice or the function level. We have implemented RADF in a holistic DevOps methodology and evaluated its capability for microservice identification and its feasibility for code refactoring. In the evaluation experiments, RADF achieves lower coupling and relatively balanced cohesion compared to previous decomposition approaches.
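The two-stage refactoring strategy can be illustrated on a toy REST interface. The sketch below is not RADF's algorithm; under the assumption that business logic is reflected in endpoint paths, it first groups endpoints by resource into candidate microservices (coarse-grained) and then maps each operation to a candidate serverless function (fine-grained). The endpoint list is an assumption for the example.

```python
# Sketch of two-stage interface-driven decomposition (not RADF's algorithm).
from collections import defaultdict

endpoints = [  # assumed monolith REST interface
    ("GET", "/orders"), ("POST", "/orders"), ("GET", "/orders/{id}"),
    ("GET", "/users/{id}"), ("PUT", "/users/{id}"),
]

def coarse_decomposition(endpoints):
    """Stage 1: group endpoints by their top-level resource."""
    services = defaultdict(list)
    for method, path in endpoints:
        resource = path.strip("/").split("/")[0]
        services[resource].append((method, path))
    return dict(services)

def fine_decomposition(services):
    """Stage 2: map every operation of a candidate service to one function."""
    def fn_name(method, path):
        parts = [p.strip("{}") for p in path.strip("/").split("/")]
        return "_".join([method.lower()] + parts)
    return {svc: [fn_name(m, p) for m, p in ops] for svc, ops in services.items()}

print(fine_decomposition(coarse_decomposition(endpoints)))
# {'orders': ['get_orders', 'post_orders', 'get_orders_id'],
#  'users': ['get_users_id', 'put_users_id']}
```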
Citations: 1
IMDAC: A robust intelligent software defect prediction model via multi‐objective optimization and end‐to‐end hybrid deep learning networks
Pub Date : 2023-09-28 DOI: 10.1002/spe.3274
Kun Zhu, Nana Zhang, Changjun Jiang, Dandan Zhu
Abstract: Software defect prediction (SDP) aims to build an effective prediction model from the historical defect data in software repositories using specialized techniques or algorithms, and to predict the defect proneness of new software modules. Nevertheless, the complex intrinsic structure hidden behind defect data makes it challenging for the built prediction model to capture the most expressive defect feature representations, which largely limits SDP performance. Fortunately, artificial intelligence now interacts closely with humans and provides powerful intelligent technical support for addressing these SDP issues. In this article, we propose a robust intelligent SDP model called IMDAC based on deep learning and soft computing techniques. The model has three main advantages: (1) an effective deep generative network—InfoGAN (information maximizing GANs)—is employed for data augmentation, generating sufficient defect instances while balancing the defect classes; (2) the fewest representative features are selected for minimum error via an advanced multi‐objective optimization approach—MSEA (multi‐stage evolutionary algorithm); and (3) a powerful end‐to‐end deep defect predictor is built with hybrid deep learning techniques—DAE (denoising autoencoder) and CNN (convolutional neural network)—which can not only reconstruct a clean "repaired" input with strong robustness and generalization capabilities via the DAE, but also learn abstract deep semantic features with strong discriminating capability via the CNN. Experimental results verify the superiority and robustness of the IMDAC model across 15 software projects.
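The hybrid DAE-plus-CNN predictor described in point (3) can be outlined in a few lines of PyTorch. The sketch below is a simplified stand-in, not the IMDAC architecture: a small denoising autoencoder reconstructs corrupted metric vectors, and a 1-D CNN classifies the reconstruction. All dimensions and data are assumptions made for the example.

```python
# Simplified DAE + CNN defect-prediction pipeline (not the paper's model).
import torch
import torch.nn as nn

N_FEATURES = 20  # e.g., static code metrics per software module

class DenoisingAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_FEATURES, 8), nn.ReLU())
        self.decoder = nn.Linear(8, N_FEATURES)

    def forward(self, x):
        noisy = x + 0.1 * torch.randn_like(x)   # corrupt the input...
        return self.decoder(self.encoder(noisy))  # ...and reconstruct a clean version

class CNNClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 4, kernel_size=3, padding=1),
                                  nn.ReLU(), nn.AdaptiveMaxPool1d(1))
        self.head = nn.Linear(4, 2)             # defective vs. clean

    def forward(self, x):
        z = self.conv(x.unsqueeze(1)).squeeze(-1)  # (batch, 4) feature vector
        return self.head(z)

dae, cnn = DenoisingAutoEncoder(), CNNClassifier()
batch = torch.rand(16, N_FEATURES)              # 16 software modules
logits = cnn(dae(batch))
print(logits.shape)                             # torch.Size([16, 2])
```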
Citations: 0
Software‐in‐the‐loop simulation for developing and testing carbon‐aware applications
Pub Date : 2023-09-25 DOI: 10.1002/spe.3275
Philipp Wiesner, Marvin Steinke, Henrik Nickel, Yazan Kitana, Odej Kao
Abstract: The growing electricity demand of IT infrastructure has raised significant concerns about its carbon footprint. To mitigate the associated emissions of computing systems, current efforts therefore increasingly focus on aligning the power usage of software with the availability of clean energy. To operate, such carbon‐aware applications require visibility and control over relevant metrics and configurations of the energy system. However, research and development of novel energy system abstraction layers and interfaces remain difficult due to the scarcity of available testing environments: real testbeds are expensive to build and maintain, while existing simulation testbeds are unable to interact with real computing systems. To provide a widely applicable approach for developing and testing carbon‐aware software, we propose a method for integrating real applications into a simulated energy system through software‐in‐the‐loop simulation. The integration offers an API for accessing the energy system, while continuously modeling the computing system's power demand within the simulation. Our system allows for the integration of physical as well as virtual compute nodes, and can help accelerate research on carbon‐aware computing systems in the future.
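Continuously modeling a computing system's power demand, as the abstract describes, often starts from a utilization-based power model. The sketch below illustrates one common choice—a linear model between idle and peak power—with the wattages and utilization samples being assumptions for the example, not values from the paper.

```python
# Sketch of a utilization-based power model (illustrative assumptions only).
P_IDLE_W, P_MAX_W = 30.0, 150.0  # assumed node characteristics

def power_draw(cpu_utilization: float) -> float:
    """Linear power model: P = P_idle + u * (P_max - P_idle)."""
    u = min(max(cpu_utilization, 0.0), 1.0)  # clamp utilization to [0, 1]
    return P_IDLE_W + u * (P_MAX_W - P_IDLE_W)

# Utilization samples of the application under test, one per second.
samples = [0.05, 0.40, 0.90, 0.65]
energy_joules = sum(power_draw(u) * 1.0 for u in samples)  # 1 s per sample
print(f"average power: {sum(map(power_draw, samples)) / len(samples):.1f} W, "
      f"energy: {energy_joules:.0f} J")
```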
Citations: 1