
Latest publications in Software: Practice and Experience

SCARS: Suturing wounds due to conflicts between non-functional requirements in autonomous and robotic systems
Pub Date : 2023-12-15 DOI: 10.1002/spe.3297
Mandira Roy, Raunak Bag, Novarun Deb, Agostino Cortesi, Rituparna Chaki, Nabendu Chaki
In autonomous and robotic systems, the functional requirements (FRs) and non-functional requirements (NFRs) are gathered from multiple stakeholders. The different stakeholder requirements are associated with different components of the robotic system and with the contexts in which the system may operate. This aggregation of requirements from different sources (multiple stakeholders) often results in inconsistent or conflicting sets of requirements. Conflicts among NFRs for robotic systems heavily depend on features of actual execution contexts. It is essential to analyze the inconsistencies and conflicts among the requirements in the early planning phase to design the robotic systems in a systematic manner. In this work, we design and experimentally evaluate a framework, called SCARS, providing: (a) a domain-specific language extending the ROS2 Domain Specific Language (DSL) concepts by considering the different environmental contexts in which the system has to operate, (b) support to analyze their impact on NFRs, and (c) the computation of the optimal degree of NFR satisfaction that can be achieved within different system configurations. The effectiveness of SCARS has been validated on the iRobot® Create® 3 robot using Gazebo simulation.
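As a rough illustration of point (c), computing an optimal degree of NFR satisfaction can be sketched as maximizing a context-weighted score over candidate system configurations. All names and numbers below are hypothetical, not taken from SCARS:

```python
# Hypothetical sketch: pick the configuration that maximizes overall
# NFR satisfaction, where the execution context weights the NFRs.
# Configurations, NFRs, and weights here are illustrative only.

def satisfaction(config, weights):
    """Weighted average of per-NFR satisfaction degrees in [0, 1]."""
    total = sum(weights.values())
    return sum(config[nfr] * w for nfr, w in weights.items()) / total

# Per-configuration satisfaction degrees for two NFRs.
configs = {
    "low_power":  {"safety": 0.9, "performance": 0.4},
    "balanced":   {"safety": 0.7, "performance": 0.7},
    "high_speed": {"safety": 0.5, "performance": 0.95},
}

# Context weights: a cluttered environment prioritizes safety.
context_weights = {"safety": 3.0, "performance": 1.0}

best = max(configs, key=lambda c: satisfaction(configs[c], context_weights))
print(best)  # → low_power
```

With the safety-heavy weights above, the cautious configuration wins even though its raw performance score is lowest; changing the context weights shifts the optimum.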
Citations: 0
An elastic framework construction method based on task migration in edge computing
Pub Date : 2023-12-13 DOI: 10.1002/spe.3302
Yonglin Pu, Ziyang Li, Jiong Yu, Liang Lu, Binglei Guo
Edge computing (EC) is an effective technology that empowers end-users to attain high bandwidth and low latency by offloading computationally demanding tasks from mobile devices to edge servers. However, a major challenge arises when the processing load fluctuates continuously, leading to a performance bottleneck due to the inability to rescale edge node (EN) resources. To address this problem, the approach of task migration is introduced, and an EN load prediction model, a resource-constrained model, an optimal communication overhead model, an optimal task migration model, and an energy consumption model are built to form the theoretical foundation from which to propose a task-migration-based elastic framework construction method in EC. With the aid of the domino effect and the combined effect of task migration, a dynamic node-growing algorithm (DNGA) and a dynamic node-shrinking algorithm (DNSA), both based on the task migration strategy, are proposed. Specifically, the DNGA smoothly expands the EN scale when the processing load increases, while the DNSA shrinks the EN scale when the processing load decreases. The experimental results show that for standard benchmarks deployed on the elastic framework, the proposed method realizes a smooth scaling mechanism in EC, which reduces latency and improves the reliability of data processing.
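The grow/shrink behaviour of DNGA and DNSA can be sketched as a simple threshold rule on predicted utilization. The cut-off values and capacity figures below are illustrative, not the paper's models:

```python
# Illustrative sketch (not the paper's DNGA/DNSA): grow the edge-node
# pool when predicted utilization exceeds an upper cut-off, shrink it
# when utilization falls below a lower cut-off. Thresholds are made up.

def rescale(nodes, per_node_capacity, predicted_load,
            upper=0.8, lower=0.3):
    """Return the new node count for the predicted load."""
    utilization = predicted_load / (nodes * per_node_capacity)
    if utilization > upper:                  # grow, as DNGA does
        nodes += 1
    elif utilization < lower and nodes > 1:  # shrink, as DNSA does
        nodes -= 1
    return nodes

nodes = 4
for load in [100, 400, 450, 120, 60]:  # fluctuating load (requests/s)
    nodes = rescale(nodes, 100, load)
print(nodes)  # → 3
```

A real controller would smooth the load signal (the paper uses a load prediction model) rather than react to each raw sample, but the rescaling decision has this shape.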
Citations: 0
FLight: A lightweight federated learning framework in edge and fog computing
Pub Date : 2023-12-12 DOI: 10.1002/spe.3300
Wuji Zhu, Mohammad Goudarzi, Rajkumar Buyya
The number of Internet of Things (IoT) applications, especially latency-sensitive ones, has increased significantly. Cloud computing, as one of the main enablers of the IoT that offers centralized services, therefore cannot solely satisfy the requirements of IoT applications. Edge/fog computing, as a distributed computing paradigm, processes and stores IoT data at the edge of the network, offering low latency, reduced network traffic, and higher bandwidth. Edge/fog resources are often less powerful than cloud resources, and IoT data is dispersed among many geo-distributed servers. Hence, Federated Learning (FL), a machine learning approach that enables multiple distributed servers to collaborate on building models without exchanging the raw data, is well-suited to edge/fog computing environments, where data privacy is of paramount importance. Moreover, to manage different FL tasks in edge/fog computing environments, a lightweight resource management framework is required that handles incoming FL tasks without incurring significant overhead on the system. Accordingly, in this article, we propose a lightweight FL framework, called FLight, that can be deployed on a diverse range of devices, from resource-limited edge/fog devices to powerful cloud servers. FLight is implemented on top of the FogBus2 framework, a containerized distributed resource management framework, and integrates both synchronous and asynchronous models of FL. In addition, we propose a lightweight heuristic-based worker selection algorithm that selects a suitable set of available workers to participate in the training step, improving training time efficiency. The obtained results demonstrate the efficiency of FLight: the worker selection technique reduces the training time needed to reach 80% accuracy by 34% compared to sequential training, while the asynchronous mode improves on synchronous FL training time by 64%.
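A heuristic worker-selection step of the kind described can be sketched as ranking candidate workers by a capacity/latency score and taking the top k. The scoring rule and worker figures are hypothetical, not FLight's actual algorithm:

```python
# Hedged sketch of heuristic worker selection (illustrative scoring,
# not FLight's): prefer workers with high compute per unit latency.

def select_workers(workers, k):
    """workers: list of (name, cpu_score, latency_ms) tuples."""
    ranked = sorted(workers, key=lambda w: w[1] / w[2], reverse=True)
    return [name for name, _, _ in ranked[:k]]

workers = [
    ("edge-1",  2.0,  10.0),   # weak but close
    ("fog-1",   8.0,  25.0),   # mid-tier
    ("cloud-1", 32.0, 120.0),  # powerful but far
]
print(select_workers(workers, 2))  # → ['fog-1', 'cloud-1']
```

With this score, the fog node beats the raw-compute winner because its latency penalty is smaller; a production selector would also weigh data availability and energy.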
Citations: 0
Parsing millions of URLs per second
Pub Date : 2023-12-09 DOI: 10.1002/spe.3296
Yagiz Nizipli, Daniel Lemire
URLs are fundamental elements of web applications. By applying vector algorithms, we built a fast standard-compliant C++ implementation. Our parser uses three times fewer instructions than competing parsers following the WHATWG standard (e.g., Servo's rust-url) and up to eight times fewer instructions than the popular curl parser. The Node.js environment adopted our C++ library. In our tests on realistic data, a recent Node.js version (20.0) with our parser is four to five times faster than the last version with the legacy URL parser.
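The workload is easy to reproduce. Python's urllib.parse is not the C++ parser described here (and does not implement the WHATWG rules exactly), but it makes the per-URL throughput measurement concrete:

```python
# Measure URL-parsing throughput with Python's stdlib parser.
# This is a baseline for the workload, not the paper's implementation.
from urllib.parse import urlsplit
import time

urls = ["https://user:pw@example.com:8080/path/file?q=1#frag"] * 100_000

start = time.perf_counter()
parts = [urlsplit(u) for u in urls]
elapsed = time.perf_counter() - start

print(f"{len(urls) / elapsed:,.0f} URLs/second")
print(parts[0].hostname, parts[0].port, parts[0].path)
```

Comparing this figure against a WHATWG-compliant parser such as the one described above shows how much headroom careful vectorized implementations have.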
Citations: 0
A cloud-edge service offloading method for the metaverse in smart manufacturing
Pub Date : 2023-12-09 DOI: 10.1002/spe.3301
Haolong Xiang, Xuyun Zhang, Muhammad Bilal
With the development of artificial intelligence, cloud-edge computing, and virtual reality, industrial design that originally depended on human imagination and computing power can transition to metaverse applications in smart manufacturing. Given the inadequate computing power of terminal devices such as industrial sensors and access points (APs), the services of the metaverse are offloaded to cloud and edge platforms to enhance quality of service (QoS). However, data transmission to the cloud incurs large overhead and exposes privacy, while edge computing devices (ECDs) risk being overloaded with redundant service requests and are difficult to control centrally. To address these challenges, this paper proposes a minority game (MG) based cloud-edge service offloading method, named COM, for metaverse manufacturing. Technically, MG possesses a distribution mechanism that minimizes reliance on centralized control and is effective for resource allocation. Besides, a dynamic control of the cut-off value is added on top of MG for better adaptability to network variations. Then, agents in COM (i.e., APs) leverage reinforcement learning (RL), mapping MG history, offloading decisions, and QoS to state, action, and reward, to further optimize distributed offloading decision-making. Finally, COM is evaluated using a variety of real-world manufacturing datasets. The results indicate that COM achieves 5.38% higher QoS and an 8.58% higher privacy level compared to the benchmark method.
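The minority-game mechanism behind COM can be sketched in a few lines: each agent picks a platform, and agents on the less-crowded side "win", since the less-loaded platform offers better QoS. This is a generic MG round, not COM's exact scheme (which adds a dynamic cut-off and RL on top):

```python
# Minimal minority-game round (illustrative, not COM itself): each AP
# chooses cloud or edge; the minority side gets the better QoS.
import random

random.seed(42)  # deterministic demo

def play_round(n_agents):
    choices = [random.choice(("cloud", "edge")) for _ in range(n_agents)]
    cloud = choices.count("cloud")
    minority = "cloud" if cloud < n_agents - cloud else "edge"
    return choices, minority

choices, minority = play_round(7)   # odd n avoids ties
winners = sum(1 for c in choices if c == minority)
print(minority, winners)
```

In repeated rounds, agents that condition on the history of winning sides learn to spread load, which is the decentralization property the paper exploits.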
Citations: 0
PyScribe–Learning to describe python code
Pub Date : 2023-12-09 DOI: 10.1002/spe.3291
Juncai Guo, Jin Liu, Xiao Liu, Yao Wan, Yanjie Zhao, Li Li, Kui Liu, Jacques Klein, Tegawendé F. Bissyandé
Code comment generation, which attempts to summarize the functionality of source code in textual descriptions, plays an important role in automatic software development research. Currently, several structural neural networks have been exploited to preserve the syntax structure of source code based on abstract syntax trees (ASTs). However, they cannot capture both the long-distance and local relations between nodes while retaining the overall structural information of the AST. To mitigate this problem, we present a prototype tool titled PyScribe, which extends the Transformer model to a new encoder-decoder-based framework. In particular, the triplet position is designed and integrated into the node-level and edge-level structural features of the AST for producing Python code comments automatically. This paper, to the best of our knowledge, makes the first effort to model the edges of the AST as an explicit component for improved code representation. By specifying triplet positions for each node and edge, the overall structural information can be well preserved in the learning process. Moreover, the captured node and edge features go through a two-stage decoding process to yield higher-quality comments. To evaluate the effectiveness of PyScribe, we mined Jupyter Notebooks from GitHub to build a large dataset of code-comment pairs, which we have made publicly available to support further studies. The experimental results reveal that PyScribe is indeed effective, outperforming the state-of-the-art by achieving an average BLEU score (av-BLEU) of ≈0.28.
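The node and edge structure such models encode is easy to inspect with Python's standard ast module; the traversal below is a generic sketch of AST edge extraction, not PyScribe's encoder:

```python
# Enumerate (parent, child) AST edges for a tiny function using the
# stdlib ast module. PyScribe models such edges as explicit features.
import ast

source = "def add(a, b):\n    return a + b"
tree = ast.parse(source)

edges = []
for parent in ast.walk(tree):
    for child in ast.iter_child_nodes(parent):
        edges.append((type(parent).__name__, type(child).__name__))

print(edges[:3])
# → [('Module', 'FunctionDef'), ('FunctionDef', 'arguments'), ('FunctionDef', 'Return')]
```

Even this two-line function yields a dozen edges, which is why representing edges compactly (e.g., via triplet positions, as the paper proposes) matters for longer programs.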
Citations: 0
Designing power-efficient SRAM cells with SGFinFETs using LECTOR technique
Pub Date : 2023-12-04 DOI: 10.1002/spe.3293
Sivaiah Sankranti, S. Roji Marjorie
Static random-access memory (SRAM) is a vital component of digital systems. The main issue of SRAM cells is power leakage, which results in an increase in chip area. This manuscript therefore proposes a shorted-gate fin-type field-effect transistor based SRAM cell utilizing the leakage control transistor technique (SGFinFETs-SRAM-LECTOR) to decrease leakage power delay by improving the static noise margins (SNMs) together with the power-delay product (PDP). Here, SGFinFETs-SRAM-LECTOR is primarily applied to stacking enhancement for lessening the leakage power dissipation (LPD). LECTOR reduces the leakage current, at the cost of some delay, through transistor stacking: two additional transistors are connected in series between the pull-up and pull-down networks, namely an extra SG FinFET PMOS transistor inserted between the pull-up network and the output terminal, and an extra SG FinFET NMOS transistor inserted between the pull-down network and the output terminal. These additional transistors decrease the leakage current. The proposed approach is simulated in the HSPICE simulation tool, and several metrics are computed to validate its efficacy. Compared with existing models, the proposed technique achieves 11.31%, 51.47%, and 45.46% less read delay; 44.44%, 26.33%, and 33.45% less write delay; 36.12%, 45.28%, and 26.45% less read power; 34.5%, 33.56%, and 22.41% less write power; 37.4%, 15.3%, and 26.54% higher read SNM; and 33.67%, 35.8%, and 12.09% higher write SNM.
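Power and delay reductions combine multiplicatively in the power-delay product. As a back-of-the-envelope check using the reported read-power (36.12%) and read-delay (11.31%) reductions (the baseline values are hypothetical, chosen only to make the arithmetic concrete):

```python
# PDP = power × delay; percentage reductions compose multiplicatively.
# Baseline figures below are invented for illustration.

def pdp(power_nw, delay_ns):
    """Power-delay product in aJ (nW × ns)."""
    return power_nw * delay_ns

baseline = pdp(power_nw=120.0, delay_ns=0.50)
lector = pdp(power_nw=120.0 * (1 - 0.3612),   # 36.12% less read power
             delay_ns=0.50 * (1 - 0.1131))    # 11.31% less read delay
print(round((1 - lector / baseline) * 100, 1))  # → 43.3 (% PDP reduction)
```

The percentage reduction in PDP depends only on the two relative improvements, not on the baseline magnitudes.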
Citations: 0
A survey on energy-efficient workflow scheduling algorithms in cloud computing
Pub Date : 2023-12-03 DOI: 10.1002/spe.3292
Prateek Verma, Ashish Kumar Maurya, Rama Shankar Yadav
The advancements in the computing and storage capabilities of machines, and their fusion with new technologies such as the Internet of Things (IoT), 5G networks, and artificial intelligence, to name a few, have resulted in a paradigm shift in the way computing is done in a cloud environment. In addition, the ever-increasing user demand for cloud services and resources has resulted in cloud service providers (CSPs) expanding the scale of their data center facilities. This has increased energy consumption, leading to higher carbon dioxide emission levels. Hence, it becomes all the more important to design scheduling algorithms that optimize the use of cloud resources with minimum energy consumption. This paper surveys state-of-the-art algorithms for scheduling workflow tasks to cloud resources, with a focus on reducing energy consumption. For this, we categorize different workflow scheduling algorithms based on the scheduling approaches used and provide an analytical discussion of the algorithms covered in the paper. Further, we provide a detailed classification of the different energy-efficient strategies used by CSPs for energy saving in data centers. Finally, we describe some popular real-world workflow applications, and highlight important emerging trends and open issues in cloud computing as future research directions.
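The core objective most surveyed algorithms optimize (energy = power draw × execution time, subject to deadlines) can be illustrated with a toy greedy assignment; all VM figures below are made up:

```python
# Toy energy-aware scheduler: for each task, pick the cheapest-energy VM
# that still meets the task's deadline. Numbers are illustrative only.

vms = {"small": {"power_w": 50.0,  "speed": 1.0},
       "large": {"power_w": 200.0, "speed": 2.0}}

def energy(task_len, vm):
    """Energy in joules: power (W) × runtime (s)."""
    return vm["power_w"] * (task_len / vm["speed"])

def cheapest_feasible(task_len, deadline):
    ok = [v for v in vms if task_len / vms[v]["speed"] <= deadline]
    return min(ok, key=lambda v: energy(task_len, vms[v]))

# Tasks as (length at speed 1.0, deadline) in seconds.
tasks = [(3.0, 5.0), (12.0, 8.0), (1.5, 1.0)]
plan = [cheapest_feasible(length, dl) for length, dl in tasks]
print(plan)  # → ['small', 'large', 'large']
```

The slow VM is always cheaper per unit of work here, so the deadline constraint alone forces the energy-hungry choice; real algorithms add DVFS, consolidation, and idle-power models on top of this trade-off.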
Citations: 0
An edge-assisted federated contrastive learning method with local intrinsic dimensionality in noisy label environment
Pub Date : 2023-11-30 DOI: 10.1002/spe.3295
Siyuan Wu, Guoming Zhang, Fei Dai, Bowen Liu, Wanchun Dou
The advent of federated learning (FL) has presented a viable solution for distributed training in edge environments while simultaneously ensuring the preservation of privacy. In real-world scenarios, edge devices may be subject to label noise caused by environmental differences, automated weakly supervised annotation, malicious tampering, or even human error. However, the potential of the noisy samples has not been fully leveraged by prior studies on FL aimed at addressing label noise. Rather, they have primarily focused on conventional filtering or correction techniques to alleviate the impact of noisy labels. To tackle this challenge, a method named DETECTION is proposed in this article. It aims at effectively detecting noisy clients and mitigating the adverse impact of label noise while preserving data privacy. Specifically, a confidence scoring mechanism based on local intrinsic dimensionality (LID) is investigated for distinguishing noisy clients from clean clients. Then, a loss function based on prototype contrastive learning is designed to optimize the local model. To address the varying levels of noise across clients, a LID-weighted aggregation strategy (LA) is introduced. Experimental results on three datasets demonstrate the effectiveness of DETECTION in addressing the issue of label noise in FL while maintaining data privacy.
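The two ingredients named in the abstract — a per-client LID score and a LID-weighted aggregation — can be sketched as follows. The maximum-likelihood (Levina–Bickel-style) LID estimator and the inverse-LID weighting below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def lid_score(features, k=5):
    """Maximum-likelihood LID estimate averaged over samples.

    features: (n, d) array, e.g. a client's penultimate-layer features.
    Higher LID is commonly associated with noisier data.
    """
    # pairwise Euclidean distances, sorted per row
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    dists.sort(axis=1)
    knn = dists[:, 1:k + 1]                       # skip distance to self (0)
    # Levina-Bickel estimator: LID = -(mean_i log(r_i / r_k))^{-1}
    lids = -1.0 / np.mean(np.log(knn[:, :-1] / knn[:, -1:]), axis=1)
    return float(np.mean(lids))

def lid_weighted_average(client_models, lid_scores):
    """FedAvg-style aggregation that down-weights high-LID (noisy) clients."""
    weights = 1.0 / np.asarray(lid_scores)        # inverse-LID weighting
    weights /= weights.sum()
    stacked = np.stack(client_models)             # (num_clients, num_params)
    return weights @ stacked

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 8))
print(lid_score(feats))                           # a positive LID estimate
print(lid_weighted_average([np.ones(4), np.zeros(4)], [2.0, 8.0]))
# the cleaner client (LID 2.0) gets weight 0.8, so the result is [0.8]*4
```

The design intuition is that a client whose feature space has unusually high intrinsic dimensionality contributes less to the global model, without its raw data ever leaving the device.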
Citations: 0
Special issue on efficient management of microservice-based systems and applications
Pub Date : 2023-11-30 DOI: 10.1002/spe.3298
Minxian Xu, Schahram Dustdar, Massimo Villari, Rajkumar Buyya

The advent of microservice architecture marks a transition from conventional monolithic applications to a landscape of loosely linked, lightweight, and autonomous microservice components. The primary objective is to ensure strong environmental uniformity, portability across various operating systems, and robust resource isolation. Leading cloud service providers such as Amazon, Microsoft, Google, and Alibaba have widely embraced microservices within their infrastructures. This adoption is geared toward automating application management and optimizing system performance. Consequently, addressing the automation of tasks like deployment, maintenance, auto-scaling, and networking of microservices becomes pivotal. This underscores the importance of efficient management of systems and applications built on microservices as a critical research challenge.

Efficient management methods must not only ensure the quality of service (QoS) across multiple microservices units (containers) but also provide greater control over individual components. However, the dynamic and varied nature of microservice applications and environments significantly amplifies the complexity of these management approaches. Each microservice unit can be deployed and operated independently, catering to distinct functionalities and business objectives. Furthermore, microservices can interact and combine through lightweight communication techniques to form a complete application. The expanding scale of microservice-based systems and their intricate interdependencies pose challenges in terms of load distribution and resource management at the infrastructure level. Furthermore, as cloud workloads surge in resource demands, bandwidth consumption, and QoS requirements, the traditional cloud computing environment extends to fog and edge infrastructures that are in close proximity to end users. As a result, current microservice management approaches need further enhancement to address the mounting resource diversity, application distribution, workload profiles, security prerequisites, and scalability demands across hybrid cloud infrastructures.

Keeping this in mind, this special issue addresses some of the aspects related to the efficient management of microservice-based systems and applications, focusing on the various challenges faced and on promising solutions that address these challenges using software engineering, machine learning, and deep learning techniques. We received 21 submissions for this issue and, after a rigorous review process with at least three reviewers per paper, accepted six high-quality submissions for publication. The authors come from diverse countries, including the USA, China, the UK, Germany, India, and Brazil. Each of the accepted papers is summarized as follows.

In the first article, Batista et al.1 presented two strategies for handling asynchronous workloads associated with tax integration in a multi-tenant microservice architecture specific to a company's context. The first approach relies on polling, using a queue as a distributed lock. The second, called single active consumer, adopts a push-based technique that leverages the message broker's logic for message delivery. Both strategies aim to optimize resource allocation in scenarios with growing numbers of container replicas and tenants.

In the second article, Kumar et al.2 introduced a resource allocation model aimed at enhancing QoS in microservice deployments. The model uses a fine-tuned sunflower whale optimization algorithm to strategically place container-based services on physical machines, optimizing their execution capability through effective use of CPU and memory resources. The primary goals of the technique are to achieve effective workload distribution, prevent resource wastage, and ultimately enhance QoS parameters.

In the third paper, Zhu et al.3 introduced RADF, a semi-automated approach that decomposes a monolith into serverless functions by analyzing the inherent business logic present in the application's interfaces. The proposed method adopts a two-phase refactoring strategy that first performs a coarse-grained decomposition and then a fine-grained one. This streamlines the decomposition into smaller, more manageable steps and offers the adaptability to generate solutions at either the microservice or the function level.

In the fourth paper, Würz et al.4 identified the main tasks and subtasks of the application to be partitioned. Subsequently, they outlined the program flow to determine which application tasks can be converted into functions and clarified the interdependencies among them. In the final step, they precisely specified the individual functions and, where necessary, merged functions deemed too small in order to mitigate communication overhead or maintenance challenges. In contrast to previous approaches, their method is generally applicable to applications of different sizes, ensuring that the resulting functions are appropriately sized for efficient execution in a Function-as-a-Service environment.

In the fifth paper, Zhong et al.5 introduced DOMICO, a method for assessing the conformance between a domain model and its implementation. This conformance is established through the formalization of eight structural patterns common in domain modeling and their representation in the model and the associated source code. Using this formalization, the method can pinpoint discrepancies related to pattern elements, such as deviations, omissions, and modifications. In addition, DOMICO can detect potential violations of the 24 compliance rules imposed by these patterns.

In the sixth paper, Zhang et al.6 proposed a chunk-reuse mechanism designed to efficiently identify node-local duplicate data during container updates, which helps reduce the volume of data transferred for image building. The authors manage the chunk-reuse process across cloud and remote cloud nodes, ensuring that the resource overhead associated with container update data and image rebuild preparation stays within acceptable thresholds.

We hope that the contributions accepted in this special issue will help the journal's readers and the broader research community gain knowledge of current research challenges, techniques, and solutions. We also hope it will encourage them to pursue further work on different aspects of the efficient management of microservice-based clusters.
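The polling strategy summarized for the first article — workers that treat dequeuing from a shared queue as acquiring a distributed lock on a job — can be sketched in a few lines. This is an illustrative single-process stand-in (Python's `queue.Queue` plays the role of the message broker; all names are assumptions, not Batista et al.'s implementation):

```python
import queue
import threading

# Minimal single-process sketch of the polling strategy: each worker polls a
# shared queue, and successfully dequeuing a job acts as acquiring the lock
# for that job, so no two workers ever process the same tenant's workload.
def polling_strategy(jobs, num_workers=3):
    q = queue.Queue()
    for job in jobs:
        q.put(job)

    done = []
    done_lock = threading.Lock()

    def worker():
        while True:
            try:
                job = q.get_nowait()    # dequeue == lock acquisition for this job
            except queue.Empty:
                return                  # queue drained: worker exits
            with done_lock:             # protect the shared result list
                done.append(job)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done

if __name__ == "__main__":
    jobs = [f"tenant-{i}" for i in range(5)]
    print(sorted(polling_strategy(jobs)))   # every job processed exactly once
```

The push-based single active consumer alternative inverts this: the broker delivers each message to exactly one designated consumer, so workers no longer spend cycles polling an often-empty queue.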
Citations: 0