Performance Evaluation: Latest publications

Analysis of a queue-length-dependent vacation queue with bulk service, N-policy, set-up time and cost optimization
IF 1.0 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-11-20 | DOI: 10.1016/j.peva.2024.102459
P. Karan, S. Pradhan
Due to the extensive applications of bulk service vacation queues in manufacturing industries, inventory systems, wireless sensor networks for reducing energy consumption, etc., in this article we analyze the steady-state behavior of an infinite-buffer group-arrival bulk-service queue with a vacation scenario, set-up time and N-threshold policy. Here customers arrive according to a compound Poisson process, and the server initiates service with a minimum of ‘a’ customers and can serve a maximum of ‘b’ customers at a time. We adopt batch-size-dependent service times as well as queue-length-dependent vacation durations, which improve the system’s performance significantly. The N-threshold policy is proposed to awaken the server from a vacation/dormant state: the service station starts the set-up procedure after the accumulation of a pre-decided ‘N’ customers. Using the supplementary variable technique, we first derive the set of steady-state system equations. We then obtain the bivariate probability generating functions (pgfs) of the queue content and the size of the departing batch, and of the queue content and the type of vacation taken by the server at the vacation completion epoch, and also the single pgf of the queue content at the end of the set-up time. We extract the joint distribution from these generating functions using the roots method and derive a simple algebraic relation between the probabilities at departure and arbitrary epochs. We also provide assorted numerical results to validate the proposed methodology and the theoretical results. The impact of the system parameters on the performance measures is presented through tables and graphs. Finally, a cost optimization function is provided for the benefit of system designers.
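The vacation / set-up / bulk-service cycle described above can be sketched with a toy discrete-event simulation. This is a deliberately simplified model, assuming single (not batch) Poisson arrivals and exponential service, vacation and set-up times, with repeated vacations until the N-threshold is met; the function name and all parameter values are illustrative, not taken from the paper:

```python
import math
import random

def sim_bulk_queue(lam=1.0, a=2, b=5, N=4, mu=1.2,
                   setup=0.3, vac=1.5, n_cycles=2000, seed=1):
    """Simulate cycles of vacation -> set-up -> busy period and return
    the mean size of a served batch (always between a and b)."""
    rng = random.Random(seed)

    def npois(dt):
        # number of Poisson(lam) arrivals during an activity of length dt
        L, n, p = math.exp(-lam * dt), 0, 1.0
        while True:
            p *= rng.random()
            if p < L:
                return n
            n += 1

    queue, batches, total_batch = 0, 0, 0
    for _ in range(n_cycles):
        # vacations repeat until N customers have accumulated (N-policy)
        while queue < N:
            queue += npois(rng.expovariate(1.0 / vac))
        # set-up time before service can restart
        queue += npois(rng.expovariate(1.0 / setup))
        # busy period: serve batches of min(queue, b) while >= a wait
        while queue >= a:
            k = min(queue, b)
            queue -= k
            queue += npois(rng.expovariate(mu))
            batches += 1
            total_batch += k
    return total_batch / batches
```

Because a batch is only started when at least `a` customers wait and never exceeds `b`, the returned mean batch size always lies in [a, b] with the defaults.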
Citations: 0
FedCust: Offloading hyperparameter customization for federated learning
IF 1.0 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-11-16 | DOI: 10.1016/j.peva.2024.102450
Syed Zawad, Xiaolong Ma, Jun Yi, Cheng Li, Minjia Zhang, Lei Yang, Feng Yan, Yuxiong He
Federated Learning (FL) is a new machine learning paradigm that enables training models collaboratively across clients without sharing private data. In FL, data is non-uniformly distributed among clients (i.e., data heterogeneity) and, due to privacy constraints, cannot be redistributed or monitored as in conventional machine learning. Such data heterogeneity and privacy requirements bring new challenges for hyperparameter optimization, as the training dynamics change across clients even within the same training round and are difficult to measure due to privacy. The state-of-the-art in hyperparameter customization can greatly improve FL model accuracy but also incurs significant computing overheads and power consumption on client devices, and slows down the training process. To address this prohibitively expensive cost, we explore the possibility of offloading hyperparameter customization to servers. We propose FedCust, a framework that offloads the expensive hyperparameter customization from client devices to the central server without violating privacy constraints. Our key discovery is that it is not necessary to customize hyperparameters for every client: clients with similar data heterogeneity can use the same hyperparameters and still achieve good training performance. We propose heterogeneity measurement metrics for clustering clients into groups such that clients within the same group share hyperparameters. FedCust uses proxy data from the initial model design to emulate different heterogeneity groups and performs hyperparameter customization on the server side without accessing client data or information. To make hyperparameter customization scalable, FedCust further employs a Bayesian-strengthened tuner to significantly accelerate the customization speed.
Extensive evaluation demonstrates that FedCust achieves up to 7/2/4/4/6% better accuracy than the widely adopted one-size-fits-all approach on the popular FL benchmarks FEMNIST, Shakespeare, Cifar100, Cifar10, and Fashion-MNIST, respectively, while being scalable and reducing computation, memory, and energy consumption on client devices, without compromising privacy constraints.
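The grouping idea, clients with similar data heterogeneity sharing one hyperparameter configuration, can be illustrated with a minimal sketch that clusters clients by the total-variation distance between their label distributions. The metric, threshold, and greedy scheme are placeholders of my own; FedCust's actual heterogeneity metrics are more elaborate:

```python
import numpy as np

def label_distribution(labels, n_classes):
    """Normalized label histogram: a simple per-client data signature."""
    h = np.bincount(labels, minlength=n_classes).astype(float)
    return h / h.sum()

def group_clients(client_labels, n_classes, threshold=0.3):
    """Greedily group clients whose label distributions are within
    `threshold` total-variation distance of a group representative;
    each group then shares one hyperparameter configuration."""
    groups, reps = [], []
    for cid, labels in client_labels.items():
        d = label_distribution(labels, n_classes)
        for g, rep in enumerate(reps):
            if 0.5 * np.abs(d - rep).sum() <= threshold:  # TV distance
                groups[g].append(cid)
                break
        else:
            reps.append(d)
            groups.append([cid])
    return groups
```

For example, two clients with balanced labels land in one group while a client holding only one class forms its own group, so hyperparameter search runs once per group rather than once per client.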
Citations: 0
Trust your local scaler: A continuous, decentralized approach to autoscaling
IF 1.0 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-11-08 | DOI: 10.1016/j.peva.2024.102452
Martin Straesser, Stefan Geissler, Stanislav Lange, Lukas Kilian Schumann, Tobias Hossfeld, Samuel Kounev
Autoscaling is a critical component of modern cloud computing environments, improving flexibility, efficiency, and cost-effectiveness. Current approaches use centralized autoscalers that make decisions based on averaged monitoring data of managed service instances in fixed intervals. In this scheme, autoscalers are single points of failure, tightly coupled to monitoring systems, and limited in reaction times, making non-optimal scaling decisions costly. This paper presents an approach for continuous decentralized autoscaling, where decisions are made on a service instance level. By distributing scaling decisions of instances over time, autoscaling evolves into a quasi-continuous process, enabling great adaptability to different workload patterns. We analyze our approach on different abstraction levels, including a model-based, simulation-based, and real-world evaluation. Proof-of-concept experiments show that our approach is able to scale different applications deployed in containers and virtual machines in realistic environments, yielding better scaling performance compared to established baseline autoscalers, especially in scenarios with highly dynamic workloads.
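A per-instance decision rule of the kind described, where each instance decides from its own monitoring data and a random delay spreads decisions over time, might look like the following sketch. The thresholds, jitter window, and return protocol are illustrative assumptions, not the paper's algorithm:

```python
import random

def local_scale_decision(cpu_samples, up=0.8, down=0.3,
                         jitter_s=30.0, rng=random):
    """Decide from this instance's own recent utilization samples.
    Returns the decision plus a random delay before acting, which
    desynchronizes instances and spreads scaling over time."""
    avg = sum(cpu_samples) / len(cpu_samples)
    delay = rng.uniform(0.0, jitter_s)
    if avg > up:
        return "scale_up", delay      # ask the platform for a sibling
    if avg < down:
        return "scale_down", delay    # offer to terminate this instance
    return "keep", delay
```

Because every instance runs this rule on its own schedule, scaling becomes a quasi-continuous stream of small local decisions instead of one periodic global decision.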
Citations: 0
Enabling scalable and adaptive machine learning training via serverless computing on public cloud
IF 1.0 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-11-06 | DOI: 10.1016/j.peva.2024.102451
Ahsan Ali, Xiaolong Ma, Syed Zawad, Paarijaat Aditya, Istemi Ekin Akkus, Ruichuan Chen, Lei Yang, Feng Yan
In today’s production machine learning (ML) systems, models are continuously trained, improved, and deployed. ML design and training are becoming a continuous workflow of various tasks with dynamic resource demands. Serverless computing is an emerging cloud paradigm that provides transparent resource management and scaling for users and has the potential to revolutionize the routine of ML design and training. However, hosting modern ML workflows on existing serverless platforms poses non-trivial challenges due to their intrinsic design limitations, such as their stateless nature, limited communication support across function instances, and limited function execution duration. These limitations result in a lack of an overarching view and adaptation mechanism for training dynamics, and an amplification of existing problems in ML workflows.
To address the above challenges, we propose SMLT, an automated, scalable and adaptive serverless framework on public cloud that enables efficient and user-centric ML design and training. SMLT employs an automated and adaptive scheduling mechanism to dynamically optimize the deployment and resource scaling of ML tasks during training. SMLT further enables user-centric ML workflow execution by supporting user-specified training deadlines and budget limits. In addition, by providing an end-to-end design, SMLT solves intrinsic problems of public cloud serverless platforms such as communication overhead, limited function execution duration, and the need for repeated initialization, and also provides explicit fault tolerance for ML training. SMLT is open-sourced and compatible with all major ML frameworks. Our experimental evaluation with large, sophisticated modern ML models demonstrates that SMLT outperforms state-of-the-art VM-based systems and existing public cloud serverless ML training frameworks in both training speed (up to 8×) and monetary cost (up to 3×).
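One generic pattern for working around the limited function execution duration mentioned above is checkpoint-and-resume across invocations. The sketch below illustrates only that pattern; the function names, the `load`/`save` interface, and the resume signal are my own assumptions, not SMLT's actual mechanism:

```python
import time

def run_training_step(state):
    """Placeholder for one epoch of actual model training."""
    state["epoch"] += 1
    return state

def serverless_train(total_epochs, time_limit_s, load, save):
    """Resume from the last checkpoint, train until the function's
    execution deadline approaches, then checkpoint and ask to be
    re-invoked.  `load`/`save` abstract the external checkpoint store."""
    state = load() or {"epoch": 0}
    start = time.monotonic()
    while state["epoch"] < total_epochs:
        if time.monotonic() - start > time_limit_s:
            save(state)
            return "resume"           # platform triggers a new invocation
        state = run_training_step(state)
    save(state)
    return "done"
```

Chaining invocations this way also gives fault tolerance for free: a crashed invocation loses at most the work since the last checkpoint.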
Citations: 0
Symbolic state-space exploration meets statistical model checking
IF 1.0 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-11-02 | DOI: 10.1016/j.peva.2024.102449
Mathis Niehage, Anne Remke
Efficient reachability analysis, as well as statistical model checking, has been proposed for the evaluation of Hybrid Petri nets with general transitions (HPnGs). Each approach has different advantages and disadvantages. The performance of statistical simulation suffers in large models, and the number of simulation runs required to achieve a relatively small confidence interval increases considerably. The approach introduced for analytical reachability analysis of HPnGs, however, becomes infeasible for a large number of random variables. To overcome these limitations, this paper applies statistical model checking (SMC) for a stochastic variant of Signal Temporal Logic (STL) to a pre-computed symbolic state-space representation of HPnGs, the Parametric Location Tree (PLT), which has previously been used for model checking HPnGs. Furthermore, we define how to reduce the PLT for a given state-based or path-based STL property by introducing a three-valued interpretation of the property for every location of the PLT. This paper applies learning in the presence of nondeterminism and considers four different scheduler classes. The proposed improvement is especially useful when a large number of training runs is necessary to optimize the probability that a given STL property holds. A case study on a water tank model shows the feasibility of the approach, as well as improved computation times, when applying the above-mentioned reduction for varying time bounds. We validate our results with existing analytical and simulation tools, as applicable for the different types of schedulers.
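At its core, SMC estimates the probability that a property holds by running many simulations and bounding the error statistically. A minimal sketch using the Okamoto/Hoeffding bound to fix the number of runs in advance (a standard choice in SMC tooling; this paper's tool chain may use sequential schemes instead):

```python
import math
import random

def smc_estimate(run_once, eps=0.05, delta=0.01):
    """Estimate P(property holds) by simulation.  The Okamoto/Hoeffding
    bound n >= ln(2/delta) / (2*eps^2) guarantees |estimate - p| <= eps
    with probability at least 1 - delta."""
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    hits = sum(1 for _ in range(n) if run_once())
    return hits / n, n

# demo: a "simulation run" whose property holds with probability 0.3
rng = random.Random(0)
p_hat, n_runs = smc_estimate(lambda: rng.random() < 0.3)
```

The bound makes the abstract's point concrete: halving `eps` quadruples the required number of runs, which is exactly why simulation becomes expensive when tight confidence intervals are needed.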
Citations: 0
Spatial queues with nearest neighbour shifts
IF 1.0 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-11-01 | DOI: 10.1016/j.peva.2024.102448
B.R. Vinay Kumar, Lasse Leskelä
This work studies queues in a Euclidean space. Consider N servers distributed uniformly in [0,1]^d. Customers arrive at the servers according to independent stationary processes. Upon arrival, they probabilistically decide whether to join the queue they arrived at or shift to one of the nearest neighbours. Such shifting strategies affect the load on the servers and may cause some of the servers to become overloaded. We derive a law of large numbers and a central limit theorem for the fraction of overloaded servers in the system as the total number of servers N tends to infinity. Additionally, in the one-dimensional case (d=1), we evaluate the expected fraction of overloaded servers for any finite N. Numerical experiments are provided to support our theoretical results. Typical applications of the results include electric vehicles queueing at charging stations, and queues in airports or supermarkets.
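A rough numerical illustration of the d=1 setting: place servers uniformly on [0,1], route a fixed fraction of each server's arrival stream to its nearest neighbour, and count servers whose effective arrival rate exceeds their service rate. This fluid-style reading of the model is my own simplification; the paper's overload criterion and shift rule may differ:

```python
import random

def overloaded_fraction(n_servers=200, lam=1.0, mu=1.2,
                        p_shift=0.5, seed=7):
    """Servers uniform on [0, 1]; each server keeps a (1 - p_shift)
    fraction of its Poisson(lam) arrival stream and routes the rest to
    its nearest neighbour.  A server is 'overloaded' when its effective
    arrival rate exceeds its service rate mu."""
    rng = random.Random(seed)
    pos = sorted(rng.random() for _ in range(n_servers))
    load = [lam * (1.0 - p_shift)] * n_servers
    for i, x in enumerate(pos):
        # in d = 1 the nearest neighbour is one of the two adjacent servers
        cands = [j for j in (i - 1, i + 1) if 0 <= j < n_servers]
        j = min(cands, key=lambda k: abs(pos[k] - x))
        load[j] += lam * p_shift
    return sum(l > mu for l in load) / n_servers
```

Servers that happen to be the nearest neighbour of both adjacent servers receive twice the shifted traffic, which is the mechanism behind overload in this model.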
Citations: 0
An experimental study on beamforming architecture and full-duplex wireless across two operational outdoor massive MIMO networks
IF 1.0 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-09-27 | DOI: 10.1016/j.peva.2024.102447
Hadi Hosseini, Ahmed Almutairi, Syed Muhammad Hashir, Ehsan Aryafar, Joseph Camp
Full-duplex (FD) wireless communication refers to a communication system in which both ends of a wireless link transmit and receive data simultaneously in the same frequency band. One of the major challenges of FD communication is self-interference (SI), the interference caused by the transmitting elements of a radio to its own receiving elements. Fully digital beamforming has recently been repurposed to also reduce SI. However, the cost of fully digital systems increases dramatically with the number of antennas, as each antenna requires an independent Tx-Rx RF chain. Hybrid beamforming systems use a much smaller number of RF chains to feed the same number of antennas, and hence can significantly reduce deployment cost. In this paper, we aim to quantify the performance gap between these two radio architectures in terms of SI cancellation and system capacity in FD multi-user Multiple Input Multiple Output (MIMO) setups. We first obtained over-the-air channel measurement data on two outdoor massive MIMO deployments over the course of three months. We next study SoftNull and M-HBFD as two state-of-the-art transmit (Tx) beamforming based FD systems, and introduce two new joint transmit-receive (Tx-Rx) beamforming based FD systems named TR-FD2 and TR-HBFD for fully digital and hybrid radio architectures, respectively. We show that hybrid beamforming systems can achieve 80%–99% of the fully digital systems' capacity, depending on the number of users. Our results show that it is possible to obtain many of the benefits of fully digital massive MIMO systems with a hybrid beamforming architecture at a fraction of the cost.
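The idea behind transmit-beamforming-based SI reduction such as SoftNull, sacrificing some transmit degrees of freedom to avoid coupling into the self-interference channel, can be sketched with an SVD-based null-space projection. This is a textbook construction under an assumed, perfectly known SI channel matrix, not the authors' exact design:

```python
import numpy as np

def softnull_precoder(H_si, d):
    """Precode only in the d transmit directions that couple least into
    the self-interference channel H_si (its smallest right singular
    vectors), trading transmit degrees of freedom for SI suppression."""
    _, _, Vh = np.linalg.svd(H_si)   # full SVD: Vh is n_tx x n_tx
    return Vh[-d:].conj().T          # (n_tx, d) precoding basis

# toy setup: 4 receive and 8 transmit antennas, keep 3 transmit streams
H = np.random.default_rng(0).standard_normal((4, 8))
P = softnull_precoder(H, 3)
residual = np.linalg.norm(H @ P)     # residual self-interference power
```

With more transmit antennas than receive antennas, the SI channel has a non-trivial null space, so as long as `d` does not exceed its dimension the residual SI is numerically zero while `d` streams remain usable.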
Performance Evaluation, Vol. 166, Article 102447.
Citations: 0
Probabilistic performance evaluation of the class-A device in LoRaWAN protocol on the MAC layer
IF 1.0 | CAS Tier 4 (Computer Science) | JCR Q4 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2024-09-21 | DOI: 10.1016/j.peva.2024.102446
Mi Chen, Lynda Mokdad, Jalel Ben-Othman, Jean-Michel Fourneau
LoRaWAN is a network technology that provides a long-range wireless network while maintaining low energy consumption. It adopts the pure Aloha MAC protocol and a duty-cycle limitation on both uplink and downlink at the MAC layer to conserve energy, and it employs orthogonal parameters to mitigate collisions. However, synchronization in star-of-stars topology networks and the complicated collision mechanism make it challenging to conduct a quantitative performance evaluation of LoRaWAN. Our previous work proposed a Probabilistic Timed Automata (PTA) model to represent uplink transmission in LoRaWAN; a PTA is a mathematical model that combines nondeterministic and probabilistic choices with the passage of time. However, that model remained a work in progress. This study extends the PTA model to depict Class-A devices in the LoRaWAN protocol. The complete characteristics of LoRaWAN’s MAC layer, such as duty-cycle limits, bidirectional communication, and confirmed message transmission, are accurately modeled. Furthermore, a comprehensive collision model is integrated into the PTA. Various properties are verified using the probabilistic model checker PRISM, and quantitative properties are calculated under diverse scenarios. This quantitative analysis provides valuable insights into the performance and behavior of LoRaWAN networks under varying conditions.
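As a side note on the MAC assumption, the pure Aloha protocol that LoRaWAN adopts has a two-frame vulnerability window, so the collision probability at offered load G (frames per frame time) is 1 - exp(-2G). A quick Monte-Carlo check (a standalone sketch, unrelated to the paper's PTA/PRISM model itself) reproduces this:

```python
import numpy as np

def aloha_collision_prob(G, trials=200_000, seed=1):
    """Estimate the pure-Aloha collision probability at offered load G
    (frames per frame time). A frame collides iff at least one other
    arrival falls in its 2-frame vulnerability window, so the number of
    interferers is Poisson(2G); theory predicts 1 - exp(-2G)."""
    rng = np.random.default_rng(seed)
    interferers = rng.poisson(lam=2 * G, size=trials)
    return float(np.mean(interferers > 0))

print(aloha_collision_prob(0.5))  # theory: 1 - e^{-1} ≈ 0.632
```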
Performance Evaluation, Vol. 166, Article 102446.
Citations: 0
Optimal resource management for multi-access edge computing without using cross-layer communication
IF 1.0 | CAS Tier 4 (Computer Science) | JCR Q4 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2024-09-12 | DOI: 10.1016/j.peva.2024.102445
Ankita Koley, Chandramani Singh

We consider a Multi-access Edge Computing (MEC) system with a set of users, a base station (BS) with an attached MEC server, and a cloud server. The users can serve service requests locally or can offload them to the BS, which in turn can serve a subset of the offloaded requests at the MEC server and forward the rest to the cloud server. The user devices and the MEC server can be dynamically configured to serve different classes of services. Service requests offloaded to the BS incur offloading costs, and those forwarded to the cloud incur additional costs; the costs could represent service charges or delays. Aggregate cost minimization subject to stability requires optimal service scheduling and offloading at the users and the MEC server, at their application layers, and optimal uplink packet scheduling at the users’ MAC layers. Classical back-pressure (BP) based solutions entail cross-layer message exchange and hence are not viable. We propose virtual-queue-based drift-plus-penalty algorithms that are throughput optimal and achieve the optimal delay arbitrarily closely, yet do not require cross-layer communication. We first consider an MEC system without local computation and subsequently extend our framework to incorporate local computation as well. We demonstrate that the proposed algorithms offer almost the same performance as BP-based algorithms. These algorithms contain tunable parameters that offer a trade-off between the queue lengths at the users and the BS and the offloading costs.
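The drift-plus-penalty idea can be seen in a single-queue toy: each slot, pick the action that minimizes V times the immediate cost minus Q times the service it provides, so offloading is used only once the backlog outweighs its price. The single-user setting, rates, and costs below are invented for illustration; the paper's algorithms additionally handle multiple users, virtual queues, and MAC-layer scheduling.

```python
def drift_plus_penalty(arrivals, V=10.0, c_off=1.0, mu_local=1.0, mu_off=3.0):
    """Toy single-queue drift-plus-penalty scheduler (illustrative only).

    Each slot, choose the action minimizing V*cost - Q*service_rate:
    local service is free but slow (mu_local per slot); offloading is
    faster (mu_off per slot) but costs c_off each time it is used."""
    Q, total_cost, trace = 0.0, 0.0, []
    for a in arrivals:
        if V * c_off - Q * mu_off < -Q * mu_local:   # backlog outweighs the price
            served, total_cost = mu_off, total_cost + c_off
        else:                                        # serve locally for free
            served = mu_local
        Q = max(Q + a - served, 0.0)                 # queue update
        trace.append(Q)
    return trace, total_cost

trace, cost = drift_plus_penalty([2.0] * 100)
print(max(trace), cost)  # queue stays bounded near V*c_off/(mu_off - mu_local)
```

Larger V lowers the average offloading cost at the expense of a longer queue, mirroring the tunable cost/queue-length trade-off mentioned in the abstract.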

Performance Evaluation, Vol. 166, Article 102445.
Citations: 0
Efficient handling of sporadic messages in FlexRay
IF 1.0 | CAS Tier 4 (Computer Science) | JCR Q4 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | Pub Date: 2024-09-06 | DOI: 10.1016/j.peva.2024.102444
Sunil Kumar P.R., Manjunath A.S., Vinod V.

FlexRay is a high-bandwidth protocol that supports hard-deadline periodic and sporadic traffic in modern in-vehicle communication networks. The dynamic segment of a FlexRay cycle is used for transmitting hard-deadline sporadic messages. In this paper, we describe an algorithm to minimize the duration of the dynamic segment in a FlexRay cycle, yielding better results than existing algorithms in the literature. The proposed algorithm consists of two phases. In the first phase, we assume that a sporadic message instance contends for service with only one instance of each higher-priority message. The lower bound provided by the first phase serves as the initial guess for the number of mini-slots used in the second phase, where an exact scheduling analysis is performed. In the second phase, a sporadic message may contend for service with multiple instances of each higher-priority message. This two-phase approach is efficient because the first phase has low overhead and its estimate greatly reduces the number of iterations needed in the second phase. We conducted experiments using the dataset provided in the literature as well as the SAE benchmark dataset. The experimental results demonstrate superior bandwidth minimization and computational efficiency compared to other algorithms.
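The first phase admits a compact sketch: under the optimistic assumption that a sporadic message contends with at most one instance of each higher-priority message, message i's worst-case demand is the prefix sum of frame lengths, and the dynamic segment must cover the largest such demand that still meets every deadline. The frame lengths and deadlines below are hypothetical, and the exact phase-2 analysis with multiple higher-priority instances is omitted.

```python
def phase1_bound(frames):
    """Phase-1 style lower bound on dynamic-segment minislots (illustrative).

    frames: list of (length, deadline) pairs in minislots, highest priority
    first. Each message is assumed to contend with at most one instance of
    every higher-priority message, so message i's worst-case demand is the
    prefix sum of the first i+1 lengths."""
    demand, bound = 0, 0
    for length, deadline in frames:
        demand += length  # one instance of each higher-priority frame, plus own
        if demand > deadline:
            raise ValueError("infeasible even under the phase-1 assumption")
        bound = max(bound, demand)
    return bound

print(phase1_bound([(3, 5), (2, 8), (4, 12)]))  # prints 9
```

The exact phase-2 analysis then starts its iteration from this bound, which is why the low-overhead first phase pays off overall.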

Performance Evaluation, Vol. 166, Article 102444.
Citations: 0