All badges available in this process are awarded to the paper “Spatial/Temporal Locality-based Load-sharing in Speculative Discrete Event Simulation on Multi-core Machines”. The authors have uploaded their artifacts to Zenodo, which ensures long-term retention of the artifact; the paper thus receives the Artifacts Available badge. The artifact allows for easy re-running of the experiments behind 14 figures and 4 tables, and all of the dependencies are documented. The software in the artifact runs correctly with minimal intervention and is relevant to the paper, earning the Artifacts Evaluated–Functional badge. The experimental results are reproduced in 9 experiments, which merits the Results Reproduced badge. Furthermore, since the artifact is also available on GitHub, the paper is assigned the Artifacts Evaluated–Reusable badge.
{"title":"Reproducibility Report for the Paper:","authors":"Wen Jun Tan","doi":"10.1145/3674144","DOIUrl":"https://doi.org/10.1145/3674144","url":null,"abstract":"<p>All Badges available in this process are awarded to the paper “Spatial/Temporal Locality-based Load-sharing in Speculative Discrete Event Simulation on Multi-core Machines”. The authors have uploaded their artifacts to Zenodo, which ensures a long-term retention of the artifact. This paper can thus receive the <i>Artifacts Available</i> badge. The artifact allows for easy re-running of experiments for 14 figures and 4 tables. All of the dependencies are documented. The software in the artifact runs correctly with minimal intervention, and is relevant to the paper, earning the <i>Artifacts Evaluated–Functional</i> badge. The experimental results are reproduced in 9 experiments, which gains the <i>Results Reproduced</i> badge. Furthermore, since the artifact is also available on GitHub, the paper is assigned the <i>Artifacts Evaluated–Reusable</i> badge.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"86 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulating and predicting the performance of a distributed software system that works under stringent real-time constraints poses significant challenges, particularly when dealing with legacy systems in production use, where any disruption is intolerable. This challenge is exacerbated when the System Under Evaluation (SUE) operates within a resource-sharing environment, running concurrently with numerous other software components. In this paper, we introduce an innovative toolset designed for predicting the performance of such complex and time-critical software systems. Our toolset builds upon the RAST (Regression Analysis, Simulation, and load Testing) approach, significantly enhanced in this paper compared to its initial version. While current state-of-the-art methods for performance prediction often rely on data collected by Application Performance Monitoring (APM), the unavailability of APM tools for existing systems and the complexities associated with integrating them into legacy software necessitate alternative approaches. Our toolset therefore utilizes readily accessible system request logs as a substitute for APM data. We describe the enhancements made to the original RAST approach, outline the design and implementation of our RAST-based toolset, and showcase its simulation accuracy and effectiveness using the publicly available TeaStore benchmarking system. To ensure the reproducibility of our experiments, we provide open access to our toolset’s implementation and the TeaStore model we used.
{"title":"A Toolset for Predicting Performance of Legacy Real-Time Software Based on the RAST Approach","authors":"Juri Tomak, Sergei Gorlatch","doi":"10.1145/3673897","DOIUrl":"https://doi.org/10.1145/3673897","url":null,"abstract":"<p>Simulating and predicting the performance of a distributed software system that works under stringent real-time constraints poses significant challenges, particularly when dealing with legacy systems being in production use, where any disruption is intolerable. This challenge is exacerbated in the context of a System Under Evaluation (SUE) that operates within a resource-sharing environment, running concurrently with numerous other software components. In this paper, we introduce an innovative toolset designed for predicting the performance of such complex and time-critical software systems. Our toolset builds upon the RAST (<underline>R</underline>egression <underline>A</underline>nalysis, <underline>S</underline>imulation, and load <underline>T</underline>esting) approach, significantly enhanced in this paper compared to its initial version. While current state-of-the-art methods for performance prediction often rely on data collected by Application Performance Monitoring (APM), the unavailability of APM tools for existing systems and the complexities associated with integrating them into legacy software necessitate alternative approaches. Our toolset, therefore, utilizes readily accessible system request logs as a substitute for APM data. We describe the enhancements made to the original RAST approach, we outline the design and implementation of our RAST-based toolset, and we showcase its simulation accuracy and effectiveness using the publicly available TeaStore benchmarking system. To ensure the reproducibility of our experiments, we provide open access to our toolset’s implementation and the utilized TeaStore model.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"187 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel and distributed computing enable the execution of large and complex simulations. Yet, the usual separation of (headless) simulation execution and (subsequent, offline) output analysis often renders the simulation endeavor long and inefficient. Recently, Visual Interactive Simulation (VIS) tools and methods that address this end-to-end efficiency have been gaining relevance, offering in-situ visualization, real-time debugging, and computational steering. Here, the typically distributed nature of the simulation execution poses synchronization challenges between the headless simulation engine and the user-facing frontend required for Visual Interactive Simulation. To the best of our knowledge, state-of-the-art synchronization approaches fall short due to their rigidity and inability to adapt to real-time, user-centric changes. This paper introduces a novel adaptive algorithm that dynamically adjusts the simulation’s pacing through a buffer-based framework, informed by predictive workload analysis. Our extensive experimental evaluation across diverse synthetic scenarios illustrates our method’s effectiveness in enhancing runtime efficiency and synchronicity, significantly reducing end-to-end time while minimizing user interaction delays, thereby addressing key limitations of existing synchronization strategies.
{"title":"Adaptive Synchronization and Pacing Control for Visual Interactive Simulation","authors":"Zhuoxiao Meng, Mingyue Gao, Margherita Grossi, Anibal Siguenza-Torres, Stefano Bortoli, Christoph Sommer, Alois Knoll","doi":"10.1145/3673898","DOIUrl":"https://doi.org/10.1145/3673898","url":null,"abstract":"<p>Parallel and distributed computing enable the execution of large and complex simulations. Yet, the usual separation of (headless) simulation execution and (subsequent, offline) output analysis often renders the simulation endeavor long and inefficient. Recently, Visual Interactive Simulation (VIS) tools and methods that address this end-to-end efficiency are gaining relevance, offering <i>in-situ</i> visualization, real-time debugging, and computational steering. Here, the typically distributed computing nature of the simulation execution poses synchronization challenges between the headless simulation engine and the user-facing frontend required for Visual Interactive Simulation. To the best of our knowledge, state-of-the-art synchronization approaches fall short due to their rigidity and inability to adapt to real-time user-centric changes. This paper introduces a novel adaptive algorithm to dynamically adjust the simulation’s pacing through a buffer-based framework, informed by predictive workload analysis. Our extensive experimental evaluation across diverse synthetic scenarios illustrates our method’s effectiveness in enhancing runtime efficiency and synchronicity, significantly reducing end-to-end time while minimizing user interaction delays, thereby addressing key limitations of existing synchronization strategies.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"24 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation has become, in many application areas, a sine qua non. Most recently, COVID-19 has underlined the importance of simulation studies as well as the limitations of current practices and methods. We identify four goals of methodological work for addressing these limitations. The first is to provide better support for capturing, representing, and evaluating the context of simulation studies, including research questions, assumptions, requirements, and activities contributing to a simulation study. In addition, the composition of simulation models and other products of simulation studies must be supported beyond syntactical coherence, including aspects of semantics and purpose, enabling their effective reuse. A higher degree of automation will contribute to more systematic, standardized, and efficient simulation studies. Finally, it is essential to invest increased effort into effectively communicating results and the processes involved in simulation studies to enable their use in research and decision-making. These goals are not pursued independently of each other; they will benefit from, and sometimes even rely on, advances in the other subfields. In the present paper, we explore the basis and interdependencies evident in current research and practice and delineate future research directions based on these considerations.
{"title":"Context, Composition, Automation, and Communication - The C2AC Roadmap for Modeling and Simulation","authors":"Adelinde M Uhrmacher, Peter Frazier, Reiner Hähnle, Franziska Klügl, Fabian Lorig, Bertram Ludäscher, Laura Nenzi, Cristina Ruiz-Martin, Bernhard Rumpe, Claudia Szabo, Gabriel Wainer, Pia Wilsdorf","doi":"10.1145/3673226","DOIUrl":"https://doi.org/10.1145/3673226","url":null,"abstract":"<p>Simulation has become, in many application areas, a sine-qua-non. Most recently, COVID-19 has underlined the importance of simulation studies and limitations in current practices and methods. We identify four goals of methodological work for addressing these limitations. The first is to provide better support for capturing, representing, and evaluating the context of simulation studies, including research questions, assumptions, requirements, and activities contributing to a simulation study. In addition, the composition of simulation models and other simulation studies’ products must be supported beyond syntactical coherence, including aspects of semantics and purpose, enabling their effective reuse. A higher degree of automating simulation studies will contribute to more systematic, standardized simulation studies and their efficiency. Finally, it is essential to invest increased effort into effectively communicating results and the processes involved in simulation studies to enable their use in research and decision-making. These goals are not pursued independently of each other, but they will benefit from and sometimes even rely on advances in other subfields. In the present paper, we explore the basis and interdependencies evident in current research and practice and delineate future research directions based on these considerations.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"162 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monitoring of industrial processes is a critical capability in industry and in government to ensure reliability of production cycles, quick emergency response, and national security. Process monitoring allows users to gauge the progress of an organization in an industrial process or predict the degradation or aging of machine parts in processes taking place at a remote location. As in many data science applications, we usually have access only to limited raw data, such as satellite imagery, short video clips, event logs, and signatures captured by a small set of sensors. To combat data scarcity, we leverage the knowledge of Subject Matter Experts (SMEs) who are familiar with the actions of interest. SMEs provide expert knowledge of the essential activities required for task completion and the resources necessary to carry out each of these activities. Various process mining techniques have been developed for this type of analysis; typically, such approaches combine theoretical process models built from domain expert insights with ad-hoc integration of available pieces of raw data. Here, we introduce a novel, mathematically sound method that integrates theoretical process models (as proposed by SMEs) with interrelated minimal Hidden Markov Models (HMMs), built via nonnegative tensor factorization. Our method consolidates: (a) theoretical process models, (b) HMMs, (c) coupled nonnegative matrix-tensor factorizations, and (d) custom model selection. To demonstrate our methodology and its abilities, we apply it to simple synthetic and real-world process models.
{"title":"Generating Hidden Markov Models from Process Models Through Nonnegative Tensor Factorization","authors":"Erik Skau, Andrew Hollis, Stephan Eidenbenz, Kim Rasmussen, Boian Alexandrov","doi":"10.1145/3664813","DOIUrl":"https://doi.org/10.1145/3664813","url":null,"abstract":"<p>Monitoring of industrial processes is a critical capability in industry and in government to ensure reliability of production cycles, quick emergency response, and national security. Process monitoring allows users to gauge the progress of an organization in an industrial process or predict the degradation or aging of machine parts in processes taking place at a remote location. Similar to many data science applications, we usually only have access to limited raw data, such as satellite imagery, short video clips, event logs, and signatures captured by a small set of sensors. To combat data scarcity, we leverage the knowledge of Subject Matter Experts (SMEs) who are familiar with the actions of interest. SMEs provide expert knowledge of the essential activities required for task completion and the resources necessary to carry out each of these activities. Various process mining techniques have been developed for this type of analysis; typically such approaches combine theoretical process models built based on domain expert insights with ad-hoc integration of available pieces of raw data. Here, we introduce a novel mathematically sound method that integrates theoretical process models (as proposed by SMEs) with interrelated minimal Hidden Markov Models (HMM), built via nonnegative tensor factorization. Our method consolidates: (a) theoretical process models, (b) HMMs, (c) coupled nonnegative matrix-tensor factorizations, and (d) custom model selection. To demonstrate our methodology and its abilities, we apply it on simple synthetic and real world process models.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"22 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Driven by our work on a large-scale distributed microscopic road traffic simulator, we present ENHANCE, a novel re-partitioning approach that allows incorporating fine-grained simulator-specific cost models into the partitioning process to account for the actual performance characteristics of the simulator.
The use of explicit cost models enables partitioning for heterogeneous resources, which are a common occurrence in cloud deployments. Importantly, ENHANCE can be used in conjunction with other partitioning approaches by further enhancing their partitions according to the provided cost models. We demonstrate the benefits of our approach in an experimental evaluation showing performance improvements of up to 29% over METIS under heterogeneous conditions. Taking a different perspective, the partitioning produced by ENHANCE can provide performance similar to METIS while using up to 20% fewer resources.
{"title":"ENHANCE: Multilevel Heterogeneous Performance-Aware Re-Partitioning Algorithm For Microscopic Vehicle Traffic Simulation","authors":"Anibal Siguenza-Torres, Alexander Wieder, Zhuoxiao Meng, Santiago Narvaez Rivas, Mingyue Gao, Margherita Grossi, Xiaorui Du, Stefano Bortoli, Wentong Cai, Alois Knoll","doi":"10.1145/3670401","DOIUrl":"https://doi.org/10.1145/3670401","url":null,"abstract":"<p>Driven by our work on a large-scale distributed microscopic road traffic simulator, we present ENHANCE, a novel re-partitioning approach that allows incorporating fine-grained simulator-specific cost models into the partitioning process to account for the actual performance characteristics of the simulator. </p><p>The use of explicit cost models enables partitioning for heterogeneous resources, which are a common occurrence in cloud deployments. Importantly, ENHANCE can be used in conjunction with other partitioning approaches by further <i>enhancing</i> partitions according to provided cost models. We demonstrate the benefits of our approach in an experimental evaluation showing performance improvements of up to 29% against METIS under heterogeneous conditions. Taking a different perspective, the partitioning produced by ENHANCE can provide similar performance as METIS, but using up to 20% fewer resources.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"72 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141258991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The advent of Multi-Access Edge Computing (MEC) has enabled service providers to mitigate the high network latencies often encountered in accessing cloud services. The key idea of MEC involves service providers deploying containerized application services on MEC servers situated near Internet-of-Things (IoT) device users. The users access these services via wireless base stations with ultra-low latency. Computation tasks of IoT devices can then be executed either locally on the devices or on the MEC servers. A key cornerstone of the MEC environment is the offloading policy used to determine whether to execute computation tasks on IoT devices or to offload them to MEC servers for processing. In this work, we propose a two-phase Probabilistic Model Checking-based offloading policy catering to IoT device users’ preferences. The first phase evaluates the trade-offs between local and server execution, while the second phase evaluates the trade-offs between the choices of wireless communication bands for offloaded tasks. We present experimental results for practical scenarios on data gathered from an IoT test-bed setup with benchmark applications to show the benefits of an adaptive, preference-aware approach over conventional approaches in the MEC offloading context.
{"title":"Computation Offloading and Band Selection for IoT Devices in Multi-Access Edge Computing","authors":"Kaustabha Ray, Ansuman Banerjee","doi":"10.1145/3670400","DOIUrl":"https://doi.org/10.1145/3670400","url":null,"abstract":"<p>The advent of Multi-Access Edge Computing (MEC) has enabled service providers to mitigate high network latencies often encountered in accessing cloud services. The key idea of MEC involves service providers deploying containerized application services on MEC servers situated near Internet-of-Things (IoT) device users. The users access these services via wireless base stations with ultra low latency. Computation tasks of IoT devices can then either be executed locally on the devices or on the MEC servers. A key cornerstone of the MEC environment is an offloading policy utilized to determine whether to execute computation tasks on IoT devices or to offload the tasks to MEC servers for processing. In this work, we propose a two phase Probabilistic Model Checking based offloading policy catering to IoT device user preferences. The first stage evaluates the trade-offs between local vs server execution while the second stage evaluates the trade-offs between choice of wireless communication bands for offloaded tasks. We present experimental results in practical scenarios on data gathered from an IoT test-bed setup with benchmark applications to show the benefits of an adaptive preference-aware approach over conventional approaches in the MEC offloading context.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"24 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141259078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In delay-tolerant networks (DTNs) with uncertain contact plans, the communication episodes and their reliabilities are known a priori. To maximise the end-to-end delivery probability, a bounded network-wide number of message copies is allowed. The resulting multi-copy routing optimization problem is naturally modelled as a Markov decision process with distributed information. In this paper, we provide an in-depth comparison of three solution approaches: statistical model checking with scheduler sampling, the analytical RUCoP algorithm based on probabilistic model checking, and an implementation of concurrent Q-learning. We use an extensive benchmark set comprising random networks, scalable binomial topologies, and realistic ring-road low Earth orbit satellite networks. We evaluate the obtained message delivery probabilities as well as the computational effort. Our results show that all three approaches are suitable tools for obtaining reliable routes in DTNs, and they expose a trade-off between scalability and solution quality.
{"title":"Comparing Statistical, Analytical, and Learning-Based Routing Approaches for Delay-Tolerant Networks","authors":"Pedro R. D'Argenio, Juan Fraire, Arnd Hartmanns, Fernando Raverta","doi":"10.1145/3665927","DOIUrl":"https://doi.org/10.1145/3665927","url":null,"abstract":"<p>In delay-tolerant networks (DTNs) with uncertain contact plans, the communication episodes and their reliabilities are known a priori. To maximise the end-to-end delivery probability, a bounded network-wide number of message copies are allowed. The resulting multi-copy routing optimization problem is naturally modelled as a Markov decision process with distributed information. In this paper, we provide an in-depth comparison of three solution approaches: statistical model checking with scheduler sampling, the analytical RUCoP algorithm based on probabilistic model checking, and an implementation of concurrent Q-learning. We use an extensive benchmark set comprising random networks, scalable binomial topologies, and realistic ring-road low Earth orbit satellite networks. We evaluate the obtained message delivery probabilities as well as the computational effort. Our results show that all three approaches are suitable tools for obtaining reliable routes in DTN, and expose a trade-off between scalability and solution quality.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"162 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141149017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider the problem of finding the system with the best primary performance measure among a finite number of simulated systems in the presence of subjective stochastic constraints on secondary performance measures. When no feasible system exists, the decision maker may be willing to relax some constraint thresholds. We take multiple threshold values for each constraint as a user’s input and propose indifference-zone procedures that perform the phases of feasibility check and selection-of-the-best sequentially or simultaneously. Given that there is no change in the underlying simulated systems, our procedures recycle simulation observations to conduct feasibility checks across all potential thresholds. We prove that the proposed procedures yield the best system in the most desirable feasible region possible with at least a pre-specified probability. Our experimental results show that our procedures perform well with respect to the number of observations required to make a decision, compared with straightforward procedures that repeatedly solve the problem for each set of constraint thresholds, and that our simultaneously running procedure provides the best overall performance.
{"title":"Selection of the Best in the Presence of Subjective Stochastic Constraints","authors":"Yuwei Zhou, Sigrun Andradottir, Seong-Hee Kim","doi":"10.1145/3664814","DOIUrl":"https://doi.org/10.1145/3664814","url":null,"abstract":"<p>We consider the problem of finding a system with the best primary performance measure among a finite number of simulated systems in the presence of subjective stochastic constraints on secondary performance measures. When no feasible system exists, the decision maker may be willing to relax some constraint thresholds. We take multiple threshold values for each constraint as a user’s input and propose indifference-zone procedures that perform the phases of feasibility check and selection-of-the-best sequentially or simultaneously. Given that there is no change in the underlying simulated systems, our procedures recycle simulation observations to conduct feasibility checks across all potential thresholds. We prove that the proposed procedures yield the best system in the most desirable feasible region possible with at least a pre-specified probability. Our experimental results show that our procedures perform well with respect to the number of observations required to make a decision, as compared with straight-forward procedures that repeatedly solve the problem for each set of constraint thresholds, and that our simultaneously-running procedure provides the best overall performance.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"196 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140936857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an algorithm for determining the unknown rates in the sequential processes of a Stochastic Process Algebra (SPA) model, provided that the rates in the combined flat model are given. Such rate lifting is useful for model reverse engineering and model repair. Technically, the algorithm works by solving systems of nonlinear equations and, if necessary, adjusting the model’s synchronisation structure without changing its transition system. The adjustments augment a transition’s context and thus enable additional control over the transition rate. The complete pseudo-code of the rate lifting algorithm is included and discussed in the paper, and its practical usefulness is demonstrated in two case studies. The approach taken by the algorithm exploits some structural and behavioural properties of SPA systems, which are formulated here for the first time and could also be very beneficial in other contexts, such as compositional system verification.
{"title":"Rate Lifting for Stochastic Process Algebra by Transition Context Augmentation","authors":"Amin Soltanieh, Markus Siegle","doi":"10.1145/3656582","DOIUrl":"https://doi.org/10.1145/3656582","url":null,"abstract":"<p>This paper presents an algorithm for determining the unknown rates in the sequential processes of a Stochastic Process Algebra (SPA) model, provided that the rates in the combined flat model are given. Such a rate lifting is useful for model reverse engineering and model repair. Technically, the algorithm works by solving systems of nonlinear equations and – if necessary – adjusting the model’s synchronisation structure, without changing its transition system. The adjustments cause an augmentation of a transition’s context and thus enable additional control over the transition rate. The complete pseudo-code of the rate lifting algorithm is included and discussed in the paper, and its practical usefulness is demonstrated by two case studies. The approach taken by the algorithm exploits some structural and behavioural properties of SPA systems, which are formulated here for the first time and could be very beneficial also in other contexts, such as compositional system verification.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"79 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140600074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}