Background: Simulations play a central role in epidemiological analysis and the design of prophylactic measures. Spatially explicit, agent-based models provide temporo-geospatial information that cannot be obtained from traditional equation-based and individual-based epidemic models. Since simulation of large agent-based models is time consuming, optimistically synchronized parallel simulation holds considerable promise for significantly decreasing simulation execution times. Problem: Realizing efficient and scalable optimistic parallel simulations on modern distributed-memory supercomputers is a challenge due to the spatially explicit nature of agent-based models. Specifically, conceptual movement of agents results in a large number of inter-process messages, which significantly increases synchronization overheads and degrades overall performance. Proposed solution: To reduce inter-process messages, this paper proposes and experimentally evaluates two approaches involving single and multiple active-proxy agents. The Single Active Proxy (SAP) approach essentially accomplishes logical-process migration (without any support from the underlying simulation kernel) to reflect conceptual movement of the agents. The Multiple Active Proxy (MAP) approach improves upon SAP by utilizing multiple proxy agents at the boundaries between processes to further reduce inter-process messages, thereby improving scalability and performance. Experiments conducted using a range of models indicate that SAP provides a 200% improvement over the base case and MAP provides a further 15% to 25% improvement over SAP, depending on the model.
{"title":"Accelerating parallel agent-based epidemiological simulations","authors":"D. Rao","doi":"10.1145/2601381.2601387","DOIUrl":"https://doi.org/10.1145/2601381.2601387","url":null,"abstract":"Background: Simulations play a central role in epidemiological analysis and design of prophylactic measures. Spatially explicit, agent-based models provide temporo-geospatial information that cannot be obtained from traditional equation-based and individual-based epidemic models. Since, simulation of large agent-based models is time consuming, optimistically synchronized parallel simulation holds considerable promise to significantly decrease simulation execution times.\u0000 Problem: Realizing efficient and scalable optimistic parallel simulations on modern distributed memory supercomputers is a challenge due to the spatially-explicit nature of agent-based models. Specifically, conceptual movement of agents results in large number of inter-process messages which significantly increase synchronization overheads and degrades overall performance.\u0000 Proposed solution: To reduce inter-process messages, this paper proposes and experimentally evaluates two approaches involving single and multiple active-proxy agents. The Single Active Proxy (SAP) approach essentially accomplishes logical process migration (without any support from underlying simulation kernel) reflecting conceptual movement of the agents. The Multiple Active Proxy (MAP) approach improves upon SAP by utilizing multiple agents at boundaries between processes to further reduce inter-process messages thereby improving scalability and performance. The experiments conducted using a range of models indicate that SAP provides 200% improvement over the base case and MAP provides 15% to 25% improvement over SAP depending on the model.","PeriodicalId":255272,"journal":{"name":"SIGSIM Principles of Advanced Discrete Simulation","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124183278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weather forecasting and climate modeling are grand challenge problems because of the complexity and diversity of the processes that must be simulated. The Earth system modeling community is driven to finer resolution grids and faster execution times by the need to provide accurate weather and seasonal forecasts, long-term climate projections, and information about societal impacts such as droughts and floods. The models used in these simulations are generally written by teams of specialists, with each team focusing on a specific physical domain, such as the atmosphere, ocean, or sea ice. These specialized components are connected where their surfaces meet to form composite models that are largely self-consistent and allow for important cross-domain feedbacks. Since the components are often developed independently, there is a need for standard component interfaces and "coupling" software that transforms and transfers data so that outputs match expected inputs in the composite modeling system. The Earth System Modeling Framework (ESMF) project began in 2002 as a multi-agency effort to define a standard component interface and architecture, and to pool resources to develop shareable utilities for common functions such as grid remapping, time management, and I/O. The ESMF development team was charged with making the infrastructure sufficiently general to accommodate many different numerical approaches and legacy modeling systems, as well as making it reliable, portable, well-documented, accurate, and high performance. To satisfy this charge, the development team needed to develop innovative numerical and computational methods, a formal and rigorous approach to interoperability, and distributed development and testing processes that promote software quality. ESMF has evolved to become the leading U.S. framework in the climate and weather communities, with users including the Navy, NASA, the National Weather Service, and community models supported by the National Science Foundation. In this talk, we will present ESMF's evolution, approach, and future plans.
{"title":"The earth system modeling framework: interoperability infrastructure for high performance weather and climate models","authors":"C. DeLuca","doi":"10.1145/2601381.2611130","DOIUrl":"https://doi.org/10.1145/2601381.2611130","url":null,"abstract":"Weather forecasting and climate modeling are grand challenge problems because of the complexity and diversity of the processes that must be simulated. The Earth system modeling community is driven to finer resolution grids and faster execution times by the need to provide accurate weather and seasonal forecasts, long term climate projections, and information about societal impacts such as droughts and floods. The models used in these simulations are generally written by teams of specialists, with each team focusing on a specific physical domain, such as the atmosphere, ocean, or sea ice. These specialized components are connected where their surfaces meet to form composite models that are largely self-consistent and allow for important cross-domain feedbacks. Since the components are often developed independently, there is a need for standard component interfaces and \"coupling\" software that transforms and transfers data so that outputs match expected inputs in the composite modeling system. The Earth System Modeling Framework (ESMF) project began in 2002 as a multi-agency effort to define a standard component interface and architecture, and to pool resources to develop shareable utilities for common functions such as grid remapping, time management and I/O. The ESMF development team was charged with making the infrastructure sufficiently general to accommodate many different numerical approaches and legacy modeling systems, as well as making it reliable, portable, well-documented, accurate, and high performance. To satisfy this charge, the development team needed to develop innovative numerical and computational methods, a formal and rigorous approach to interoperability, and distributed development and testing processes that promote software quality.\u0000 ESMF has evolved to become the leading U.S. framework in the climate and weather communities, with users including the Navy, NASA, the National Weather Service, and community models supported by the National Science Foundation. In this talk, we will present ESMF's evolution, approach, and future plans.","PeriodicalId":255272,"journal":{"name":"SIGSIM Principles of Advanced Discrete Simulation","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115774244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The growing adoption of Fog computing for time-sensitive IoT applications helps enable real-time actions and enhances their efficiency and performance. Keeping data in the distributed Fog network brings the advantages and power of the Cloud closer to where the data are generated, while saving network bandwidth and reducing latency and operational costs. However, given the diversity of Fog nodes, the distribution of IoT systems, and data sharing, deciding how and where to place the produced data with low latency is a major challenge. Moreover, data placement based on a single replica cannot meet the data-access requirements of consumers located at different positions in the topology. Thus, in this paper, we propose a multi-objective optimization data placement model for a hybrid Fog-Cloud environment based on multiple data replicas. It aims to distribute data storage more effectively while optimizing overall system latency and storage usage, minimizing the number of data replicas, and supporting both full and partial data replication. Further, we propose a greedy algorithm, $iFogDP_h$, that assigns IoT data to appropriate data hosts in polynomial time while reducing the time required to transfer data for storage, access, and replication. We conducted experiments on iFogSim, a toolkit for modeling and simulating Fog environments. The experimental results show the effectiveness of our proposed solution in terms of latency, storage overhead, and the number of data replicas compared to existing strategies.
{"title":"An IoT-oriented Multiple Data Replicas Placement Strategy in Hybrid Fog-Cloud Environment","authors":"N. Salah, Narjès Bellamine Ben Saoud","doi":"10.1145/3437959.3459251","DOIUrl":"https://doi.org/10.1145/3437959.3459251","url":null,"abstract":"The growing adoption of Fog computing for the sensitive-time IoT applications allows to facilitate the real-time actions and to enhance their efficiency and performance. In fact, keeping the data in the distributed Fog network brings the advantages and power of the Cloud closer to where data are generated while saving network bandwidth and reducing latency and operational costs. However, due to the diversity of the Fog nodes, IoT system distribution and data sharing, how and where to place the produced data with low latency is a main challenge. Moreover, a data placement based on a single replica cannot meet the data access requirements of all data consumers that have different topology positions. Thus, in this paper, we propose a multi-objective optimization data placement model in a hybrid Fog-Cloud environment based on multiple data replicas. It aims to find better distributed data storage while optimizing the overall system latency and the used storage space by minimizing the data replicas and following full and partial data replication methods. Further, we propose a greedy algorithm $iFogDP_h$ which uses a refined method to find a solution for assigning the IoT data to the appropriate data hosts in polynomial time by reducing the time required to transfer data for storage, access and replication. We conducted the experiments on iFogSim, a toolkit for modeling and simulation of Fog environments. The experimental results show the effectiveness of our proposed solution in terms of latency, storage overhead and the number of data replicas compared to the existing strategies.","PeriodicalId":255272,"journal":{"name":"SIGSIM Principles of Advanced Discrete Simulation","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125456318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}