Pub Date : 2023-12-09 DOI: 10.1177/00375497231212543
Marcin Wozniak
Pedestrian traffic in a city fluctuates throughout the day due to a variety of factors. These variations can be understood using properly calibrated agent-based simulation models that capture the dynamics of pedestrian movement. However, despite their significance, such models are currently underrepresented in scientific discussions. In addition, acquiring real-world pedestrian localization data for model calibration poses challenges. To address these issues, this paper presents an agent-based model specifically designed to examine pedestrian traffic fluctuations at a mesoscale level. The model uses popular times data from the Google Places service and population data from a geographic information system (GIS) for accurate calibration. As a result, it effectively captures the real-world dynamics of pedestrian movement in the city center. By harnessing the advantages of agent-based modeling (ABM), the model generates several valuable insights into daily pedestrian traffic. It estimates the capacity and speed of pedestrian flows and determines the daily load within the simulated area. Moreover, it enables the identification of bottlenecks and of areas characterized by varying levels of pedestrian density. The model’s validation involves comparing its output with empirical studies and with pedestrian traffic data from selected points of interest (POIs). The model successfully captures key aspects of fundamental diagrams of pedestrian flow, and the simulated pedestrian dynamics closely align with Google Places popular times data for the chosen POIs. Overall, this research contributes to advancing pedestrian traffic management and optimizing public transport organization by employing empirically calibrated agent-based simulation models.
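As a rough illustration of the calibration idea described above, the sketch below scales hourly popular-times scores (0 to 100) into agent spawn counts for one point of interest. The hourly profile and the `base_population` parameter are invented for the example, not taken from the paper.

```python
# Hypothetical sketch: convert hourly Google "popular times" scores (0-100)
# into agent spawn counts for a mesoscale pedestrian model.

def hourly_spawn_counts(popular_times, base_population):
    """popular_times: 24 hourly scores in [0, 100];
    base_population: agents spawned at peak popularity (score 100)."""
    return [round(base_population * score / 100) for score in popular_times]

# A stylized weekday profile for one POI: quiet at night, peaking at midday.
profile = [0] * 7 + [20, 45, 70, 90, 100, 95, 80, 75, 85, 90, 70, 40, 25, 15, 10, 5, 0]
counts = hourly_spawn_counts(profile, base_population=500)
```

A real calibration would replace the stylized profile with per-POI popular-times data and weight the counts by GIS population figures.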
Title: From dawn to dusk: daily fluctuations in pedestrian traffic in the city center
Pub Date : 2023-12-04 DOI: 10.1177/00375497231212198
H. Khalil, G. Wainer
Carbon dioxide concentration in enclosed spaces is an air quality indicator that affects occupants’ well-being. To maintain healthy carbon dioxide levels indoors, enclosed-space settings must be adjusted to maximize air quality while minimizing energy consumption. Studying the effect of these settings on carbon dioxide concentration levels is not feasible through physical experimentation and data collection alone. This problem can be solved by using validated simulation models: generating indoor settings scenarios, simulating those scenarios, and studying the results. In previous work, we presented a formal Cellular Discrete Event System Specifications (Cell-DEVS) simulation model for studying carbon dioxide dispersion in rooms with various settings. However, designers may need to predict the results of altering large combinations of settings on air quality, and generating and simulating multiple scenarios with different combinations of space settings to test their effect on indoor air quality is time-consuming. In this research, we solve two problems, the lack of ground-truth data and the inefficiency of producing and studying simulation results for many combinations of settings, by proposing a novel framework. The framework utilizes a Cell-DEVS model, simulates different scenarios of enclosed spaces with various settings, and collects the simulation results to form a data set for training a deep neural network. Without needing to generate all possible scenarios, the trained deep neural network is used to predict unknown settings of the enclosed space when other settings are altered. The framework facilitates configuring enclosed spaces to enhance air quality. We illustrate the framework’s use through a case study.
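The core loop of such a framework can be sketched in a few lines: run a simulator over a grid of settings, collect (settings, CO2) pairs, and fit a predictor that answers queries without re-simulating. The paper trains a deep network on Cell-DEVS output; here a toy steady-state formula and a nearest-neighbour lookup stand in for both, purely for illustration, with made-up coefficients.

```python
def toy_co2(vent_rate, occupants):
    """Stand-in for a simulation run: steady-state ppm rises with occupancy
    and falls with ventilation (arbitrary illustrative coefficients)."""
    return 400 + 300 * occupants / (1 + vent_rate)

# Build a training set from simulated scenarios (the "ground truth" substitute).
data = [((v, o), toy_co2(v, o)) for v in range(5) for o in range(1, 6)]

def predict(vent_rate, occupants, dataset):
    """1-nearest-neighbour surrogate over the simulated scenarios."""
    _, co2 = min(dataset, key=lambda row: (row[0][0] - vent_rate) ** 2
                                          + (row[0][1] - occupants) ** 2)
    return co2
```

Replacing the lookup with a trained network gives the same interface: settings in, predicted concentration out, with no new simulation runs.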
Title: A framework for modeling, generating, simulating, and predicting carbon dioxide dispersion indoors using cell-DEVS and deep learning
Pub Date : 2023-12-04 DOI: 10.1177/00375497231214563
Jinfeng Zhong, Luyen Le Ngoc, E. Negre, Marie-Hélène Abel
Climate change has led to an increase in the frequency and intensity of natural disasters, necessitating the development of efficient crisis management strategies for population sheltering. However, existing research on this topic primarily focuses on the use of public resources such as ambulances and fire trucks, which may be insufficient due to high demand and impacted locations, worsening the shortage of resources. This research introduces an ontology-based crisis simulation system for population sheltering management that focuses on integrating citizen-volunteer drivers and vehicles into the evacuation process. Recognizing the limitations of public resources in current crisis management models, our approach incorporates citizen resources to enhance overall evacuation capacity. We develop an ontology to standardize crisis management knowledge, frame vehicle distribution as a recommendation problem, and design a simulation module incorporating a constraint-based recommender system. The proposed scenario illustrates how the simulation system can recommend citizen resources during crisis situations while respecting the constraints to be satisfied. With our system, we aim to help stakeholders prepare for various disaster scenarios by optimizing resource allocation and reducing the time decision-makers need to reach decisions.
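A constraint-based recommender of the kind described can be pictured as a filter-then-rank step: discard candidate vehicles that violate hard constraints, then order the survivors by a preference criterion. The fleet records, field names, and thresholds below are invented for illustration, not drawn from the system's ontology.

```python
# Hypothetical sketch of constraint-based vehicle recommendation.
def recommend(vehicles, needed_seats, max_km):
    # Hard constraints: enough seats, close enough to the pickup point.
    feasible = [v for v in vehicles
                if v["seats"] >= needed_seats and v["distance_km"] <= max_km]
    # Preference: closest feasible vehicle first.
    return sorted(feasible, key=lambda v: v["distance_km"])

fleet = [
    {"id": "ambulance-1", "seats": 2, "distance_km": 1.0},
    {"id": "volunteer-van", "seats": 8, "distance_km": 3.5},
    {"id": "volunteer-car", "seats": 4, "distance_km": 0.8},
]
ranked = recommend(fleet, needed_seats=4, max_km=5.0)
```

In the actual system the constraints would be drawn from the ontology (vehicle class, road state, shelter capacity) rather than hard-coded.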
Title: Ontology-based crisis simulation system for population sheltering management
Pub Date : 2023-12-04 DOI: 10.1177/00375497231214565
Salazar Javier Eduardo, Shih-Hsien Tseng
Several manufacturing industries try to reduce transportation waste using automated material handling systems, which can improve the transport of raw materials from one location to another in the production line of a manufacturing area. Transportation and job flow are critical factors in a production line because some production stations must wait for work-in-progress to be delivered. Automated guided vehicle (AGV) transportation requires traffic control over a factory’s physical infrastructure, and simulation can help reveal and evaluate deficiencies that could be improved in the real job-flow scenario of the production line. Design of experiments plays a central role in identifying and explaining variation in performance under systematically hypothesized conditions. A simulation model is implemented by adopting simplified AGV parameters. The model development follows the structure of system specification → machine specification → AGV specification → discrete-event simulation model → experimental design → analysis of performance indicators (PIs). To provide an alternative for evaluating the aforementioned issues, this study proposes the model stated above and an analysis based on the PIs. Analysis of variance (ANOVA) is used to analyze the different factors affecting the PIs. Using the factorial ANOVA results, the study examines one-way and two-way interactions among job flow time, AGV utilization, number of AGVs, and average waiting time.
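The factorial-design step feeding such an ANOVA can be illustrated with a minimal 2x2 example: each factor combination is simulated, and main effects are read off as differences of level means. The factor names, levels, and flow times below are invented stand-ins, not the study's data.

```python
# Hypothetical 2x2 factorial: (more AGVs?, faster dispatch rule?) -> mean job flow time.
runs = {
    (False, False): 42.0,
    (False, True): 37.0,
    (True, False): 30.0,
    (True, True): 26.0,
}

def main_effect(runs, factor_index):
    """Mean response at the high level minus mean response at the low level."""
    high = [y for levels, y in runs.items() if levels[factor_index]]
    low = [y for levels, y in runs.items() if not levels[factor_index]]
    return sum(high) / len(high) - sum(low) / len(low)

agv_effect = main_effect(runs, 0)   # negative: adding AGVs shortens flow time
rule_effect = main_effect(runs, 1)  # negative: the faster rule also shortens it
```

A full factorial ANOVA additionally partitions the variance across these effects and their interaction, with replications per cell to estimate error.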
Title: Design of experiment and simulation approach for analyzing automated guided vehicle performance indicators in a production line
Pub Date : 2023-11-19 DOI: 10.1177/00375497231208481
Zongfu Xie, Jinjin Liu, Yawei Ji, Wanwan Li, Chunxiao Dong, Bin Yang
With the rapid development of cognitive radio technology, multilayer heterogeneous cognitive radio computing platforms with large computing capacity, high throughput, ultralarge bandwidth, and ultralow latency have become a research hotspot. Targeting the core scheduling problems of multilayer heterogeneous computing platforms, this paper abstracts the bidirectional interconnection topology, node computing capacity, and internode communication capability of the heterogeneous computing platform into an undirected graph model, and abstracts the dependent nodes, their computing requirements, and the internode communication requirements of streaming tasks into a directed acyclic graph (DAG) model, thereby transforming the task-scheduling problem into the problem of deploying a DAG onto an undirected graph. To solve this graph model efficiently, the paper derives a component scheduling sequence from the dependencies of the functional components in streaming domain tasks. Then, following the scheduling sequence, agents guided by ant colony optimization (ACO) and Q-learning select functional components, deploy them to different computing nodes, calculate the scheduling cost, and search the solution space, adapting the scheduling algorithms to the intelligent scheduling of domain tasks. Accordingly, this paper proposes QACO, an ACO-based intelligent task-scheduling algorithm for domain tasks optimized with Q-learning. QACO uses the Q-table matrix of Q-learning as the initial pheromone of the ant colony algorithm, which not only mitigates the curse of dimensionality of the Q-learning algorithm but also accelerates the convergence of the ant colony scheduling algorithm, reduces the task-scheduling length, and further enhances the existing scheduling algorithm’s ability to search the solution space.
Based on randomly generated DAG domain task graphs, three experimental test scenarios are designed to verify the algorithm’s performance. The experimental results show that, compared with Q-learning, ACO, and the genetic algorithm (GA), the proposed algorithm improves the convergence speed of the solution by 72.3%, 63.4%, and 64% on average, reduces the scheduling length by 2.8%, 2.2%, and 0.9% on average, and increases the parallel acceleration ratio by 2.8%, 2.1%, and 0.9% on average, respectively. The practical application value of the algorithm is analyzed through a typical radar task simulation, although its load balancing needs further improvement.
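The seeding idea at the heart of QACO can be sketched briefly: shift a learned Q-table into strictly positive values and use it as the initial pheromone matrix, so early ants are biased toward node assignments Q-learning already found promising. The Q-values, matrix sizes, and the `floor` parameter below are illustrative assumptions, not the paper's configuration.

```python
# Toy sketch of seeding ACO pheromones from a Q-table.
def initial_pheromone(q_table, floor=0.1):
    """Shift Q-values so every pheromone level is strictly positive."""
    lo = min(min(row) for row in q_table)
    return [[q - lo + floor for q in row] for row in q_table]

def pick_node(pheromone_row, rng_value):
    """Roulette-wheel choice an ant makes over candidate compute nodes;
    rng_value is a uniform draw in [0, 1]."""
    total = sum(pheromone_row)
    acc = 0.0
    for node, level in enumerate(pheromone_row):
        acc += level / total
        if rng_value <= acc:
            return node
    return len(pheromone_row) - 1

q = [[0.5, -0.2, 1.3],   # Q-values: component 0 on nodes 0-2
     [0.0, 0.8, -0.1]]   # Q-values: component 1 on nodes 0-2
tau = initial_pheromone(q)
```

From this starting point, standard ACO evaporation and reinforcement updates take over, which is what lets the hybrid converge faster than either method alone.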
Title: ACO intelligent task scheduling algorithm based on Q-learning optimization in a multilayer cognitive radio platform
Pub Date : 2023-11-17 DOI: 10.1177/00375497231209998
Olivier Gillet, É. Daudé, Arnaud Saval, Clément Caron, P. Taillandier, P. Tranouez, Sebastien Rey-Coyrehourcq, J. Komorowski
The seismic and fumarolic activity of La Soufrière de Guadeloupe increased in 1992. Continuing unrest led the Observatoire volcanologique et sismologique de Guadeloupe (OVSG-IPGP) to recommend to the authorities in July 1999 that the volcano alert be set to “Vigilance” (yellow). The OVSG-IPGP has been particularly vigilant and reinforced its monitoring following another significant increase in unrest in 2017, which culminated in a magnitude 4.1 felt earthquake and a probable failed phreatic eruption. Volcanic activity remains difficult to forecast precisely, so the only way to stay safe in case of an impending eruption is to move away from the threatened area. This can be a major problem for the authorities and the population. In the French overseas departments, despite the presence of several volcanoes, there is limited experience in managing volcanic emergencies, especially in areas with a high population density and strategic assets, such as the Basse-Terre region of Guadeloupe. Therefore, it is crucial to devise and assess an emergency management strategy to identify potential problems and dangers that may arise during a mass evacuation. Crisis exercises can be planned to prepare the authorities and the population, but they are rarely carried out due to the human and resource costs involved. A series of evacuation scenarios are evaluated through simulations. The scenarios model staged and simultaneous evacuations with varying individual response times. The aim of this research is to evaluate the two main evacuation strategies defined in the current volcano emergency response plan for La Soufrière of Guadeloupe, revised in 2018 by the authorities. This paper describes a calibrated agent-based model of mass evacuation and its exploration, focusing on the potential staged evacuations of the southern Basse-Terre area.
The overall objectives of this research are to: (1) test the evacuation strategy of the current emergency plan, and (2) provide relevant information to stakeholders. The results of these simulations suggest that there is no significant difference between the two evacuation strategies. It is estimated that 95% of the population will be evacuated within 20 h with a simultaneous or a staged evacuation. Whatever the scenario, the simulation results show high levels of road congestion. However, the staged evacuation will significantly reduce the number of vehicles on the network during the peak time of the evacuation and therefore reduce dangerous situations and the potential for adding crises within a volcanic crisis.
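The congestion finding above has a simple mechanism behind it that a toy flow model can illustrate: staging zone departures spreads the same total demand over time, lowering the peak number of vehicles simultaneously on the network even when overall clearance time is similar. The zone sizes, departure windows, and travel time below are invented numbers, not Guadeloupe data.

```python
# Toy fluid model: each zone's vehicles depart uniformly over its window and
# spend travel_hours on the road; peak_load scans for the maximum en route.
def peak_load(zones, travel_hours):
    """zones: list of (start_hour, window_hours, vehicles)."""
    peak = 0.0
    for t in [h / 10 for h in range(300)]:  # scan 0-30 h in 0.1 h steps
        on_road = 0.0
        for start, window, vehicles in zones:
            departed = max(0.0, min(t - start, window)) / window * vehicles
            arrived = max(0.0, min(t - start - travel_hours, window)) / window * vehicles
            on_road += departed - arrived
        peak = max(peak, on_road)
    return peak

simultaneous = [(0, 4, 6000), (0, 4, 6000), (0, 4, 6000)]
staged = [(0, 4, 6000), (4, 4, 6000), (8, 4, 6000)]
```

With a 2-hour travel time, the simultaneous schedule puts three zones' worth of vehicles on the road at once, while the staged one keeps the peak to roughly a single zone's load, which is the effect the agent-based simulations observe on the real road network.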
Title: Modeling staged and simultaneous evacuation during a volcanic crisis of La Soufrière of Guadeloupe (France)
Pub Date : 2023-11-16 DOI: 10.1177/00375497231209996
Yue Zhang, Jie Tan
The development and popularization of new energy vehicles have become a global consensus. The shortage and unreasonable layout of electric vehicle charging infrastructure (EVCI) have severely restricted the development of electric vehicles. In the literature, many methods can be used to optimize the layout of charging stations (CSs) and produce good layout designs. However, more realistic evaluation and validation should be used to assess these layout options. This study proposes an agent-based simulation (ABS) model to evaluate EVCI layout designs and to simulate the driving and charging behaviors of electric taxis (ETs). In a case study of Shenzhen, China, global positioning system (GPS) trajectory data were used to extract the temporal and spatial patterns of ETs, which were then used to calibrate and validate the behavior of ETs in the simulation. The ABS model was developed in a geographic information system (GIS) context of an urban road network with 24-hour traveling-speed profiles to account for the effects of traffic conditions. After the high-resolution simulation, evaluation results for the performance of the EVCI and the behaviors of ETs can be provided both in detail and in summary. Sensitivity analysis demonstrates the accuracy of the simulation implementation and aids in understanding the effect of model parameters on system performance. Maximizing the time satisfaction of ET users and minimizing the workload variance of the EVCI were the two goals of a multiobjective layout optimization technique based on the Pareto frontier. Simulation evaluation shows that location plans for new CSs based on Pareto analysis can significantly improve both metrics.
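The Pareto-frontier step behind such a multiobjective layout search can be sketched as a dominance filter: keep every candidate plan for which no other plan is at least as good on both objectives and strictly comparable. The candidate (satisfaction, workload-variance) pairs below are made up for illustration.

```python
# Hypothetical sketch of Pareto filtering over charging-station layout plans.
def pareto_front(plans):
    """plans: list of (satisfaction, workload_variance);
    higher satisfaction and lower variance are better."""
    front = []
    for s, v in plans:
        dominated = any(s2 >= s and v2 <= v and (s2, v2) != (s, v)
                        for s2, v2 in plans)
        if not dominated:
            front.append((s, v))
    return front

candidates = [(0.9, 4.0), (0.8, 2.0), (0.7, 1.0), (0.6, 3.0), (0.85, 2.5)]
front = pareto_front(candidates)
```

The surviving plans are the trade-off set presented to planners; the dominated plan (0.6, 3.0) is discarded because another candidate beats it on both objectives.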
{"title":"A data-driven approach of layout evaluation for electric vehicle charging infrastructure using agent-based simulation and GIS","authors":"Yue Zhang, Jie Tan","doi":"10.1177/00375497231209996","DOIUrl":"https://doi.org/10.1177/00375497231209996","url":null,"abstract":"The development and popularization of new energy vehicles have become a global consensus. The shortage and unreasonable layout of electric vehicle charging infrastructure (EVCI) have severely restricted the development of electric vehicles. In the literature, many methods can be used to optimize the layout of charging stations (CSs) for producing good layout designs. However, more realistic evaluation and validation should be used to assess and validate these layout options. This study suggested an agent-based simulation (ABS) model to evaluate the layout designs of EVCI and simulate the driving and charging behaviors of electric taxis (ETs). In the case study of Shenzhen, China, geographical positioning system (GPS) trajectory data were used to extract the temporal and spatial patterns of ETs, which were then used to calibrate and validate the actions of ETs in the simulation. The ABS model was developed in a geographic information system (GIS) context of an urban road network with traveling speeds of 24 h to account for the effects of traffic conditions. After the high-resolution simulation, evaluation results of the performance of EVCI and the behaviors of ETs can be provided in detail and in summary. Sensitivity analysis demonstrates the accuracy of simulation implementation and aids in understanding the effect of model parameters on system performance. Maximizing the time satisfaction of ET users and reducing the workload variance of EVCI were the two goals of a multiobjective layout optimization technique based on the Pareto frontier. 
The location plans for the new CS based on Pareto analysis can significantly enhance both metrics through simulation evaluation.","PeriodicalId":501452,"journal":{"name":"SIMULATION","volume":"9 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139269361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
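The study's final step selects CS location plans from the Pareto frontier of two objectives: maximizing ET users' time satisfaction and minimizing EVCI workload variance. A minimal sketch of that non-dominated filtering is below; the candidate layouts and objective values are hypothetical illustrations, not data from the paper:

```python
# Hypothetical sketch of Pareto-frontier selection over two objectives:
# maximize time satisfaction (index 0), minimize workload variance (index 1).

def dominates(q, p):
    # q dominates p: no worse in both objectives, strictly better in at least one.
    return q[0] >= p[0] and q[1] <= p[1] and q != p

def pareto_front(points):
    # Keep only layouts that no other candidate dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy candidate CS layouts as (satisfaction, workload_variance) pairs.
candidates = [(0.9, 5.0), (0.8, 4.0), (0.7, 3.0), (0.6, 4.5), (0.85, 6.0)]
print(pareto_front(candidates))  # [(0.9, 5.0), (0.8, 4.0), (0.7, 3.0)]
```

A decision-maker would then pick one plan from this front according to how the two goals are weighted; every front member is a defensible trade-off.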
Some architects struggle to choose the best form for how a building meets the ground and may benefit from suggestions based on precedents. This paper presents a novel proof-of-concept workflow that enables machine learning (ML) to automatically classify three-dimensional (3D) prototypes with respect to the most appropriate building/ground relationship. Here, ML, a branch of artificial intelligence (AI), ascertains the most appropriate relationship from a set of examples provided by trained architects. Moreover, the system classifies 3D prototypes of architectural precedent models based on a topological graph rather than two-dimensional (2D) images. The system takes advantage of two primary technologies: the first is a software library that enhances the representation of 3D models through non-manifold topology (Topologic); the second is an end-to-end deep graph convolutional neural network (DGCNN). The experimental workflow in this paper consists of two stages. First, a generative simulation system for 3D prototypes of architectural precedents created a large synthetic database of building/ground relationships with numerous topological variations; each geometrical model was then converted into a semantically rich topological dual graph. Second, the prototype architectural graphs were imported into the DGCNN model for graph classification. While the use of a unique data set prevents direct comparison, our experiments have shown that the proposed workflow achieves highly accurate results that align with DGCNN's performance on benchmark graphs. This research demonstrates the potential of AI to help designers identify the topology of architectural solutions and place them within the most relevant architectural canons.
{"title":"Graph machine learning classification using architectural 3D topological models","authors":"Abdulrahman Alymani, Wassim Jabi, Padraig Corcoran","doi":"10.1177/00375497221105894","DOIUrl":"https://doi.org/10.1177/00375497221105894","url":null,"abstract":"<p>Some architects struggle to choose the best form of how the building meets the ground and may benefit from a suggestion based on precedents. This paper presents a novel proof of concept workflow that enables machine learning (ML) to automatically classify three-dimensional (3D) prototypes with respect to formulating the most appropriate building/ground relationship. Here, ML, a branch of artificial intelligence (AI), can ascertain the most appropriate relationship from a set of examples provided by trained architects. Moreover, the system classifies 3D prototypes of architectural precedent models based on a topological graph instead of 2D images. The system takes advantage of two primary technologies. The first is a software library that enhances the representation of 3D models through non-manifold topology (Topologic). The second is an end-to-end deep graph convolutional neural network (DGCNN). The experimental workflow in this paper consists of two stages. First, a generative simulation system for a 3D prototype of architectural precedents created a large synthetic database of building/ground relationships with numerous topological variations. This geometrical model then underwent conversion into semantically rich topological dual graphs. Second, the prototype architectural graphs were imported to the DGCNN model for graph classification. While using a unique data set prevents direct comparison, our experiments have shown that the proposed workflow achieves highly accurate results that align with DGCNN’s performance on benchmark graphs. 
This research demonstrates the potential of AI to help designers identify the topology of architectural solutions and place them within the most relevant architectural canons.</p>","PeriodicalId":501452,"journal":{"name":"SIMULATION","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138535723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
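The topological dual graph at the core of this workflow (nodes are cells of the non-manifold model; edges are faces shared between cells) can be sketched in miniature. The cell names and the degree feature below are illustrative stand-ins only, not the actual Topologic/DGCNN pipeline, which operates on much richer graphs and learned features:

```python
# Hypothetical sketch: a non-manifold 3D prototype reduced to its topological
# dual graph. Each node is a cell (e.g., a building volume or the ground volume)
# and each edge is a face shared by two cells. A real pipeline would extract
# this graph with Topologic and classify it with a DGCNN; here we only build
# the graph from an edge list and compute a trivial hand-crafted feature.

def dual_graph_degrees(shared_faces):
    """shared_faces: list of (cell_a, cell_b) pairs -> node degree per cell."""
    degrees = {}
    for a, b in shared_faces:
        degrees[a] = degrees.get(a, 0) + 1
        degrees[b] = degrees.get(b, 0) + 1
    return degrees

# Toy "building on a plinth" prototype: ground touches plinth, plinth touches tower.
faces = [("ground", "plinth"), ("plinth", "tower")]
print(dual_graph_degrees(faces))  # {'ground': 1, 'plinth': 2, 'tower': 1}
```

In the paper's setting, such graphs (with far richer node and edge attributes) are what distinguish one building/ground relationship from another, which is why graph classification is a better fit than classifying 2D images.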