{"title":"Cunetsim:基于GPU的大规模移动网络仿真测试平台","authors":"B. Bilel, Nikaein Navid","doi":"10.1109/ICCITECHNOL.2012.6285829","DOIUrl":null,"url":null,"abstract":"Most of the existing packet-level simulation tools are designed to perform experiments modeling small to medium scale networks. The main reason of this limitation is the amount of available computation power and memory in CPU-based simulation environments. To enable efficient packet-level simulation for large scale scenarios, we introduce a CPU-GPU co-simulation framework where synchronization and experiment design are performed in CPU and node's logical processes are executed in parallel in GPU according to the master/worker model. The framework is developed using the Compute-Unified Device Architecture (CUDA) API and denoted as Cunetsim, CUDA network simulator. In this work, we study the node mobility and connectivity as they are among the most time consuming task when large scale networks are simulated. Simulation results show that Cunetsim runtime remains stable and that it achieves significantly lower runtime than existing approaches when computing mobility and connectivity with no degradation in the accuracy of the results. Further, the connectivity is achieved up to 870 times faster than Sinalgo, which presents the best performances know until now.","PeriodicalId":435718,"journal":{"name":"2012 International Conference on Communications and Information Technology (ICCIT)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Cunetsim: A GPU based simulation testbed for large scale mobile networks\",\"authors\":\"B. Bilel, Nikaein Navid\",\"doi\":\"10.1109/ICCITECHNOL.2012.6285829\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most of the existing packet-level simulation tools are designed to perform experiments modeling small to medium scale networks. The main reason of this limitation is the amount of available computation power and memory in CPU-based simulation environments. To enable efficient packet-level simulation for large scale scenarios, we introduce a CPU-GPU co-simulation framework where synchronization and experiment design are performed in CPU and node's logical processes are executed in parallel in GPU according to the master/worker model. The framework is developed using the Compute-Unified Device Architecture (CUDA) API and denoted as Cunetsim, CUDA network simulator. In this work, we study the node mobility and connectivity as they are among the most time consuming task when large scale networks are simulated. Simulation results show that Cunetsim runtime remains stable and that it achieves significantly lower runtime than existing approaches when computing mobility and connectivity with no degradation in the accuracy of the results. 
Further, the connectivity is achieved up to 870 times faster than Sinalgo, which presents the best performances know until now.\",\"PeriodicalId\":435718,\"journal\":{\"name\":\"2012 International Conference on Communications and Information Technology (ICCIT)\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 International Conference on Communications and Information Technology (ICCIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCITECHNOL.2012.6285829\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 International Conference on Communications and Information Technology (ICCIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCITECHNOL.2012.6285829","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cunetsim: A GPU based simulation testbed for large scale mobile networks
Most existing packet-level simulation tools are designed for experiments that model small- to medium-scale networks. The main reason for this limitation is the limited computation power and memory available in CPU-based simulation environments. To enable efficient packet-level simulation of large-scale scenarios, we introduce a CPU-GPU co-simulation framework in which synchronization and experiment design are performed on the CPU while the nodes' logical processes are executed in parallel on the GPU, following the master/worker model. The framework is developed using the Compute Unified Device Architecture (CUDA) API and is denoted Cunetsim, the CUDA network simulator. In this work, we study node mobility and connectivity, as they are among the most time-consuming tasks when simulating large-scale networks. Simulation results show that Cunetsim's runtime remains stable and is significantly lower than that of existing approaches when computing mobility and connectivity, with no degradation in the accuracy of the results. Further, connectivity is computed up to 870 times faster than with Sinalgo, which offers the best performance known to date.
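To make the master/worker split described in the abstract concrete, the following is a minimal CUDA sketch, not the Cunetsim implementation: the host acts as the master that drives synchronized simulation rounds, while one GPU thread per node (worker) advances a simple linear mobility model and recomputes range-based connectivity by brute force. All names, the node state layout, and the parameter values are illustrative assumptions.

```cuda
// Hypothetical sketch of GPU-parallel mobility and connectivity computation.
// Not the authors' code: the mobility model, data layout, and O(n^2) neighbor
// search are placeholders chosen only to illustrate the master/worker pattern.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

struct Node {            // minimal per-node state (assumed layout)
    float x, y;          // position
    float vx, vy;        // velocity
};

// One thread per node: advance the position by one time step, reflecting
// the velocity at the boundary of a square deployment area.
__global__ void mobilityStep(Node* nodes, int n, float dt, float area)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    Node nd = nodes[i];
    nd.x += nd.vx * dt;
    nd.y += nd.vy * dt;
    if (nd.x < 0.f || nd.x > area) { nd.vx = -nd.vx; nd.x += nd.vx * dt; }
    if (nd.y < 0.f || nd.y > area) { nd.vy = -nd.vy; nd.y += nd.vy * dt; }
    nodes[i] = nd;
}

// One thread per node: count neighbors within a fixed radio range.
__global__ void connectivityStep(const Node* nodes, int n, float range2, int* degree)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int deg = 0;
    float xi = nodes[i].x, yi = nodes[i].y;
    for (int j = 0; j < n; ++j) {
        float dx = nodes[j].x - xi, dy = nodes[j].y - yi;
        if (j != i && dx * dx + dy * dy <= range2) ++deg;
    }
    degree[i] = deg;
}

int main()
{
    const int   n     = 100000;   // number of simulated nodes
    const float area  = 1000.f;   // side of the square deployment area (m)
    const float dt    = 1.f;      // time step (s)
    const float range = 50.f;     // radio range (m)

    Node* h_nodes = (Node*)malloc(n * sizeof(Node));
    for (int i = 0; i < n; ++i)
        h_nodes[i] = { area * rand() / RAND_MAX, area * rand() / RAND_MAX,
                       1.f - 2.f * rand() / RAND_MAX, 1.f - 2.f * rand() / RAND_MAX };

    Node* d_nodes;  int* d_degree;
    cudaMalloc(&d_nodes,  n * sizeof(Node));
    cudaMalloc(&d_degree, n * sizeof(int));
    cudaMemcpy(d_nodes, h_nodes, n * sizeof(Node), cudaMemcpyHostToDevice);

    dim3 block(256), grid((n + 255) / 256);
    for (int round = 0; round < 100; ++round) {      // master loop: synchronized rounds
        mobilityStep<<<grid, block>>>(d_nodes, n, dt, area);
        connectivityStep<<<grid, block>>>(d_nodes, n, range * range, d_degree);
        cudaDeviceSynchronize();                     // host-side synchronization point
    }

    int deg0 = 0;                                    // example: read back one result
    cudaMemcpy(&deg0, d_degree, sizeof(int), cudaMemcpyDeviceToHost);
    printf("degree of node 0 after 100 rounds: %d\n", deg0);

    cudaFree(d_nodes); cudaFree(d_degree); free(h_nodes);
    return 0;
}
```

The design point this sketch illustrates is that per-node mobility and connectivity are independent within a round, so they map naturally onto one GPU thread per node, with the CPU only synchronizing between rounds; a real simulator of this kind would replace the brute-force neighbor scan with a spatial data structure.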