{"title":"采用多CUDA设备加速LTL模型检查","authors":"J. Barnat, Petr Bauch, L. Brim, Milan Ceska","doi":"10.1109/ICPADS.2010.82","DOIUrl":null,"url":null,"abstract":"Recently, the CUDA technology has been used to accelerate many computation demanding tasks. For example, in our previous work we have shown how CUDA technology can be employed to accelerate the process of Linear Temporal Logic (LTL) Model Checking. While the raw computing power of a CUDA enabled device is tremendous, the applicability of the technology is quite often limited to small or middle-sized instances of the problems being solved. This is because the memory that a single device is equipped with, is simply not large enough to cope with large or realistic instances of the problem, which is also the case of our CUDA-aware LTL Model Checking solution. In this paper we suggest how to overcome this limitations by employing multiple (two in our case) CUDA devices for acceleration of our fine-grained communication-intensive parallel algorithm for LTL Model Checking.","PeriodicalId":365914,"journal":{"name":"2010 IEEE 16th International Conference on Parallel and Distributed Systems","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"23","resultStr":"{\"title\":\"Employing Multiple CUDA Devices to Accelerate LTL Model Checking\",\"authors\":\"J. Barnat, Petr Bauch, L. Brim, Milan Ceska\",\"doi\":\"10.1109/ICPADS.2010.82\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recently, the CUDA technology has been used to accelerate many computation demanding tasks. For example, in our previous work we have shown how CUDA technology can be employed to accelerate the process of Linear Temporal Logic (LTL) Model Checking. While the raw computing power of a CUDA enabled device is tremendous, the applicability of the technology is quite often limited to small or middle-sized instances of the problems being solved. This is because the memory that a single device is equipped with, is simply not large enough to cope with large or realistic instances of the problem, which is also the case of our CUDA-aware LTL Model Checking solution. 
In this paper we suggest how to overcome this limitations by employing multiple (two in our case) CUDA devices for acceleration of our fine-grained communication-intensive parallel algorithm for LTL Model Checking.\",\"PeriodicalId\":365914,\"journal\":{\"name\":\"2010 IEEE 16th International Conference on Parallel and Distributed Systems\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-12-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"23\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 IEEE 16th International Conference on Parallel and Distributed Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPADS.2010.82\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 IEEE 16th International Conference on Parallel and Distributed Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPADS.2010.82","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Employing Multiple CUDA Devices to Accelerate LTL Model Checking
Recently, the CUDA technology has been used to accelerate many computationally demanding tasks. For example, in our previous work we have shown how CUDA technology can be employed to accelerate the process of Linear Temporal Logic (LTL) Model Checking. While the raw computing power of a CUDA-enabled device is tremendous, the applicability of the technology is quite often limited to small or medium-sized instances of the problems being solved. This is because the memory a single device is equipped with is simply not large enough to cope with large, realistic instances of the problem, which is also the case for our CUDA-aware LTL Model Checking solution. In this paper we suggest how to overcome this limitation by employing multiple (two in our case) CUDA devices to accelerate our fine-grained, communication-intensive parallel algorithm for LTL Model Checking.
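To make the multi-device idea concrete, the following is a minimal CUDA sketch of partitioning a per-state workload across up to two devices, which is the general pattern the abstract describes. It is not the paper's actual algorithm: the kernel processStates, the doubling operation it performs, and the simple half-and-half partitioning are hypothetical placeholders, and the real fine-grained algorithm would additionally exchange cross-partition data between the devices in every iteration.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical placeholder kernel: in a real LTL model checker this would be
// the per-state work (e.g. successor generation or cycle-detection updates).
__global__ void processStates(const int *states, int *results, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        results[i] = states[i] * 2;   // stand-in for real per-state work
    }
}

int main() {
    const int N = 1 << 20;            // illustrative number of "states"
    int *hostStates = new int[N];
    int *hostResults = new int[N];
    for (int i = 0; i < N; ++i) hostStates[i] = i;

    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount == 0) { printf("no CUDA device found\n"); return 0; }
    int devices = deviceCount < 2 ? deviceCount : 2;  // use up to two devices

    int chunk = N / devices;
    int *dStates[2], *dResults[2];

    // Split the state array between the devices and launch one kernel on each.
    for (int d = 0; d < devices; ++d) {
        int offset = d * chunk;
        int count = (d == devices - 1) ? N - offset : chunk;
        cudaSetDevice(d);
        cudaMalloc(&dStates[d], count * sizeof(int));
        cudaMalloc(&dResults[d], count * sizeof(int));
        cudaMemcpyAsync(dStates[d], hostStates + offset, count * sizeof(int),
                        cudaMemcpyHostToDevice);
        int threads = 256;
        int blocks = (count + threads - 1) / threads;
        processStates<<<blocks, threads>>>(dStates[d], dResults[d], count);
    }

    // Gather the partial results back to the host; a real multi-device model
    // checker would also have to exchange cross-partition edges repeatedly.
    for (int d = 0; d < devices; ++d) {
        int offset = d * chunk;
        int count = (d == devices - 1) ? N - offset : chunk;
        cudaSetDevice(d);
        cudaMemcpy(hostResults + offset, dResults[d], count * sizeof(int),
                   cudaMemcpyDeviceToHost);
        cudaFree(dStates[d]);
        cudaFree(dResults[d]);
    }

    printf("processed %d states on %d device(s)\n", N, devices);
    delete[] hostStates;
    delete[] hostResults;
    return 0;
}

The point of the sketch is only the structure: each device owns its partition of the state space in its own memory, so the usable memory (and thus the tractable problem size) scales with the number of devices, at the cost of the inter-device communication that the paper's algorithm must handle.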