Preemption of a CUDA Kernel Function
Jon C. Calhoun, Hai Jiang
2012 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing
Published: 2012-08-08
DOI: 10.1109/SNPD.2012.53
Citations: 14
Abstract
As graphics processing units (GPUs) gain adoption as general-purpose parallel compute devices, several key problems must be addressed to make their use more practical and user friendly. One such problem is that kernel functions, the special functions designed to execute on GPUs, are non-preemptible. Once a kernel is issued to the GPU, it remains there until execution finishes or it is killed. If the kernel occupies all of the GPU's execution units, no other kernel can run. This paper proposes a way to preempt an executing kernel function: at some point in its execution, the kernel saves its state, halts, and frees the GPU's execution units so that other kernels can run. After a given amount of time, the halted kernel regains control of the GPU and completes its execution as if it had never been halted. Experimental results demonstrate the effectiveness of the proposed scheme.