{"title":"Approximate dynamic programming algorithms for United States Air Force officer sustainment","authors":"Joseph C. Hoecherl, M. Robbins, R. Hill, D. Ahner","doi":"10.1109/WSC.2016.7822341","DOIUrl":null,"url":null,"abstract":"We consider the problem of making accession and promotion decisions in the United States Air Force officer sustainment system. Accession decisions determine how many officers should be hired into the system at the lowest grade for each career specialty. Promotion decisions determine how many officers should be promoted to the next highest grade. We formulate a Markov decision process model to examine this military workforce planning problem. The large size of the problem instance motivating this research suggests that classical exact dynamic programming methods are inappropriate. As such, we develop and test approximate dynamic programming (ADP) algorithms to determine high-quality personnel policies relative to current practice. Our best ADP algorithm attains a statistically significant 2.8 percent improvement over the sustainment line policy currently employed by the USAF which serves as the benchmark policy.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 Winter Simulation Conference (WSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WSC.2016.7822341","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
We consider the problem of making accession and promotion decisions in the United States Air Force officer sustainment system. Accession decisions determine how many officers should be hired into the system at the lowest grade of each career specialty. Promotion decisions determine how many officers should be promoted to the next-highest grade. We formulate a Markov decision process model to examine this military workforce planning problem. The large size of the problem instance motivating this research suggests that classical exact dynamic programming methods are inappropriate. As such, we develop and test approximate dynamic programming (ADP) algorithms to determine high-quality personnel policies relative to current practice. Our best ADP algorithm attains a statistically significant 2.8 percent improvement over the sustainment line policy currently employed by the USAF, which serves as the benchmark policy.
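To make the abstract's approach concrete, the sketch below shows a toy grade-structured workforce MDP solved with a simple approximate-value-iteration ADP loop using a linear value function approximation. It is an illustration of the general technique only, not the authors' model or algorithm: the number of grades, target strengths, attrition rates, reward function, action grid, and basis functions are all assumptions introduced here for demonstration.

```python
"""
Illustrative sketch: a toy MDP for grade-structured workforce planning,
solved with approximate value iteration and a linear value function
approximation. All quantities below are hypothetical assumptions,
not the authors' formulation.
"""
import numpy as np

rng = np.random.default_rng(0)

GRADES = 3                                  # assumed number of officer grades
TARGET = np.array([100., 60., 30.])         # assumed authorized strength per grade
ATTRITION = np.array([0.10, 0.08, 0.12])    # assumed annual attrition rates
GAMMA = 0.95                                # discount factor

def step(state, accessions, promotions):
    """One period: attrition, then promotions, then accessions at the lowest grade."""
    survivors = rng.binomial(state.astype(int), 1.0 - ATTRITION).astype(float)
    next_state = survivors.copy()
    for g in range(GRADES - 1):
        p = min(promotions[g], next_state[g])
        next_state[g] -= p
        next_state[g + 1] += p
    next_state[0] += accessions
    reward = -np.abs(next_state - TARGET).sum()  # penalize deviation from targets
    return next_state, reward

def features(state):
    """Linear basis: bias, grade counts, and squared deviations from target."""
    return np.concatenate(([1.0], state, (state - TARGET) ** 2))

def candidate_actions():
    """A small assumed grid of accession and promotion levels to search over."""
    for acc in (0, 10, 20, 30):
        for p1 in (0, 5, 10):
            for p2 in (0, 3, 6):
                yield acc, np.array([p1, p2], dtype=float)

def greedy_action(state, theta, n_samples=5):
    """Pick the action maximizing sampled one-step reward plus approximate value."""
    best, best_val = None, -np.inf
    for acc, promo in candidate_actions():
        val = 0.0
        for _ in range(n_samples):
            s2, r = step(state, acc, promo)
            val += (r + GAMMA * features(s2) @ theta) / n_samples
        if val > best_val:
            best, best_val = (acc, promo), val
    return best, best_val

def approximate_value_iteration(iterations=20, states_per_iter=50):
    """Fit theta by regressing sampled Bellman targets on state features."""
    theta = np.zeros(1 + 2 * GRADES)
    for _ in range(iterations):
        X, y = [], []
        for _ in range(states_per_iter):
            s = rng.uniform(0.5, 1.5, GRADES) * TARGET  # sample states near targets
            _, bellman_target = greedy_action(s, theta)
            X.append(features(s))
            y.append(bellman_target)
        theta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return theta

if __name__ == "__main__":
    theta = approximate_value_iteration()
    (acc, promo), _ = greedy_action(TARGET.copy(), theta)
    print("accessions:", acc, "promotions:", promo)
```

In this sketch the value function approximation replaces the exact value table that classical dynamic programming would require, which is the basic reason ADP remains tractable when the state space (here, all possible grade-strength vectors) grows too large to enumerate.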