Dongrong Yang, Xin Wu, Xinyi Li, Ryan Mansfield, Yibo Xie, Qiuwen Wu, Q Jackie Wu, Yang Sheng
{"title":"利用深度强化学习对头颈部(HN)癌症进行强度调制放射治疗(IMRT)的自动治疗规划。","authors":"Dongrong Yang, Xin Wu, Xinyi Li, Ryan Mansfield, Yibo Xie, Qiuwen Wu, Q Jackie Wu, Yang Sheng","doi":"10.1088/1361-6560/ad965d","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>
To develop a deep reinforcement learning (DRL) agent to self-interact with the treatment planning system (TPS) to automatically generate intensity modulated radiation therapy (IMRT) treatment plans for head-and-neck (HN) cancer with consistent organ-at-risk (OAR) sparing performance.
Methods:
With IRB approval, one hundred and twenty HN patients receiving IMRT were included. The DRL agent was trained with 20 patients. During each inverse optimization process, the intermediate dosimetric endpoints' value, dose volume constraints value and structure objective function loss were collected as the DRL states. By adjusting the objective constraints as actions, the agent learned to seek optimal rewards by balancing OAR sparing and planning target volume (PTV) coverage. Reward computed from current dose-volume-histogram (DVH) endpoints and clinical objectives were sent back to the agent to update action policy during model training. The trained agent was evaluated with the rest 100 patients. 
Results:
The DRL agent was able to generate a clinically acceptable IMRT plan within 12.4±3.1 minutes without human intervention. DRL plans showed lower PTV maximum dose (109.2%) compared to clinical plans (112.4%) (p<.05). Average median dose of left parotid, right parotid, oral cavity, larynx, pharynx of DRL plans were 15.6Gy, 12.2Gy, 25.7Gy, 27.3Gy and 32.1Gy respectively, comparable to 17.1 Gy,15.7Gy, 24.4Gy, 23.7Gy and 35.5Gy of corresponding clinical plans. The maximum dose of cord+5mm, brainstem and mandible were also comparable between the two groups. In addition, DRL plans demonstrated reduced variability, as evidenced by smaller 95% confidence intervals. The total MU of the DRL plans was 1611 vs 1870 (p<.05) of clinical plans. The results signaled the DRL's consistent planning strategy compared to the planners' occasional back-and-forth decision-making during planning.
Conclusion:
The proposed deep reinforcement learning (DRL) agent is capable of efficiently generating HN IMRT plans with consistent quality. 
.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automated treatment planning with deep reinforcement learning for head-and-neck (HN) cancer intensity modulated radiation therapy (IMRT).\",\"authors\":\"Dongrong Yang, Xin Wu, Xinyi Li, Ryan Mansfield, Yibo Xie, Qiuwen Wu, Q Jackie Wu, Yang Sheng\",\"doi\":\"10.1088/1361-6560/ad965d\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>
To develop a deep reinforcement learning (DRL) agent to self-interact with the treatment planning system (TPS) to automatically generate intensity modulated radiation therapy (IMRT) treatment plans for head-and-neck (HN) cancer with consistent organ-at-risk (OAR) sparing performance.
Methods:
With IRB approval, one hundred and twenty HN patients receiving IMRT were included. The DRL agent was trained with 20 patients. During each inverse optimization process, the intermediate dosimetric endpoints' value, dose volume constraints value and structure objective function loss were collected as the DRL states. By adjusting the objective constraints as actions, the agent learned to seek optimal rewards by balancing OAR sparing and planning target volume (PTV) coverage. Reward computed from current dose-volume-histogram (DVH) endpoints and clinical objectives were sent back to the agent to update action policy during model training. The trained agent was evaluated with the rest 100 patients. 
Results:
The DRL agent was able to generate a clinically acceptable IMRT plan within 12.4±3.1 minutes without human intervention. DRL plans showed lower PTV maximum dose (109.2%) compared to clinical plans (112.4%) (p<.05). Average median dose of left parotid, right parotid, oral cavity, larynx, pharynx of DRL plans were 15.6Gy, 12.2Gy, 25.7Gy, 27.3Gy and 32.1Gy respectively, comparable to 17.1 Gy,15.7Gy, 24.4Gy, 23.7Gy and 35.5Gy of corresponding clinical plans. The maximum dose of cord+5mm, brainstem and mandible were also comparable between the two groups. In addition, DRL plans demonstrated reduced variability, as evidenced by smaller 95% confidence intervals. The total MU of the DRL plans was 1611 vs 1870 (p<.05) of clinical plans. The results signaled the DRL's consistent planning strategy compared to the planners' occasional back-and-forth decision-making during planning.
Conclusion:
The proposed deep reinforcement learning (DRL) agent is capable of efficiently generating HN IMRT plans with consistent quality. 
.</p>\",\"PeriodicalId\":20185,\"journal\":{\"name\":\"Physics in medicine and biology\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2024-11-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Physics in medicine and biology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1088/1361-6560/ad965d\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physics in medicine and biology","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1088/1361-6560/ad965d","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Automated treatment planning with deep reinforcement learning for head-and-neck (HN) cancer intensity modulated radiation therapy (IMRT).
Purpose:
To develop a deep reinforcement learning (DRL) agent to self-interact with the treatment planning system (TPS) to automatically generate intensity modulated radiation therapy (IMRT) treatment plans for head-and-neck (HN) cancer with consistent organ-at-risk (OAR) sparing performance.
Methods:
With IRB approval, one hundred and twenty HN patients who received IMRT were included. The DRL agent was trained with 20 patients. During each inverse optimization process, the intermediate dosimetric endpoint values, dose-volume constraint values and structure objective function losses were collected as the DRL states. By adjusting the objective constraints as actions, the agent learned to seek optimal rewards by balancing OAR sparing and planning target volume (PTV) coverage. Rewards computed from the current dose-volume histogram (DVH) endpoints and clinical objectives were sent back to the agent to update the action policy during model training. The trained agent was evaluated with the remaining 100 patients.
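The state-action-reward loop described above can be sketched in miniature. Everything below is an illustrative assumption, not the authors' implementation or any real TPS API: `ToyTPS` is a stand-in environment whose "optimization" is a toy formula, and the agent simply explores random constraint adjustments while the reward balances PTV coverage against OAR dose, as in the paper's description.

```python
import random

def reward(ptv_coverage, oar_dose, coverage_goal=0.95, oar_limit=26.0):
    """Reward penalizing PTV coverage shortfall and OAR dose excess (illustrative)."""
    coverage_penalty = max(0.0, coverage_goal - ptv_coverage)
    sparing_penalty = max(0.0, oar_dose - oar_limit) / oar_limit
    return -(coverage_penalty + sparing_penalty)

class ToyTPS:
    """Toy stand-in for the treatment planning system's inverse optimization:
    one adjustable OAR objective constraint, one PTV."""
    def __init__(self):
        self.oar_constraint = 40.0  # OAR objective constraint in Gy (hypothetical)

    def act(self, delta):
        # Action: tighten or relax the OAR objective constraint
        self.oar_constraint = max(5.0, self.oar_constraint + delta)

    def optimize(self):
        # Pretend dynamics: a tighter constraint lowers OAR dose but erodes coverage
        oar_dose = 0.7 * self.oar_constraint
        ptv_coverage = min(1.0, 0.80 + 0.005 * self.oar_constraint)
        return ptv_coverage, oar_dose

random.seed(0)
env = ToyTPS()
best = -float("inf")
for episode in range(200):
    env.act(random.choice([-2.0, 0.0, 2.0]))   # explore constraint adjustments
    cov, dose = env.optimize()
    best = max(best, reward(cov, dose))        # track the best trade-off found
```

A real agent would replace the random action choice with a learned policy updated from these rewards; the sketch only shows how constraint adjustments, re-optimization, and reward feedback fit together.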
Results:
The DRL agent was able to generate a clinically acceptable IMRT plan within 12.4±3.1 minutes without human intervention. DRL plans showed a lower PTV maximum dose (109.2%) than clinical plans (112.4%) (p<.05). The average median doses to the left parotid, right parotid, oral cavity, larynx and pharynx in DRL plans were 15.6 Gy, 12.2 Gy, 25.7 Gy, 27.3 Gy and 32.1 Gy, respectively, comparable to 17.1 Gy, 15.7 Gy, 24.4 Gy, 23.7 Gy and 35.5 Gy in the corresponding clinical plans. The maximum doses to the cord+5mm, brainstem and mandible were also comparable between the two groups. In addition, DRL plans demonstrated reduced variability, as evidenced by smaller 95% confidence intervals. The total MU of the DRL plans was 1611 vs 1870 for clinical plans (p<.05). These results signal the DRL agent's consistent planning strategy, in contrast to planners' occasional back-and-forth decision-making during planning.
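The variability claim rests on 95% confidence intervals of the plan metrics. A minimal sketch of how such intervals could be computed, using the normal approximation and made-up dose values (not the study's data):

```python
import math

def ci95(values):
    """Normal-approximation 95% confidence interval for the mean of a sample."""
    n = len(values)
    mean = sum(values) / n
    # Sample variance (n - 1 denominator); zero for a constant sample
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    half = 1.96 * math.sqrt(var / n)   # half-width of the interval
    return mean - half, mean + half

# Illustrative only: a less variable set of plan doses yields a narrower interval
consistent_plans = [9.0, 10.0, 11.0]   # hypothetical median doses (Gy)
variable_plans = [5.0, 10.0, 15.0]
```

With a sample this small a t-based interval would be more appropriate; the normal approximation is used here only to keep the sketch short.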
Conclusion:
The proposed deep reinforcement learning (DRL) agent is capable of efficiently generating HN IMRT plans with consistent quality.
Journal description:
The development and application of theoretical, computational and experimental physics to medicine, physiology and biology. Topics covered are: therapy physics (including ionizing and non-ionizing radiation); biomedical imaging (e.g. x-ray, magnetic resonance, ultrasound, optical and nuclear imaging); image-guided interventions; image reconstruction and analysis (including kinetic modelling); artificial intelligence in biomedical physics and analysis; nanoparticles in imaging and therapy; radiobiology; radiation protection and patient dose monitoring; radiation dosimetry