Adaptive Learning of Drug Quality and Optimization of Patient Recruitment for Clinical Trials with Dropouts

Zhili Tian, Weidong Han, Warren B. Powell
DOI: 10.1287/MSOM.2020.0936
Journal: Manufacturing & Service Operations Management (Manuf. Serv. Oper. Manag.), 37(1), pp. 580–599
Published: 2021-03-24 (Journal Article)
Citations: 4

Abstract

Problem definition: Clinical trials are crucial to new drug development. This study investigates optimal patient enrollment in clinical trials with interim analyses, which are analyses of treatment responses from patients at intermediate points. Our model considers uncertainties in patient enrollment and in drug treatment effectiveness, and it weighs the benefit of completing a trial early against the cost of accelerating it by maximizing the net present value of the drug's cumulative profit.

Academic/practical relevance: Clinical trials frequently account for the largest cost in drug development, and patient enrollment is an important problem in trial management. Our study develops a dynamic program that accurately captures the dynamics of the problem to optimize patient enrollment while learning the treatment effectiveness of the investigated drug.

Methodology: The model explicitly captures both the physical state (enrolled patients) and belief states about the effectiveness of the investigated drug and a standard treatment drug. Using Bayesian updates and dynamic programming, we establish monotonicity of the value function in the state variables and characterize an optimal enrollment policy. We also introduce, for the first time, the use of backward approximate dynamic programming (ADP) for this problem class. We illustrate the findings using a clinical trial program from a leading firm and perform sensitivity analyses of the input parameters on the optimal enrollment policy.

Results: The value function is monotonic in cumulative patient enrollment and in the average treatment responses for the investigated drug and the standard treatment drug. The optimal enrollment policy is nondecreasing in the average response from patients using the investigated drug and nonincreasing in cumulative patient enrollment in the periods between two successive interim analyses. By exploiting the monotonicity of the value function, the forward ADP algorithm reduced the run time from 1.5 months for the exact method to one day (and the backward ADP algorithm to 20 minutes), while remaining within 4% of the exact solution. Through an application to a leading firm's clinical trial program, the study demonstrates that the firm can realize a sizable gain in drug profit by following the optimal policy our model provides.

Managerial implications: We developed a new model for improving the management of clinical trials. Our study provides insights into the optimal policy and into the sensitivity of the value function to the dropout rate and the prior probability distribution. A firm can realize a sizable gain in the drug's profit by managing its trials using the optimal policies and the properties of the value function. We also illustrate that firms can use the ADP algorithms to develop their patient enrollment strategies.
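The abstract describes learning a drug's effectiveness via Bayesian updates to a belief state between interim analyses. As an illustrative sketch only (the paper's actual priors, state variables, and update equations are not given here), a standard conjugate Beta-Bernoulli update of a belief about a binary treatment-response rate might look like:

```python
# Illustrative sketch, NOT the paper's model: a Beta-Bernoulli belief update
# of the kind used to learn a response rate from interim patient outcomes.
# alpha/beta are prior pseudo-counts; responses is a 0/1 list of outcomes
# observed since the last interim analysis. All names are hypothetical.

def update_belief(alpha, beta, responses):
    """Conjugate Bayesian update: posterior is Beta(alpha + successes,
    beta + failures) after observing binary treatment responses."""
    successes = sum(responses)
    failures = len(responses) - successes
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    """Point estimate of the response rate under the current belief."""
    return alpha / (alpha + beta)

# Example: start from a uniform prior Beta(1, 1); 7 of 10 patients respond.
a, b = update_belief(1.0, 1.0, [1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
print(posterior_mean(a, b))  # 8/12 ≈ 0.667
```

In a dynamic program of the kind the abstract outlines, the pair (alpha, beta) for each drug would serve as the belief-state component, updated at each interim analysis alongside the physical state of cumulative enrollment.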