Reinforcement Learning Data-Acquiring for Causal Inference of Regulatory Networks.

Mohammad Alali, Mahdi Imani
{"title":"Reinforcement Learning Data-Acquiring for Causal Inference of Regulatory Networks.","authors":"Mohammad Alali,&nbsp;Mahdi Imani","doi":"10.23919/acc55779.2023.10155867","DOIUrl":null,"url":null,"abstract":"<p><p>Gene regulatory networks (GRNs) consist of multiple interacting genes whose activities govern various cellular processes. The limitations in genomics data and the complexity of the interactions between components often pose huge uncertainties in the models of these biological systems. Meanwhile, inferring/estimating the interactions between components of the GRNs using data acquired from the normal condition of these biological systems is a challenging or, in some cases, an impossible task. Perturbation is a well-known genomics approach that aims to excite targeted components to gather useful data from these systems. This paper models GRNs using the Boolean network with perturbation, where the network uncertainty appears in terms of unknown interactions between genes. Unlike the existing heuristics and greedy data-acquiring methods, this paper provides an optimal Bayesian formulation of the data-acquiring process in the reinforcement learning context, where the actions are perturbations, and the reward measures step-wise improvement in the inference accuracy. We develop a semi-gradient reinforcement learning method with function approximation for learning near-optimal data-acquiring policy. The obtained policy yields near-exact Bayesian optimality with respect to the entire uncertainty in the regulatory network model, and allows learning the policy offline through planning. We demonstrate the performance of the proposed framework using the well-known p53-Mdm2 negative feedback loop gene regulatory network.</p>","PeriodicalId":74510,"journal":{"name":"Proceedings of the ... American Control Conference. American Control Conference","volume":"2023 ","pages":"3957-3964"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10382224/pdf/nihms-1914206.pdf","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... American Control Conference. American Control Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/acc55779.2023.10155867","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/7/3 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Gene regulatory networks (GRNs) consist of multiple interacting genes whose activities govern various cellular processes. The limitations of genomics data and the complexity of the interactions between components often introduce large uncertainties into models of these biological systems. Meanwhile, inferring/estimating the interactions between components of GRNs using data acquired under the normal condition of these biological systems is a challenging or, in some cases, impossible task. Perturbation is a well-known genomics approach that excites targeted components in order to gather informative data from these systems. This paper models GRNs using the Boolean network with perturbation, where the network uncertainty appears in terms of unknown interactions between genes. Unlike existing heuristic and greedy data-acquiring methods, this paper provides an optimal Bayesian formulation of the data-acquiring process in the reinforcement learning context, where the actions are perturbations and the reward measures the step-wise improvement in inference accuracy. We develop a semi-gradient reinforcement learning method with function approximation for learning a near-optimal data-acquiring policy. The obtained policy achieves near-exact Bayesian optimality with respect to the entire uncertainty in the regulatory network model and can be learned offline through planning. We demonstrate the performance of the proposed framework on the well-known p53-Mdm2 negative feedback loop gene regulatory network.
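The abstract names two computational ingredients: a Boolean network with perturbation (BNp) as the GRN model, and a semi-gradient reinforcement learning update with (linear) function approximation for learning the data-acquiring policy. The sketch below is a minimal illustration of those two pieces under stated assumptions, not the authors' implementation: the 3-gene regulatory functions, the flip probability, the feature map, and the placeholder reward are all hypothetical stand-ins (in the paper, the reward measures the step-wise improvement in inference accuracy, and actions are chosen by the learned policy rather than at random).

```python
# Minimal sketch: Boolean network with perturbation (BNp) + semi-gradient TD(0)
# with a linear value approximator. All network details are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

N_GENES = 3      # hypothetical network size
P_FLIP = 0.01    # per-gene random perturbation probability


def regulatory_update(state):
    """Hypothetical Boolean regulatory functions for a 3-gene network."""
    x0, x1, x2 = state
    return np.array([
        x1 & (1 - x2),   # gene 0: activated by gene 1, repressed by gene 2
        x0 | x2,         # gene 1: activated by gene 0 or gene 2
        x0,              # gene 2: follows gene 0
    ], dtype=np.int8)


def bnp_step(state, action=None):
    """One BNp transition: regulatory update, optional forced perturbation
    of a single gene (the data-acquiring action), then random gene flips."""
    next_state = regulatory_update(state)
    if action is not None:
        gene, value = action
        next_state[gene] = value
    flips = rng.random(N_GENES) < P_FLIP
    return np.where(flips, 1 - next_state, next_state).astype(np.int8)


def features(state):
    """Hypothetical feature map phi(s) for the linear approximator."""
    return np.concatenate(([1.0], state.astype(float)))


def semi_gradient_td0(num_steps=1000, alpha=0.05, gamma=0.95):
    """Semi-gradient TD(0) with v(s) = w^T phi(s).

    The reward below (number of active genes) is only a placeholder for the
    paper's inference-accuracy improvement reward."""
    w = np.zeros(N_GENES + 1)
    state = rng.integers(0, 2, N_GENES).astype(np.int8)
    for _ in range(num_steps):
        action = (rng.integers(N_GENES), rng.integers(2))   # random perturbation
        next_state = bnp_step(state, action)
        reward = float(next_state.sum())                    # placeholder reward
        td_error = reward + gamma * w @ features(next_state) - w @ features(state)
        w += alpha * td_error * features(state)             # semi-gradient step
        state = next_state
    return w


if __name__ == "__main__":
    print("learned weights:", semi_gradient_td0())
```

The semi-gradient update treats the bootstrapped target as fixed (no gradient flows through `features(next_state)`), which is the defining property of semi-gradient methods; swapping the placeholder reward and random action choice for an inference-accuracy reward and a learned perturbation policy would bring the sketch closer to the setting described in the abstract.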
