Are all models wrong?

Heiko Enderling, Olaf Wolkenhauer
{"title":"是不是所有的模型都错了?","authors":"Heiko Enderling,&nbsp;Olaf Wolkenhauer","doi":"10.1002/cso2.1008","DOIUrl":null,"url":null,"abstract":"<p>Mathematical modeling in cancer is enjoying a rapid expansion [<span>1</span>]. For collegial discussion across disciplines, many—if not all of us—have used the aphorism that “<i>All models are wrong, but some are useful</i>” [<span>2</span>]. This has been a convenient approach to justify and communicate the praxis of modeling. This is to suggest that the <i>usefulness</i> of a model is not measured by the accuracy of representation but how well it supports the generation, testing, and refinement of hypotheses. A key insight is not to focus on the model as an outcome, but to consider the modeling process and simulated model predictions as “ways of thinking” about complex nonlinear dynamical systems [<span>3</span>]. Here, we discuss the convoluted interpretation of <i>models being wrong</i> in the arena of predictive modeling.</p><p>“<i>All models are wrong, but some are useful</i>” emphasizes the value of abstraction in order to gain insight. While abstraction clearly implies misrepresentation, it allows to explicitly define model assumptions and interpret model results within these limitations – <i>Truth emerges more readily from error than from confusion</i> [<span>4</span>]. It is thus the process of modeling and the discussions about model assumptions that are often considered most valuable in interdisciplinary research. They provide a way of thinking about complex systems and mechanisms underlying observations. Abstractions are being made in cancer biology for every experiment in each laboratory around the world. In vitro cell lines or in vivo mouse experiments are abstractions of complex adaptive evolving human cancers in the complex adaptive dynamic environment called the patient. These \"wet lab\" experiments akin to \"dry lab\" mathematical models offer confirmation or refutation of hypotheses and results, which have to be prospectively evaluated in clinical trials before conclusions can be generalized beyond the abstracted assumptions. The key for any model—mathematical, biological, or clinical—to succeed is an iterative cycle of data-driven modeling and model-driven experimentation [<span>5, 6</span>]. The value of such an effort lies in the insights about mechanisms that can then be attributed to the considered variables [<span>7</span>]. With simplified representations of a system one can learn about the emergence of general patterns, like the occurrence of oscillations, bistability, or chaos [<span>8-10</span>].</p><p>In this context, Alan Turing framed the purpose of a mathematical model in his seminal paper about “The chemical basis of morphogenesis” [<span>11</span>] with “<i>This model will be a simplification and an idealization, and consequently a falsification. It is to be hoped that the features retained for discussion are those of greatest importance in the present state of knowledge</i>.” For many mathematical biology models that are built to explore, test, and generate hypotheses about emerging dynamics, this remains true. 
“<i>Wrong models</i>” allow us to reevaluate our assumptions, and the lessons learned from these discussions can help formulate revised models and improve our understanding of the underlying dynamics.</p><p>However, mathematical oncology models are deployed not only to simulate emergent properties of complex systems to generate, test, and refine hypotheses, but increasingly also with the intent to make predictions—often how an individual cancer patient will respond to a specific treatment [1]. For predictive modeling, the aphorism “<i>All models are wrong</i>” becomes awkward. In the predictive modeling arena, a <i>useful</i> model should <i>not</i> be <i>wrong</i>. A major hurdle in the application of predictive modeling, in general and in oncology in particular, is communication of model purpose and prediction uncertainty, and how likelihood and risks are interpreted by the end user. With limited data available about a complex adaptive evolving system, “forecasting failures” are common when events that are not represented in the data dominate the subsequent behavior (such as emergence of treatment resistance not being represented in pre-treatment dynamics). If predictive models are trained on historic data but with little patient-specific data over multiple time points, what role could predictive models play in oncology?</p><p>Computer simulations of mathematical models that are based on limited data are merely visualizing plausible disease trajectories forward in time. Predictions could then be made from analyzing the possible trajectories using multiple plausible parameter combinations, from either a single model or multiple models with competing assumptions and different weighting of likely important factors. While in some domains, such as hurricane trajectory forecasts, we trust mathematical models and accept their inherent, well-documented prediction uncertainties [<span>12</span>], it is imperative to improve the communication of what models can and cannot do when it comes to personal health. “<i>Nothing is more difficult to predict than the future</i>1,” and while the uncertainty linked to predictions rises quickly, we may still find use in the model.</p><p>For clinical purpose, predictive models may not need to accurately describe the complex biology of cancer, but to provide a trigger for decision making, often upon binary endpoints. For many years, we have set ourselves the lofty goal of predicting the tumor burden evolution during treatment with ever decreasing error to the actual data [<span>14-16</span>]; yet the clinical endpoint for patients is often not the actual tumor volume dynamics but binary endpoints such as continuous response or cancer progression, tumor control or treatment failure. Machine learning approaches (or simple statistics) can identify threshold values for tumor burden at different time points during therapy that stratify patients into the different outcomes [<span>17-19</span>]. Then, the model purpose becomes to accurately predict whether a tumor will shrink below this threshold or not. A larger error to the data but a correct outcome classification becomes an acceptable tradeoff for better fits but incorrect predictions. With this understanding, we have seen unprecedented model <i>prediction accuracy</i> for individual patients from few response measurements early during therapy [<span>18</span>]. The dilemma is visualized in Figure 1. 
For both patients, one head and neck cancer patient treated with radiotherapy and one prostate cancer patient treated with intermittent hormone therapy, only a few of the 100 predicted disease trajectories each mimic the eventual clinically observed dynamics. Yet, the majority of the simulations accurately predict disease burden to be above or to be below the learned thresholds for tumor control or treatment resistance.</p><p>Modeling efforts support various goals, linked to different expectation as to what modeling provides to a specific project. For the application of mathematical modeling for personalized medicine, further discussions about what models can and cannot contribute are necessary. For predictive modeling, <i>right</i> or <i>wrong</i> may not be how well the predicted disease dynamics based on uncertain parameter combinations mimic the clinically observed responses and their underlying biology, but the interpretation and actionability of model predictions and their uncertainty. While mathematical models may not be <i>right</i>, they do not have to be <i>wrong</i>. Thus, we may just adopt the philosophy of Assistant Director of Operations Domingo “Ding” Chavez, who taught the young Jack Ryan, Jr., in Tom Clancy's Oath of Office to “<i>Don't practice until you get it right. Practice until you don't get it wrong</i>” [<span>20</span>].</p>","PeriodicalId":72658,"journal":{"name":"Computational and systems oncology","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/cso2.1008","citationCount":"48","resultStr":"{\"title\":\"Are all models wrong?\",\"authors\":\"Heiko Enderling,&nbsp;Olaf Wolkenhauer\",\"doi\":\"10.1002/cso2.1008\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Mathematical modeling in cancer is enjoying a rapid expansion [<span>1</span>]. For collegial discussion across disciplines, many—if not all of us—have used the aphorism that “<i>All models are wrong, but some are useful</i>” [<span>2</span>]. This has been a convenient approach to justify and communicate the praxis of modeling. This is to suggest that the <i>usefulness</i> of a model is not measured by the accuracy of representation but how well it supports the generation, testing, and refinement of hypotheses. A key insight is not to focus on the model as an outcome, but to consider the modeling process and simulated model predictions as “ways of thinking” about complex nonlinear dynamical systems [<span>3</span>]. Here, we discuss the convoluted interpretation of <i>models being wrong</i> in the arena of predictive modeling.</p><p>“<i>All models are wrong, but some are useful</i>” emphasizes the value of abstraction in order to gain insight. While abstraction clearly implies misrepresentation, it allows to explicitly define model assumptions and interpret model results within these limitations – <i>Truth emerges more readily from error than from confusion</i> [<span>4</span>]. It is thus the process of modeling and the discussions about model assumptions that are often considered most valuable in interdisciplinary research. They provide a way of thinking about complex systems and mechanisms underlying observations. Abstractions are being made in cancer biology for every experiment in each laboratory around the world. 
In vitro cell lines or in vivo mouse experiments are abstractions of complex adaptive evolving human cancers in the complex adaptive dynamic environment called the patient. These \\\"wet lab\\\" experiments akin to \\\"dry lab\\\" mathematical models offer confirmation or refutation of hypotheses and results, which have to be prospectively evaluated in clinical trials before conclusions can be generalized beyond the abstracted assumptions. The key for any model—mathematical, biological, or clinical—to succeed is an iterative cycle of data-driven modeling and model-driven experimentation [<span>5, 6</span>]. The value of such an effort lies in the insights about mechanisms that can then be attributed to the considered variables [<span>7</span>]. With simplified representations of a system one can learn about the emergence of general patterns, like the occurrence of oscillations, bistability, or chaos [<span>8-10</span>].</p><p>In this context, Alan Turing framed the purpose of a mathematical model in his seminal paper about “The chemical basis of morphogenesis” [<span>11</span>] with “<i>This model will be a simplification and an idealization, and consequently a falsification. It is to be hoped that the features retained for discussion are those of greatest importance in the present state of knowledge</i>.” For many mathematical biology models that are built to explore, test, and generate hypotheses about emerging dynamics, this remains true. “<i>Wrong models</i>” allow us to reevaluate our assumptions, and the lessons learned from these discussions can help formulate revised models and improve our understanding of the underlying dynamics.</p><p>However, mathematical oncology models are deployed not only to simulate emergent properties of complex systems to generate, test, and refine hypotheses, but increasingly also with the intent to make predictions—often how an individual cancer patient will respond to a specific treatment [1]. For predictive modeling, the aphorism “<i>All models are wrong</i>” becomes awkward. In the predictive modeling arena, a <i>useful</i> model should <i>not</i> be <i>wrong</i>. A major hurdle in the application of predictive modeling, in general and in oncology in particular, is communication of model purpose and prediction uncertainty, and how likelihood and risks are interpreted by the end user. With limited data available about a complex adaptive evolving system, “forecasting failures” are common when events that are not represented in the data dominate the subsequent behavior (such as emergence of treatment resistance not being represented in pre-treatment dynamics). If predictive models are trained on historic data but with little patient-specific data over multiple time points, what role could predictive models play in oncology?</p><p>Computer simulations of mathematical models that are based on limited data are merely visualizing plausible disease trajectories forward in time. Predictions could then be made from analyzing the possible trajectories using multiple plausible parameter combinations, from either a single model or multiple models with competing assumptions and different weighting of likely important factors. While in some domains, such as hurricane trajectory forecasts, we trust mathematical models and accept their inherent, well-documented prediction uncertainties [<span>12</span>], it is imperative to improve the communication of what models can and cannot do when it comes to personal health. 
“<i>Nothing is more difficult to predict than the future</i>1,” and while the uncertainty linked to predictions rises quickly, we may still find use in the model.</p><p>For clinical purpose, predictive models may not need to accurately describe the complex biology of cancer, but to provide a trigger for decision making, often upon binary endpoints. For many years, we have set ourselves the lofty goal of predicting the tumor burden evolution during treatment with ever decreasing error to the actual data [<span>14-16</span>]; yet the clinical endpoint for patients is often not the actual tumor volume dynamics but binary endpoints such as continuous response or cancer progression, tumor control or treatment failure. Machine learning approaches (or simple statistics) can identify threshold values for tumor burden at different time points during therapy that stratify patients into the different outcomes [<span>17-19</span>]. Then, the model purpose becomes to accurately predict whether a tumor will shrink below this threshold or not. A larger error to the data but a correct outcome classification becomes an acceptable tradeoff for better fits but incorrect predictions. With this understanding, we have seen unprecedented model <i>prediction accuracy</i> for individual patients from few response measurements early during therapy [<span>18</span>]. The dilemma is visualized in Figure 1. For both patients, one head and neck cancer patient treated with radiotherapy and one prostate cancer patient treated with intermittent hormone therapy, only a few of the 100 predicted disease trajectories each mimic the eventual clinically observed dynamics. Yet, the majority of the simulations accurately predict disease burden to be above or to be below the learned thresholds for tumor control or treatment resistance.</p><p>Modeling efforts support various goals, linked to different expectation as to what modeling provides to a specific project. For the application of mathematical modeling for personalized medicine, further discussions about what models can and cannot contribute are necessary. For predictive modeling, <i>right</i> or <i>wrong</i> may not be how well the predicted disease dynamics based on uncertain parameter combinations mimic the clinically observed responses and their underlying biology, but the interpretation and actionability of model predictions and their uncertainty. While mathematical models may not be <i>right</i>, they do not have to be <i>wrong</i>. Thus, we may just adopt the philosophy of Assistant Director of Operations Domingo “Ding” Chavez, who taught the young Jack Ryan, Jr., in Tom Clancy's Oath of Office to “<i>Don't practice until you get it right. 
Practice until you don't get it wrong</i>” [<span>20</span>].</p>\",\"PeriodicalId\":72658,\"journal\":{\"name\":\"Computational and systems oncology\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1002/cso2.1008\",\"citationCount\":\"48\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computational and systems oncology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/cso2.1008\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational and systems oncology","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cso2.1008","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Mathematical modeling in cancer is enjoying a rapid expansion [1]. For collegial discussion across disciplines, many—if not all of us—have used the aphorism that “All models are wrong, but some are useful” [2]. This has been a convenient approach to justify and communicate the praxis of modeling. It suggests that the usefulness of a model is not measured by the accuracy of representation but by how well it supports the generation, testing, and refinement of hypotheses. A key insight is not to focus on the model as an outcome, but to consider the modeling process and simulated model predictions as “ways of thinking” about complex nonlinear dynamical systems [3]. Here, we discuss the convoluted interpretation of models being wrong in the arena of predictive modeling.

“All models are wrong, but some are useful” emphasizes the value of abstraction in order to gain insight. While abstraction clearly implies misrepresentation, it allows us to explicitly define model assumptions and to interpret model results within these limitations – Truth emerges more readily from error than from confusion [4]. It is thus the process of modeling and the discussions about model assumptions that are often considered most valuable in interdisciplinary research. They provide a way of thinking about complex systems and the mechanisms underlying observations. Abstractions are made in cancer biology for every experiment in every laboratory around the world. In vitro cell lines or in vivo mouse experiments are abstractions of complex, adaptive, evolving human cancers in the complex adaptive dynamic environment called the patient. These "wet lab" experiments, akin to "dry lab" mathematical models, offer confirmation or refutation of hypotheses and results, which have to be prospectively evaluated in clinical trials before conclusions can be generalized beyond the abstracted assumptions. The key for any model—mathematical, biological, or clinical—to succeed is an iterative cycle of data-driven modeling and model-driven experimentation [5, 6]. The value of such an effort lies in the insights about mechanisms that can then be attributed to the considered variables [7]. With simplified representations of a system, one can learn about the emergence of general patterns, like the occurrence of oscillations, bistability, or chaos [8-10].
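
As a minimal illustration (ours, not the article's), even a deliberately "wrong" one-line model reproduces the general patterns named above. The sketch below iterates the logistic map, x_{t+1} = r * x_t * (1 - x_t), and shows a stable steady state, a sustained oscillation, and chaos emerging from the same equation as the single parameter r varies:

```python
import numpy as np

def logistic_trajectory(r: float, x0: float = 0.2, steps: int = 200) -> np.ndarray:
    """Iterate x_{t+1} = r * x_t * (1 - x_t) and return the full trajectory."""
    x = np.empty(steps)
    x[0] = x0
    for t in range(steps - 1):
        x[t + 1] = r * x[t] * (1.0 - x[t])
    return x

# Same simplified model, three qualitatively different emergent behaviors.
for r, regime in [(2.8, "stable steady state"),
                  (3.2, "period-2 oscillation"),
                  (3.9, "chaos")]:
    tail = logistic_trajectory(r)[-4:]  # late-time behavior after transients decay
    print(f"r = {r:.1f} ({regime}): last four values {np.round(tail, 3)}")
```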

In this context, Alan Turing framed the purpose of a mathematical model in his seminal paper about “The chemical basis of morphogenesis” [11] with “This model will be a simplification and an idealization, and consequently a falsification. It is to be hoped that the features retained for discussion are those of greatest importance in the present state of knowledge.” For many mathematical biology models that are built to explore, test, and generate hypotheses about emerging dynamics, this remains true. “Wrong models” allow us to reevaluate our assumptions, and the lessons learned from these discussions can help formulate revised models and improve our understanding of the underlying dynamics.

However, mathematical oncology models are deployed not only to simulate emergent properties of complex systems to generate, test, and refine hypotheses, but increasingly also with the intent to make predictions—often how an individual cancer patient will respond to a specific treatment [1]. For predictive modeling, the aphorism “All models are wrong” becomes awkward. In the predictive modeling arena, a useful model should not be wrong. A major hurdle in the application of predictive modeling, in general and in oncology in particular, is the communication of model purpose and prediction uncertainty, and how likelihood and risks are interpreted by the end user. With limited data available about a complex adaptive evolving system, “forecasting failures” are common when events that are not represented in the data dominate the subsequent behavior (such as emergence of treatment resistance not being represented in pre-treatment dynamics). If predictive models are trained on historical data but with little patient-specific data over multiple time points, what role could predictive models play in oncology?

Computer simulations of mathematical models that are based on limited data are merely visualizing plausible disease trajectories forward in time. Predictions could then be made by analyzing the possible trajectories using multiple plausible parameter combinations, from either a single model or multiple models with competing assumptions and different weightings of likely important factors. While in some domains, such as hurricane trajectory forecasts, we trust mathematical models and accept their inherent, well-documented prediction uncertainties [12], it is imperative to improve the communication of what models can and cannot do when it comes to personal health. “Nothing is more difficult to predict than the future,” and while the uncertainty linked to predictions rises quickly, we may still find use in the model.
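
What follows is a minimal sketch of this ensemble idea, with the caveat that the logistic growth model, the parameter ranges, and the ensemble size are illustrative assumptions rather than the method used here: sample plausible parameter combinations, simulate each forward in time, and report the spread of trajectories as the prediction uncertainty to be communicated.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(growth_rate: float, capacity: float, v0: float = 1.0,
             days: int = 120, dt: float = 0.5) -> np.ndarray:
    """Forward-Euler integration of logistic growth dV/dt = r*V*(1 - V/K)."""
    n = int(days / dt)
    v = np.empty(n)
    v[0] = v0
    for i in range(n - 1):
        v[i + 1] = v[i] + dt * growth_rate * v[i] * (1.0 - v[i] / capacity)
    return v

# 100 plausible parameter combinations (ranges are hypothetical).
growth_rates = rng.uniform(0.02, 0.08, size=100)
capacities = rng.uniform(5.0, 20.0, size=100)
ensemble = np.array([simulate(r, k) for r, k in zip(growth_rates, capacities)])

# The spread across trajectories is the uncertainty to communicate, not to hide.
final_volumes = ensemble[:, -1]
print(f"day-120 volume: median {np.median(final_volumes):.1f}, "
      f"90% interval [{np.quantile(final_volumes, 0.05):.1f}, "
      f"{np.quantile(final_volumes, 0.95):.1f}]")
```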

For clinical purposes, predictive models may not need to accurately describe the complex biology of cancer, but rather to provide a trigger for decision making, often upon binary endpoints. For many years, we have set ourselves the lofty goal of predicting tumor burden evolution during treatment with ever-decreasing error relative to the actual data [14-16]; yet the clinical endpoint for patients is often not the actual tumor volume dynamics but binary endpoints such as continuous response or cancer progression, tumor control or treatment failure. Machine learning approaches (or simple statistics) can identify threshold values for tumor burden at different time points during therapy that stratify patients into the different outcomes [17-19]. The model purpose then becomes to accurately predict whether or not a tumor will shrink below this threshold. A larger error relative to the data but a correct outcome classification becomes an acceptable tradeoff over a better fit with an incorrect prediction. With this understanding, we have seen unprecedented model prediction accuracy for individual patients from few response measurements early during therapy [18]. The dilemma is visualized in Figure 1. For both patients, a head and neck cancer patient treated with radiotherapy and a prostate cancer patient treated with intermittent hormone therapy, only a few of the 100 predicted disease trajectories for each mimic the eventual clinically observed dynamics. Yet, the majority of the simulations accurately predict disease burden to be above or below the learned thresholds for tumor control or treatment resistance.
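
A sketch of this shift in purpose follows, under the same caveats: the exponential treatment-response model, the threshold value, and the parameter ranges below are hypothetical stand-ins for a learned stratification threshold. Rather than scoring each simulated trajectory by its fit error, the ensemble is scored by which side of the threshold it lands on.

```python
import numpy as np

rng = np.random.default_rng(1)

THRESHOLD = 5.0  # hypothetical learned threshold separating control from failure

def final_volume(kill_rate: float, v0: float = 10.0, growth: float = 0.05,
                 days: int = 60, dt: float = 0.5) -> float:
    """Final tumor volume under net exponential dynamics dV/dt = (growth - kill)*V."""
    v = v0
    for _ in range(int(days / dt)):
        v += dt * (growth - kill_rate) * v
    return v

# 100 plausible patient-specific treatment effects (range is hypothetical).
kill_rates = rng.uniform(0.03, 0.12, size=100)
finals = np.array([final_volume(k) for k in kill_rates])

# The ensemble is judged by outcome classification, not by fit error.
fraction_control = np.mean(finals < THRESHOLD)
print(f"{100 * fraction_control:.0f}% of simulated trajectories predict tumor "
      f"control (final volume below {THRESHOLD:g})")
```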

Modeling efforts support various goals, linked to different expectations as to what modeling provides to a specific project. For the application of mathematical modeling to personalized medicine, further discussions about what models can and cannot contribute are necessary. For predictive modeling, right or wrong may not be a question of how well the predicted disease dynamics based on uncertain parameter combinations mimic the clinically observed responses and their underlying biology, but of the interpretation and actionability of model predictions and their uncertainty. While mathematical models may not be right, they do not have to be wrong. Thus, we may just adopt the philosophy of Assistant Director of Operations Domingo “Ding” Chavez, who taught the young Jack Ryan, Jr., in Tom Clancy's Oath of Office to “Don't practice until you get it right. Practice until you don't get it wrong” [20].
