Raising standards for preclinical research

{"title":"Raising standards for preclinical research","authors":"","doi":"10.1111/ebm2.3","DOIUrl":null,"url":null,"abstract":"<p>Systematic review and meta-analysis are powerful analytical tools. The Cochrane Collaboration, formed in 1993, provides an excellent example of the use of these tools to gather the best evidence regarding the efficacy of interventions in clinical medicine. The use of these tools however is not widespread in preclinical science. Thus Evidence-Based Preclinical Medicine (EBPM) is a new online peer-reviewed open access journal designed to provide a vehicle which fosters the systematic capture and rigorous analysis of all available basic science data on questions relevant to human health. By doing so we aim to raise the standards of preclinical research and improve the efficiency with which preclinical data is translated into improvements in human health.</p><p>The analysis of industrial, agricultural or environmental toxicology, processes of drug discovery and evaluation, disease risk factor modelling and pre- and post-disease behavioural modification as well as early discovery science are all areas where systematic capture of all available data will accelerate our ability to improve human health. The application of rigorous analytical techniques which can give a realistic appreciation of the quality, breadth and potential importance of the available evidence will help researchers decide which hypotheses should be explored further, identify the presence and likely impact of confounding biases and will help health professionals decide which will have an impact on people.</p><p>Most scientists would like to believe that the systems required for these aims are already in place. However, the explosion in the volume of available data makes reliance on traditional systems untenable.</p><p>The problems start with the way we portray science and the aspirations this engenders. In the mass media, text books and popular histories of science and medicine the process of discovery is commonly portrayed a series of Eureka moments. Giant leaps forward made by the greatest minds of an era. But this is not the process. Around the world teams of scientists nibble away at a problem, new ideas are circulated and considered and experiments designed and performed. Many ideas and experiments are dead ends and lead nowhere. But since we learn by our mistakes, knowing how things don't happen refines our knowledge base and nudges us ever closer to the truth by allowing more scientists to focus on the threads that do reveal the true pattern of life.</p><p>Two of the most famous quotes in science speak directly to these issues. Louis Pasteur's “Chance favours only the prepared mind” makes it clear that you have to understand a field if you are to contribute to it. Isaac Newton's “If I have seen further it is by standing on the shoulders of giants” is perhaps more important because it also acknowledges that science is an incremental process. Only a fortunate few are in the right place at the right time and with the right education and knowledge base to finally understand a larger than normal fragment of the puzzle.</p><p>The beauty, but also one of the problems, of science is that it is not a jigsaw puzzle with clearly defined edges. As we learn more we appreciate that there is still more to learn and our horizons expand. 
With this expansion comes more data for the individual to consume, assimilate and understand sufficiently well to design the next experiment.</p><p>For most of the history of science, speed of communication limited a researcher's ability to gather all of the data. Today the opposite is true. The post-war industrialisation of science and ease of communication means most fields have more data than any individual can readily deal with. For example, between the 1930's through to 1944, fewer than 50 papers mentioned the brain in their title, abstract or keywords each year. By the 1950's an inexorable increase had begun and by 1968, the field of neuroscience, by this simple criteria alone, exceeded 10,000 papers a year. In 2012 more than 70,000 papers fulfilling these criteria were published. The constraints of time now force us to be selective in what we read (potentially ~2000 papers a year if we devote a generous 30 minutes to each paper and half our working time). We should not be surprised that our systems of communicating and funding science and of judging the performance of scientists based on their performance in the former, have grown to value novelty.</p><p>It might be argued that none of this matters for “Blue Sky” discovery science. After all, there are plenty of discoveries still to be made and we will only want to follow the positive ones anyway! This approach is inherently wasteful. For every unreported neutral or negative experiment a series of unwitting future scientists will have the same “novel” idea and purposely repeat the same experiments. It is perhaps ironic that over time, negative and neutral studies will be the most highly reproduced but no one will ever know.</p><p>In preclinical medicine the effects of these problems are amplified and become pernicious. Incomplete knowledge doesn't just contribute to financial risk but to real risk of injury or death to volunteers and patients exposed to novel but poorly understood chemicals. If only the positive experiments with a new candidate drug are published and the neutral or negative results remain hidden, the field will believe the drugs work when they in fact do not. Progress to clinical trial will be wasteful, expose patients to the risk of unforseen side effects, and make finding a drug that does work less likely because human and financial resources are now less available.</p><p>Helping scientists deal with the volume of available data and understand these risks is not well served by the traditional narrative review by an individual or small group of writers. However honest, well read and well intentioned the reviewers, the reader has no knowledge of what was left out or the reasons for doing so. The traditional narrative reviewer is as subject to the fashions of the field as any other and is blind to the impact of publication and other biases within the dataset. Moreover, as a species we are stimulated by novelty and so few narrative reviews devote column space to what didn't work. Yet this information is critical if we are to prevent a growing vortex of ever wasteful uninformed false starts.</p><p>Systematic review provides a scientific approach to collation and interpretation of large volumes of data. 
Simply detailing the search strategy used and defining inclusion and exclusion criteria allows readers to judge for themselves whether the writers have taken a rigorous approach to finding relevant data and provides that critical element of science, a defined methodology which allows others to confirm and extend the results.</p><p>Electronic dissemination of data means that the results of systematic review can now and will increasingly go beyond just metaphorically joining the dots. Meta-analysis allows the data from systematic review to be aggregated and re-analysed and allows the researcher to discover new trends that are rarely evident within single published data sets or in narrative reviews of these data sets.</p><p>In studies of disease, do the results from one animal model point to involvement of a specific mechanism that can be targeted? Is there a clear dose-response relationship between toxicant exposure and ill health? If choice of animal model used has more impact on outcome than variations in stem cell biology in transplantation experiments, what should we do next? Has a study been replicated so often in animals that the outcome is beyond reasonable doubt and no further replications are required, or is more data still needed?</p><p>A misguided trust in the homogeneity of laboratory experimentation and a very real understanding of the extra costs entailed leads many researchers to perform experiments that are too small and are not protected by randomisation and blinding against the perverse elements of human nature and unforseen experimental variables.</p><p>No individual research study or body of evidence is perfect and by and large we think we understand the things that can go wrong in the scientific process. Honest misinterpretation as a body of data grows is inherent to the iterative process of hypothesis testing. However, we do introduce a range of biases in our quest for novelty. We tend to perform only the experiments most likely to return a positive result. In preclinical medicine this means turning a blind eye to those critical experiments that might reduce the “saleability” of a hypothesis but which are critical if for example a new drug is to survive the rigors of the real world found in the clinic. We also rarely ask whether the publishing researchers made reasonable efforts such as randomisation and blinding to avoid introduction of systematic bias. If they didn't all report doing so, stratifying the data can reveal the extent of such bias and might discourage or alternatively support further effort.</p><p>Small underpowered experiments are also easier to perform and because of the play of chance and a poor appreciation and application of the statistics of hypothesis testing can return a surprisingly high proportion of false positive results.</p><p>While replication studies confirming the results of others remain frowned upon by the research community as a whole as derivative and un-original and are actively discouraged or even forbidden by some ethical review boards and funding bodies, false positives will continue to go undetected. Systematic review and meta-analysis can be used to aggregate individual study data to determine whether the overall data set supports the presence of a real effect.</p><p>These small decisions and similar decisions by reviewers and editors and grant and promotions panels all have the ability to skew the reporting and conduct of science. 
Because experimental outcomes have a statistically defined distribution about the true result, when enough data is available meta-analysis of systematically collected data can detect and quantify the effect of any publication bias.</p><p>While not biologically interesting this data is important for the researcher. A nearly complete data distribution is likely to indicate that a molecule, for example a new drug candidate, behaves as advertised. A highly skewed data distribution might indicate statistically anomalous publication of a scattering of positive results while the majority of truly neutral or negative data remain in researchers' filing cabinets. While most researchers understand intellectually that publication bias exists they do not view it as a high priority. This might change when EBPM highlights the impact it might be having on their specific domain of the research world. Active suppression of data because its publication might harm vested interests might also be reduced if scientific advisory boards and their ilk demanded to see the distribution of all available data before giving their blessing to investment decisions.</p><p>EBPM is an important step towards the goal of understanding the strengths and weaknesses of the data we use to make important decisions and of ensuring we use the best available data in making those decisions.</p>","PeriodicalId":90826,"journal":{"name":"Evidence-based preclinical medicine","volume":"1 1","pages":"1-3"},"PeriodicalIF":0.0000,"publicationDate":"2014-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/ebm2.3","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evidence-based preclinical medicine","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/ebm2.3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Systematic review and meta-analysis are powerful analytical tools. The Cochrane Collaboration, formed in 1993, provides an excellent example of the use of these tools to gather the best evidence regarding the efficacy of interventions in clinical medicine. The use of these tools, however, is not yet widespread in preclinical science. Evidence-Based Preclinical Medicine (EBPM) is therefore a new online, peer-reviewed, open access journal designed to provide a vehicle that fosters the systematic capture and rigorous analysis of all available basic science data on questions relevant to human health. In doing so we aim to raise the standards of preclinical research and improve the efficiency with which preclinical data are translated into improvements in human health.

The analysis of industrial, agricultural or environmental toxicology, the processes of drug discovery and evaluation, disease risk factor modelling, pre- and post-disease behavioural modification, and early discovery science are all areas where systematic capture of all available data will accelerate our ability to improve human health. The application of rigorous analytical techniques that give a realistic appreciation of the quality, breadth and potential importance of the available evidence will help researchers decide which hypotheses should be explored further, identify the presence and likely impact of confounding biases, and help health professionals decide which findings are likely to have an impact on people.

Most scientists would like to believe that the systems required for these aims are already in place. However, the explosion in the volume of available data makes reliance on traditional systems untenable.

The problems start with the way we portray science and the aspirations this engenders. In the mass media, textbooks and popular histories of science and medicine, the process of discovery is commonly portrayed as a series of Eureka moments: giant leaps forward made by the greatest minds of an era. But this is not how science proceeds. Around the world, teams of scientists nibble away at a problem; new ideas are circulated and considered, and experiments are designed and performed. Many ideas and experiments are dead ends and lead nowhere. But since we learn from our mistakes, knowing how things don't happen refines our knowledge base and nudges us ever closer to the truth by allowing more scientists to focus on the threads that do reveal the true pattern of life.

Two of the most famous quotes in science speak directly to these issues. Louis Pasteur's “Chance favours only the prepared mind” makes it clear that you have to understand a field if you are to contribute to it. Isaac Newton's “If I have seen further it is by standing on the shoulders of giants” is perhaps more important because it also acknowledges that science is an incremental process. Only a fortunate few are in the right place at the right time, with the right education and knowledge base, to finally understand a larger-than-normal fragment of the puzzle.

The beauty, but also one of the problems, of science is that it is not a jigsaw puzzle with clearly defined edges. As we learn more we appreciate that there is still more to learn and our horizons expand. With this expansion comes more data for the individual to consume, assimilate and understand sufficiently well to design the next experiment.

For most of the history of science, the speed of communication limited a researcher's ability to gather all of the available data. Today the opposite is true. The post-war industrialisation of science and the ease of communication mean that most fields have more data than any individual can readily deal with. For example, from the 1930s through to 1944, fewer than 50 papers each year mentioned the brain in their title, abstract or keywords. By the 1950s an inexorable increase had begun, and by 1968 the field of neuroscience, by this simple criterion alone, exceeded 10,000 papers a year. In 2012 more than 70,000 papers fulfilling this criterion were published. The constraints of time now force us to be selective in what we read (potentially ~2000 papers a year if we devote a generous 30 minutes to each paper and half our working time to reading). We should not be surprised that our systems for communicating and funding science, and for judging scientists by their performance within those systems, have grown to value novelty.
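
The arithmetic behind that figure is simple; the sketch below makes the working-hours assumption (ours, not the editorial's) explicit.

```python
# Back-of-the-envelope check of the "~2000 papers a year" figure.
# Assumption (ours): a working year of roughly 48 weeks x 40 hours.
working_hours_per_year = 48 * 40            # ~1920 hours
reading_hours = working_hours_per_year / 2  # half our working time
minutes_per_paper = 30                      # a generous 30 minutes per paper

papers_per_year = reading_hours * 60 / minutes_per_paper
print(f"~{papers_per_year:.0f} papers per year")  # ~1920, i.e. on the order of 2000
```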

It might be argued that none of this matters for “Blue Sky” discovery science. After all, there are plenty of discoveries still to be made, and we will only want to follow the positive ones anyway! This approach is inherently wasteful. For every unreported neutral or negative experiment, a series of future scientists will unwittingly have the same “novel” idea and repeat the same experiments. It is perhaps ironic that, over time, negative and neutral studies will be the most highly reproduced, but no one will ever know.

In preclinical medicine the effects of these problems are amplified and become pernicious. Incomplete knowledge contributes not only to financial risk but also to a real risk of injury or death for volunteers and patients exposed to novel but poorly understood chemicals. If only the positive experiments with a new candidate drug are published and the neutral or negative results remain hidden, the field will believe the drug works when in fact it does not. Progress to clinical trial will be wasteful, will expose patients to the risk of unforeseen side effects, and will make finding a drug that does work less likely, because human and financial resources are then less available.

Helping scientists deal with the volume of available data and understand these risks is not well served by the traditional narrative review written by an individual or a small group of authors. However honest, well read and well intentioned the reviewers, the reader has no way of knowing what was left out or the reasons for doing so. The traditional narrative reviewer is as subject to the fashions of the field as anyone else and is blind to the impact of publication and other biases within the dataset. Moreover, as a species we are stimulated by novelty, and so few narrative reviews devote column space to what didn't work. Yet this information is critical if we are to prevent a growing vortex of ever more wasteful, uninformed false starts.

Systematic review provides a scientific approach to the collation and interpretation of large volumes of data. Simply detailing the search strategy used and defining the inclusion and exclusion criteria allows readers to judge for themselves whether the authors have taken a rigorous approach to finding relevant data, and provides that critical element of science: a defined methodology that allows others to confirm and extend the results.

Electronic dissemination of data means that the results of systematic review can now, and will increasingly, go beyond metaphorically joining the dots. Meta-analysis allows the data from a systematic review to be aggregated and re-analysed, and allows the researcher to discover new trends that are rarely evident within single published data sets or in narrative reviews of those data sets.

In studies of disease, do the results from one animal model point to the involvement of a specific mechanism that can be targeted? Is there a clear dose-response relationship between toxicant exposure and ill health? If the choice of animal model has more impact on outcome than variations in stem cell biology in transplantation experiments, what should we do next? Has a finding been replicated so often in animals that the outcome is beyond reasonable doubt and no further replications are required, or are more data still needed?

A misguided trust in the homogeneity of laboratory experimentation, together with a very real appreciation of the extra costs entailed, leads many researchers to perform experiments that are too small and that are not protected by randomisation and blinding against the perverse elements of human nature and unforeseen experimental variables.

No individual research study or body of evidence is perfect, and by and large we think we understand the things that can go wrong in the scientific process. Honest misinterpretation as a body of data grows is inherent to the iterative process of hypothesis testing. However, we do introduce a range of biases in our quest for novelty. We tend to perform only the experiments most likely to return a positive result. In preclinical medicine this means turning a blind eye to those experiments that might reduce the “saleability” of a hypothesis but which are critical if, for example, a new drug is to survive the rigours of the real world found in the clinic. We also rarely ask whether the publishing researchers made reasonable efforts, such as randomisation and blinding, to avoid introducing systematic bias. Where some studies report these measures and others do not, stratifying the data can reveal the extent of such bias and might discourage, or alternatively support, further effort.
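
As an illustration of that kind of stratification, the sketch below splits a set of hypothetical study-level effects by whether randomisation was reported and compares the unweighted mean effect in each stratum; all numbers are invented.

```python
# Stratify hypothetical study-level effect sizes by whether randomisation
# was reported, and compare the unweighted mean effect in each stratum.
# Systematically larger effects in the non-randomised stratum would suggest
# that the absence of this protection inflates reported effects.
studies = [
    {"effect": 0.25, "randomised": True},
    {"effect": 0.30, "randomised": True},
    {"effect": 0.20, "randomised": True},
    {"effect": 0.55, "randomised": False},
    {"effect": 0.70, "randomised": False},
    {"effect": 0.60, "randomised": False},
]

for flag in (True, False):
    effects = [s["effect"] for s in studies if s["randomised"] is flag]
    mean = sum(effects) / len(effects)
    print(f"randomisation reported={flag}: mean effect {mean:.2f} (n={len(effects)})")
```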

Small, underpowered experiments are also easier to perform and, because of the play of chance and a poor appreciation and application of the statistics of hypothesis testing, can return a surprisingly high proportion of false-positive results.
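
A worked illustration of why this is so: under assumed values for the significance threshold and the prior probability that a tested hypothesis is true (both figures below are illustrative, not from the editorial), the share of “significant” results that are false rises sharply as power falls.

```python
# Share of "statistically significant" results that are false positives,
# as a function of statistical power. alpha and prior_true are assumptions
# chosen purely for illustration.
def false_positive_share(power, alpha=0.05, prior_true=0.1):
    true_positives = power * prior_true          # true effects correctly detected
    false_positives = alpha * (1 - prior_true)   # null effects wrongly "detected"
    return false_positives / (true_positives + false_positives)

for power in (0.8, 0.5, 0.2):
    share = false_positive_share(power)
    print(f"power = {power:.1f}: {share:.0%} of positive results are false")
# power = 0.8: 36%; power = 0.5: 47%; power = 0.2: 69% -- the smaller and more
# underpowered the experiment, the larger the share of false "discoveries".
```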

While replication studies confirming the results of others remain frowned upon by the research community as derivative and unoriginal, and are actively discouraged or even forbidden by some ethical review boards and funding bodies, false positives will continue to go undetected. Systematic review and meta-analysis can be used to aggregate individual study data and determine whether the overall data set supports the presence of a real effect.
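
A minimal sketch of the simplest such aggregation, a fixed-effect (inverse-variance) meta-analysis; the study-level effect estimates and standard errors are invented for illustration.

```python
import math

# Fixed-effect (inverse-variance) pooling of study-level effect estimates.
# Individually, several of these small, noisy studies are inconclusive;
# pooling them gives a more precise estimate of whether a real effect exists.
effects = [0.40, 0.10, 0.25, -0.05, 0.30]   # per-study effect estimates (illustrative)
ses     = [0.20, 0.25, 0.15,  0.30, 0.22]   # per-study standard errors (illustrative)

weights   = [1 / se ** 2 for se in ses]     # inverse-variance weights
pooled    = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```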

These small decisions, and similar decisions by reviewers, editors, and grant and promotion panels, all have the ability to skew the reporting and conduct of science. Because experimental outcomes have a statistically defined distribution about the true result, when enough data are available, meta-analysis of systematically collected data can detect and quantify the effect of any publication bias.
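
One common way to look for that footprint is a funnel-plot style check in which the standardised effect is regressed on study precision (an Egger-type regression); an intercept well away from zero suggests that small, imprecise studies are reporting systematically larger effects. The sketch below uses invented numbers.

```python
# Egger-style small-study check: regress standardised effect (effect / SE)
# on precision (1 / SE). With no small-study bias the line should pass close
# to the origin; an intercept well away from zero is one footprint of
# publication bias. Study-level numbers are invented for illustration.
effects = [0.90, 0.70, 0.60, 0.35, 0.30, 0.28]   # smaller studies report larger effects
ses     = [0.45, 0.40, 0.35, 0.15, 0.12, 0.10]

x = [1 / se for se in ses]                       # precision
y = [e / se for e, se in zip(effects, ses)]      # standardised effect

mean_x, mean_y = sum(x) / len(x), sum(y) / len(y)
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x
print(f"Egger-type intercept ~ {intercept:.2f}")  # here ~1.5, hinting at asymmetry
```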

While not biologically interesting, these data are important for the researcher. A nearly complete data distribution is likely to indicate that a molecule, for example a new drug candidate, behaves as advertised. A highly skewed data distribution might indicate statistically anomalous publication of a scattering of positive results while the majority of truly neutral or negative data remain in researchers' filing cabinets. While most researchers understand intellectually that publication bias exists, they do not view it as a high priority. This might change when EBPM highlights the impact it might be having on their specific domain of the research world. Active suppression of data, where publication might harm vested interests, might also be reduced if scientific advisory boards and their ilk demanded to see the distribution of all available data before giving their blessing to investment decisions.

EBPM is an important step towards the goal of understanding the strengths and weaknesses of the data we use to make important decisions and of ensuring we use the best available data in making those decisions.
