{"title":"提高临床前研究标准","authors":"","doi":"10.1111/ebm2.3","DOIUrl":null,"url":null,"abstract":"<p>Systematic review and meta-analysis are powerful analytical tools. The Cochrane Collaboration, formed in 1993, provides an excellent example of the use of these tools to gather the best evidence regarding the efficacy of interventions in clinical medicine. The use of these tools however is not widespread in preclinical science. Thus Evidence-Based Preclinical Medicine (EBPM) is a new online peer-reviewed open access journal designed to provide a vehicle which fosters the systematic capture and rigorous analysis of all available basic science data on questions relevant to human health. By doing so we aim to raise the standards of preclinical research and improve the efficiency with which preclinical data is translated into improvements in human health.</p><p>The analysis of industrial, agricultural or environmental toxicology, processes of drug discovery and evaluation, disease risk factor modelling and pre- and post-disease behavioural modification as well as early discovery science are all areas where systematic capture of all available data will accelerate our ability to improve human health. The application of rigorous analytical techniques which can give a realistic appreciation of the quality, breadth and potential importance of the available evidence will help researchers decide which hypotheses should be explored further, identify the presence and likely impact of confounding biases and will help health professionals decide which will have an impact on people.</p><p>Most scientists would like to believe that the systems required for these aims are already in place. However, the explosion in the volume of available data makes reliance on traditional systems untenable.</p><p>The problems start with the way we portray science and the aspirations this engenders. In the mass media, text books and popular histories of science and medicine the process of discovery is commonly portrayed a series of Eureka moments. Giant leaps forward made by the greatest minds of an era. But this is not the process. Around the world teams of scientists nibble away at a problem, new ideas are circulated and considered and experiments designed and performed. Many ideas and experiments are dead ends and lead nowhere. But since we learn by our mistakes, knowing how things don't happen refines our knowledge base and nudges us ever closer to the truth by allowing more scientists to focus on the threads that do reveal the true pattern of life.</p><p>Two of the most famous quotes in science speak directly to these issues. Louis Pasteur's “Chance favours only the prepared mind” makes it clear that you have to understand a field if you are to contribute to it. Isaac Newton's “If I have seen further it is by standing on the shoulders of giants” is perhaps more important because it also acknowledges that science is an incremental process. Only a fortunate few are in the right place at the right time and with the right education and knowledge base to finally understand a larger than normal fragment of the puzzle.</p><p>The beauty, but also one of the problems, of science is that it is not a jigsaw puzzle with clearly defined edges. As we learn more we appreciate that there is still more to learn and our horizons expand. 
With this expansion comes more data for the individual to consume, assimilate and understand sufficiently well to design the next experiment.</p><p>For most of the history of science, speed of communication limited a researcher's ability to gather all of the data. Today the opposite is true. The post-war industrialisation of science and ease of communication means most fields have more data than any individual can readily deal with. For example, between the 1930's through to 1944, fewer than 50 papers mentioned the brain in their title, abstract or keywords each year. By the 1950's an inexorable increase had begun and by 1968, the field of neuroscience, by this simple criteria alone, exceeded 10,000 papers a year. In 2012 more than 70,000 papers fulfilling these criteria were published. The constraints of time now force us to be selective in what we read (potentially ~2000 papers a year if we devote a generous 30 minutes to each paper and half our working time). We should not be surprised that our systems of communicating and funding science and of judging the performance of scientists based on their performance in the former, have grown to value novelty.</p><p>It might be argued that none of this matters for “Blue Sky” discovery science. After all, there are plenty of discoveries still to be made and we will only want to follow the positive ones anyway! This approach is inherently wasteful. For every unreported neutral or negative experiment a series of unwitting future scientists will have the same “novel” idea and purposely repeat the same experiments. It is perhaps ironic that over time, negative and neutral studies will be the most highly reproduced but no one will ever know.</p><p>In preclinical medicine the effects of these problems are amplified and become pernicious. Incomplete knowledge doesn't just contribute to financial risk but to real risk of injury or death to volunteers and patients exposed to novel but poorly understood chemicals. If only the positive experiments with a new candidate drug are published and the neutral or negative results remain hidden, the field will believe the drugs work when they in fact do not. Progress to clinical trial will be wasteful, expose patients to the risk of unforseen side effects, and make finding a drug that does work less likely because human and financial resources are now less available.</p><p>Helping scientists deal with the volume of available data and understand these risks is not well served by the traditional narrative review by an individual or small group of writers. However honest, well read and well intentioned the reviewers, the reader has no knowledge of what was left out or the reasons for doing so. The traditional narrative reviewer is as subject to the fashions of the field as any other and is blind to the impact of publication and other biases within the dataset. Moreover, as a species we are stimulated by novelty and so few narrative reviews devote column space to what didn't work. Yet this information is critical if we are to prevent a growing vortex of ever wasteful uninformed false starts.</p><p>Systematic review provides a scientific approach to collation and interpretation of large volumes of data. 
Simply detailing the search strategy used and defining inclusion and exclusion criteria allows readers to judge for themselves whether the writers have taken a rigorous approach to finding relevant data and provides that critical element of science, a defined methodology which allows others to confirm and extend the results.</p><p>Electronic dissemination of data means that the results of systematic review can now and will increasingly go beyond just metaphorically joining the dots. Meta-analysis allows the data from systematic review to be aggregated and re-analysed and allows the researcher to discover new trends that are rarely evident within single published data sets or in narrative reviews of these data sets.</p><p>In studies of disease, do the results from one animal model point to involvement of a specific mechanism that can be targeted? Is there a clear dose-response relationship between toxicant exposure and ill health? If choice of animal model used has more impact on outcome than variations in stem cell biology in transplantation experiments, what should we do next? Has a study been replicated so often in animals that the outcome is beyond reasonable doubt and no further replications are required, or is more data still needed?</p><p>A misguided trust in the homogeneity of laboratory experimentation and a very real understanding of the extra costs entailed leads many researchers to perform experiments that are too small and are not protected by randomisation and blinding against the perverse elements of human nature and unforseen experimental variables.</p><p>No individual research study or body of evidence is perfect and by and large we think we understand the things that can go wrong in the scientific process. Honest misinterpretation as a body of data grows is inherent to the iterative process of hypothesis testing. However, we do introduce a range of biases in our quest for novelty. We tend to perform only the experiments most likely to return a positive result. In preclinical medicine this means turning a blind eye to those critical experiments that might reduce the “saleability” of a hypothesis but which are critical if for example a new drug is to survive the rigors of the real world found in the clinic. We also rarely ask whether the publishing researchers made reasonable efforts such as randomisation and blinding to avoid introduction of systematic bias. If they didn't all report doing so, stratifying the data can reveal the extent of such bias and might discourage or alternatively support further effort.</p><p>Small underpowered experiments are also easier to perform and because of the play of chance and a poor appreciation and application of the statistics of hypothesis testing can return a surprisingly high proportion of false positive results.</p><p>While replication studies confirming the results of others remain frowned upon by the research community as a whole as derivative and un-original and are actively discouraged or even forbidden by some ethical review boards and funding bodies, false positives will continue to go undetected. Systematic review and meta-analysis can be used to aggregate individual study data to determine whether the overall data set supports the presence of a real effect.</p><p>These small decisions and similar decisions by reviewers and editors and grant and promotions panels all have the ability to skew the reporting and conduct of science. 
Because experimental outcomes have a statistically defined distribution about the true result, when enough data is available meta-analysis of systematically collected data can detect and quantify the effect of any publication bias.</p><p>While not biologically interesting this data is important for the researcher. A nearly complete data distribution is likely to indicate that a molecule, for example a new drug candidate, behaves as advertised. A highly skewed data distribution might indicate statistically anomalous publication of a scattering of positive results while the majority of truly neutral or negative data remain in researchers' filing cabinets. While most researchers understand intellectually that publication bias exists they do not view it as a high priority. This might change when EBPM highlights the impact it might be having on their specific domain of the research world. Active suppression of data because its publication might harm vested interests might also be reduced if scientific advisory boards and their ilk demanded to see the distribution of all available data before giving their blessing to investment decisions.</p><p>EBPM is an important step towards the goal of understanding the strengths and weaknesses of the data we use to make important decisions and of ensuring we use the best available data in making those decisions.</p>","PeriodicalId":90826,"journal":{"name":"Evidence-based preclinical medicine","volume":"1 1","pages":"1-3"},"PeriodicalIF":0.0000,"publicationDate":"2014-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/ebm2.3","citationCount":"0","resultStr":"{\"title\":\"Raising standards for preclinical research\",\"authors\":\"\",\"doi\":\"10.1111/ebm2.3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Systematic review and meta-analysis are powerful analytical tools. The Cochrane Collaboration, formed in 1993, provides an excellent example of the use of these tools to gather the best evidence regarding the efficacy of interventions in clinical medicine. The use of these tools however is not widespread in preclinical science. Thus Evidence-Based Preclinical Medicine (EBPM) is a new online peer-reviewed open access journal designed to provide a vehicle which fosters the systematic capture and rigorous analysis of all available basic science data on questions relevant to human health. By doing so we aim to raise the standards of preclinical research and improve the efficiency with which preclinical data is translated into improvements in human health.</p><p>The analysis of industrial, agricultural or environmental toxicology, processes of drug discovery and evaluation, disease risk factor modelling and pre- and post-disease behavioural modification as well as early discovery science are all areas where systematic capture of all available data will accelerate our ability to improve human health. The application of rigorous analytical techniques which can give a realistic appreciation of the quality, breadth and potential importance of the available evidence will help researchers decide which hypotheses should be explored further, identify the presence and likely impact of confounding biases and will help health professionals decide which will have an impact on people.</p><p>Most scientists would like to believe that the systems required for these aims are already in place. 
However, the explosion in the volume of available data makes reliance on traditional systems untenable.</p><p>The problems start with the way we portray science and the aspirations this engenders. In the mass media, text books and popular histories of science and medicine the process of discovery is commonly portrayed a series of Eureka moments. Giant leaps forward made by the greatest minds of an era. But this is not the process. Around the world teams of scientists nibble away at a problem, new ideas are circulated and considered and experiments designed and performed. Many ideas and experiments are dead ends and lead nowhere. But since we learn by our mistakes, knowing how things don't happen refines our knowledge base and nudges us ever closer to the truth by allowing more scientists to focus on the threads that do reveal the true pattern of life.</p><p>Two of the most famous quotes in science speak directly to these issues. Louis Pasteur's “Chance favours only the prepared mind” makes it clear that you have to understand a field if you are to contribute to it. Isaac Newton's “If I have seen further it is by standing on the shoulders of giants” is perhaps more important because it also acknowledges that science is an incremental process. Only a fortunate few are in the right place at the right time and with the right education and knowledge base to finally understand a larger than normal fragment of the puzzle.</p><p>The beauty, but also one of the problems, of science is that it is not a jigsaw puzzle with clearly defined edges. As we learn more we appreciate that there is still more to learn and our horizons expand. With this expansion comes more data for the individual to consume, assimilate and understand sufficiently well to design the next experiment.</p><p>For most of the history of science, speed of communication limited a researcher's ability to gather all of the data. Today the opposite is true. The post-war industrialisation of science and ease of communication means most fields have more data than any individual can readily deal with. For example, between the 1930's through to 1944, fewer than 50 papers mentioned the brain in their title, abstract or keywords each year. By the 1950's an inexorable increase had begun and by 1968, the field of neuroscience, by this simple criteria alone, exceeded 10,000 papers a year. In 2012 more than 70,000 papers fulfilling these criteria were published. The constraints of time now force us to be selective in what we read (potentially ~2000 papers a year if we devote a generous 30 minutes to each paper and half our working time). We should not be surprised that our systems of communicating and funding science and of judging the performance of scientists based on their performance in the former, have grown to value novelty.</p><p>It might be argued that none of this matters for “Blue Sky” discovery science. After all, there are plenty of discoveries still to be made and we will only want to follow the positive ones anyway! This approach is inherently wasteful. For every unreported neutral or negative experiment a series of unwitting future scientists will have the same “novel” idea and purposely repeat the same experiments. It is perhaps ironic that over time, negative and neutral studies will be the most highly reproduced but no one will ever know.</p><p>In preclinical medicine the effects of these problems are amplified and become pernicious. 
Incomplete knowledge doesn't just contribute to financial risk but to real risk of injury or death to volunteers and patients exposed to novel but poorly understood chemicals. If only the positive experiments with a new candidate drug are published and the neutral or negative results remain hidden, the field will believe the drugs work when they in fact do not. Progress to clinical trial will be wasteful, expose patients to the risk of unforseen side effects, and make finding a drug that does work less likely because human and financial resources are now less available.</p><p>Helping scientists deal with the volume of available data and understand these risks is not well served by the traditional narrative review by an individual or small group of writers. However honest, well read and well intentioned the reviewers, the reader has no knowledge of what was left out or the reasons for doing so. The traditional narrative reviewer is as subject to the fashions of the field as any other and is blind to the impact of publication and other biases within the dataset. Moreover, as a species we are stimulated by novelty and so few narrative reviews devote column space to what didn't work. Yet this information is critical if we are to prevent a growing vortex of ever wasteful uninformed false starts.</p><p>Systematic review provides a scientific approach to collation and interpretation of large volumes of data. Simply detailing the search strategy used and defining inclusion and exclusion criteria allows readers to judge for themselves whether the writers have taken a rigorous approach to finding relevant data and provides that critical element of science, a defined methodology which allows others to confirm and extend the results.</p><p>Electronic dissemination of data means that the results of systematic review can now and will increasingly go beyond just metaphorically joining the dots. Meta-analysis allows the data from systematic review to be aggregated and re-analysed and allows the researcher to discover new trends that are rarely evident within single published data sets or in narrative reviews of these data sets.</p><p>In studies of disease, do the results from one animal model point to involvement of a specific mechanism that can be targeted? Is there a clear dose-response relationship between toxicant exposure and ill health? If choice of animal model used has more impact on outcome than variations in stem cell biology in transplantation experiments, what should we do next? Has a study been replicated so often in animals that the outcome is beyond reasonable doubt and no further replications are required, or is more data still needed?</p><p>A misguided trust in the homogeneity of laboratory experimentation and a very real understanding of the extra costs entailed leads many researchers to perform experiments that are too small and are not protected by randomisation and blinding against the perverse elements of human nature and unforseen experimental variables.</p><p>No individual research study or body of evidence is perfect and by and large we think we understand the things that can go wrong in the scientific process. Honest misinterpretation as a body of data grows is inherent to the iterative process of hypothesis testing. However, we do introduce a range of biases in our quest for novelty. We tend to perform only the experiments most likely to return a positive result. 
In preclinical medicine this means turning a blind eye to those critical experiments that might reduce the “saleability” of a hypothesis but which are critical if for example a new drug is to survive the rigors of the real world found in the clinic. We also rarely ask whether the publishing researchers made reasonable efforts such as randomisation and blinding to avoid introduction of systematic bias. If they didn't all report doing so, stratifying the data can reveal the extent of such bias and might discourage or alternatively support further effort.</p><p>Small underpowered experiments are also easier to perform and because of the play of chance and a poor appreciation and application of the statistics of hypothesis testing can return a surprisingly high proportion of false positive results.</p><p>While replication studies confirming the results of others remain frowned upon by the research community as a whole as derivative and un-original and are actively discouraged or even forbidden by some ethical review boards and funding bodies, false positives will continue to go undetected. Systematic review and meta-analysis can be used to aggregate individual study data to determine whether the overall data set supports the presence of a real effect.</p><p>These small decisions and similar decisions by reviewers and editors and grant and promotions panels all have the ability to skew the reporting and conduct of science. Because experimental outcomes have a statistically defined distribution about the true result, when enough data is available meta-analysis of systematically collected data can detect and quantify the effect of any publication bias.</p><p>While not biologically interesting this data is important for the researcher. A nearly complete data distribution is likely to indicate that a molecule, for example a new drug candidate, behaves as advertised. A highly skewed data distribution might indicate statistically anomalous publication of a scattering of positive results while the majority of truly neutral or negative data remain in researchers' filing cabinets. While most researchers understand intellectually that publication bias exists they do not view it as a high priority. This might change when EBPM highlights the impact it might be having on their specific domain of the research world. 
Active suppression of data because its publication might harm vested interests might also be reduced if scientific advisory boards and their ilk demanded to see the distribution of all available data before giving their blessing to investment decisions.</p><p>EBPM is an important step towards the goal of understanding the strengths and weaknesses of the data we use to make important decisions and of ensuring we use the best available data in making those decisions.</p>\",\"PeriodicalId\":90826,\"journal\":{\"name\":\"Evidence-based preclinical medicine\",\"volume\":\"1 1\",\"pages\":\"1-3\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-04-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1111/ebm2.3\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Evidence-based preclinical medicine\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/ebm2.3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evidence-based preclinical medicine","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/ebm2.3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Systematic review and meta-analysis are powerful analytical tools. The Cochrane Collaboration, formed in 1993, provides an excellent example of the use of these tools to gather the best evidence regarding the efficacy of interventions in clinical medicine. The use of these tools, however, is not widespread in preclinical science. Evidence-Based Preclinical Medicine (EBPM) is therefore a new online, peer-reviewed, open access journal designed to provide a vehicle which fosters the systematic capture and rigorous analysis of all available basic science data on questions relevant to human health. By doing so we aim to raise the standards of preclinical research and improve the efficiency with which preclinical data is translated into improvements in human health.
Industrial, agricultural and environmental toxicology, the processes of drug discovery and evaluation, disease risk factor modelling, pre- and post-disease behavioural modification and early discovery science are all areas where systematic capture of all available data will accelerate our ability to improve human health. The application of rigorous analytical techniques which can give a realistic appreciation of the quality, breadth and potential importance of the available evidence will help researchers decide which hypotheses should be explored further, will identify the presence and likely impact of confounding biases, and will help health professionals decide which findings will have an impact on people.
Most scientists would like to believe that the systems required for these aims are already in place. However, the explosion in the volume of available data makes reliance on traditional systems untenable.
The problems start with the way we portray science and the aspirations this engenders. In the mass media, textbooks and popular histories of science and medicine, the process of discovery is commonly portrayed as a series of Eureka moments: giant leaps forward made by the greatest minds of an era. But this is not the process. Around the world, teams of scientists nibble away at a problem; new ideas are circulated and considered, and experiments are designed and performed. Many ideas and experiments are dead ends and lead nowhere. But since we learn from our mistakes, knowing how things don't happen refines our knowledge base and nudges us ever closer to the truth by allowing more scientists to focus on the threads that do reveal the true pattern of life.
Two of the most famous quotes in science speak directly to these issues. Louis Pasteur's “Chance favours only the prepared mind” makes it clear that you have to understand a field if you are to contribute to it. Isaac Newton's “If I have seen further it is by standing on the shoulders of giants” is perhaps more important because it also acknowledges that science is an incremental process. Only a fortunate few are in the right place at the right time, and with the right education and knowledge base, to finally understand a larger-than-normal fragment of the puzzle.
The beauty, but also one of the problems, of science is that it is not a jigsaw puzzle with clearly defined edges. As we learn more we appreciate that there is still more to learn and our horizons expand. With this expansion comes more data for the individual to consume, assimilate and understand sufficiently well to design the next experiment.
For most of the history of science, speed of communication limited a researcher's ability to gather all of the data. Today the opposite is true. The post-war industrialisation of science and the ease of communication mean most fields have more data than any individual can readily deal with. For example, from the 1930s through to 1944, fewer than 50 papers a year mentioned the brain in their title, abstract or keywords. By the 1950s an inexorable increase had begun, and by 1968 the field of neuroscience, by this simple criterion alone, exceeded 10,000 papers a year. In 2012 more than 70,000 papers fulfilling this criterion were published. The constraints of time now force us to be selective in what we read: perhaps ~2,000 papers a year if we devote half our working time to reading and a generous 30 minutes to each paper. We should not be surprised that our systems for communicating and funding science, and for judging scientists by their performance within those systems, have grown to value novelty.
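As a rough check on the reading-capacity figure above, a back-of-the-envelope calculation (assuming, purely for illustration, 48 working weeks of 40 hours each) lands close to that ~2,000-papers-a-year ceiling:

```python
# Back-of-the-envelope check on reading capacity; the working year is an
# illustrative assumption, not a figure taken from the editorial.
working_weeks = 48        # assumed working weeks per year
hours_per_week = 40       # assumed working hours per week
minutes_per_paper = 30    # the "generous 30 minutes" per paper

reading_minutes = working_weeks * hours_per_week * 60 / 2  # half of all working time
papers_per_year = reading_minutes / minutes_per_paper
print(round(papers_per_year))  # 1920 -- roughly 2000 papers a year
```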
It might be argued that none of this matters for “Blue Sky” discovery science. After all, there are plenty of discoveries still to be made, and we will only want to follow up the positive ones anyway! This approach is inherently wasteful. For every unreported neutral or negative experiment, a series of unwitting future scientists will have the same “novel” idea and repeat the same experiments. It is perhaps ironic that, over time, negative and neutral studies will be the most highly reproduced, but no one will ever know.
In preclinical medicine the effects of these problems are amplified and become pernicious. Incomplete knowledge doesn't just contribute to financial risk but to a real risk of injury or death to volunteers and patients exposed to novel but poorly understood chemicals. If only the positive experiments with a new candidate drug are published and the neutral or negative results remain hidden, the field will believe the drug works when in fact it does not. Progression to clinical trial will be wasteful, will expose patients to the risk of unforeseen side effects, and will make finding a drug that does work less likely, because human and financial resources will then be less available.
Helping scientists deal with the volume of available data and understand these risks is not well served by the traditional narrative review written by an individual or a small group of authors. However honest, well-read and well-intentioned the reviewers may be, the reader has no knowledge of what was left out or the reasons for leaving it out. The traditional narrative reviewer is as subject to the fashions of the field as anyone else and is blind to the impact of publication and other biases within the dataset. Moreover, as a species we are stimulated by novelty, and so few narrative reviews devote column space to what didn't work. Yet this information is critical if we are to prevent a growing vortex of ever more wasteful, uninformed false starts.
Systematic review provides a scientific approach to the collation and interpretation of large volumes of data. Simply detailing the search strategy used and defining inclusion and exclusion criteria allows readers to judge for themselves whether the writers have taken a rigorous approach to finding relevant data, and it provides that critical element of science: a defined methodology which allows others to confirm and extend the results.
Electronic dissemination of data means that the results of systematic review can now and will increasingly go beyond just metaphorically joining the dots. Meta-analysis allows the data from systematic review to be aggregated and re-analysed and allows the researcher to discover new trends that are rarely evident within single published data sets or in narrative reviews of these data sets.
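To make the idea of aggregation concrete, here is a minimal sketch of fixed-effect, inverse-variance pooling; the four study results are invented for illustration, and a real synthesis would normally use a dedicated meta-analysis package and consider random-effects models and heterogeneity.

```python
import math

# Hypothetical study results: (effect size, standard error).
# Values are invented for illustration only.
studies = [(0.42, 0.20), (0.15, 0.25), (0.55, 0.30), (-0.05, 0.22)]

# Fixed-effect, inverse-variance pooling: more precise studies get more weight.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```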
In studies of disease, do the results from one animal model point to involvement of a specific mechanism that can be targeted? Is there a clear dose-response relationship between toxicant exposure and ill health? If, in transplantation experiments, the choice of animal model has more impact on outcome than variations in stem cell biology, what should we do next? Has a study been replicated so often in animals that the outcome is beyond reasonable doubt and no further replications are required, or is more data still needed?
A misguided trust in the homogeneity of laboratory experimentation, and a very real understanding of the extra costs entailed, lead many researchers to perform experiments that are too small and that are not protected by randomisation and blinding against the perverse elements of human nature and unforeseen experimental variables.
No individual research study or body of evidence is perfect, and by and large we think we understand the things that can go wrong in the scientific process. Honest misinterpretation, as a body of data grows, is inherent to the iterative process of hypothesis testing. However, we do introduce a range of biases in our quest for novelty. We tend to perform only the experiments most likely to return a positive result. In preclinical medicine this means turning a blind eye to those experiments that might reduce the “saleability” of a hypothesis but which are critical if, for example, a new drug is to survive the rigours of the real world found in the clinic. We also rarely ask whether the publishing researchers made reasonable efforts, such as randomisation and blinding, to avoid introducing systematic bias. Where not all studies report doing so, stratifying the data by these measures can reveal the extent of such bias and might discourage, or alternatively support, further effort.
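As a minimal sketch of such stratification (with invented numbers), pooling studies separately by whether they reported randomisation will often show a larger apparent effect in the subgroup that did not:

```python
# Stratified pooling by reported study quality; data are invented for illustration.
# Each entry: (effect size, standard error, reported randomisation?).
studies = [
    (0.60, 0.25, False), (0.55, 0.30, False), (0.50, 0.28, False),
    (0.25, 0.20, True),  (0.20, 0.18, True),  (0.15, 0.22, True),
]

def pooled(subset):
    """Fixed-effect, inverse-variance pooled estimate for (effect, se) pairs."""
    weights = [1 / se ** 2 for _, se in subset]
    return sum(w * es for (es, _), w in zip(subset, weights)) / sum(weights)

randomised = [(es, se) for es, se, r in studies if r]
not_randomised = [(es, se) for es, se, r in studies if not r]
print(f"randomised studies: {pooled(randomised):.2f}; "
      f"unrandomised studies: {pooled(not_randomised):.2f}")
```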
Small, underpowered experiments are also easier to perform and, because of the play of chance and a poor appreciation and application of the statistics of hypothesis testing, can return a surprisingly high proportion of false positive results.
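A short worked example, under assumed numbers, shows how bad this can get: if only 10% of the hypotheses being tested are actually true, the significance threshold is 0.05 and a small study has only 20% power, then fewer than a third of “positive” results reflect a real effect.

```python
# Positive predictive value of a single "significant" result; the prior and
# power figures are illustrative assumptions, not estimates from the field.
prior_true = 0.10  # assumed fraction of tested hypotheses that are true
alpha = 0.05       # conventional false positive rate
power = 0.20       # typical of a small, underpowered study

true_positives = prior_true * power
false_positives = (1 - prior_true) * alpha
ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.2f}")  # ~0.31: most "positive" results here are false
```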
While replication studies confirming the results of others are frowned upon by the research community as a whole as derivative and unoriginal, and are actively discouraged or even forbidden by some ethical review boards and funding bodies, false positives will continue to go undetected. Systematic review and meta-analysis can be used to aggregate individual study data to determine whether the overall data set supports the presence of a real effect.
These small decisions, and similar decisions by reviewers, editors, and grant and promotions panels, all have the ability to skew the reporting and conduct of science. Because experimental outcomes have a statistically defined distribution about the true result, when enough data is available, meta-analysis of systematically collected data can detect and quantify the effect of any publication bias.
While not biologically interesting, this data is important for the researcher. A nearly complete data distribution is likely to indicate that a molecule, for example a new drug candidate, behaves as advertised. A highly skewed data distribution might indicate statistically anomalous publication of a scattering of positive results while the majority of truly neutral or negative data remain in researchers' filing cabinets. While most researchers understand intellectually that publication bias exists, they do not view it as a high priority. This might change when EBPM highlights the impact it might be having on their specific domain of the research world. Active suppression of data, because its publication might harm vested interests, might also be reduced if scientific advisory boards and their ilk demanded to see the distribution of all available data before giving their blessing to investment decisions.
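One crude way to flag the skewed distribution described above is to ask whether small, imprecise studies systematically report larger effects than large, precise ones; the sketch below uses invented numbers, and a formal analysis would rely on established methods such as funnel plots, Egger's regression or trim-and-fill.

```python
# Crude funnel-asymmetry check: correlate effect size with standard error.
# Data are invented for illustration; a strongly positive correlation is a
# warning sign that neutral or negative results may be missing from the record.
effects = [0.90, 0.80, 0.70, 0.45, 0.40, 0.35, 0.30]
ses     = [0.50, 0.45, 0.40, 0.20, 0.15, 0.12, 0.10]

n = len(effects)
mean_e, mean_s = sum(effects) / n, sum(ses) / n
cov = sum((e - mean_e) * (s - mean_s) for e, s in zip(effects, ses)) / n
sd_e = (sum((e - mean_e) ** 2 for e in effects) / n) ** 0.5
sd_s = (sum((s - mean_s) ** 2 for s in ses) / n) ** 0.5
print(f"correlation(effect, SE) = {cov / (sd_e * sd_s):.2f}")  # strongly positive
```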
EBPM is an important step towards the goal of understanding the strengths and weaknesses of the data we use to make important decisions and of ensuring we use the best available data in making those decisions.