Background: Meta-research studies, defined as "research on research", should transparently report the search methods used to identify the assessed research. Currently, there is no published evaluation of search methods reporting in meta-research studies. The aim of this study was to assess the characteristics of search methods in dental meta-research studies and to identify factors associated with the completeness of the reported search strategies.
Methods: With a focus on the assessment of reporting quality and methodological quality, we searched the Web of Science Core Collection database for dental meta-research studies published from the database's inception to February 13, 2024. Extracted data included characteristics of the examined meta-research studies, of their authors and journals, and of the search methods reported in those studies. Logistic regression models were applied to examine the associations between relevant variables and search strategy reporting completeness.
Results: The search generated 3,774 documents, and 224 meta-research studies were included in the final analysis. Nearly all studies (99.6%) disclosed their general search methods, but only 130 studies (58%) provided both keywords and Boolean operators. Regression analyses indicated that meta-research studies that were published more recently, were prospectively registered, had a shorter interval between the search and publication, imposed no language restrictions, and involved a librarian were more likely to report a more complete search strategy.
Conclusion: The results highlight the importance of unrestricted language searches, structured methodologies and librarian support in improving the quality and transparency of reporting search strategies in dental meta-research.
Background and objectives: In this third of a 3-part series, we use net benefit (NB) graphs to evaluate a risk model that divides D-dimer results into 8 intervals to estimate the probability of pulmonary embolism (PE). This demonstrates the effect of miscalibration on NB graphs.
Method: We evaluate the risk model's performance using pooled data on 6013 participants from 5 PE diagnostic management studies. For a range of values of the "exchange rate" (w, the treatment threshold odds), we obtained the NB of applying the risk model by subtracting the number of unnecessary treatments weighted by the exchange rate from the number of appropriate treatments and then dividing by the population size.
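The NB calculation described above can be sketched as follows. This is a minimal illustration of the formula only; the counts in the example are hypothetical and are not taken from the pooled PE dataset.

```python
def net_benefit(appropriate: int, unnecessary: int, n: int, w: float) -> float:
    """Net benefit of applying a risk model at exchange rate w.

    NB = (appropriate treatments - w * unnecessary treatments) / n,
    where w is the treatment threshold expressed as odds,
    e.g. a threshold probability p gives w = p / (1 - p).
    """
    return (appropriate - w * unnecessary) / n


# Hypothetical example: threshold probability of 2% -> w = 0.02 / 0.98
w = 0.02 / (1 - 0.02)
nb = net_benefit(appropriate=100, unnecessary=900, n=6013, w=w)
```

Repeating this calculation over a range of w values produces the NB graph discussed in the Results.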
Results: In NB graphs, in which the x-axis is scaled linearly with the exchange rate w, miscalibration causes vertical changes in NB. If the risk model overestimates risk, as in this example, the NB graph for the risk model shows upward vertical jumps. These jumps reflect the sudden gain in NB from reduced overtreatment when the treatment threshold first exceeds the overestimated predicted risks.
Conclusion: Calculating NB is a logical approach to quantifying the value of a diagnostic test or risk prediction model. In the same dataset at the same treatment threshold probability, the risk model with the higher net benefit is the better model in that dataset. Most net benefit calculations omit the harm of doing the test or applying the risk model, but if it is nontrivial, this harm can be subtracted from the net benefit.