Objectives: To evaluate human-based Medical Subject Headings (MeSH) allocation in articles about 'patient simulation', a technique that mimics real-life patient scenarios with controlled patient responses.
Methods: A validation set of articles indexed before the Medical Text Indexer-Auto implementation (in 2019) was created from 150 word combinations potentially referring to 'patient simulation'. Articles were classified into four categories of simulation studies. Allocation of seven MeSH terms (Simulation Training, Patient Simulation, High Fidelity Simulation Training, Computer Simulation, Patient-Specific Modelling, Virtual Reality, and Virtual Reality Exposure Therapy) was investigated. Accuracy metrics (sensitivity and precision, i.e. positive predictive value) were calculated for each category of studies.
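The two accuracy metrics named above reduce to simple ratios over true positives, false negatives, and false positives. A minimal sketch, using hypothetical counts rather than figures from the study:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Share of relevant articles that actually received the MeSH term."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Positive predictive value: share of articles indexed with the
    MeSH term that were actually relevant."""
    return tp / (tp + fp)

# Hypothetical counts for one category/MeSH pair (not study data):
# 30 relevant articles correctly tagged (TP), 20 relevant articles
# missed (FN), 10 irrelevant articles tagged (FP).
print(round(100 * sensitivity(30, 20), 1))  # 60.0
print(round(100 * precision(30, 10), 1))    # 75.0
```

In this setup, low sensitivity means indexers often omit the expected MeSH term from relevant articles, while low precision means the term is frequently assigned to articles outside the category.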
Key findings: A set of 7213 articles was obtained from 53 different word combinations, of which 2634 were excluded as irrelevant. 'Simulated patient' and 'standardised/standardized patient' were the most used terms. The 4579 included articles, published in 1044 different journals, were classified into 'Machine/Automation' (8.6%), 'Education' (75.9%), and 'Practice audit' (11.4%); 4.1% were 'Unclear'. Articles were indexed with a median of 10 MeSH terms (IQR 8-13); however, 45.5% were not indexed with any of the seven MeSH terms. Patient Simulation was the most prevalent MeSH term (24.0%). Automation articles were more associated with the Computer Simulation MeSH (sensitivity = 54.5%; precision = 25.1%), while Education articles were associated with the Patient Simulation MeSH (sensitivity = 40.2%; precision = 80.9%). Practice audit articles were also skewed toward the Patient Simulation MeSH (sensitivity = 34.6%; precision = 10.5%).
Conclusions: Inconsistent use of free-text words related to patient simulation was observed, as well as inaccuracies in human-based MeSH assignments. These limitations can compromise the retrieval of relevant literature to support evidence synthesis exercises.