Smartphone biometrics in the EMR: is the 5th vital sign here? @JCOCCI_ASCO commentary by Sankaran and @RahulBanerjeeMD here.
Purpose: Large language model (LLM) artificial intelligence tools may help physicians appeal insurer denials of prescribed medical services, a task that delays patient care and contributes to burnout. We evaluated LLM performance at this task for denials of radiotherapy services.
Methods: We evaluated generative pretrained transformer 3.5 (GPT-3.5; OpenAI, San Francisco, CA), GPT-4, GPT-4 with internet search functionality (GPT-4web), and GPT-3.5ft. The latter was developed by fine-tuning GPT-3.5 via an OpenAI application programming interface with 53 examples of appeal letters written by radiation oncologists. Twenty test prompts with simulated patient histories were programmatically presented to the LLMs, and output appeal letters were scored by three blinded radiation oncologists for language representation, clinical detail inclusion, clinical reasoning validity, literature citations, and overall readiness for insurer submission.
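As a rough illustration of the workflow described above, the sketch below shows how a small set of appeal-letter examples could be used to fine-tune GPT-3.5 and how test prompts could be submitted programmatically via the OpenAI Python SDK; the file names, model identifiers, and prompt text are hypothetical placeholders, not the authors' materials.

```python
# Illustrative sketch (not the authors' code): fine-tune GPT-3.5 on appeal-letter
# examples, then programmatically prompt the resulting model with a test case.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1) Upload a JSONL file of {"messages": [...]} training examples (e.g., 53 appeal letters).
training_file = client.files.create(file=open("appeal_letters.jsonl", "rb"), purpose="fine-tune")

# 2) Launch the fine-tuning job on the base chat model.
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")

# 3) Once the job completes, prompt the fine-tuned model with a simulated patient history.
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo:example-org::abc123",  # placeholder fine-tuned model ID
    messages=[
        {"role": "system", "content": "You are a radiation oncologist writing an insurance appeal letter."},
        {"role": "user", "content": "Patient history: ... Denied service: SBRT ... Draft an appeal letter."},
    ],
)
print(response.choices[0].message.content)
```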
Results: Interobserver agreement between radiation oncologists' scores was moderate or better for all domains (Cohen's kappa coefficients: 0.41-0.91). GPT-3.5, GPT-4, and GPT-4web wrote letters that were, on average, linguistically clear, summarized the provided clinical histories without confabulation, reasoned appropriately, and were scored as useful for expediting the insurance appeal process. GPT-4 and GPT-4web letters demonstrated superior clinical reasoning and were readier for submission than GPT-3.5 letters (P < .001). Fine-tuning increased confabulation by GPT-3.5ft and compromised its performance compared with the other LLMs across all domains (P < .001). All LLMs, including GPT-4web, were poor at supporting clinical assertions with existing, relevant, and appropriately cited primary literature.
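For readers unfamiliar with the agreement statistic reported above, a minimal sketch of pairwise interobserver agreement follows, assuming each reviewer's domain scores are stored as equal-length lists; the scores shown are invented.

```python
# Minimal sketch: Cohen's kappa between two reviewers scoring the same letters.
from sklearn.metrics import cohen_kappa_score

reviewer_a = [3, 2, 4, 4, 1, 3]  # e.g., clinical-reasoning scores from reviewer A
reviewer_b = [3, 2, 4, 3, 1, 3]  # the same letters scored by reviewer B

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 0.41-0.60 is conventionally "moderate" agreement
```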
Conclusion: When prompted appropriately, three commercially available LLMs drafted letters that physicians deemed would expedite appealing insurer denials of radiotherapy services. LLMs may reduce the clerical workload this task places on providers. However, LLM performance worsened when the model was fine-tuned with a task-specific, small training data set.
Purpose: Few studies have used natural language processing (NLP) in the context of non-small cell lung cancer (NSCLC). This study aimed to validate the application of an NLP model to an NSCLC cohort by extracting NSCLC concepts from free-text medical notes and converting them to structured, interpretable data.
Methods: Patients with a lung neoplasm, NSCLC histology, and treatment information in their notes were selected from a repository of over 27 million patients. From these, 200 were randomly selected for this study with the longest and the most recent note included for each patient. An NLP model developed and validated on a large solid and blood cancer oncology cohort was applied to this NSCLC cohort. Two certified tumor registrars and a curator abstracted concepts from the notes: neoplasm, histology, stage, TNM values, and metastasis sites. This manually abstracted gold standard was compared with the NLP model output. Precision and recall scores were calculated.
Results: The NLP model extracted the NSCLC concepts with excellent precision and recall, with the following scores, respectively: lung neoplasm 100% and 100%, NSCLC histology 99% and 88%, histology correctly linked to neoplasm 98% and 79%, stage value 98.8% and 92%, stage TNM value 93% and 98%, and metastasis site 97% and 89%. High precision reflects a low number of false positives, indicating that extracted concepts are likely accurate. High recall indicates that the model captured most of the desired concepts.
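A brief worked sketch of the reported metrics, assuming concept-level counts of true positives, false positives, and false negatives against the manually abstracted gold standard (the counts below are invented):

```python
# Precision and recall from concept-level counts (hypothetical example).
tp, fp, fn = 88, 1, 12  # e.g., NSCLC histology mentions

precision = tp / (tp + fp)  # few false positives -> extracted concepts are likely accurate
recall = tp / (tp + fn)     # few false negatives -> most desired concepts were captured
print(f"precision={precision:.2%}, recall={recall:.2%}")
```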
Conclusion: This study validates that Optum's oncology NLP model has high precision and recall with clinical real-world data and is a reliable model to support research studies and clinical trials. This validation study shows that our nonspecific solid tumor and blood cancer oncology model is generalizable to successfully extract clinical information from specific cancer cohorts.
Purpose: Categorizing patients with cancer by their disease stage can be an important tool when conducting administrative claims-based studies. As claims databases frequently do not capture this information, algorithms are increasingly used to define disease stage. To our knowledge, to date, no study has used an algorithm to categorize patients with bladder cancer (BC) by disease stage (non-muscle-invasive BC [NMIBC], muscle-invasive BC [MIBC], or locally advanced/metastatic urothelial carcinoma [la/mUC]) in a US-based health care claims database.
Methods: A claims-based algorithm was developed to categorize patients by disease stage on the basis of the administrative claims portion of the SEER-Medicare linked data. The algorithm was validated against a reference SEER registry, and the algorithm's parameters were iteratively modified to improve its performance. Patients were included if they had an initial diagnosis of BC between January 2016 and December 2017 recorded in SEER registry data. Medicare claims data were available for these patients until December 31, 2019. The algorithm was evaluated by assessing percentage agreement, Cohen's kappa (κ), specificity, positive predictive value (PPV), and negative predictive value (NPV) against the SEER categorization.
Results: A total of 15,484 patients with SEER-confirmed BC were included: 10,991 (71.0%) with NMIBC, 3,645 (23.5%) with MIBC, and 848 (5.5%) with la/mUC. After multiple rounds of algorithm optimization, the final algorithm had 82.5% agreement with SEER, with a κ of 0.58, PPVs of 87.0% for NMIBC and 76.8% for MIBC, and a high NPV of 98.0% for la/mUC.
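To make the validation metrics concrete, the following sketch compares hypothetical algorithm-assigned stages against SEER-confirmed stages to obtain percentage agreement, κ, and per-stage PPV/NPV; the labels are invented and not drawn from SEER-Medicare data.

```python
# Illustrative sketch with hypothetical labels: algorithm output vs. SEER reference.
import numpy as np
from sklearn.metrics import cohen_kappa_score

seer = np.array(["NMIBC", "NMIBC", "MIBC", "la/mUC", "NMIBC", "MIBC"])
algo = np.array(["NMIBC", "MIBC",  "MIBC", "la/mUC", "NMIBC", "NMIBC"])

agreement = (seer == algo).mean()
kappa = cohen_kappa_score(seer, algo)

def ppv_npv(stage):
    # Treat the given stage as "positive" and compute PPV and NPV from counts.
    tp = ((algo == stage) & (seer == stage)).sum()
    fp = ((algo == stage) & (seer != stage)).sum()
    tn = ((algo != stage) & (seer != stage)).sum()
    fn = ((algo != stage) & (seer == stage)).sum()
    return tp / (tp + fp), tn / (tn + fn)

for stage in ["NMIBC", "MIBC", "la/mUC"]:
    ppv, npv = ppv_npv(stage)
    print(f"{stage}: PPV={ppv:.1%}, NPV={npv:.1%}")
print(f"agreement={agreement:.1%}, kappa={kappa:.2f}")
```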
Conclusion: This claims-based algorithm could be a useful approach for researchers conducting claims-based studies that categorize patients with BC by disease stage at diagnosis.
Purpose: Emerging evidence suggests that artificial intelligence can assist in the timely detection of prostate cancer and in optimizing the therapeutic approach. The conventional perspective on radiomics treats segmentation and the extraction of radiomic features as an independent, sequential process, but it is not necessary to adhere to this viewpoint. In this study, we show that besides generating masks from which radiomic features can be extracted, prostate segmentation and reconstruction models provide valuable information in their feature space, which can improve the quality of radiomic signature models for disease aggressiveness classification.
Materials and methods: We perform 2,244 experiments with deep learning features extracted from 13 different models trained using different anatomic zones and characterize how modeling decisions, such as deep feature aggregation and dimensionality reduction, affect performance.
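A conceptual sketch of one such modeling configuration follows (not the authors' pipeline): deep features from a trained model are reduced with PCA, concatenated with radiomic features, and passed to a classifier; all arrays, dimensions, and labels are placeholders.

```python
# Conceptual sketch of deep-feature aggregation + dimensionality reduction + fusion.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

n_patients = 200
deep_features = np.random.rand(n_patients, 512)      # e.g., pooled encoder activations per patient
radiomic_features = np.random.rand(n_patients, 100)   # e.g., shape/texture features from the gland mask
labels = np.random.randint(0, 2, n_patients)           # disease aggressiveness (binary placeholder label)

deep_reduced = PCA(n_components=16).fit_transform(deep_features)  # dimensionality reduction step
X = np.hstack([deep_reduced, radiomic_features])                  # feature-space fusion

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, labels)
```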
Results: While models combining deep features from the full gland with radiomic features consistently improved disease aggressiveness prediction performance, other configurations were detrimental. Our results suggest that the use of deep features can be beneficial, but an appropriate and comprehensive assessment is necessary to ensure that their inclusion does not harm predictive performance.
Conclusion: The study findings reveal that incorporating deep features derived from autoencoder models trained to reconstruct the full prostate gland (both zonal models show worse performance than radiomics-only models), combined with radiomic features, often leads to a statistically significant increase in model performance for disease aggressiveness classification. Additionally, the results demonstrate that the choice of feature selection is key to achieving good performance, with principal component analysis (PCA) and PCA + relief being the best approaches, and that there is no clear difference between the three proposed latent representation extraction techniques.
Purpose: Data on end-of-life care (EOLC) quality, assessed through evidence-based quality measures (QMs), are difficult to obtain. Natural language processing (NLP) enables efficient quality measurement and is not yet used for children with serious illness. We sought to validate a pediatric-specific EOLC-QM keyword library and evaluate EOLC-QM attainment among childhood cancer decedents.
Methods: In a single-center cohort of children with cancer who died between 2014 and 2022, we piloted a rule-based NLP approach to examine the content of clinical notes in the last 6 months of life. We identified documented discussions of five EOLC-QMs: goals of care, limitations to life-sustaining treatments (LLST), hospice, palliative care consultation, and preferred location of death. We assessed performance of NLP methods, compared with gold standard manual chart review. We then used NLP to characterize proportions of decedents with documented EOLC-QM discussions and timing of first documentation relative to death.
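A minimal sketch of a rule-based keyword approach of this kind is shown below, assuming a small keyword library per EOLC-QM and simple negation filtering; the keywords and note text are hypothetical and far smaller than a validated library.

```python
# Minimal rule-based keyword matcher with naive negation filtering (illustrative only).
import re

KEYWORDS = {
    "goals_of_care": [r"goals of care", r"\bGOC\b"],
    "hospice": [r"\bhospice\b"],
    "limitations_LST": [r"\bDNR\b", r"do not resuscitate", r"comfort care"],
}
# Negation cue within ~30 characters immediately preceding a keyword match.
NEGATION = re.compile(r"\b(no|not|denies|without)\b[^.]{0,30}$", re.IGNORECASE)

def detect_measures(note_text):
    hits = {}
    for measure, patterns in KEYWORDS.items():
        for pattern in patterns:
            for match in re.finditer(pattern, note_text, re.IGNORECASE):
                preceding = note_text[:match.start()]
                if not NEGATION.search(preceding):  # skip negated mentions
                    hits.setdefault(measure, []).append(match.group())
    return hits

print(detect_measures("Family meeting held to discuss goals of care; hospice referral placed."))
```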
Results: Among 101 decedents, nearly half were minorities (Hispanic/Latinx [24%], non-Hispanic Black/African American [20%]), female (48%), or diagnosed with solid tumors (43%). Through iterative refinement, our keyword library achieved robust performance statistics (for all EOLC-QMs, F1 score = 1.0). Most decedents had documented discussions regarding goals of care (83%), LLST (83%), and hospice (74%). Fewer decedents had documented discussions regarding palliative care consultation (49%) or preferred location of death (36%). For all five EOLC-QMs, first documentation occurred, on average, >30 days before death.
Conclusion: A high proportion of decedents attained specified EOLC-QMs more than 30 days before death. Our findings indicate that NLP is a feasible approach to measuring quality of care for children with cancer at the end of life and is ripe for multi-center research and quality improvement.
Purpose: A previous study demonstrated that power against the (unobserved) true effect for the primary end point (PEP) of most phase III oncology trials is low, suggesting an increased risk of false-negative findings in the field of late-phase oncology. Fitting models with prognostic covariates is a potential solution to improve power; however, the extent to which trials leverage this approach, and its impact on trial interpretation at scale, is unknown. To that end, we hypothesized that phase III trials using multivariable PEP analyses are more likely to demonstrate superiority versus trials with univariable analyses.
Methods: PEP analyses were reviewed from trials registered on ClinicalTrials.gov. Adjusted odds ratios (aORs) were calculated by logistic regressions.
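The sketch below illustrates how adjusted odds ratios with 95% CIs can be derived from a logistic regression; the data frame and adjustment covariates are hypothetical and not the study's variables.

```python
# Illustrative aOR calculation via logistic regression (hypothetical trial-level data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.DataFrame({
    "pep_superior": np.random.randint(0, 2, 535),      # 1 = trial demonstrated PEP superiority
    "multivariable": np.random.randint(0, 2, 535),      # 1 = multivariable PEP analysis
    "industry_funded": np.random.randint(0, 2, 535),     # example adjustment covariate
    "sample_size": np.random.randint(100, 2000, 535),
})

model = smf.logit("pep_superior ~ multivariable + industry_funded + np.log(sample_size)",
                  data=trials).fit(disp=0)
aor = np.exp(model.params)                              # exponentiated coefficients = adjusted ORs
ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor.rename("aOR"), ci], axis=1))
```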
Results: Of the 535 trials enrolling 454,824 patients, 69% (n = 368) used a multivariable PEP analysis. Trials with multivariable PEP analyses were more likely to demonstrate PEP superiority (57% [209 of 368] v 42% [70 of 167]; aOR, 1.78 [95% CI, 1.18 to 2.72]; P = .007). Among trials with a multivariable PEP model, 16 conditioned on covariates and 352 stratified by covariates. However, 108 (35%) of 312 trials with stratified analyses lost power by categorizing a continuous variable, which was especially common among immunotherapy trials (aOR, 2.45 [95% CI, 1.23 to 4.92]; P = .01).
Conclusion: Trials increasing power by fitting multivariable models were more likely to demonstrate PEP superiority than trials with unadjusted analysis. Underutilization of conditioning models and empirical power loss associated with covariate categorization required by stratification were identified as barriers to power gains. These findings underscore the opportunity to increase power in phase III trials with conventional methodology and improve patient access to effective novel therapies.
Purpose: Identifying cancer symptoms in electronic health record (EHR) narratives is feasible with natural language processing (NLP). However, more efficient NLP systems are needed to detect various symptoms and distinguish observed symptoms from negated symptoms and medication-related side effects. We evaluated the accuracy of NLP in (1) detecting 14 symptom groups (ie, pain, fatigue, swelling, depressed mood, anxiety, nausea/vomiting, pruritus, headache, shortness of breath, constipation, numbness/tingling, decreased appetite, impaired memory, disturbed sleep) and (2) distinguishing observed symptoms in EHR narratives among patients with cancer.
Methods: We extracted 902,508 notes for 11,784 unique patients diagnosed with cancer and developed a gold standard corpus of 1,112 notes labeled for presence or absence of 14 symptom groups. We trained an embeddings-augmented NLP system integrating human and machine intelligence and conventional machine learning algorithms. NLP metrics were calculated on a gold standard corpus subset for testing.
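A conceptual sketch of an embeddings-augmented classifier of this kind follows, assuming pretrained word embeddings are averaged per note and passed to a supervised model; the embedding source, notes, and labels are placeholders rather than the authors' system.

```python
# Conceptual sketch: average pretrained word vectors per note, then train a classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def note_vector(note_tokens, embeddings, dim=300):
    """Average pretrained word vectors over a note's tokens."""
    vectors = [embeddings[t] for t in note_tokens if t in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

# embeddings: dict mapping token -> 300-d vector (e.g., loaded from a pretrained file)
# notes: list of tokenized notes; y_pain: 1 if pain is documented as observed, else 0
# X = np.vstack([note_vector(tokens, embeddings) for tokens in notes])
# clf = LogisticRegression(max_iter=1000).fit(X, y_pain)  # one binary model per symptom group
```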
Results: The interannotator agreement for labeling the gold standard corpus was excellent at 92%. The embeddings-augmented NLP model achieved the best performance (F1 score = 0.877). The highest NLP accuracy was observed for pruritus (F1 score = 0.937), while the lowest accuracy was for swelling (F1 score = 0.787). After classifying the entire data set with embeddings-augmented NLP, we found that 41% of the notes included symptom documentation. Pain was the most documented symptom (29% of all notes), while impaired memory was the least documented (0.7% of all notes).
Conclusion: We illustrated the feasibility of detecting 14 symptom groups in EHR narratives and showed that an embeddings-augmented NLP system outperforms conventional machine learning algorithms in detecting symptom information and differentiating observed symptoms from negated symptoms and medication-related side effects.
Purpose: Electronic health records (EHRs) are valuable information repositories that offer insights for enhancing clinical research on breast cancer (BC) using real-world data. The objective of this study was to develop a natural language processing (NLP) model specifically designed to extract structured data from BC pathology reports written in natural language.
Methods: During the initial phase, the algorithm's development cohort comprised 193 pathology reports from 116 patients with BC from 2012 to 2016. A rule-based NLP algorithm was applied to extract 26 variables for analysis and was compared with the manual extraction of data performed by both a data entry specialist and an oncologist. Following the first approach, the data set was expanded to include 513 reports, and a Named Entity Recognition (NER)-NLP model was trained and evaluated using K-fold cross-validation.
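As a simplified illustration of rule-based extraction from pathology text, the sketch below captures a few of the study's variables with regular expressions; the patterns and sample report are illustrative only, not the study's algorithm.

```python
# Simplified rule-based extraction of receptor status from a pathology report (illustrative).
import re

PATTERNS = {
    "estrogen_receptor": r"(?:estrogen receptor|ER)[:\s]*(positive|negative|\d{1,3}\s*%)",
    "progesterone_receptor": r"(?:progesterone receptor|PR)[:\s]*(positive|negative|\d{1,3}\s*%)",
    "her2": r"HER2(?:/neu)?[:\s]*(positive|negative|equivocal|\d\+)",
    "ki67": r"Ki-?67[:\s]*(\d{1,3})\s*%",
}

def extract_variables(report_text):
    results = {}
    for variable, pattern in PATTERNS.items():
        match = re.search(pattern, report_text, re.IGNORECASE)
        results[variable] = match.group(1) if match else None
    return results

sample = "ER: positive (90%), PR: negative, HER2: 1+, Ki-67: 15%."
print(extract_variables(sample))
```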
Results: The first approach led to a concordance analysis, which revealed 82.9% agreement between the algorithm and the oncologist, whereas concordance between the data entry specialist and the oncologist was 90.8%. The second training approach introduced an NER-NLP model, which achieved a promising overall accuracy of 97.8%. Notably, the model performed especially well for parameters such as estrogen receptor, progesterone receptor, human epidermal growth factor receptor 2, and Ki-67 (F1 score = 1.0).
Conclusion: The present study aligns with the rapidly evolving field of artificial intelligence (AI) applications in oncology, seeking to expedite the development of complex cancer databases and registries. The results of the model are currently undergoing postprocessing procedures to organize the data into tabular structures, facilitating their utilization in real-world clinical and research endeavors.