Purpose: Our study is motivated by evaluating the role of hematopoietic cell transplantation (HCT) after chimeric antigen receptor T-cell (CAR-T) therapy for acute lymphoblastic leukemia (ALL), a debated topic. Because patients may receive HCT at different times after CAR-T infusion, or never, post-CAR-T HCT should be treated as a time-varying covariate (TVC).
Methods: Standard Cox models and Kaplan-Meier (KM) curves (the naïve method) assume that TVC status is known and fixed at baseline, which can yield biased estimates. Landmark analysis is a popular alternative but depends on the chosen landmark time. The time-dependent (TD) Cox model is better suited to TVCs, although visualizing survival curves from it is complex. The newly proposed Smith-Zee method generates appropriate survival curves from TD Cox models.
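Concretely, a TD Cox model is fit on data in counting-process (start-stop) format, where each patient contributes one interval per covariate state. Below is a minimal sketch in Python with the lifelines package, on toy data, of the data structure this requires; it illustrates the setup only and is not the authors' R implementation.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Toy counting-process data: `hct` flips from 0 to 1 at transplant time, so a
# transplanted patient contributes a pre-HCT and a post-HCT interval.
intervals = pd.DataFrame({
    "id":    [1, 1, 2, 3, 3, 4],
    "start": [0.0, 6.0, 0.0, 0.0, 3.0, 0.0],
    "stop":  [6.0, 14.0, 10.0, 3.0, 8.0, 12.0],
    "hct":   [0, 1, 0, 0, 1, 0],   # time-varying covariate
    "event": [0, 1, 1, 0, 0, 0],   # event indicator at the end of each interval
})

ctv = CoxTimeVaryingFitter()
ctv.fit(intervals, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()  # the `hct` hazard ratio compares post-HCT with pre-/no-HCT time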
Results: To address these challenges, we developed an open-source R Shiny tool integrating multiple models (naïve Cox, landmark Cox, and TD Cox) and curves (naïve KM, landmark KM, Smith-Zee, and Extended KM) to facilitate TVC analysis. Reanalysis of post-CAR-T HCT's effect on leukemia-free survival (LFS) showed consistent results between naïve and TD Cox models, whereas landmark analyses varied by landmark time. A separate analysis of chronic graft-versus-host disease and survival showed substantial differences across statistical methods. Simulations revealed substantial bias in naïve methods when TVC changes occurred late relative to event times and minimal bias when they occurred early.
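The sensitivity to landmark time noted above is easy to reproduce: a landmark analysis keeps only patients still event-free at the landmark, classifies them by TVC status at that moment, and restarts the clock. A minimal sketch, assuming a hypothetical one-row-per-patient frame with `time`, `event`, and `hct_time` columns (not the Shiny tool's code):

```python
import pandas as pd
from lifelines import KaplanMeierFitter

def landmark_km(df: pd.DataFrame, landmark: float) -> dict:
    """Landmark KM: keep patients still event-free at `landmark`, group by
    whether HCT had occurred by then (NaN `hct_time` = never), reset the clock."""
    at_risk = df[df["time"] > landmark].copy()
    grouped = (at_risk["hct_time"] <= landmark).map(
        {True: "HCT by landmark", False: "no HCT by landmark"})
    fitters = {}
    for name, sub in at_risk.groupby(grouped):
        km = KaplanMeierFitter(label=name)
        km.fit(sub["time"] - landmark, event_observed=sub["event"])
        fitters[name] = km
    return fitters

# Rerunning with different landmarks (e.g., 3 vs 6 months) shows how the
# estimated group difference moves with the choice:
# curves = landmark_km(cohort, landmark=3.0)
```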
Conclusion: We recommend TD Cox models and Smith-Zee curves for robust TVC analysis. Our R Shiny tool supports standardized analyses without requiring data sharing, thereby promoting collaboration across different institutions and providing a practical tool to advance survival analysis in oncology research.
{"title":"Novel R Shiny Tool for Survival Analysis With Time-Varying Covariate in Oncology Studies: Overcoming Biases and Enhancing Collaboration.","authors":"Yimei Li, Yang Qiao, Fei Gao, Jordan Gauthier, Qiang Ed Zhang, Jenna Voutsinas, Wendy Leisenring, Ted Gooley, Corinne Summers, Alexandre Hirayama, Cameron J Turtle, Rebecca Gardner, Jarcy Zee, Qian Vicky Wu","doi":"10.1200/CCI-25-00225","DOIUrl":"10.1200/CCI-25-00225","url":null,"abstract":"<p><strong>Purpose: </strong>Our study is motivated by evaluating the role of hematopoietic cell transplantation (HCT) after chimeric antigen receptor T-cell (CAR-T) therapy for ALL, a debated topic. Because patients may receive HCT at different times after CAR-T infusion or never, HCT post-CAR-T should be considered as a time-varying covariate (TVC).</p><p><strong>Methods: </strong>Standard Cox models and Kaplan-Meier (KM) curves (naïve method) assume that TVC status is known and fixed at baseline, which can yield biased estimates. Landmark analysis is a popular alternative but depends on a chosen landmark time. Time-dependent (TD) Cox model is better suited for TVC although visualizing survival curves is complex. The newly proposed Smith-Zee method generates appropriate survival curves from TD Cox models.</p><p><strong>Results: </strong>To address these challenges, we developed an open-source R Shiny tool integrating multiple models (naïve Cox, landmark Cox, and TD Cox) and curves (naïve KM, landmark KM, Smith-Zee, and Extended KM) to facilitate TVC analysis. Reanalysis of post-CAR-T HCT's effect on leukemia-free survival (LFS) showed consistent results between naïve and TD Cox models, whereas landmark analyses varied by landmark time. A separate data analysis of chronic graft-versus-host disease and survival showed that substantial differences emerged across statistical methods. Simulations revealed increased bias in naïve methods when TVC changed late and minimal bias when TVC changes occurred early relative to time to events.</p><p><strong>Conclusion: </strong>We recommend TD Cox models and Smith-Zee curves for robust TVC analysis. Our R Shiny tool supports standardized analyses without requiring data sharing, thereby promoting collaboration across different institutions and providing a practical tool to advance survival analysis in oncology research.</p>","PeriodicalId":51626,"journal":{"name":"JCO Clinical Cancer Informatics","volume":"10 ","pages":"e2500225"},"PeriodicalIF":2.8,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12885575/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-01. Epub Date: 2026-02-12. DOI: 10.1200/CCI-25-00257
Adam P Yan, Emily Saso, Julia Shannon, Heather Laird, Alyssa Ramdeo, Robin Deliva, Samantha Baron, Bren Cardiff, Daniel Rosenfield, Ashley Graham, Mihir Ramnani, Zahra Syed, Denise Connolly, Allison Starr, Priya Patel, L Lee Dupuis, Lillian Sung
Purpose: Calls to implement routine symptom screening among pediatric oncology patients are increasing. Our objectives were to develop and evaluate the usability of Symptom Screening in Pediatrics (SSPedi), a validated patient-reported outcome tool, when integrated into the Epic electronic health record.
Methods: We developed self-report and proxy-report SSPedi in Epic's patient portal MyChart and enrolled patients with cancer age 12-18 years or their parents/guardians, and parents/guardians of patients with cancer age 2-18 years. Participants were enrolled in three cohorts of 10 participants each. A clinical research associate evaluated the participants' ability to correctly complete eight tasks, including finding and completing SSPedi on a scheduled day and when unscheduled, locating tips to manage symptoms, and viewing past SSPedi reports. Participants self-reported ease or difficulty in completing each task. SSPedi in Epic was refined after the enrollment of each cohort of 10 participants on the basis of feedback.
Results: We enrolled 30 participants, including 21 parents/guardians and nine patients. Overall, 60% correctly found SSPedi on a scheduled reminder day and 33% found it on an unscheduled day. Once found, 70% of participants could complete SSPedi correctly. Only 33% could correctly view SSPedi trends over time. By self-report, 20 of 30 participants (67%) found SSPedi easy or very easy to use overall; this increased to 100% in the final cohort of 10 participants.
Conclusion: We integrated SSPedi into Epic. Participants can successfully complete SSPedi when it is scheduled on a reminder day. They found it more challenging to complete SSPedi without a reminder and to view past SSPedi reports. Implementation will require patient and parent/guardian training and support.
{"title":"Integrating Symptom Screening in Pediatrics Into the Epic Electronic Health Record: Development and Acceptability for Pediatric Cancer Patients.","authors":"Adam P Yan, Emily Saso, Julia Shannon, Heather Laird, Alyssa Ramdeo, Robin Deliva, Samantha Baron, Bren Cardiff, Daniel Rosenfield, Ashley Graham, Mihir Ramnani, Zahra Syed, Denise Connolly, Allison Starr, Priya Patel, L Lee Dupuis, Lillian Sung","doi":"10.1200/CCI-25-00257","DOIUrl":"https://doi.org/10.1200/CCI-25-00257","url":null,"abstract":"<p><strong>Purpose: </strong>Calls to implement routine symptom screening among pediatric oncology patients are increasing. Objectives were to develop and evaluate the usability of Symptom Screening in Pediatrics (SSPedi), a validated patient reported outcome tool, when integrated into the Epic electronic health record.</p><p><strong>Methods: </strong>We developed self-report and proxy-report SSPedi in Epic's patient portal MyChart and enrolled patients with cancer age 12-18 years or their parent/guardians, and parents/guardians of patients with cancer age 2-18 years. Participants were enrolled in three cohorts of 10 participants per cohort. A clinical research associate evaluated the participants' ability to correctly complete eight tasks including finding and completing SSPedi on a scheduled day and when unscheduled, locating tips to manage symptoms, and viewing past SSPedi reports. Participants self-reported ease or difficulty in completing each task. Modifications were made to refine SSPedi in Epic after the enrollment of each cohort of 10 patients on the basis of feedback.</p><p><strong>Results: </strong>We enrolled 30 participants, including 21 parents/guardians and nine patients. Overall, 60% were correctly able to find SSPedi on a scheduled reminder day and 33% were able to find SSPedi on an unscheduled day. Once found, 70% of participants could complete SSPedi correctly. Only 33% could correctly view SSPedi trends over time. By self-report, 20 of 30 participants (67%) found SSPedi easy or very easy to use overall. This increased to 100% in the final cohort of 10 participants.</p><p><strong>Conclusion: </strong>We integrated SSPedi into Epic. Participants can successfully complete SSPedi when scheduled on a reminder day. They found it more challenging to complete SSPedi without a reminder and to view past SSPedi reports. Implementation will require patient and parent/or guardian training and support.</p>","PeriodicalId":51626,"journal":{"name":"JCO Clinical Cancer Informatics","volume":"10 ","pages":"e2500257"},"PeriodicalIF":2.8,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146183303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01. Epub Date: 2026-01-09. DOI: 10.1200/CCI-25-00262
Guannan Gong, Jessica Liu, Sameer Pandya, Cristian Taborda, Nathalie Wiesendanger, Nate Price, Will Byron, Andreas Coppi, Patrick Young, Christina Wiess, Haley Dunning, Courtney Barganier, Rachel Brodeur, Neal Fischbach, Patricia LoRusso, Lajos Pusztai, So Yeon Kim, Mariya Rozenblit, Michael Cecchini, Anne Mongiu, Lourdes Mendez, Edward Kaftan, Charles Torre, Harlan Krumholz, Ian Krop, Wade Schulz, Maryam Lustberg, Pamela L Kunz
Purpose: Cancer clinical trial enrollment remains critically low at 5%-7% of adult patients despite exponential growth in available trials. Manual patient-trial matching represents a fundamental bottleneck, whereas current artificial intelligence (AI) and machine learning patient-trial matching systems lack data standardization and compatibility across health systems. We developed and validated a semiautomated clinical trial patient matching (CTPM) tool to improve recruitment efficiency and scalability.
Methods: We created a hybrid rules-based and natural language processing (NLP)-based pipeline that automatically screens patients using structured and unstructured electronic health record data standardized to the Observational Medical Outcomes Partnership (OMOP) common data model. CTPM performance was first evaluated on one metastatic colorectal cancer (CRC) trial by comparing its accuracy and efficiency to manual chart review. Following the single-trial validation, we implemented the system across 29 clinical trials spanning multiple cancer specialties and phases.
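As an illustration of the rules-based half of such a pipeline, the sketch below applies simple eligibility rules over OMOP-shaped `person` and `condition_occurrence` tables with pandas; the concept IDs, column choices, and criteria are placeholders, not CTPM's actual screening logic.

```python
import pandas as pd

def prescreen(person: pd.DataFrame, condition_occurrence: pd.DataFrame,
              target_concept_ids: set, min_age: int = 18) -> pd.DataFrame:
    """Return persons with a qualifying condition concept who meet the age rule."""
    has_dx = condition_occurrence.loc[
        condition_occurrence["condition_concept_id"].isin(target_concept_ids),
        "person_id"].unique()
    candidates = person[person["person_id"].isin(has_dx)].copy()
    candidates["age"] = 2026 - candidates["year_of_birth"]  # crude age proxy
    return candidates[candidates["age"] >= min_age]

# eligible = prescreen(person_df, condition_df, {999999})  # placeholder concept ID
```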
Results: For the single CRC trial, CTPM achieved 94% retrospective and 88% prospective accuracy, matching gold standard clinical chart review with 100% sensitivity. Implementation reduced chart review workload 10-fold and screening time by 41% (3.1 to 1.8 minutes per chart) for those patients who did undergo review. Since September 2022, the system has screened 98,348 patients across 29 trials, identifying 825 eligible candidates and facilitating 117 patient enrollments with 9%-37% consent rates.
Conclusion: This AI and NLP tool demonstrates improved efficiency in clinical trial recruitment by enabling research teams to focus on qualified candidates rather than exhaustive chart reviews. The OMOP-based framework supports scalability across health systems, with potential to address enrollment challenges that limit patient access to clinical trials.
{"title":"Clinical Trial Patient Matching: A Real-Time, Common Data Model and Artificial Intelligence-Driven System for Semiautomated Patient Prescreening in Cancer Clinical Trials.","authors":"Guannan Gong, Jessica Liu, Sameer Pandya, Cristian Taborda, Nathalie Wiesendanger, Nate Price, Will Byron, Andreas Coppi, Patrick Young, Christina Wiess, Haley Dunning, Courtney Barganier, Rachel Brodeur, Neal Fischbach, Patricia LoRusso, Lajos Pusztai, So Yeon Kim, Mariya Rozenblit, Michael Cecchini, Anne Mongiu, Lourdes Mendez, Edward Kaftan, Charles Torre, Harlan Krumholz, Ian Krop, Wade Schulz, Maryam Lustberg, Pamela L Kunz","doi":"10.1200/CCI-25-00262","DOIUrl":"https://doi.org/10.1200/CCI-25-00262","url":null,"abstract":"<p><strong>Purpose: </strong>Cancer clinical trial enrollment remains critically low at 5%-7% of adult patients despite exponential growth in available trials. Manual patient-trial matching represents a fundamental bottleneck, whereas current artificial intelligence (AI) and machine learning patient-trial matching systems lack data standardization and compatibility across health systems. We developed and validated a semiautomated clinical trial patient matching (CTPM) tool to improve recruitment efficiency and scalability.</p><p><strong>Methods: </strong>We created a hybrid rules-based and natural language processing (NLP)-based pipeline that automatically screens patients using structured and unstructured electronic health record data standardized to the Observational Medical Outcomes Partnership (OMOP) common data model. CTPM performance was first evaluated on one metastatic colorectal cancer (CRC) trial by comparing CTPM accuracy and efficiency to manual chart review. Following the single-trial validation, we then implemented the system across 29 clinical trials spanning multiple cancer specialties and phases.</p><p><strong>Results: </strong>For the single CRC trial, CTPM achieved 94% retrospective and 88% prospective accuracy, matching gold standard clinical chart review with 100% sensitivity. Implementation reduced chart review workload 10-fold and screening time by 41% (3.1 to 1.8 minutes per chart) for those patients who did undergo review. Since September 2022, the system has screened 98,348 patients across 29 trials, identifying 825 eligible candidates and facilitating 117 patient enrollments with 9%-37% consent rates.</p><p><strong>Conclusion: </strong>This AI and NLP tool demonstrates improved efficiency in clinical trial recruitment by enabling research teams to focus on qualified candidates rather than exhaustive chart reviews. The OMOP-based framework supports scalability across health systems, with potential to address enrollment challenges that limit patient access to clinical trials.</p>","PeriodicalId":51626,"journal":{"name":"JCO Clinical Cancer Informatics","volume":"10 ","pages":"e2500262"},"PeriodicalIF":2.8,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145946722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01. Epub Date: 2026-01-08. DOI: 10.1200/CCI-25-00138
Sonish Sivarajkumar, Subhash Edupuganti, David Lazris, Manisha Bhattacharya, Michael Davis, Devin Dressman, Roby Thomas, Yan Hu, Yang Ren, Hua Xu, Ping Yang, Yufei Huang, Yanshan Wang
Purpose: Manual extraction of treatment outcomes from unstructured oncology clinical notes is a significant challenge for real-world evidence (RWE) generation. This study aimed to develop and evaluate a robust natural language processing (NLP) system to automatically extract cancer treatments and their associated RECIST-based response categories (complete response, partial response, stable disease, and progressive disease) from non-small cell lung cancer (NSCLC) clinical notes.
Methods: This retrospective NLP development and validation study used a corpus of 250 NSCLC oncology notes from University of Pittsburgh Medical Center (UPMC) Hillman Cancer Center, annotated by physician experts. An end-to-end NLP pipeline was designed, integrating a rule-based module for entity extraction (treatments and responses) and a machine learning module using biomedical clinical bidirectional encoder representations from transformers for relation classification. The system's performance was evaluated on a held-out test set, with partial external validation for relation extraction on a Mayo Clinic data set.
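The rule-based entity-extraction step can be pictured as lexicon matching over note text. A deliberately tiny sketch follows; the patterns are illustrative, far smaller than a production lexicon, and not the authors' rules.

```python
import re

# Illustrative lexicons; production rule sets are far larger and curated.
TREATMENT_RE = re.compile(
    r"\b(carboplatin|pemetrexed|pembrolizumab|osimertinib|radiation|lobectomy)\b",
    re.IGNORECASE)
RESPONSE_RE = re.compile(
    r"\b(complete response|partial response|stable disease|progressive disease)\b",
    re.IGNORECASE)

def extract_entities(note: str):
    """Return (treatments, responses) mentioned in a note with character spans."""
    treatments = [(m.group(0), m.span()) for m in TREATMENT_RE.finditer(note)]
    responses = [(m.group(0), m.span()) for m in RESPONSE_RE.finditer(note)]
    return treatments, responses

print(extract_entities("Restaging after carboplatin/pemetrexed shows partial response."))
```

In the full pipeline, pairs of extracted treatment and response mentions would then be passed to the BERT-based relation classifier described above.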
Results: The NLP system achieved high overall accuracy. On the UPMC test set (64 notes), the relation classification model attained an area under the receiver operating characteristic curve of 0.94 and an F1 score of 0.92 for linking treatments with documented responses. The rule-based entity extraction demonstrated a macro-averaged F1 score of 0.87 (precision 0.98, recall 0.81). Although precision was high for chemotherapy and most response types (1.00), recall for cancer surgery was 0.45. External validation at Mayo Clinic showed moderate relation extraction F1 scores (range: 0.51-0.64).
Conclusion: The proposed NLP system can reliably extract structured treatment and response information from unstructured NSCLC oncology notes with high accuracy. This automated approach can assist in abstracting critical cancer treatment outcomes from clinical narrative text, thereby streamlining real-world data analysis and supporting the generation of RWE in oncology.
{"title":"Extraction of Treatments and Responses From Non-Small Cell Lung Cancer Clinical Notes Using Natural Language Processing.","authors":"Sonish Sivarajkumar, Subhash Edupuganti, David Lazris, Manisha Bhattacharya, Michael Davis, Devin Dressman, Roby Thomas, Yan Hu, Yang Ren, Hua Xu, Ping Yang, Yufei Huang, Yanshan Wang","doi":"10.1200/CCI-25-00138","DOIUrl":"10.1200/CCI-25-00138","url":null,"abstract":"<p><strong>Purpose: </strong>Manual extraction of treatment outcomes from unstructured oncology clinical notes is a significant challenge for real-world evidence (RWE) generation. This study aimed to develop and evaluate a robust natural language processing (NLP) system to automatically extract cancer treatments and their associated RECIST-based response categories (complete response, partial response, stable disease, and progressive disease) from non-small cell lung cancer (NSCLC) clinical notes.</p><p><strong>Methods: </strong>This retrospective NLP development and validation study used a corpus of 250 NSCLC oncology notes from University of Pittsburgh Medical Center (UPMC) Hillman Cancer Center, annotated by physician experts. An end-to-end NLP pipeline was designed, integrating a rule-based module for entity extraction (treatments and responses) and a machine learning module using biomedical clinical bidirectional encoder representations from transformers for relation classification. The system's performance was evaluated on a held-out test set, with partial external validation for relation extraction on a Mayo Clinic data set.</p><p><strong>Results: </strong>The NLP system achieved high overall accuracy. On the UPMC test set (64 notes), the relation classification model attained an area under the receiver operating characteristic curve of 0.94 and an F1 score of 0.92 for linking treatments with documented responses. The rule-based entity extraction demonstrated a macro-averaged F1 score of 0.87 (precision 0.98, recall 0.81). Although precision was high for chemotherapy and most response types (1.00), recall for cancer surgery was 0.45. External validation at Mayo Clinic showed moderate relation extraction F1 scores (range: 0.51-0.64).</p><p><strong>Conclusion: </strong>The proposed NLP system can reliably extract structured treatment and response information from unstructured NSCLC oncology notes with high accuracy. This automated approach can assist in abstracting critical cancer treatment outcomes from clinical narrative text, thereby streamlining real-world data analysis and supporting the generation of RWE in oncology.</p>","PeriodicalId":51626,"journal":{"name":"JCO Clinical Cancer Informatics","volume":"10 ","pages":"e2500138"},"PeriodicalIF":2.8,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12788794/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145936045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01. Epub Date: 2026-01-06. DOI: 10.1200/CCI-25-00194
Peter May, Sina Nokodian, Christoph Nuernbergk, Manuel Knauer, Maike Hefter, Aaron Becker von Rose, Florian Bassermann, Johannes Jung
Purpose: In high-risk specialties such as oncology, errors in clinical documentation can have severe consequences, highlighting a need for enhanced safety checks. We therefore aimed to evaluate the capability of frontier large language models (LLMs) to identify and correct errors in complex clinical documentation in oncology.
Methods: We conducted a two-phase evaluation. First, we assessed LLMs (GPT o4-mini and Gemini 2.5 Pro) on 1,000 synthetic clinical hematology/oncology vignettes with controlled errors, benchmarking against human expert data for error flag detection and sentence localization. Second, we evaluated advanced LLMs and a local LLM (Gemma 3 27B) against six clinicians in detecting single, predefined, and clinically relevant errors, such as wrong risk classifications or omission of critical medication within 90 synthetic discharge summaries from oncologic patients.
Results: LLMs outperformed the human benchmark in error flag detection and sentence localization tasks, with Gemini 2.5 Pro achieving top accuracies of 0.928 and 0.915, respectively. Results were robust across subgroups and scalable, with simultaneous processing of up to 50 vignettes. Within complex discharge summaries, Gemini 2.5 Pro and GPT o4-mini-high identified 97.8% and 87.8% of injected errors, respectively, substantially exceeding the 47.8% average detection rate of human specialists. Gemma 3 27B detected 35.6% of errors. Analysis of error detection overlap revealed a synergistic potential for hybrid human-artificial intelligence (AI) systems.
Conclusion: Frontier LLMs exhibit superior error-detection capabilities and speed compared with both local models and human specialists, who are inherently time-constrained. Although synthetic data provide a controlled testbed, real-world evaluation across diverse errors and documentation styles remains critical. Advanced LLMs can serve as powerful assistants for clinical documentation reviews, substantially reducing the risk of oversight and clinician workload. Integrating LLM-driven error flagging into electronic health record workflows offers a promising strategy for enhancing documentation accuracy, treatment quality, and patient safety in oncology.
{"title":"Artificial Intelligence-Assisted Error Detection in Complex Clinical Documentation: Leveraging Large Language Models to Enhance Patient Safety in Oncology.","authors":"Peter May, Sina Nokodian, Christoph Nuernbergk, Manuel Knauer, Maike Hefter, Aaron Becker von Rose, Florian Bassermann, Johannes Jung","doi":"10.1200/CCI-25-00194","DOIUrl":"10.1200/CCI-25-00194","url":null,"abstract":"<p><strong>Purpose: </strong>In high-risk specialties such as oncology, errors in clinical documentation can have severe consequences, highlighting a need for enhanced safety checks. We therefore aimed to evaluate the capability of frontier large language models (LLMs) to identify and correct errors in complex clinical documentation in oncology.</p><p><strong>Methods: </strong>We conducted a two-phase evaluation. First, we assessed LLMs (GPT o4-mini and Gemini 2.5 Pro) on 1,000 synthetic clinical hematology/oncology vignettes with controlled errors, benchmarking against human expert data for error flag detection and sentence localization. Second, we evaluated advanced LLMs and a local LLM (Gemma 3 27B) against six clinicians in detecting single, predefined, and clinically relevant errors, such as wrong risk classifications or omission of critical medication within 90 synthetic discharge summaries from oncologic patients.</p><p><strong>Results: </strong>LLMs outperformed human benchmark in error flag and sentence localization tasks, with Gemini 2.5 Pro achieving top accuracies of 0.928 and 0.915, respectively. Results were robust across subgroups and scalable, with simultaneous processing of up to 50 vignettes. Within complex discharge summaries, Gemini 2.5 Pro and GPT o4-mini-high identified 97.8% and 87.8% of injected errors, respectively, substantially exceeding the 47.8% average detection rate of human specialists. Gemma 3 27B detected 35.6% of errors. Analysis of error detection overlap revealed a synergistic potential for hybrid human-artificial intelligence (AI) systems.</p><p><strong>Conclusion: </strong>Frontier LLMs exhibit superior error-detection capabilities and speed compared with both local models and human specialists, who are inherently time-constrained. Although synthetic data provide a controlled testbed, real-world evaluation across diverse errors and documentation styles remains critical. Advanced LLMs can serve as powerful assistants for clinical documentation reviews, substantially reducing the risk of oversight and clinician workload. Integrating LLM-driven error flagging into electronic health record workflows offers a promising strategy for enhancing documentation accuracy, treatment quality, and patient safety in oncology.</p>","PeriodicalId":51626,"journal":{"name":"JCO Clinical Cancer Informatics","volume":"10 ","pages":"e2500194"},"PeriodicalIF":2.8,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12794695/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145913830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01. Epub Date: 2026-01-28. DOI: 10.1200/CCI-25-00190
Catherine Ning, Dimitris Bertsimas, Per Eystein Lønning, Federico N Auecio, Richard Burkhart, Felix Balzer, Stefan Buettner, Hideo Baba, Itaru Endo, Georgios Stasinos, Johan Gagnière, Cornelis Verhoef, Martin E Kreis, Georgios Antonios Margonis
Purpose: We explore whether survival model performance in underrepresented high- and low-risk subgroups (regions of the prognostic spectrum where clinical decisions are most consequential) can be improved through targeted restructuring of the training data set. Rather than modifying model architecture, we propose a novel risk-stratified sampling method that addresses imbalances in prognostic subgroup density to support more reliable learning in underrepresented tail strata.
Methods: We introduce a novel methodology that partitions patients by baseline prognostic risk and applies matching within each stratum to equalize representation across the risk distribution. We implement this framework on a cohort of 1,799 patients with resected colorectal liver metastases (CRLM), including 1,197 who received adjuvant chemotherapy and 602 who did not. All models used in this study are Cox proportional hazards models trained on the same set of selected variables. Model performance is assessed via Harrell's C index and Integrated Calibration Index, with internal validation using Efron's bias-corrected bootstrapping. External validation is conducted on two independent CRLM data sets.
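A minimal sketch of the stratify-then-rebalance idea, assuming hypothetical column names (`time`, `event`) and using a baseline Cox linear predictor to define risk strata; the paper's within-stratum matching procedure is richer than the plain per-stratum sampling shown here.

```python
import pandas as pd
from lifelines import CoxPHFitter

def risk_balanced_sample(df, covariates, n_per_stratum, n_strata=5, seed=0):
    """Score baseline risk with a Cox model, cut the risk score into quantile
    strata, and draw (up to) equal numbers of patients from each stratum."""
    cph = CoxPHFitter()
    cph.fit(df[covariates + ["time", "event"]],
            duration_col="time", event_col="event")
    risk = cph.predict_partial_hazard(df)          # baseline prognostic score
    strata = pd.qcut(risk, q=n_strata, labels=False)
    return (df.assign(stratum=strata)
              .groupby("stratum", group_keys=False)
              .apply(lambda s: s.sample(min(len(s), n_per_stratum),
                                        random_state=seed)))

# balanced = risk_balanced_sample(crlm, ["age", "tumor_size", "cea"], 100)
```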
Results: Cox models trained on risk-balanced cohorts showed consistent improvements in internal validation compared with models trained on the full data set. The proposed approach preserved overall model calibration while noticeably improving stratified C index values in underrepresented high- and low-risk strata of the external cohorts.
Conclusion: Our findings suggest that survival model performance in observational oncology cohorts can be meaningfully improved through targeted rebalancing of the training data across prognostic risk strata. This approach offers a practical and model-agnostic complement to existing methods, especially in applications where predictive reliability across the full risk continuum is critical to downstream clinical decisions.
{"title":"Improving Survival Models in Health Care by Balancing Imbalanced Cohorts: A Novel Approach.","authors":"Catherine Ning, Dimitris Bertsimas, Per Eystein Lønning, Federico N Auecio, Richard Burkhart, Felix Balzer, Stefan Buettner, Hideo Baba, Itaru Endo, Georgios Stasinos, Johan Gagnière, Cornelis Verhoef, Martin E Kreis, Georgios Antonios Margonis","doi":"10.1200/CCI-25-00190","DOIUrl":"https://doi.org/10.1200/CCI-25-00190","url":null,"abstract":"<p><strong>Purpose: </strong>We explore whether survival model performance in underrepresented high- and low-risk subgroups-regions of the prognostic spectrum where clinical decisions are most consequential-can be improved through targeted restructuring of the training data set. Rather than modifying model architecture, we propose a novel risk-stratified sampling method that addresses imbalances in prognostic subgroup density to support more reliable learning in underrepresented tail strata.</p><p><strong>Methods: </strong>We introduce a novel methodology that partitions patients by baseline prognostic risk and applies matching within each stratum to equalize representation across the risk distribution. We implement this framework on a cohort of 1,799 patients with resected colorectal liver metastases (CRLM), including 1,197 who received adjuvant chemotherapy and 602 who did not. All models used in this study are Cox proportional hazards models trained on the same set of selected variables. Model performance is assessed via Harrell's C index and Integrated Calibration Index, with internal validation using Efron's bias-corrected bootstrapping. External validation is conducted on two independent CRLM data sets.</p><p><strong>Results: </strong>Cox models trained on risk-balanced cohorts showed consistent improvements in internal validation compared with models trained on the full data set. The proposed approach preserved overall model calibration while noticeably improving stratified C index values in underrepresented high- and low-risk strata of the external cohorts.</p><p><strong>Conclusion: </strong>Our findings suggest that survival model performance in observational oncology cohorts can be meaningfully improved through targeted rebalancing of the training data across prognostic risk strata. This approach offers a practical and model-agnostic complement to existing methods, especially in applications where predictive reliability across the full risk continuum is critical to downstream clinical decisions.</p>","PeriodicalId":51626,"journal":{"name":"JCO Clinical Cancer Informatics","volume":"10 ","pages":"e2500190"},"PeriodicalIF":2.8,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12854512/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01. Epub Date: 2026-01-14. DOI: 10.1200/CCI-25-00177
Joshi Hogenboom, Varsha Gouthamchand, Charlotte Cairns, Silvie H M Janssen, Kirsty Way, Andre L A J Dekker, Winette T A van der Graaf, Anne-Sophie Darlington, Olga Husson, Leonard Y L Wee, Johan van Soest, Aiara Lobo Gomes
Purpose: Rare diseases are difficult to fully capture and regularly call for large, geographically dispersed initiatives. Such initiatives often face data harmonization challenges, which render data incompatible and impede successful realization. The STRONG AYA project is one such initiative, specifically focusing on adolescents and young adults (AYAs) with cancer. STRONG AYA is setting up a federated data infrastructure containing data in varying formats. Here, we elaborate on how we used health care-agnostic semantic web technologies to overcome these challenges.
Methods: We structured the STRONG AYA case-mix and core outcome measures concepts and their properties as knowledge graphs. Having identified the corresponding standard terminologies, we developed a semantic map on the basis of the knowledge graphs and the annotation helper plugin for Flyover introduced here. Flyover is a tool that converts structured data into resource description framework (RDF) triples and enables semantic interoperability. As a demonstration, we mapped data that are to be included in the STRONG AYA infrastructure.
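For intuition, the sketch below uses Python's rdflib to express one structured record as RDF triples and link a locally coded value to a standard concept; the namespaces and the SNOMED code are illustrative assumptions, not the project's actual Flyover mappings.

```python
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("http://example.org/strongaya/")   # hypothetical project namespace
SNOMED = Namespace("http://snomed.info/id/")

g = Graph()
patient = EX["patient/001"]
g.add((patient, RDF.type, EX.Patient))
g.add((patient, EX.hasAgeAtDiagnosis, Literal(24, datatype=XSD.integer)))

# Link a locally coded, otherwise opaque value to a standard concept:
diagnosis = EX["patient/001/diagnosis"]
g.add((patient, EX.hasDiagnosis, diagnosis))
g.add((diagnosis, EX.mappedTo, SNOMED["363346000"]))  # assumed SNOMED CT code (malignant neoplastic disease)

print(g.serialize(format="turtle"))
```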
Results: The knowledge graphs provided a comprehensive overview of the large number of STRONG AYA concepts. The semantic terminology mapping and annotation helper allowed us to query data recorded in otherwise incomprehensible terminologies without altering them. Both the knowledge graphs and the semantic map were made available on a Hugo webpage for increased transparency and understanding.
Conclusion: The use of semantic web technologies, such as RDF and knowledge graphs, is a viable solution to challenges of data interoperability and reusability in a federated AYA cancer data infrastructure, without being bound to rigid standardized schemas. Linking semantically meaningful concepts to otherwise incomprehensible data elements demonstrates how these domain-agnostic technologies let us make nonstandardized health care data interoperable.
{"title":"Knowledge Representation of a Multicenter Adolescent and Young Adult Cancer Infrastructure: Development of the STRONG AYA Knowledge Graph.","authors":"Joshi Hogenboom, Varsha Gouthamchand, Charlotte Cairns, Silvie H M Janssen, Kirsty Way, Andre L A J Dekker, Winette T A van der Graaf, Anne-Sophie Darlington, Olga Husson, Leonard Y L Wee, Johan van Soest, Aiara Lobo Gomes","doi":"10.1200/CCI-25-00177","DOIUrl":"10.1200/CCI-25-00177","url":null,"abstract":"<p><strong>Purpose: </strong>Rare diseases are difficult to fully capture, and regularly call for large, geographically dispersed initiatives. Such initiatives are often met with data harmonization challenges. These challenges render data incompatible and impede successful realization. The STRONG AYA project is such an initiative, specifically focusing on adolescent and young adult (AYAs) with cancer. STRONG AYA is setting up a federated data infrastructure containing data of varying format. Here, we elaborate on how we used health care-agnostic semantic web technologies to overcome such challenges.</p><p><strong>Methods: </strong>We structured the STRONG AYA case-mix and core outcome measures concepts and their properties as knowledge graphs. Having identified the corresponding standard terminologies, we developed a semantic map on the basis of the knowledge graphs and the here introduced annotation helper plugin for <i>Flyover</i>. <i>Flyover</i> is a tool that converts structured data into resource description framework (RDF) triples and enables semantic interoperability. As a demonstration, we mapped data that are to be included in the STRONG AYA infrastructure.</p><p><strong>Results: </strong>The knowledge graphs provided a comprehensive overview of the large number of STRONG AYA concepts. The semantic terminology mapping and annotation helper allowed us to query data with incomprehensible terminologies, without changing them. Both the knowledge graphs and semantic map were made available on a Hugo webpage for increased transparency and understanding.</p><p><strong>Conclusion: </strong>The use of semantic web technologies, such as RDF and knowledge graphs, is a viable solution to overcome challenges regarding data interoperability and reusability for a federated AYA cancer data infrastructure without being bound to rigid standardized schemas. The linkage of semantically meaningful concepts to otherwise incomprehensible data elements demonstrates how by using these domain-agnostic technologies we made nonstandardized health care data interoperable.</p>","PeriodicalId":51626,"journal":{"name":"JCO Clinical Cancer Informatics","volume":"10 ","pages":"e2500177"},"PeriodicalIF":2.8,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12834280/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01. Epub Date: 2026-01-06. DOI: 10.1200/CCI-25-00255
William McGahan, Nick Butler, Thomas O'Rourke, Bernard Mark Smithers, David Cavallucci
Purpose: To address incomplete and inconsistent classification of pancreatic cancer resectability according to International Association of Pancreatology (IAP) anatomic, biologic, and conditional criteria.
Materials and methods: We designed, implemented, and evaluated an interoperable, web-based platform that captured structured pretreatment data and performed algorithm-driven resectability classification. Linked modules supported referral to and discussion at multidisciplinary team meetings (MDTMs) at two quaternary hospitals (June 2021-February 2022) and populated downstream documentation. In a pre-post study, the Pearson χ² test and multivariable logistic regression (odds ratios [ORs] with 95% CIs) compared data completeness (primary end point) as well as the distribution of IAP-defined resectability and treatment intent (secondary end points). In the postintervention cohort, overall survival (OS) was stratified by IAP resectability using Kaplan-Meier curves and compared using the log-rank test. Hazard ratios (HRs) with 95% CIs and log-rank statistics were calculated for individual resectability criteria using Cox models. All tests were two-sided with nominal significance (P < .05). An embedded module evaluated workflow integration and user experience.
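The pre-post odds ratios reported below come from logistic regression. A minimal sketch under assumed variable names (`documented` as a 0/1 completeness outcome, `post` as a 0/1 period indicator), not the study's analysis code:

```python
import numpy as np
import statsmodels.formula.api as smf

def or_with_ci(df):
    """OR and 95% CI for complete documentation, post- vs pre-intervention.
    Add adjustment covariates to the formula for the multivariable model."""
    fit = smf.logit("documented ~ post", data=df).fit(disp=0)
    or_est = np.exp(fit.params["post"])          # exponentiated coefficient = OR
    lo, hi = np.exp(fit.conf_int().loc["post"])  # exponentiated CI bounds
    return or_est, (lo, hi)
```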
Results: Ninety-five patients with pancreatic cancer were referred to MDTMs during the intervention period, of whom 71 were eligible. Compared with 71 preintervention patients, the system improved documentation of tumor-vessel relationships (OR, 9.39 [95% CI, 4.43 to 21.7]), locoregional lymphadenopathy (OR, 30.5 [95% CI, 11.1 to 102]), and performance status (PS; OR, 3.34 [95% CI, 1.67 to 6.85]), reducing the number with unknown resectability (OR, 0.10 [95% CI, 0.03 to 0.25]). PS ≥ 2 (HR, 2.16 [95% CI, 1.06 to 4.43]) and serum CA19.9 ≥ 500 U/mL (HR, 1.94 [95% CI, 1.03 to 3.63]) were significantly associated with OS, whereas anatomic criteria were not.
Discussion: A synoptic intervention integrated into MDTM workflows across multiple sites improved structured data capture, reduced unknown resectability, and highlighted the relevance of biologic and conditional criteria in addition to tumor anatomy.
{"title":"Synoptic Multidisciplinary Team Meeting Workflows to Promote Guideline-Based Classification of Resectability in Pancreatic Cancer: A Multicenter Prospective Study.","authors":"William McGahan, Nick Butler, Thomas O'Rourke, Bernard Mark Smithers, David Cavallucci","doi":"10.1200/CCI-25-00255","DOIUrl":"10.1200/CCI-25-00255","url":null,"abstract":"<p><strong>Purpose: </strong>To address incomplete and inconsistent classification of pancreatic cancer resectability according to International Association of Pancreatology (IAP) anatomic, biologic, and conditional criteria.</p><p><strong>Materials and methods: </strong>We designed, implemented, and evaluated an interoperable, web-based platform that captured structured pretreatment data and performed algorithm-driven resectability classification. Linked modules supported referral to and discussion at multidisciplinary team meetings (MDTMs) at two quaternary hospitals (June 2021-February 2022) and populated downstream documentation. In a pre-post study, Pearson χ<sup>2</sup> test and multivariable logistic regression (odds ratios [ORs] with 95% CI) compared data completeness (primary end point), as well as the distribution of IAP-defined resectability and treatment intent (secondary end points). In the postintervention cohort, overall survival (OS) was stratified by IAP resectability using Kaplan-Meier curves and compared using the log-rank test. Hazard ratios (HRs) with 95% CIs and log-rank statistics were calculated for individual resectability criteria using Cox models. All tests were two-sided with nominal significance (<i>P</i> < .05). An embedded module evaluated workflow integration and user experience.</p><p><strong>Results: </strong>Ninety-five patients with pancreatic cancer were referred to MDTMs during the intervention period, of whom 71 were eligible. Compared with 71 preintervention patients, the system improved documentation of tumor-vessel relationships (OR, 9.39 [95% CI, 4.43 to 21.7]), locoregional lymphadenopathy (OR, 30.5 [95% CI, 11.1 to 102]), and performance status (PS; OR, 3.34 [95% CI, 1.67 to 6.85]), reducing the number with <i>unknown</i> resectability (OR, 0.10 [95% CI, 0.03 to 0.25]). PS ≥ 2 (HR, 2.16 [95% CI, 1.06 to 4.43]) and serum CA19.9 ≥ 500 U/mL (HR, 1.94 [95% CI, 1.03 to 3.63]) were significantly associated with OS, whereas anatomic criteria were not.</p><p><strong>Discussion: </strong>A synoptic intervention integrated into MDTM workflows across multiple sites improved structured data capture, reduced <i>unknown</i> resectability, and highlighted the relevance of biologic and conditional criteria in addition to tumor anatomy.</p>","PeriodicalId":51626,"journal":{"name":"JCO Clinical Cancer Informatics","volume":"10 ","pages":"e2500255"},"PeriodicalIF":2.8,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145913860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01. Epub Date: 2026-01-07. DOI: 10.1200/CCI-25-00286
Jiasheng Wang, Kirti Arora, David M Swoboda, Aziz Nazha
Purpose: Clinical guidelines are essential for evidence-based oncology care but are often long, complex, and difficult to navigate. We developed a multiagent artificial intelligence (AI) system to accurately retrieve and interpret guideline content in response to guideline-based clinical questions.
Methods: We included 34 ASCO guidelines published between January 2021 and December 2024. Using a multiagent framework, we assigned distinct roles to AI agents: a Coordinator Agent selected the relevant guideline, specialized Tumor Board Agents extracted information from text, tables, and figures, and a Reviewer Agent synthesized a final answer. A total of 100 open-ended questions were created on the basis of the guideline content. The system's performance was compared with GPT-4o, Claude 3.7, Gemini 2.5 flash, DeepSeek-R1, and the ASCO Guidelines Assistant.
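Structurally, the coordinator/specialist/reviewer pattern can be sketched as three chained prompts. In the sketch below, `llm` stands in for any chat-completion call, and the prompts, routing, and fallback are illustrative, not the authors' system.

```python
from typing import Callable

def answer_guideline_question(question: str, guidelines: dict[str, str],
                              llm: Callable[[str], str]) -> str:
    # 1) Coordinator Agent: route the question to one guideline.
    titles = "\n".join(guidelines)
    chosen = llm("Pick the one guideline title, verbatim, most relevant to "
                 f"this question:\n{question}\nTitles:\n{titles}").strip()
    if chosen not in guidelines:          # naive fallback for inexact echoes
        chosen = next(iter(guidelines))
    # 2) Tumor Board Agent: extract supporting passages from that guideline
    #    (the full system also routes tables and figures to a vision model).
    evidence = llm("Quote passages from the guideline below that answer the "
                   f"question.\nQuestion: {question}\n\n{guidelines[chosen]}")
    # 3) Reviewer Agent: synthesize a final answer grounded in the evidence.
    return llm("Answer the question using only this evidence.\n"
               f"Question: {question}\nEvidence:\n{evidence}")
```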
Results: The multiagent system achieved 94% (95% CI, 89.3 to 98.7) accuracy in selecting the correct guideline and 90% (95% CI, 84.1 to 95.9) accuracy in answering questions. This significantly outperformed GPT-4o (48%), Claude 3.7 (49%), Gemini 2.5 (50%), DeepSeek-R1 (58%), and the ASCO Guidelines Assistant (67%; all P < .01, McNemar's test). Most errors were due to incorrect guideline selection or misinterpretation; no hallucinated answers were observed. Removing the Coordinator Agent reduced accuracy to 40%, and excluding tables and figures reduced accuracy to 51%.
Conclusion: By assigning specialized tasks to AI agents and incorporating visual elements from clinical guidelines, our system outperformed existing tools in accurately answering oncology questions. This pilot study, limited to ASCO guidelines, may improve access to guideline-based care.
{"title":"Tumor Board-Inspired Multiagent Artificial Intelligence System for Interpreting Oncology Guidelines.","authors":"Jiasheng Wang, Kirti Arora, David M Swoboda, Aziz Nazha","doi":"10.1200/CCI-25-00286","DOIUrl":"https://doi.org/10.1200/CCI-25-00286","url":null,"abstract":"<p><strong>Purpose: </strong>Clinical guidelines are essential for evidence-based oncology care but are often long, complex, and difficult to navigate. We developed a multiagent artificial intelligence (AI) system to accurately retrieve and interpret guideline content in response to guideline-based clinical questions.</p><p><strong>Methods: </strong>We included 34 ASCO guidelines published between January 2021 and December 2024. Using a multiagent framework, we assigned distinct roles to AI agents: a Coordinator Agent selected the relevant guideline, specialized Tumor Board Agents extracted information from text, tables, and figures, and a Reviewer Agent synthesized a final answer. A total of 100 open-ended questions were created on the basis of the guideline content. The system's performance was compared with GPT-4o, Claude 3.7, Gemini 2.5 flash, DeepSeek-R1, and the ASCO Guidelines Assistant.</p><p><strong>Results: </strong>The multi-agent system achieved (94% [95% CI, 89.3 to 98.7]) accuracy in selecting the correct guidelines and (90% [95% CI, 84.1 to 95.9]) accuracy in answering questions. This significantly outperformed GPT-4o (48%), Claude 3.7 (49%), Gemini 2.5 (50%), DeepSeek-R1 (58%), and the ASCO Guidelines Assistant (67%, all <i>P</i> < .01, McNemar's test). Most errors were due to incorrect guideline selection or misinterpretation; no hallucinated answers were observed. Removing the Coordinator Agent reduced accuracy to 40%, and excluding tables and figures reduced accuracy to 51%.</p><p><strong>Conclusion: </strong>By assigning specialized tasks to AI agents and incorporating visual elements from clinical guidelines, our system outperformed existing tools in accurately answering oncology questions. This pilot study, limited to ASCO guidelines, may improve access to guideline-based care.</p>","PeriodicalId":51626,"journal":{"name":"JCO Clinical Cancer Informatics","volume":"10 ","pages":"e2500286"},"PeriodicalIF":2.8,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145918949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01. Epub Date: 2026-01-07. DOI: 10.1200/CCI-24-00334
Anobel Y Odisho, Andrew W Liu, William A Pace, Marvin N Carlisle, Robert Krumm, Janet E Cowan, Peter R Carroll, Matthew R Cooperberg
Purpose: Radiology reports are stored as plain text in most electronic health records, rendering the data computationally inaccessible. Large language models are powerful tools for analyzing unstructured text but remain relatively untested in urologic oncology. We aimed to develop a pipeline to extract data from plain-text prostate magnetic resonance imaging (MRI) reports using GPT-4.0 and compare its accuracy to manually abstracted data.
Methods: We developed a data pipeline using a secure, enterprise-wide deployment of OpenAI's GPT-4.0 to automatically extract data elements from prostate MRI report text. Identical prompts and reports were sent multiple times to determine response variability. We extracted 15 data elements per report and compared accuracy to a manually abstracted gold standard.
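The repeat-the-same-prompt design for gauging variability can be sketched as follows, assuming an OpenAI-compatible client; the model name and prompt are placeholders, not the study's enterprise GPT-4.0 deployment.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Extract PSA density, TNM stage, and ECE status as JSON from this prostate MRI report:\n"

def extract_with_variability(report_text: str, runs: int = 5):
    """Send the identical prompt `runs` times; return the modal answer and
    the share of runs that disagreed with it (response variability)."""
    answers = []
    for _ in range(runs):
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name, not the study's deployment
            messages=[{"role": "user", "content": PROMPT + report_text}],
        )
        answers.append(resp.choices[0].message.content.strip())
    modal, n_modal = Counter(answers).most_common(1)[0]
    return modal, 1 - n_modal / runs
```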
Results: Across 424 prostate MRI reports, GPT-4.0 response accuracy was consistently above 95%. Individual field accuracies ranged from 99.8% (98.7%-100.0%) for the number of suspicious lesions down to 87.7% (84.2%-90.7%) for identifying lesion location in the base of the prostate, with a median of 98.1% (96.3%-99.2%) and a mean of 97.2% (95.2%-98.3%); representative fields included prostate-specific antigen density at 98.3% (96.3%-99.3%), extracapsular extension at 97.4% (95.4%-98.7%), and TNM stage at 98.1% (96.3%-99.2%). Response variability over five repeated runs ranged from 0.14% to 3.61%, differed by the data element extracted (P < .001), and was inversely correlated with accuracy (P < .001). In disagreements between manually and GPT-4.0 extracted data, GPT-4.0 responses were more often deemed correct by an additional reviewer.
Conclusion: GPT-4.0 had high accuracy with low variability in extracting data points from prostate cancer MRI reports with low upfront programming requirements. This represents an effective tool to expedite medical data extraction for clinical and research use cases.
{"title":"Generative Artificial Intelligence Successfully Automates Data Extraction From Unstructured Magnetic Resonance Imaging Reports: Feasibility in Prostate Cancer Care.","authors":"Anobel Y Odisho, Andrew W Liu, William A Pace, Marvin N Carlisle, Robert Krumm, Janet E Cowan, Peter R Carroll, Matthew R Cooperberg","doi":"10.1200/CCI-24-00334","DOIUrl":"https://doi.org/10.1200/CCI-24-00334","url":null,"abstract":"<p><strong>Purpose: </strong>Radiology reports are stored as plain text in most electronic health records, rendering the data computationally inaccessible. Large language models are powerful tools for analyzing unstructured text but relatively untested in urologic oncology. We aimed to develop a pipeline to extract data from plain text prostate magnetic resonance imaging (MRI) reports using GPT4.0 and compare the accuracy to manually abstracted data.</p><p><strong>Methods: </strong>We developed a data pipeline using a secure, enterprise-wide deployment of OpenAI's GPT-4.0 to automatically extract data elements from prostate MRI report text when presented with prostate MRI reports. Identical prompts and reports were sent multiple times to determine response variability. We extracted 15 data elements per report and compared accuracy to a manually abstracted gold standard.</p><p><strong>Results: </strong>Across 424 prostate MRI reports, GPT-4.0 response accuracy was consistently above 95%. Individual field accuracies were 98.3% (96.3%-99.3%) for prostate-specific antigen density, 97.4% (95.4%-98.7%) for extracapsular extension, and 98.1% (96.3%-99.2%) for TNM stage, and had a median of 98.1% (96.3%-99.2%), a mean of 97.2% (95.2%-98.3%), and a range of 99.8% (98.7%-100.0%) for number of suspicious lesions to 87.7% (84.2%-90.7%) for identification of lesion location in the base of the prostate. Response variability over five repeated runs ranged from 0.14% to 3.61%, differed based on the data element extracted (<i>P</i> < .001), and was inversely correlated with accuracy (<i>P</i> < .001). In disagreements between manual and GPT-4.0 extracted data, GPT-4.0 responses were more often deemed correct by an additional reviewer.</p><p><strong>Conclusion: </strong>GPT-4.0 had high accuracy with low variability in extracting data points from prostate cancer MRI reports with low upfront programming requirements. This represents an effective tool to expedite medical data extraction for clinical and research use cases.</p>","PeriodicalId":51626,"journal":{"name":"JCO Clinical Cancer Informatics","volume":"10 ","pages":"e2400334"},"PeriodicalIF":2.8,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145919369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}