Adam Abel, Antonio Chambers, Jeff Fill, Heinz Reiske, Ming Lu, Amanda Morris, Paul Faya, Rose C Beck, Michael J Pontecorvo, Emily C Collins, Andrew E Schade, Mark A Mintun, Michael E Hodsdon
Background: Blood-based biomarkers, especially P-tau217, have been gaining interest as diagnostic tools to measure Alzheimer disease (AD) pathology.
Methods: We developed a plasma P-tau217 chemiluminescent immunoassay using 4G10E2 and IBA493 as antibodies, a synthetic tau peptide as calibrator, and the Quanterix SP-X imager. Analytical validation performed in a College of American Pathologists-accredited CLIA laboratory involved multiple kit lots, operators, timepoints, and imagers. Florbetapir positron emission tomography was used to quantify amyloid for clinical validation.
Results: Precision across 80 runs was ≤20% CV using 23 patient-derived samples ranging from 0.09 to 3.35 U/mL. No significant lot-to-lot differences were observed. There was no interference from purified tau (2N4R) or lipemia, but hemolysis greater than 2+ was not acceptable. Functional analytical sensitivity (lower limit of quantitation) was 0.08 U/mL. Linearity studies supported the use of a standard 1:2 plasma dilution. Samples demonstrated stability across 7 freeze/thaw cycles, and room-temperature and refrigerated stability were established for up to 72 hours. The final analytical measurement range was 0.08 to 2.81 U/mL. The calibration curve, fitted with a log-log power regression, maintained ≤20% CV for raw signal intensity and back-fitted concentrations within 80% to 120% of nominal. Initial clinical assessment using plasma samples from 1091 individuals screened in TRAILBLAZER-ALZ 2 demonstrated an area under the curve of 0.916 (95% CI 0.90-0.94) with brain amyloid as the comparator. Positive and negative predictive values were >90% and >85%, respectively.
Conclusions: Through analytical validation, this assay demonstrated robust performance across multiple lots, operators, and instruments and could be used as a tool for diagnosing AD.
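The log-log power-regression calibration and back-fit acceptance check described above can be sketched as follows. The calibrator concentrations and signal values here are invented for illustration (only the 0.08 to 2.81 U/mL measurement range comes from the text), not the assay's actual calibration data.

```python
import numpy as np

# Hypothetical calibrator points (concentration in U/mL, raw signal) --
# illustrative values only, not the assay's actual calibration data.
conc = np.array([0.08, 0.16, 0.35, 0.70, 1.40, 2.81])
signal = np.array([120.0, 260.0, 610.0, 1300.0, 2750.0, 5900.0])

# Fit a power model signal = a * conc**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(conc), np.log(signal), 1)
a = np.exp(log_a)

# Back-fit: invert the model to recover concentration from signal, then
# check recovery against the nominal calibrator values. The paper's
# acceptance window for back-fitted concentration is 80% to 120%.
backfit = (signal / a) ** (1.0 / b)
recovery = 100.0 * backfit / conc

print(np.round(recovery, 1))
```

A production implementation would also track %CV of replicate raw signals per calibrator level, the other acceptance criterion the abstract mentions.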
Analytical Validation and Performance of a Blood-Based P-tau217 Diagnostic Test for Alzheimer Disease. Journal of Applied Laboratory Medicine, published January 21, 2025. doi:10.1093/jalm/jfae155
Samrat Yeramaneni, Stephanie T Chang, Ramsey C Cheung, Donald B Chalfin, Kinpritma Sangha, H Roma Levy, Artem T Boltyenkov
Background: Global metabolic dysfunction-associated steatotic liver disease (MASLD) prevalence is estimated at 30% and projected to reach 55.7% by 2040. In the Veterans Affairs (VA) healthcare system, an estimated 1.8 million veterans have metabolic dysfunction-associated steatohepatitis (MASH).
Methods: Adult patients at risk for MASLD in a VA healthcare system underwent Fibrosis-4 (FIB-4) and Enhanced Liver Fibrosis (ELF®) testing. Referral rates and cost savings were compared among 6 noninvasive testing (NIT) strategies using these 2 tests independently or sequentially at various cutoffs.
Results: Enrolled patients (N = 254) had a mean age of 65.3 ± 9.3 years and a mean body mass index (BMI) of 31.7 ± 6; 87.4% were male, 78.3% were non-Hispanic/Latino, and 96.5% had type 2 diabetes mellitus (T2DM). Among the 6 evaluated strategies, FIB-4 followed by ELF at a 9.8 cutoff yielded the highest proportion of patients retained in primary care without referral to a hepatology clinic (165/227; 72.7%) and was associated with the lowest cost ($407.62). Compared to the FIB-4-only strategy, the FIB-4/ELF strategy with a 9.8 cutoff resulted in 26% fewer referrals and 8.47% lower costs. In the subgroup of patients with BMI >32, there were 25.17% fewer referrals and costs were 8.31% lower.
Conclusions: Our study suggests that sequential use of ELF with a 9.8 cutoff following indeterminate FIB-4 tests results in lower referral rates and lower care costs in a veteran population at risk of MASLD. Adding ELF as a sequential test after indeterminate FIB-4 might help reduce the number of referrals and overall cost of care.
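The sequential FIB-4-then-ELF triage described above can be sketched as follows. The FIB-4 formula is standard [(age × AST) / (platelets × √ALT)]; the 1.3/2.67 FIB-4 cutoffs are commonly used values assumed here for illustration (the abstract does not state which FIB-4 cutoffs the study used), while the 9.8 ELF cutoff is the one evaluated in the study.

```python
import math

def fib4(age, ast, alt, platelets):
    """Fibrosis-4 index: (age * AST) / (platelets * sqrt(ALT)).
    Age in years, AST/ALT in U/L, platelets in 10^9/L."""
    return (age * ast) / (platelets * math.sqrt(alt))

def triage(age, ast, alt, platelets, elf=None,
           fib4_low=1.3, fib4_high=2.67, elf_cutoff=9.8):
    """Sequential FIB-4 -> ELF triage sketch (assumed FIB-4 cutoffs)."""
    score = fib4(age, ast, alt, platelets)
    if score < fib4_low:
        return "retain in primary care"
    if score > fib4_high:
        return "refer to hepatology"
    # Indeterminate FIB-4: reflex to ELF, refer only if ELF >= cutoff.
    if elf is None:
        return "order ELF"
    return "refer to hepatology" if elf >= elf_cutoff else "retain in primary care"

print(triage(65, 40, 35, 210, elf=9.2))
```

The cost advantage reported in the study comes from the middle branch: indeterminate FIB-4 results reflex to ELF instead of going straight to hepatology.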
Comparison of Referral Rates and Costs Using Fibrosis-4 and Enhanced Liver Fibrosis (ELF) Testing Strategies for Initial Evaluation of Metabolic Dysfunction-Associated Steatotic Liver Disease (MASLD) in a Veteran Population. Journal of Applied Laboratory Medicine, published January 15, 2025. doi:10.1093/jalm/jfae154
Bucky Lozier, Thomas Martins, Patricia Slev, Abdulrahman Saadalla
Background: Detection of serum-specific immunoglobulin G (sIgG) to Aspergillus fumigatus traditionally relied on precipitin assays, which lack standardization and have poor analytical sensitivity. Automated quantitative immunoassays are now more widely used alternatives. A challenge, however, is determining reference interval (RI) cutoffs indicative of disease presence.
Methods: Sera from 152 local healthy donors were tested for Aspergillus fumigatus sIgG using the ImmunoCAP assay to calculate a nonparametric RI cutoff. Results from 178 patient samples cotested by the precipitin and ImmunoCAP assays were analyzed using receiver operating characteristic (ROC) curve analysis to determine an optimal sIgG concentration for precipitin positivity. Clinical information available for 46 patients tested by the ImmunoCAP assay was also used to estimate an optimal sIgG cutoff for pulmonary aspergillosis diagnosis.
Results: A specific IgG concentration of 81.5 mcg/mL corresponded to the 97.5th percentile of tested healthy donors. The ROC-derived optimal IgG cutoff for precipitin positivity was 40.4 mcg/mL, with 67.8% sensitivity [95% confidence interval (CI): 54.4% to 79.4%] and 72.3% specificity (95% CI: 63.3% to 80.1%). Using clinical diagnoses, an IgG concentration of 64.7 mcg/mL had optimal sensitivity (77.8%; 95% CI: 61.9% to 88.3%) and specificity (66.7%; 95% CI: 39.1% to 86.2%) for pulmonary aspergillosis.
Conclusions: Our healthy donor-driven RI cutoff was higher than estimated optimal sIgG values based on precipitin positivity and disease presence. As fungal sIgG levels can be impacted by local environmental exposures, and given the limited size of our clinical dataset, adopting an assay cutoff based on precipitin results (40.4 mcg/mL) can be more objective.
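The two cutoff-setting approaches compared above, a nonparametric 97.5th-percentile reference limit from healthy donors and a Youden-optimal ROC cutoff against precipitin positivity, can be sketched as follows. The sIgG distributions below are simulated stand-ins (only the group sizes echo the study), not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sIgG values (mcg/mL): healthy donors, plus precipitin-positive
# and precipitin-negative patients -- simulated, not the study's data.
healthy = rng.lognormal(mean=3.0, sigma=0.8, size=152)
pos = rng.lognormal(mean=4.2, sigma=0.7, size=90)
neg = rng.lognormal(mean=3.2, sigma=0.8, size=88)

# Nonparametric upper reference limit: 97.5th percentile of healthy donors.
ri_upper = np.percentile(healthy, 97.5)

# ROC by brute force: evaluate Youden's J = sensitivity + specificity - 1
# at every observed concentration and take the maximizing cutoff.
candidates = np.sort(np.concatenate([pos, neg]))

def youden(cut):
    sens = np.mean(pos >= cut)
    spec = np.mean(neg < cut)
    return sens + spec - 1.0

best_cut = max(candidates, key=youden)

print(round(float(ri_upper), 1), round(float(best_cut), 1))
```

As in the study, the disease-anchored ROC cutoff will generally sit below the healthy-donor percentile limit whenever the affected and unaffected distributions overlap substantially.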
Determination of Positivity Cutoff for an Automated Aspergillus fumigatus-Specific Immunoglobulin-G Assay in a National Reference Laboratory. Journal of Applied Laboratory Medicine, published January 13, 2025. doi:10.1093/jalm/jfae157
Erin H Graf, Andrew Bryan, Michael Bowers, Thomas E Grys
Metagenomic sequencing of plasma has been advertised by Karius, Inc. as a way to diagnose a variety of infectious syndromes. Due to the lack of robust evidence of clinical utility, our laboratory began actively stewarding Karius testing, with Microbiology Directors recommending cancellation of Karius orders when certain criteria were identified. We reviewed Karius test requests over a 52-month period of stewardship, during which we recommended cancellation of 21 of 57 orders (37%). Of the Karius tests sent on samples with negative conventional testing, only 3 (7%) returned positive Karius results with plausible explanatory etiologies; of these 3 cases, 2 were empirically covered for the positive finding without improvement, and 1 was never treated. Twelve (29%) had positive results that infectious diseases (ID) physicians judged to reflect insignificant detections. Given this 4-fold higher rate of insignificant Karius results, we systematically analyzed the literature for the experience with insignificant detections at other centers. Across studies that included healthy controls or clinical adjudication of positive Karius findings by ID physicians, a median of 17.5% of individual patients had insignificant positive detections of potentially pathogenic bacteria or fungi. The most frequently detected species were as likely to be clinically adjudicated insignificant as significant within the same studies. Overall, these findings highlight the limited utility of Karius testing and the need for careful stewardship, not only to ensure it is sent on patients who may benefit, but also to ensure results of potential pathogens are interpreted cautiously.
One Size Fits Small: The Narrow Utility for Plasma Metagenomics. Journal of Applied Laboratory Medicine 2025;10(1):171-183, published January 3, 2025. doi:10.1093/jalm/jfae122
Laboratory analysis of blood cultures is vital to the accurate and timely diagnosis of bloodstream infections. However, the reliability of the test depends on clinical compliance with standard operating procedures that limit the risk of inconclusive or incorrect results. False-negative blood culture results due to inadequate volumes of blood can result in misdiagnosis, delay therapy, and increase patients' risk of developing or dying from bloodstream infections. Likewise, commonly occurring bacteria or fungi on human skin (i.e., commensal organisms) can contaminate the blood culture during collection and increase the risk of false positives, compromising care and leading to unnecessary antibiotic therapy and prolonged hospitalization. In December 2022, a Centers for Medicare & Medicaid Services (CMS) consensus-based entity (CBE) endorsed the Centers for Disease Control and Prevention's (CDC) proposal for a new patient safety measure to address these concerns. CDC developed this quality measure to promote the standardization of blood culture best practices and improve laboratory diagnosis of bloodstream infections nationally. This special report will emphasize the importance of standardizing blood culture collection and describe the need for a national patient safety measure, new quality tools, and next steps.
This special report will emphasize the importance of standardizing blood culture collection and describe the need for a national patient safety measure, new quality tools, and next steps.
Blood Culture Contamination and Diagnostic Stewardship: From a Clinical Laboratory Quality Monitor to a National Patient Safety Measure. Jake D Bunn, Nancy E Cornish. Journal of Applied Laboratory Medicine 2025;10(1):162-170, published January 3, 2025. doi:10.1093/jalm/jfae132
Meredith G Parsons, Ryan A Hobbs, Julie Schmidt, Anna E Merrill
Background: Heparin-induced thrombocytopenia (HIT) is a potentially life-threatening adverse drug reaction with numerous diagnostic challenges. Diagnosis of HIT begins with 4T score clinical assessment, followed by laboratory testing for those not deemed low risk. Laboratory testing for HIT includes screening [enzyme-linked immunosorbent assay (ELISA)] and confirmatory [serotonin release assay (SRA)] assays, wherein SRA testing can be pursued following a positive ELISA result. These tests aid diagnosis of HIT, but also introduce interpretive challenges, additional costs, and delays in clinical intervention.
Methods: A retrospective review of 1011 HIT ELISA and 169 SRA tests performed over 5 years was conducted. ELISA percent inhibition and ELISA low-heparin optical density (OD) were evaluated for positive predictive value (PPV). Based on these findings, changes to HIT ELISA reporting and the management algorithm were implemented, and metrics were compared for 5 months pre- and post-intervention to assess the intervention's success.
Results: Equivocal and positive HIT ELISA interpretation showed poor PPV based on percent inhibition (0.20 and 0.32, respectively). However, rising low-heparin OD correlated with increasing PPV (PPV of 0.00 for OD values 0.40-1.00, 0.29 for values 1.00-1.99, and 0.91 for values >2.00). Data-driven intervention decreased ELISA positivity rates (13% to 5%), decreased rates of SRA confirmatory testing overall (13% to 9%), decreased SRA testing rates for patients with non-negative ELISAs (78% to 43%), and increased heparin resumption (20% to 57%). Hematology consults remained relatively stable (78% and 86%).
Conclusions: Low-heparin OD-based HIT ELISA interpretation yielded enhanced PPV compared with percent inhibition-based interpretation. Implementation of data-driven changes improved testing stewardship and clinical management for patients with non-negative ELISAs.
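The OD-stratified PPV analysis underlying the conclusion above can be sketched as follows: group ELISA low-heparin OD results into bins and compute, per bin, the fraction confirmed positive by SRA. The (OD, SRA-positive) pairs are invented for illustration; the bin boundaries mirror the OD ranges reported above, modeled here as half-open intervals for simplicity.

```python
# Hypothetical (low-heparin OD, SRA-confirmed?) pairs -- illustrative only.
results = [
    (0.55, False), (0.72, False), (0.90, False),              # low OD
    (1.20, False), (1.50, True), (1.85, False),               # mid OD
    (2.10, True), (2.60, True), (3.00, True), (2.40, False),  # high OD
]

# Half-open OD bins approximating the ranges reported in the abstract.
bins = [(0.40, 1.00), (1.00, 2.00), (2.00, float("inf"))]

def ppv_by_bin(data, bins):
    """PPV per OD bin: fraction of results in the bin confirmed by SRA."""
    out = {}
    for lo, hi in bins:
        hits = [sra for od, sra in data if lo <= od < hi]
        out[(lo, hi)] = sum(hits) / len(hits) if hits else None
    return out

print(ppv_by_bin(results, bins))
```

With real data, a monotone rise in PPV across bins (as the study observed: 0.00, 0.29, 0.91) is what justifies reporting OD-based risk strata rather than a single equivocal/positive call.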
Increasing the Diagnostic Utility of Heparin-Induced Thrombocytopenia (HIT) Testing: An Academic Medical Center's Utilization Analysis and Intervention. Journal of Applied Laboratory Medicine 2025;10(1):26-35, published January 3, 2025. doi:10.1093/jalm/jfae131
Background: Laboratory stewardship programs are increasingly adopted to enhance test utilization and improve patient care. Despite their potential, implementation within complex healthcare systems remains challenging. Benchmarking metrics help institutions compare their performance against peers or best practices; however, their application in laboratory stewardship is underrepresented in the literature.
Methods: The PLUGS (Patient-centered Laboratory Utilization Guidance Services) Informatics Working Group developed guidelines to address common test utilization issues. Metrics were based on data that are easily retrievable and calculable. Three key benchmarks were chosen for a pilot study: the ratio of 25-hydroxyvitamin D to 1,25-dihydroxyvitamin D test orders, the ratio of thyroid-stimulating hormone (TSH) to free thyroxine (FT4) test orders, and the percentage of iron workup orders after an initial low mean corpuscular volume (MCV). Institutions analyzed their own data, and optimal benchmarks were established through inter-laboratory comparisons.
Results: Nine laboratories evaluated vitamin D testing, with 2 implementing stewardship interventions beforehand. A benchmark of 50:1 was established, where a higher ratio indicates intentional ordering of 1,25-dihydroxyvitamin D. Nine laboratories evaluated thyroid testing, with 3 implementing interventions. A benchmark of 3.5:1 was established, with a higher ratio suggesting judicious TSH ordering. Seven laboratories evaluated iron workups, proposing a benchmark of 50% as a starting metric. Intervention guidelines were provided for laboratories below the benchmarks to promote improvement.
Conclusions: Benchmarking metrics provide a standardized framework for assessing and enhancing test utilization practices across multiple laboratories. Continued collaboration and refinement of benchmarking methodologies is essential in maximizing the impact of laboratory stewardship programs on patient safety and resource utilization.
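The ratio metrics above reduce to simple arithmetic over order counts, sketched below. The order counts are hypothetical; only the 50:1 vitamin D and 3.5:1 TSH:FT4 benchmarks come from the text.

```python
# Hypothetical annual order counts for one laboratory -- illustrative only.
order_counts = {
    "25-hydroxyvitamin D": 12000,
    "1,25-dihydroxyvitamin D": 180,
    "TSH": 21000,
    "FT4": 7000,
}

# Benchmarks from the working group: (numerator test, denominator test) -> target ratio.
benchmarks = {
    ("25-hydroxyvitamin D", "1,25-dihydroxyvitamin D"): 50.0,
    ("TSH", "FT4"): 3.5,
}

def evaluate(counts, benchmarks):
    """Compute each ordering ratio and whether it meets its benchmark."""
    report = {}
    for (num, den), target in benchmarks.items():
        ratio = counts[num] / counts[den]
        report[f"{num}:{den}"] = (round(ratio, 1), ratio >= target)
    return report

print(evaluate(order_counts, benchmarks))
```

A laboratory falling below a target (as in the TSH:FT4 example here) would be pointed to the working group's intervention guidelines.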
Developing Benchmarking Metrics for Appropriate Ordering of Vitamin D, Thyroid Testing, and Iron Workups. Hsuan-Chieh Liao, Alec Saitman, Jane Dickerson. Journal of Applied Laboratory Medicine 2025;10(1):184-191, published January 3, 2025. doi:10.1093/jalm/jfae126
Kelly W Wilhelms, Derek A Braun, A Brian Mochon, William D Lainhart, Sheelagh M Porter, Brandie M A Beuthin, Diane M Blasko, Meghan J Shapiro, Maria A Proytcheva
Background: Many organizations institute laboratory diagnostic stewardship (DS) programs to improve the utilization of laboratory resources.
Methods: In this paper, we describe the road to implementing laboratory DS in a large, not-for-profit integrated delivery network located in the western United States.
Results: Program structure, projects, challenges, and future opportunities are discussed, providing tactics and opportunities that facilities can employ to maximize their initial foray into the DS landscape.
Conclusions: With effective planning and organization, laboratory DS can be implemented by any organization to realize resource optimization and financial benefits while maintaining high levels of patient care.
{"title":"Diagnostic Laboratory Stewardship in a Growing Integrated Delivery Network.","authors":"Kelly W Wilhelms, Derek A Braun, A Brian Mochon, William D Lainhart, Sheelagh M Porter, Brandie M A Beuthin, Diane M Blasko, Meghan J Shapiro, Maria A Proytcheva","doi":"10.1093/jalm/jfae133","DOIUrl":"https://doi.org/10.1093/jalm/jfae133","url":null,"abstract":"<p><strong>Background: </strong>Many organizations institute laboratory diagnostic stewardship (DS) programs to improve the utilization of laboratory resources.</p><p><strong>Methods: </strong>In this paper, we describe the road to implementing laboratory DS in a large, not-for-profit integrated delivery network located in the western United States.</p><p><strong>Results: </strong>Program structure, projects, challenges, and future opportunities are discussed, providing tactics and opportunities that facilities can employ to maximize their initial foray into the DS landscape.</p><p><strong>Conclusions: </strong>With effective planning and organization, laboratory DS can be implemented by any organization to realize resource optimization and financial benefits while maintaining high levels of patient care.</p>","PeriodicalId":46361,"journal":{"name":"Journal of Applied Laboratory Medicine","volume":"10 1","pages":"36-47"},"PeriodicalIF":1.8,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quality Cultures Are Stronger when Laboratory Medicine Is a Team Sport.","authors":"Stephen E Kahn, Amanda Harrington","doi":"10.1093/jalm/jfae108","DOIUrl":"https://doi.org/10.1093/jalm/jfae108","url":null,"abstract":"","PeriodicalId":46361,"journal":{"name":"Journal of Applied Laboratory Medicine","volume":"10 1","pages":"207-210"},"PeriodicalIF":1.8,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Victoria Higgins, Joshua E Raizman, Felix Leung, Matthew A Lafreniere, Lori A Beach
{"title":"The Lab Report Podcast: Enhancing the Visibility and Communication of Laboratory Medicine.","authors":"Victoria Higgins, Joshua E Raizman, Felix Leung, Matthew A Lafreniere, Lori A Beach","doi":"10.1093/jalm/jfae104","DOIUrl":"https://doi.org/10.1093/jalm/jfae104","url":null,"abstract":"","PeriodicalId":46361,"journal":{"name":"Journal of Applied Laboratory Medicine","volume":"10 1","pages":"211-213"},"PeriodicalIF":1.8,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}