Pub Date: 2025-08-01 | Epub Date: 2025-04-17 | DOI: 10.1080/10408363.2025.2488842
Hunter A Miller, Roland Valdes
The application of artificial intelligence (AI) in laboratory medicine will revolutionize predictive modeling using clinical laboratory information. Machine learning (ML), a sub-discipline of AI, involves fitting algorithms to datasets and is broadly used for data-driven predictive modeling in various disciplines. The majority of ML studies reported in systematic reviews lack key aspects of quality assurance. In clinical laboratory medicine, it is important to consider how differences in analytical methodologies, assay calibration, harmonization, pre-analytical errors, interferences, and physiological factors affecting measured analyte concentrations may also affect the downstream robustness and reliability of ML models. In this article, we address the need for quality improvement and proper validation of ML classification models, with the goal of bringing attention to key concepts pertinent to researchers, manuscript reviewers, and journal editors within the field of pathology and laboratory medicine. Several existing predictive modeling guidelines and recommendations can be readily adapted to the development of ML models in laboratory medicine. We provide a basic overview of ML and summarize key points from current guidelines, including advantages and pitfalls of applied ML. In addition, we draw a parallel between validation of clinical assays and ML models in the context of current regulatory frameworks. The importance of classification performance metrics, model explainability, and data quality, along with recommendations for strengthening journal submission requirements, is also discussed. Although the focus of this article is on the application of ML in laboratory medicine, many of these concepts extend into other areas of medicine and biomedical science as well.
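Proper validation here means reporting performance on data the model never saw during fitting. A minimal sketch (illustrative only, not code from the article) of the classification performance metrics the abstract highlights, computed on a held-out validation set with synthetic labels:

```python
# Compute sensitivity, specificity, and positive predictive value (PPV)
# on held-out validation labels rather than on the training data.

def classification_metrics(y_true, y_pred):
    """Return (sensitivity, specificity, PPV) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # precision
    return sensitivity, specificity, ppv

# Held-out validation labels vs. model predictions (synthetic example).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec, ppv = classification_metrics(y_true, y_pred)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")
```

Reporting all three (plus calibration and confidence intervals) on an independent set is the ML analogue of the assay validation the article draws a parallel to.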
Title: "Rigorous validation of machine learning in laboratory medicine: guidance toward quality improvement." Critical Reviews in Clinical Laboratory Sciences, pp. 327-346. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12301099/pdf/
Pub Date: 2025-06-01 | Epub Date: 2025-03-01 | DOI: 10.1080/10408363.2025.2462817
Alicia N Lyle, Uliana Danilenko, Otoe Sugahara, Hubert W Vesper
Cardiovascular disease (CVD) is the leading cause of mortality in the United States and globally. This review describes changes in CVD lipid and lipoprotein biomarker measurements that occurred in line with the evolution of clinical practice guidelines for CVD risk assessment and treatment. It also discusses the level of comparability of these biomarker measurements in clinical practice. Comparable and reliable measurements are achieved through assay standardization, which not only depends on correct test calibration but also on factors such as analytical sensitivity, selectivity, susceptibility to factors that can affect the analytical measurement process, and the stability of the test system over time. The current status of standardization for traditional and newer CVD biomarkers is discussed, as are approaches to setting and achieving standardization goals for low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), total cholesterol (TC), triglycerides (TG), lipoprotein(a) (Lp(a)), apolipoproteins (apo) A-I and B, and non-HDL-C. Appropriate levels of standardization for blood lipids are maintained by the Centers for Disease Control and Prevention's (CDC) CVD Biomarkers Standardization Program (CDC CVD BSP) using the analytical performance goals recommended by the National Cholesterol Education Program. The level of measurement agreement that can be achieved is dependent on the characteristics of the analytes and differences in measurement principles between reference measurement procedures and clinical assays. The technical and analytical limitations observed with traditional blood lipids are not observed with apolipoproteins. Additionally, apoB and Lp(a) may more accurately capture CVD risk and residual CVD risk, respectively, than traditional lipids, thus prompting current guidelines to recommend apolipoprotein measurements. 
This review further discusses CDC's approach to standardization and describes the analytical performance of traditional blood lipids and apoA-I and B observed over the past 11 years. The reference systems for apoA-I and B, previously maintained by a single laboratory, no longer exist, thus requiring the creation of new systems, which is currently underway. This situation emphasizes the importance of a collaborative network of laboratories, such as CDC's Cholesterol Reference Methods Laboratory Network (CRMLN), to ensure standardization sustainability. CDC is supporting the International Federation of Clinical Chemistry and Laboratory Medicine's (IFCC) work to establish such a network for lipoproteins. Ensuring comparability and reliability of CVD biomarker measurements through standardization remains critical for the effective implementation of clinical practice guidelines and for improving patient care. Utilizing experience gained over three decades, CDC CVD BSP will continue to improve the standardization of traditional and emerging CVD biomarkers together with stakeholders.
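As a concrete illustration of two quantities discussed above: non-HDL-C is defined directly from measured lipids (non-HDL-C = TC − HDL-C), while LDL-C is frequently estimated rather than measured, classically via the Friedewald equation (LDL-C = TC − HDL-C − TG/5, all in mg/dL, conventionally considered invalid for TG ≥ 400 mg/dL). This is generic lipid arithmetic, not CDC CVD BSP code:

```python
# Derived lipid quantities from measured TC, HDL-C, and TG (all mg/dL).

def non_hdl_c(tc, hdl_c):
    """Non-HDL cholesterol: everything except HDL-C."""
    return tc - hdl_c

def friedewald_ldl_c(tc, hdl_c, tg):
    """Friedewald estimate of LDL-C; invalid at high triglycerides."""
    if tg >= 400:
        raise ValueError("Friedewald estimate invalid for TG >= 400 mg/dL")
    return tc - hdl_c - tg / 5.0

tc, hdl, tg = 200.0, 50.0, 150.0        # synthetic patient values, mg/dL
print(non_hdl_c(tc, hdl))                # 150.0
print(friedewald_ldl_c(tc, hdl, tg))     # 120.0
```

The TG/5 term is itself an approximation of VLDL cholesterol, which is one reason direct reference measurement procedures and standardization programs such as CRMLN matter.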
Title: "Cardiovascular disease lipids and lipoproteins biomarker standardization." Critical Reviews in Clinical Laboratory Sciences, pp. 266-287.
Pub Date: 2025-06-01 | Epub Date: 2025-03-09 | DOI: 10.1080/10408363.2025.2464248
Francesca Sanguedolce, Angelo Cormio, Magda Zanelli, Andrea Palicelli, Maurizio Zizzo, Ugo Giovanni Falagario, Roberta Mazzucchelli, Andrea Benedetto Galosi, Giuseppe Carrieri, Luigi Cormio
Glandular lesions involving the bladder are less common than conventional urothelial carcinoma; they are often diagnostically challenging and carry different clinical outcomes. As a group, they encompass both primary and secondary neoplasms, with sometimes overlapping morphological features. In this scenario, proper clinical information is important, in that secondary involvement of the bladder may occur by direct extension or lymphatic/hematogenous spread from carcinomas at other sites, including the prostate, colon, cervix, breast, and lung. According to the 5th edition of the WHO Classification of urological tumors, glandular morphology is a major hallmark of the following entities: urothelial carcinoma with glandular differentiation; adenocarcinoma, NOS; urachal carcinoma; and tumors of Müllerian type. The distinction among these entities, and between primary and secondary tumors, relies heavily on their biological and immunophenotypical features. This article reviews glandular neoplasms of the bladder, highlighting their main immunophenotypical markers. Furthermore, molecular data associated with their pathogenesis, prognosis, and treatment are described. The aim of this study is to provide a practical and comprehensive up-to-date overview of this complex topic.
Title: "Diagnostic workout of glandular malignant lesions of the bladder according to the 5th WHO classification." Critical Reviews in Clinical Laboratory Sciences, pp. 301-312.
Pub Date: 2025-06-01 | Epub Date: 2025-03-09 | DOI: 10.1080/10408363.2025.2464244
Fernando Marques-García, Ana Nieto-Librero, Xavier Tejedor-Ganduxe, Cristina Martinez-Bravo
Biological variation (BV) is defined as the variation in the concentration of a measurand around the homeostatic set point, a concept introduced by Fraser and Harris in the second half of the twentieth century. BV is divided into two estimates: within-subject BV (CVI) and between-subject BV (CVG). Biological variation studies of biomarkers have been gaining importance in recent years due to the potential practical application of these estimates. The main applications of BV in the clinical laboratory include: the establishment of Analytical Performance Specifications (APS), estimation of the individual's homeostatic set point (HSP), calculation of the Reference Change Value (RCV), calculation of the index of individuality (II), and establishment of personalized reference intervals (prRI). The classic (direct) models for obtaining BV estimates have been the most used to date. In these studies, a target population ("normal" population), a sampling frequency and time, and a number of samples per individual, among other factors, are defined. The Biological Variation Data Critical Appraisal Checklist (BIVAC), established by the Task Group-Biological Variation Database (TG-BVD) of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM), serves as a guide for the evaluation and performance of these direct studies. These methods have limitations: they are laborious, expensive, invasive, and based on an ideal population. In recent years, models have been proposed to obtain BV estimates based on the Real-World Data (RWD) strategy. In this case, we move from a model with a low number of individuals (direct methods) to a population model using the data stored in the Laboratory Information System (LIS). RWD methods are presented as an alternative to overcome the limitations of direct methods.
Currently, there is little scientific evidence on the application of RWD models, since only five papers have been published. In these papers, three different working algorithms are proposed (Loh et al., Jones et al., and Marques-Garcia et al.). These algorithms are divided into three fundamental stages: patient data and study design, database cleaning, and statistical strategies for obtaining BV estimates. When working with large amounts of data, RWD methods allow the population to be subdivided so that estimates can be obtained for subgroups, which would be more difficult using direct methods. Of the three algorithms proposed, the algorithm developed in the Spanish multicenter project BiVaBiDa is the most complete, as it overcomes the limitations of the other two, including the possibility of calculating the confidence interval of the BV estimate. RWD methods also have limitations, such as the anonymization of data and the standardization of electronic medical records, as well as the statistical complexity associated with data analysis. It is necessary to continue working on the development of these RWD strategies.
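Two of the BV applications named above reduce to simple formulas: the reference change value, RCV = 2^(1/2) · Z · (CV_A² + CV_I²)^(1/2), and the index of individuality, II = CV_I/CV_G. A small sketch with illustrative CV values (not taken from the five RWD papers reviewed):

```python
import math

def rcv(cv_a, cv_i, z=1.96):
    """Two-sided 95% reference change value (%), combining analytical
    (CV_A) and within-subject (CV_I) variation."""
    return math.sqrt(2) * z * math.sqrt(cv_a**2 + cv_i**2)

def individuality_index(cv_i, cv_g):
    """II = CV_I / CV_G; low values (conventionally < 0.6) suggest
    population-based reference intervals are of limited use."""
    return cv_i / cv_g

# Illustrative CVs (%): analytical, within-subject, between-subject.
cv_a, cv_i, cv_g = 2.0, 5.0, 15.0
print(f"RCV = {rcv(cv_a, cv_i):.1f}%")                 # ~14.9%
print(f"II  = {individuality_index(cv_i, cv_g):.2f}")  # 0.33
```

A change between two serial results exceeding the RCV is unlikely to be explained by analytical plus within-subject variation alone, which is exactly why reliable CVI estimates (direct or RWD-derived) matter.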
Title: "Within-subject biological variation estimated using real-world data strategies (RWD): a systematic review." Critical Reviews in Clinical Laboratory Sciences, pp. 288-300.
Pub Date: 2025-06-01 | Epub Date: 2025-03-01 | DOI: 10.1080/10408363.2025.2462814
Kenrick Berend, Micah Liam Arthur Heldeweg
In clinical medicine, hyponatremia is highly prevalent and frequently misdiagnosed, leading to substantial mismanagement and iatrogenic morbidity. Its differential diagnosis includes numerous diseases with diverse etiologies, making accurate assessment challenging. Despite extensive literature and guidelines on hyponatremia, most patients do not receive adequate evaluation due to the limitations of diagnostic algorithms, which rely on low-value clinical signs and are unable to identify concurrent conditions. In this review, we examine the range of laboratory tests available for hyponatremia assessment. Understanding renal mechanisms of solute and water exchange (e.g., fractional excretion) is essential for selecting appropriate tests and interpreting their diagnostic value. Additionally, detailed electrolyte and acid-base assessments remain critical for establishing a definitive diagnosis. We comprehensively discuss the selection of laboratory tests for specific differential diagnoses of hyponatremia. Importantly, in cases of acute hyponatremia, rapid correction should take precedence over a complete diagnostic workup. Ultimately, a thorough understanding of laboratory evaluation is crucial for accurately diagnosing hyponatremia. This paper critically reviews the available literature and explores relevant diseases in the context of associated laboratory parameters.
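As an example of the renal solute-and-water calculations referred to above, the fractional excretion of sodium is FENa (%) = (urine Na × plasma creatinine) / (plasma Na × urine creatinine) × 100. A minimal sketch with illustrative values:

```python
def fena_percent(urine_na, plasma_na, urine_cr, plasma_cr):
    """Fractional excretion of sodium (%). Units cancel as long as both
    sodium values share a unit and both creatinine values share a unit."""
    return (urine_na * plasma_cr) / (plasma_na * urine_cr) * 100.0

# Illustrative hyponatremic patient with avid renal sodium retention:
# urine Na 10 mmol/L, plasma Na 130 mmol/L,
# urine creatinine 100 mg/dL, plasma creatinine 1.0 mg/dL.
print(f"FENa = {fena_percent(10.0, 130.0, 100.0, 1.0):.2f}%")  # 0.08%
```

The same fractional-excretion template applies to other solutes (e.g., urea or uric acid) that the diagnostic workup of hyponatremia may call on.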
Title: "The role of the clinical laboratory in diagnosing hyponatremia disorders." Critical Reviews in Clinical Laboratory Sciences, pp. 240-265.
Pub Date: 2025-05-01 | Epub Date: 2025-01-01 | DOI: 10.1080/10408363.2024.2434562
Mauro Panteghini, Magdalena Krintus
Poor analytical quality may be the bane of the medical use of laboratory tests, and the fight against excessive analytical variability presents a daily struggle. Laboratories should prioritize the perspectives and needs of their customers (patients and healthcare personnel). Among these needs, comparability of results from the same patient sample when measured by different laboratories using different in vitro diagnostic (IVD) medical devices is a logical priority to avoid result misinterpretation and potential patient harm. Harmonization (standardization) of laboratory measurements can be achieved by establishing metrological traceability of the results on clinical samples to stated higher-order references and providing an estimate of the uncertainty of measurement (MU). This estimate should be based on an MU budget that includes all known MU contributions generated by the employed calibration hierarchy, which in turn should be validated against fit-for-purpose maximum allowable MU derived according to internationally recommended models. In this report, we review the available strategies for establishing, evaluating, and monitoring analytical quality, drawing on three decades' experience in the field. We discuss the most important aspects that may influence obtaining and maintaining analytical standardization in laboratory medicine, and offer practical solutions aimed at educating all stakeholders for the achievement of harmonized laboratory results. To fully implement the recommended approaches, all involved parties (reference providers, IVD manufacturers, medical laboratories, and External Quality Assessment organizers) must agree on their importance and enhance their specific knowledge.
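The MU-budget logic described above can be sketched in a few lines: independent standard-uncertainty contributions from the calibration hierarchy combine in quadrature, the expanded MU applies a coverage factor (k = 2 for roughly 95% coverage), and the result is checked against a maximum allowable MU. All numbers below are hypothetical:

```python
import math

def combined_mu(*u_components):
    """Combine independent standard-uncertainty contributions (%) in
    quadrature: u_c = sqrt(sum of u_i^2)."""
    return math.sqrt(sum(u**2 for u in u_components))

# Hypothetical contributions (%): higher-order reference, manufacturer
# calibration, and within-laboratory imprecision.
u_ref, u_cal, u_rw = 1.0, 1.5, 2.0
u_c = combined_mu(u_ref, u_cal, u_rw)
U = 2 * u_c                  # expanded MU with coverage factor k = 2
allowable = 6.0              # hypothetical maximum allowable MU (%)
verdict = "fit" if U <= allowable else "not fit"
print(f"expanded MU = {U:.2f}% -> {verdict} for purpose")
```

The point of the budget is visible in the quadrature: the largest single contribution dominates, so effort spent shrinking minor terms barely moves the expanded MU.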
Title: "Establishing, evaluating and monitoring analytical quality in the traceability era." Critical Reviews in Clinical Laboratory Sciences, pp. 148-181.
Pub Date: 2025-05-01 | Epub Date: 2025-02-19 | DOI: 10.1080/10408363.2025.2463634
Mohammed F Alkadhem, Paul C Jutte, Marjan Wouthuyzen-Bakker, Anneke C Muller Kobold
Calprotectin is a protein predominantly found in the cytosol of myeloid cells, such as neutrophils and monocytes. Calprotectin has several functions in innate immunity, such as attenuating bacteria, recruiting and activating immune cells, and aiding in the production of pro-inflammatory cytokines and reactive oxygen species. Due to its presence at inflammatory sites, it has been investigated as a biomarker for various medical conditions, especially inflammatory bowel disease (IBD) and rheumatoid arthritis (RA), and it has gained interest in the diagnosis of several infectious diseases, in particular periprosthetic joint infections (PJI). Synovial fluid calprotectin has been demonstrated to be a sensitive and specific biomarker for both confirming and excluding PJI. Synovial fluid calprotectin can be measured using enzyme-linked immunosorbent assay (ELISA), immunoturbidimetry, and lateral flow methods. It is a generally stable biomarker, showing no significant decrease or increase in its levels despite blood or lipid contamination, storage duration, freeze-thaw cycles, and enzymatic pretreatments for viscosity reduction. This review discusses the biology and physiology of calprotectin, the pathophysiology of PJI, and the clinical and analytical considerations surrounding its use in diagnosing PJI.
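Quantification by ELISA, mentioned above, typically back-calculates concentration from optical density via a four-parameter logistic (4PL) calibration curve. The sketch below uses hypothetical curve parameters, not those of any real calprotectin assay:

```python
def four_pl(conc, a, b, c, d):
    """4PL response: a = lower asymptote, d = upper asymptote,
    c = inflection concentration, b = slope factor."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

def inverse_four_pl(od, a, b, c, d):
    """Back-calculate concentration from a measured optical density,
    inverting the 4PL equation algebraically."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

# Hypothetical calibration parameters for illustration only.
a, b, c, d = 0.05, 1.2, 50.0, 3.0
od = four_pl(20.0, a, b, c, d)                 # simulate a measured OD
conc = inverse_four_pl(od, a, b, c, d)         # recover the concentration
print(round(conc, 1))                          # 20.0
```

In practice the parameters are fitted to calibrator measurements each run, and the back-calculation is only trusted within the calibrated range between the asymptotes.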
Title: "Analytical and clinical considerations of synovial fluid calprotectin in diagnosing periprosthetic joint infections." Critical Reviews in Clinical Laboratory Sciences, pages 228-239.
Pub Date: 2025-05-01 | Epub Date: 2025-02-06 | DOI: 10.1080/10408363.2025.2453148
S J Lord, A R Horvath, S Sandberg, P J Monaghan, C M Cobbaert, M Reim, A Tolios, R Mueller, P M Bossuyt
Recent changes in the regulatory assessment of in vitro medical tests reflect a growing recognition of the need for more stringent clinical evidence requirements to protect patient safety and health. Under current regulations in the United States and Europe, when needed for regulatory approval, clinical performance reports must provide clinical evidence tailored to the intended purpose of the test and allow assessment of whether the test will achieve the intended clinical benefit. The quality of evidence must be proportionate to the risk to the patient and/or public health. These requirements now cover both commercial and laboratory developed tests (LDTs) and demand a sound understanding of the fundamentals of clinical performance measures and study design to develop and appraise the study plan and interpret the study results. However, there is a lack of harmonized guidance for the laboratory profession, industry, regulatory agencies, and notified bodies on how the clinical performance of tests should be measured. The Working Group on Test Evaluation (WG-TE) of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) is a multidisciplinary group of laboratory professionals, clinical epidemiologists, health technology assessment experts, and representatives of the in vitro diagnostic (IVD) industry. This guidance paper aims to promote a shared understanding of the principles of clinical performance measures and study design. Measures of classification performance, also referred to as discrimination, such as sensitivity and specificity, are firmly established as the primary measures for evaluating the clinical performance of screening and diagnostic tests. We explain that these measures are just as relevant for other purposes of testing.
We outline the importance of defining the most clinically meaningful classification of disease so that the clinical benefits of testing can be explicitly inferred for those correctly classified, and the harms for those incorrectly classified. We introduce the key principles and a checklist for formulating the research objective and study design to estimate clinical performance: (1) the purpose of a test (e.g., diagnosis, screening, risk stratification, prognosis, or prediction of treatment benefit) and the corresponding research objective for assessing clinical performance; (2) the target condition for clinically meaningful classification; (3) clinical performance measures to assess whether the test is fit-for-purpose; and (4) study design types. Laboratory professionals, industry, and researchers can use this checklist to help identify relevant published studies and primary datasets, and to liaise with clinicians and methodologists when developing a study plan for evaluating clinical performance, where needed, to apply for regulatory approval.
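The classification performance measures named above (sensitivity and specificity, along with their companion predictive values) can be illustrated with a short computation from a 2x2 confusion matrix. This sketch is not taken from the guidance paper itself; the function name and the counts are hypothetical:

```python
def classification_performance(tp, fp, fn, tn):
    """Basic clinical performance measures from a 2x2 confusion matrix.

    tp/fp/fn/tn: counts of true positives, false positives,
    false negatives, and true negatives against the reference standard.
    """
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical study: 100 diseased (90 detected), 200 non-diseased (180 ruled out).
metrics = classification_performance(tp=90, fp=20, fn=10, tn=180)
print(metrics)  # sensitivity 0.90, specificity 0.90
```

Note that while sensitivity and specificity are properties of the test at a given threshold, the predictive values also depend on disease prevalence in the studied population, which is one reason the intended-use population matters so much in study design.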
Title: "Is this test fit-for-purpose? Principles and a checklist for evaluating the clinical performance of a test in the new era of in vitro diagnostic (IVD) regulation." Critical Reviews in Clinical Laboratory Sciences, pages 182-197.
Pub Date: 2025-05-01 | Epub Date: 2025-02-01 | DOI: 10.1080/10408363.2025.2453152
Abdurrahman Coskun, Irem Nur Savas, Ozge Can, Giuseppe Lippi
Monitoring individuals' laboratory data is essential for assessing their health status, evaluating the effectiveness of treatments, predicting disease prognosis, and detecting subclinical conditions. Currently, monitoring is performed intermittently, measuring serum, plasma, whole blood, urine, and occasionally other body fluids at predefined time intervals. The ideal monitoring approach entails continuous measurement of the concentration and activity of biomolecules in all body fluids, and even in solid tissues. This can be achieved with biosensors strategically placed at the locations on the human body where measurements are required. High-tech wearable biosensors provide an ideal, noninvasive, and esthetically pleasing solution for monitoring individuals' laboratory data. However, despite significant advances in wearable biosensor technology, the measurement capacities and the number of different analytes that can be continuously monitored in patients are not yet at the desired level. In this review, we conducted a literature search and examined: (i) an overview of the background of monitoring for personalized laboratory medicine, (ii) the body fluids and analytes used for monitoring individuals, (iii) the different types of biosensors and methods used for measuring the concentration and activity of biomolecules, and (iv) the statistical algorithms used for personalized data analysis and interpretation in monitoring and evaluation.
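One widely used statistic for interpreting an individual's serial laboratory results, of the kind covered under item (iv) above, is the reference change value (RCV): the minimum percentage difference between two consecutive results that exceeds the combined analytical and within-subject biological variation. The formula and the CV figures below are a minimal illustrative sketch, not drawn from this review:

```python
import math

def reference_change_value(cv_analytical, cv_within_subject, z=1.96):
    """Reference change value (%) for two serial results from one individual.

    RCV = sqrt(2) * z * sqrt(CVa^2 + CVi^2), where CVa is the analytical
    coefficient of variation and CVi the within-subject biological CV,
    both in percent. z = 1.96 corresponds to 95% two-sided significance.
    """
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_within_subject**2)

# Hypothetical analyte: analytical CV 3%, within-subject biological CV 6%.
rcv = reference_change_value(3.0, 6.0)
print(f"RCV = {rcv:.1f}%")  # about 18.6%: smaller changes are plausibly noise
```

For continuous biosensor streams the same principle applies, although the dense sampling also permits richer approaches (trend tests, smoothing, individualized reference intervals) than a pairwise RCV.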
Title: "From population-based to personalized laboratory medicine: continuous monitoring of individual laboratory data with wearable biosensors." Critical Reviews in Clinical Laboratory Sciences, pages 198-227.
Pub Date: 2025-03-01 | Epub Date: 2025-01-01 | DOI: 10.1080/10408363.2024.2431853
Miles D Thompson, Peter Chidiac, Pedro A Jose, Alexander S Hauser, Caroline M Gorvin
We present a series of three articles on the genetics and pharmacogenetics of G protein-coupled receptors (GPCRs). In the first article, we discuss genetic variants of the G protein subunits and accessory proteins that are associated with human phenotypes; in the second article, we build upon this to discuss "G protein-coupled receptor (GPCR) gene variants and human genetic disease"; and in the third article, we survey "G protein-coupled receptor pharmacogenomics". In the present article, we review the processes of ligand binding, GPCR activation, inactivation, and receptor trafficking to the membrane in the context of human genetic disease resulting from pathogenic variants of accessory proteins and G proteins. Pathogenic variants of the genes encoding G protein α and β subunits are examined in diverse phenotypes. Variants in the genes encoding accessory proteins that modify or organize G protein coupling have been associated with disease; these include the contribution of variants of regulator of G protein signaling (RGS) proteins to hypertension; the role of variants of activator of G protein signaling type III in phenotypes such as hypoxia; the contribution of variation at the RGS10 gene to short stature and immunological compromise; and the involvement of variants of G protein-coupled receptor kinases (GRKs), such as GRK4, in hypertension. Variation in genes that encode proteins involved in GPCR signaling is outlined in the context of the changes in structure and function that may be associated with human phenotypes.
Title: "Genetic variants of accessory proteins and G proteins in human genetic disease." Critical Reviews in Clinical Laboratory Sciences, pages 113-134. Open access (PMCID: PMC11854058).