Pub Date: 2024-04-13  DOI: 10.1007/s00769-024-01593-y
Fernando C. Raposo, Michael H. Ramsey
{"title":"The role of measurement uncertainty in the validation of a measurement procedure","authors":"Fernando C. Raposo, Michael H. Ramsey","doi":"10.1007/s00769-024-01593-y","DOIUrl":"10.1007/s00769-024-01593-y","url":null,"abstract":"","PeriodicalId":454,"journal":{"name":"Accreditation and Quality Assurance","volume":"29 3","pages":"267 - 267"},"PeriodicalIF":0.8,"publicationDate":"2024-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140707951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-10  DOI: 10.1007/s00769-024-01586-x
Chikako Cheong, Toshihiro Suzuki, Tsutomu Miura, Akiharu Hioki
Comparison of continuous flow analysis and ion chromatography for determinations of nitrate, nitrite and phosphate ions in seawater and development of related seawater certified reference materials. Accreditation and Quality Assurance 29(3): 243–251.
The determination of three nutrients (nitrate, nitrite and phosphate ions) in seawater was investigated by continuous flow analysis (CFA) based on colorimetry and by ion chromatography (IC). The accuracies of these methods were examined by discussing their uncertainties. Although some of the nitrite and phosphate validations were insufficient because of the low concentrations involved, the nitrate results of the two methods agreed well with each other. Although CFA is popular in this field, its comparison with IC was reported for the first time, which contributed to the reliability of the analytical results. Finally, the investigation led to the development of three seawater certified reference materials (CRMs) of the National Metrology Institute of Japan (NMIJ), NMIJ CRM 7601-a, 7602-a and 7603-a, for which property values of the nutrients, including dissolved silica, were assigned. The details of the development are described in the present paper.
Pub Date: 2024-03-27  DOI: 10.1007/s00769-024-01584-z
Manuel Alvarez-Prieto, Ricardo S. Páez-Montero
Analytical laboratories sometimes receive requests involving a small number of determinations and/or a small number of samples, or requests outside the typical scope of their analytical services. As a result, they may lack historical data on the performance of the analytical processes and/or appropriate reference materials. Under these conditions it is difficult or uneconomical to use traditional quality control charts; this is the so-called start-up problem of these charts. Q charts appear suitable under these conditions because they do not need any prior training or study phase. The fundamentals and algebraic expressions of Q charts for the mean (four cases) and for the variance (two cases) are presented. This experimental study of Q charts for individual measurements was carried out with quality-control data from the evaluation of the mass fractions of Ni and Al2O3 in a laterite CRM by ICP-OES. The performance of these Q charts is discussed both when the analytical process is in a state of statistical control and when outliers are present at start-up. In the first situation the Q charts perform satisfactorily and behave properly. When outliers occur at the beginning, some charts are visibly distorted or become useless: severe outliers corrupt the parameter estimates and the subsequently plotted points, or render the charts insensitive. The practitioner should therefore take great care to ensure that the initial values are obtained under statistical control so that the charts retain adequate sensitivity to detect parameter shifts.
{"title":"Quality control charts for short or long runs without a training phase. Part 1. Performances in state of control and in the presence of outliers","authors":"Manuel Alvarez-Prieto, Ricardo S. Páez-Montero","doi":"10.1007/s00769-024-01584-z","DOIUrl":"10.1007/s00769-024-01584-z","url":null,"abstract":"<div><p>Sometimes analytical laboratories receive requests with a small number of determinations and/or a small number of samples, or outside the typical scope of analytical services. As a result, they may not have historical data on the performance of analytical processes and/or appropriate reference materials. Under these conditions it is difficult or uneconomical to use traditional or classic quality control charts. This is the so-called start-up problem of these charts. The Q charts seem appropriate charts under these conditions because they do not need any prior training or study phase. The fundamentals and the algebraic expressions of Q charts for the mean (four cases) and for the variance (two cases) are offered. This experimental study of Q charts for individual measurements was done with data from quality control for the evaluation of mass fraction of Ni and Al<sub>2</sub>O<sub>3</sub> in a laterite CRM by ICP-OES. The performance of these Q charts is discussed where the analytical process is in the state of statistical control and in the presence of outliers at the start-up. In the first situation performance of Q charts are quite satisfactory and they behave properly. When outliers are collected at the beginning, the deformation of some charts is evident or the charts become useless. Severe outliers will corrupt the parameter estimates and the subsequent plotted points, or the charts will become insensitive and useless. The practitioner should take extreme care to assure that the initial values are obtained in the state of statistical control to have adequate sensitivity to detect parameter shifts.</p></div>","PeriodicalId":454,"journal":{"name":"Accreditation and Quality Assurance","volume":"29 3","pages":"231 - 242"},"PeriodicalIF":0.8,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140374927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-27  DOI: 10.1007/s00769-024-01579-w
Peter D. Rostron, Tom Fearn, Michael H. Ramsey
Measurement uncertainty (MU) arising at different stages of a measurement process can be estimated using analysis of variance (ANOVA) on replicated measurements. It is common practice to derive an expanded MU by multiplying the resulting standard deviation by a coverage factor k. This coverage factor then defines an interval around a measurement value within which the value of the measurand, or true value, is asserted to lie for a desired confidence level (e.g. 95 %). A value of k = 2 is often used to obtain approximate 95 % coverage, although k = 2 will be an underestimate when the standard deviation is estimated from a limited amount of data. An alternative is to use Student's t-distribution to provide a value for k, but this requires an exact or approximate value for the degrees of freedom (df). This paper explores two different methods of deriving an appropriate k in the case when two variances from an ANOVA (classical or robust) need to be combined to estimate the measurement variance. Simulations show that both methods using the modified coverage factor generally produce a confidence interval much closer to the desired level (e.g. 95 %) when the data are approximately normally distributed. When these confidence intervals do deviate from 95 %, they are consistently conservative (i.e. reported coverage is higher than the nominal 95 %). When outlying values are included at the level of the larger variance component, in some cases the method used for robust ANOVA produces confidence intervals that are very conservative.
{"title":"Improved coverage factors for expanded measurement uncertainty calculated from two estimated variance components","authors":"Peter D. Rostron, Tom Fearn, Michael H. Ramsey","doi":"10.1007/s00769-024-01579-w","DOIUrl":"10.1007/s00769-024-01579-w","url":null,"abstract":"<div><p>Measurement uncertainty (MU) arising at different stages of a measurement process can be estimated using analysis of variance (ANOVA) on replicated measurements. It is common practice to derive an expanded MU by multiplying the resulting standard deviation by a coverage factor <i>k.</i> This coverage factor then defines an interval around a measurement value within which the value of the measurand, or true value, is asserted to lie for a desired confidence level (e.g. 95 %). A value of <i>k</i> = 2 is often used to obtain approximate 95 % coverage, although <i>k</i> = 2 will be an underestimate when the standard deviation is estimated from a limited amount of data. An alternative is to use Student’s <i>t-</i>distribution to provide a value for <i>k</i>, but this requires an exact or approximate degrees of freedom (df). This paper explores two different methods of deriving an appropriate <i>k</i> in the case when two variances from an ANOVA (classical or robust) need to be combined to estimate the measurement variance. Simulations show that both methods using the modified coverage factor generally produce a confidence interval much closer to the desired level (e.g. 95 %) when the data are approximately normally distributed. When these confidence intervals do deviate from 95 %, they are consistently conservative (i.e. reported coverage is higher than the nominal 95 %). When outlying values are included at the level of the larger variance component, in some cases the method used for robust ANOVA produces confidence intervals that are very conservative.</p></div>","PeriodicalId":454,"journal":{"name":"Accreditation and Quality Assurance","volume":"29 3","pages":"225 - 230"},"PeriodicalIF":0.8,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00769-024-01579-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140377570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-23  DOI: 10.1007/s00769-024-01578-x
Hening Huang
In his recent paper (https://doi.org/10.1016/j.measen.2022.100416), Willink showed two contradictory theoretical solutions to the problem of combining and transforming two sets of information about the same quantity. Each set of information is represented by a probability density function (PDF), and the transformation function is nonlinear. We refer to Willink's contradictory result as the "Willink paradox". The two operations, (a) information combination and (b) information transformation, are both mathematically valid according to probability theory, so the two contradictory solutions are both theoretically correct. In practice, however, such as in measurement uncertainty analysis, we cannot use both solutions; we must choose one or the other. We propose an entropy metric that can be used to provide a practical solution to the Willink paradox. Three examples are presented to illustrate the Willink paradox and the proposed entropy metric.
{"title":"Information combination and transformation: a paradox and its resolution","authors":"Hening Huang","doi":"10.1007/s00769-024-01578-x","DOIUrl":"10.1007/s00769-024-01578-x","url":null,"abstract":"<div><p>In his recent paper (https://doi.org/10.1016/j.measen.2022.100416), Willink showed two contradictory theoretical solutions to the problem of combining and transforming two sets of information about the same quantity. Each set of information is represented by a probability density function (PDF), and the transformation function is nonlinear. We refer to Willink’s contradictory result as “Willink paradox”. Two operations: (a) information combination and (b) information transformation are both mathematically valid according to probability theory. Therefore, the two contradictory solutions are both theoretically correct. In practice such as measurement uncertainty analysis, however, we cannot use both solutions; we must choose one or the other. We propose an entropy metric that can be used to provide a practical solution to the Willink paradox. Three examples are presented to illustrate the Willink paradox and the proposed entropy metric.</p></div>","PeriodicalId":454,"journal":{"name":"Accreditation and Quality Assurance","volume":"29 3","pages":"183 - 187"},"PeriodicalIF":0.8,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140210973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-23  DOI: 10.1007/s00769-024-01583-0
Hening Huang
This paper considers the problem of computing the combined standard uncertainty of an indirect measurement, in which the measurand is related to multiple influence quantities through a measurement model. In practice, there may be prior information or current information, or both, about the influence quantities. We propose a practical two-step procedure for taking into account all available information (prior and current) about influence quantities in measurement uncertainty analysis. The first step is to combine prior and current information to form the merged information for each influence quantity based on the weighted average method or the law of combination of distributions. The second step deals with the propagation of the merged information to calculate the combined standard uncertainty using the law of propagation of uncertainty or the principle of propagation of distributions. The proposed two-step procedure is based entirely on frequentist statistics. A case study on the calibration of a test weight (mass calibration) is presented to demonstrate the effectiveness of the proposed two-step procedure and compare it with a subjective Bayesian approach.
{"title":"A practical two-step procedure for taking into account all available information (prior and current) about influence quantities in measurement uncertainty analysis","authors":"Hening Huang","doi":"10.1007/s00769-024-01583-0","DOIUrl":"10.1007/s00769-024-01583-0","url":null,"abstract":"<div><p>This paper considers the problem of computing the combined standard uncertainty of an indirect measurement, in which the measurand is related to multiple influence quantities through a measurement model. In practice, there may be prior information or current information, or both, about the influence quantities. We propose a practical two-step procedure for taking into account all available information (prior and current) about influence quantities in measurement uncertainty analysis. The first step is to combine prior and current information to form the merged information for each influence quantity based on the weighted average method or the law of combination of distributions. The second step deals with the propagation of the merged information to calculate the combined standard uncertainty using the law of propagation of uncertainty or the principle of propagation of distributions. The proposed two-step procedure is based entirely on frequentist statistics. A case study on the calibration of a test weight (mass calibration) is presented to demonstrate the effectiveness of the proposed two-step procedure and compare it with a subjective Bayesian approach.</p></div>","PeriodicalId":454,"journal":{"name":"Accreditation and Quality Assurance","volume":"29 3","pages":"215 - 223"},"PeriodicalIF":0.8,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140210816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-23  DOI: 10.1007/s00769-024-01577-y
R. Willink
This article is a response to the preceding paper by Huang, who considers a recent result of Willink (Measurement: Sensors, 24:100416, 2022) and who describes the result as a paradox. The result implied that a set of information or a “state of knowledge” about a measurand cannot be identified with a unique probability distribution for the measurand, contrary to what seems suggested in the literature surrounding the revision of the Guide to the Expression of Uncertainty in Measurement. The result is restated and viewed in the context of CIPM Recommendation INC-1, which was foundational in the original development of the Guide. It is argued that the result is a proof, not a paradox, and that it will only appear paradoxical to those who have adopted an incorrect premise about probability. The idea of having “information” about the true value of a measurand is discussed and contrasted with the idea of having “belief” about it. The material supports the view that the analysis of measurement uncertainty is to be based on classical statistical principles.
{"title":"Paradox? What paradox?","authors":"R. Willink","doi":"10.1007/s00769-024-01577-y","DOIUrl":"10.1007/s00769-024-01577-y","url":null,"abstract":"<div><p>This article is a response to the preceding paper by Huang, who considers a recent result of Willink (Measurement: Sensors, 24:100416, 2022) and who describes the result as a paradox. The result implied that a set of information or a “state of knowledge” about a measurand cannot be identified with a unique probability distribution for the measurand, contrary to what seems suggested in the literature surrounding the revision of the <i>Guide to the Expression of Uncertainty in Measurement</i>. The result is restated and viewed in the context of CIPM Recommendation INC-1, which was foundational in the original development of the <i>Guide</i>. It is argued that the result is a proof, not a paradox, and that it will only appear paradoxical to those who have adopted an incorrect premise about probability. The idea of having “information” about the true value of a measurand is discussed and contrasted with the idea of having “belief” about it. The material supports the view that the analysis of measurement uncertainty is to be based on classical statistical principles.</p></div>","PeriodicalId":454,"journal":{"name":"Accreditation and Quality Assurance","volume":"29 3","pages":"189 - 192"},"PeriodicalIF":0.8,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00769-024-01577-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140211250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-19  DOI: 10.1007/s00769-024-01585-y
Fernando C. Raposo, Michael H. Ramsey
{"title":"Target uncertainty: a critical review","authors":"Fernando C. Raposo, Michael H. Ramsey","doi":"10.1007/s00769-024-01585-y","DOIUrl":"10.1007/s00769-024-01585-y","url":null,"abstract":"","PeriodicalId":454,"journal":{"name":"Accreditation and Quality Assurance","volume":"29 4","pages":"327 - 328"},"PeriodicalIF":0.8,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140230703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-15  DOI: 10.1007/s00769-024-01580-3
Prangtip Uthaiwat, Theera Leeudomwong, Tassanai Sanponpute
The comparison of flow rate calibration methods for high-performance liquid chromatography (HPLC) pump. Accreditation and Quality Assurance 29(3): 205–214.
The main function of a high-performance liquid chromatography (HPLC) pump is to deliver the mobile phase at an accurate and stable flow rate. The General European OMCL Network guideline (Qualification of Equipment, Annex 1: Qualification of Liquid Chromatography Equipment, 2005) lists three flow rate calibration methods for verifying HPLC pump performance: use of a standard flow meter, a gravimetric method, and a volumetric method. The guideline does not, however, describe these calibration methods in detail. This study therefore investigated and compared flow rate calibration using the standard flow meter, the gravimetric method, and the volumetric method. The flow rate was calibrated at 0.5 and 5 mL/min with three replications, and the day-to-day stability of each method was observed over three days. The calibration procedure, traceability chart, and uncertainty budget of each method are described. The flow rate results were analyzed using the En ratio. The stability of all methods showed no statistically significant difference between days when compared with the first day. The average flow rates measured with the standard flow meter, the gravimetric method, and the volumetric method were 0.4908, 0.4992, and 0.5012 mL/min, respectively, at the 0.5 mL/min setting, and 4.8346, 4.8272, and 4.8394 mL/min, respectively, at the 5 mL/min setting. These results were mutually consistent, as confirmed by En ratios ≤ 1. The precision of the three methods ranged from 0.01 % to 0.08 % RSD. This study concluded that the flow rate results of the three calibration methods were consistent with one another and that all methods offered good stability and precision for flow rate calibration of an HPLC pump.
Pub Date: 2024-03-13  DOI: 10.1007/s00769-023-01567-6
Chun Yuan Huang, Ya Xuan Liu, Jian Zhou, Ming Wang, Meng Rui Yang, Hui Liu, Fukai Li, Liyuan Zhang
In this study, two matrix certified reference materials (CRMs) with different concentrations were produced for the accurate measurement of aflatoxin M1 (AFM1) in milk powder (GBW(E) 100552, GBW(E) 100553). The raw material was obtained by feeding cows with positive drugs. The homogeneity, stability and characterization of these matrix CRMs were examined by liquid chromatography-tandem mass spectrometry with an isotope-labeled internal standard method. The certified value for the low-concentration AFM1 milk powder was 2.45 µg/kg with an expanded uncertainty of 0.43 µg/kg (coverage factor k = 2, at 95 % confidence), and the certified value for the high-concentration AFM1 milk powder was 3.45 µg/kg with an expanded uncertainty of 0.49 µg/kg (coverage factor k = 2, at 95 % confidence). In addition, the samples were evaluated in detail for homogeneity, long-term stability at −80 °C for 6 months, and short-term stability at 4 °C for 7 days. The results showed that the samples were stable and homogeneous under these conditions.
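For context, the expanded uncertainty of a certified value is typically obtained by combining characterization, homogeneity and stability contributions in quadrature (ISO Guide 35 style). The sketch below illustrates this with hypothetical component values that are not the ones underlying the certified uncertainties quoted above.

```python
import math

def crm_expanded_uncertainty(u_char, u_bb, u_lts, u_sts, k=2.0):
    """Expanded uncertainty of a CRM certified value: characterization (u_char),
    between-bottle homogeneity (u_bb), long-term (u_lts) and short-term (u_sts)
    stability contributions combined in quadrature, multiplied by the coverage
    factor k (ISO Guide 35 style sketch)."""
    u_c = math.sqrt(u_char**2 + u_bb**2 + u_lts**2 + u_sts**2)
    return k * u_c

# Hypothetical component values, not those behind the certified 0.43 ug/kg
print(f"U = {crm_expanded_uncertainty(0.15, 0.10, 0.08, 0.05):.2f} ug/kg (k = 2)")
```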