Pub Date: 2026-02-02 | DOI: 10.1016/j.measurement.2026.120692
Faisal Saleem, Józef Wiora, Delfim F.M. Torres
Modeling the dynamics of a flow rate system is challenging due to nonlinear behavior and noisy measurement data. Accurate models require a comprehensive understanding of fluid mechanics, as well as knowledge of all instruments in the measurement chain. This study presents a black-box optimization approach to develop a nominal Fractional-Order (FO) model of a laboratory-scale flow system. The model was constructed by repeatedly solving an optimization problem using preprocessed experimental data and averaging the resulting optimal parameters. The nominal FO model was then validated against unseen, unprocessed measurement data to assess its robustness. The parameter sensitivity of the proposed model was analyzed by introducing +10% and +20% perturbations in each parameter individually. Error analysis shows that the root mean squared, mean absolute, and mean absolute percentage errors of the proposed model are reduced to 9.3%, 5.1%, and 5.3%, respectively, compared to those of integer-order models. Furthermore, residual-based distribution analysis confirms the robustness of the approach, with residuals tightly concentrated around the lowest values. Although the FO model incurs a higher computational cost during optimization, this cost was significantly reduced using an online optimizer. The proposed model demonstrates superior robustness and accuracy, making it a compelling choice for precise modeling.
Title: "Fractional-order modeling of a flow rate measurement system utilizing Grünwald–Letnikov based optimization" (Measurement, vol. 268, Article 120692).
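The Grünwald–Letnikov definition referenced in the title is what makes such FO models computable: the fractional derivative is approximated by a weighted sum over the signal's history. Below is a minimal numerical sketch of the textbook GL truncation, not the authors' optimization code; the step size `h` and the linear test signal are purely illustrative.

```python
import numpy as np

def gl_coefficients(alpha, n):
    """Grünwald-Letnikov binomial coefficients via the standard recurrence:
    c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1) / j)."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def gl_derivative(f_samples, alpha, h):
    """Approximate the alpha-order GL derivative at the last sample point:
    D^alpha f(t) ~= h^(-alpha) * sum_j c_j * f(t - j*h)."""
    n = len(f_samples) - 1
    c = gl_coefficients(alpha, n)
    # f_samples[::-1][j] is f(t - j*h), matching coefficient c_j
    return h ** (-alpha) * np.dot(c, f_samples[::-1])

# Sanity check: for alpha = 1 the GL sum collapses to a backward difference,
# so the derivative of f(t) = t should be 1; for alpha = 0 it returns f(t).
h = 0.01
t = np.arange(0.0, 1.0 + h, h)
d1 = gl_derivative(t, 1.0, h)
d_half = gl_derivative(t, 0.5, h)  # a genuinely fractional order
```

Non-integer orders such as `alpha = 0.5` interpolate between these limits, which is the extra modeling freedom the FO model exploits.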
Pub Date: 2026-02-02 | DOI: 10.1016/j.measurement.2026.120683
Grzegorz Tytko, Krzysztof Bernacki, Yao Luo, Konstanty M. Gawrylczyk, Jun Tu, Pu Huang
Ferrite pot cores are commonly used in eddy current inspections that require highly sensitive probes. Computer simulations of these tests are typically performed using mathematical models, which also support the interpretation of measurement results and the selection of optimal testing parameters and probe geometry. However, accurate modeling of ferrite cores remains challenging because of the limited availability of data on the materials used, as well as the complexity of determining their magnetic permeability. As a result, it is difficult to assess how the choice of core material affects the sensitivity of an eddy current probe. This study presents an analysis of this issue based on impedance measurements of 21 ferrite pot cores with varying geometric dimensions. The investigation included both magnetic and non-magnetic test samples, provided in the form of plates and disks. Key parameters of the measurement setup were considered, including the probe operating frequency, core geometry, type of test material, and the core’s initial permeability. The results showed that the probe’s impedance components may differ by several tens of percent depending on the core material. The influence of the core material on probe sensitivity proved to be particularly significant when testing magnetic samples and cores with large geometric dimensions.
Title: "Study of ferrite pot cores used in eddy current testing" (Measurement, vol. 268, Article 120683).
As promising layered semiconductors, two-dimensional (2D) SnS2 and Janus SnSSe exhibit diverse nonlinear optical (NLO) behaviors that are strongly influenced by structural modulation. In this study, the nonlinear absorption (NLA) responses of both materials were systematically investigated using the open aperture (OA) Z-scan technique under excitation wavelengths of 355, 532, and 1064 nm. The results indicated that both materials exhibit two-photon absorption (TPA) as the dominant mechanism at high excitation energies. Notably, SnS2 still exhibited TPA as the dominant mechanism at low excitation energies, while SnSSe showed saturable absorption (SA) across all three wavelengths (355, 532, and 1064 nm). Structural analysis revealed that the introduction of Se transforms the point group from the centrosymmetric D3d to the non-centrosymmetric C3v, breaking inversion symmetry and altering the electronic state distribution and transition selection rules, thereby enhancing the SA behavior of SnSSe. These findings underscore the critical role of anion substitution in modulating the NLO properties of 2D materials, providing valuable insights for the development of advanced optical limiting (OL) devices and saturable absorbers.
Title: "Anion substitution modulation of nonlinear absorption properties in two-dimensional SnS2 and Janus SnSSe" by QianHou Liu, Shuangjie Li, Mengyu Shen, ZiHan Ren, Fei Xing, Fang Zhang (Measurement, vol. 268, Article 120674; DOI: 10.1016/j.measurement.2026.120674).
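In an open-aperture Z-scan, two-photon absorption appears as a symmetric transmittance dip centered at the beam focus, whereas saturable absorption produces a peak. A minimal sketch of the standard Sheik-Bahae TPA series follows; the parameters `q00` and `z0` are hypothetical, not values fitted to the reported measurements.

```python
import numpy as np

def oa_transmittance(z, q00, z0, terms=20):
    """Normalized open-aperture Z-scan transmittance for two-photon
    absorption (Sheik-Bahae series, valid for |q0| < 1):
    T(z) = sum_m [-q0(z)]^m / (m+1)^1.5, with q0(z) = q00 / (1 + z^2/z0^2)."""
    z = np.asarray(z, dtype=float)
    q0 = q00 / (1.0 + (z / z0) ** 2)
    m = np.arange(terms)
    # broadcast: rows = scan positions, columns = series terms
    return np.sum((-q0[:, None]) ** m / (m + 1) ** 1.5, axis=1)

# hypothetical scan: +/- 30 mm around the focus, Rayleigh range z0 = 5 mm
z = np.linspace(-30e-3, 30e-3, 201)
T = oa_transmittance(z, q00=0.2, z0=5e-3)
# TPA signature: minimum transmittance at z = 0, recovery to ~1 far from focus
```

Fitting measured traces to this model yields the TPA coefficient; an analogous saturable-absorption model would instead produce a peak at the focus, as observed here for SnSSe.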
Pub Date: 2026-02-02 | DOI: 10.1016/j.measurement.2026.120680
Peng Li, Zhanhua Wu, Yuyuan Wu, Haiying Liang, Xinyuan Guo, Yongjian Li
The performance and reliability of mechanical systems depend significantly on the operating condition of their rolling bearings. Therefore, accurately identifying bearing faults is fundamental to maintaining stable system operation. However, fault diagnosis often faces challenges such as limited training samples and missing labels. To address these issues, this paper proposes a Hybrid Higher-order Graph Convolutional Network (MixHop GCN) model that leverages few-shot label propagation. Building on the Gramian Angular Field Graph (GAF Graph) construction, the model integrates the Label Weight Propagation Algorithm (LWPA) with a MixHop GCN architecture. The algorithm introduces a confidence weight parameter to propagate known fault labels to unlabeled nodes along weighted edges of the graph, effectively mitigating the data scarcity inherent in few-shot scenarios. Moreover, the MixHop GCN adaptively adjusts convolutional depth and breadth by mixing different powers of the adjacency matrix, thereby accommodating diverse graph structures and data characteristics. Furthermore, Lasso regularization, acting as an adaptive adjacency-efficiency optimization mechanism that dynamically adjusts computational resource usage, markedly accelerates training and inference while improving diagnostic performance. Finally, the proposed model is empirically validated on two public datasets to demonstrate its efficacy.
Title: "A hybrid higher-order graph convolutional network for fault diagnosis based on small sample label propagation" (Measurement, vol. 268, Article 120680).
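The core idea of spreading a few known fault labels through a weighted graph can be illustrated independently of the paper's specific LWPA. The sketch below is generic weighted label propagation with clamping and a confidence factor; the toy graph, its edge weights, and the `confidence` value are hypothetical.

```python
import numpy as np

def propagate_labels(A, labels, confidence=0.9, iters=50):
    """Generic weighted label propagation (illustrative, not the paper's
    LWPA). A: symmetric weighted adjacency matrix; labels: class id per
    node, or -1 for unlabeled; confidence: damping on propagated mass."""
    n = A.shape[0]
    classes = sorted(c for c in set(int(c) for c in labels) if c >= 0)
    Y0 = np.zeros((n, len(classes)))
    for i, c in enumerate(labels):
        if c >= 0:
            Y0[i, classes.index(int(c))] = 1.0
    # row-normalize so each step is a weighted neighbor average
    S = A / A.sum(axis=1, keepdims=True)
    Y = Y0.copy()
    for _ in range(iters):
        Y = confidence * (S @ Y)
        # clamp labeled nodes back to their known one-hot labels
        Y[labels >= 0] = Y0[labels >= 0]
    return np.array([classes[k] for k in Y.argmax(axis=1)])

# toy graph: two triangles joined by one weak edge; one labeled node each
A = np.array([[0, 1, 1, 0.1, 0, 0],
              [1, 0, 1, 0,   0, 0],
              [1, 1, 0, 0,   0, 0],
              [0.1, 0, 0, 0, 1, 1],
              [0, 0, 0, 1,   0, 1],
              [0, 0, 0, 1,   1, 0]], dtype=float)
labels = np.array([0, -1, -1, 1, -1, -1])
pred = propagate_labels(A, labels)  # each triangle adopts its seed's class
```

In the paper's setting the graph comes from the GAF construction over vibration signals and the propagation weights carry the confidence parameter; the mechanics of spreading sparse labels along edges are the same.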
Pub Date: 2026-02-01 | DOI: 10.1016/j.measurement.2026.120671
Min Gao, Xiaoping Kang, Kun Zhou, Teng Xie
Steel surface defect detection in high-speed production lines requires millisecond-level response while handling diverse anomalies ranging from microscopic cracks to macroscopic rolling marks. However, current methods fail to address steel-specific challenges, including extreme aspect ratios, metallic texture interference, and scale variations spanning three orders of magnitude. To address these limitations, this paper proposes a hierarchical detection Transformer framework termed OverLoPK-CSMPG-CDCMB-DEIM (OCC-DEIM), which incorporates a geometry-adaptive receptive field mechanism to resolve the geometric mismatch between fixed receptive fields and diverse defect morphologies. The framework first employs an Overview-first-look-closely-next convnet with PConv-context-mixing dynamic kernels (OverLoPK) as its backbone, providing an efficient computational foundation through hierarchical decomposition and partial convolution (PConv)-enhanced dynamic kernels, reducing computational cost by 82.6 percent while preserving fine-grained texture discrimination. Building upon this foundation, the core mechanism achieves receptive field adaptation along two dimensions. CSP sparse multi-path convolution with gated linear units (CSMPG) realizes morphology adaptation through learnable sampling points and gating units for elongated scratches and irregular pitting. The CSP dynamic convolutional mixer block (CDCMB) enhances directional sensitivity through anisotropic multi-branch convolutions for directional rolling defects. Evaluation demonstrates superior performance, with 89.4 percent mean Average Precision (mAP@0.5) on NEU-DET, 90.4 percent on GC10-DET, and 97.5 percent on PCB-DET, improving 3.8 to 4.2 percentage points over baselines. The framework achieves 105.4 frames per second (FPS) on a personal computer (PC) and 57.8 FPS on embedded platforms with only 14.9 M parameters. This work provides a practical solution for automated industrial defect inspection.
Title: "OCC-DEIM: Simultaneous optimization of detection accuracy and computational efficiency for embedded steel defect inspection systems" (Measurement, vol. 268, Article 120671).
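The mAP@0.5 metric counts a detection as a true positive only when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal IoU check, with illustrative boxes, makes the threshold concrete:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# a prediction counts toward mAP@0.5 only when IoU >= 0.5
gt = (0.0, 0.0, 10.0, 10.0)
pred = (5.0, 0.0, 15.0, 10.0)  # half-shifted box: inter 50, union 150
score = iou(gt, pred)          # 1/3, so NOT a true positive at 0.5
```

Averaging precision over recall levels per class, then over classes, at this single IoU threshold gives the mAP@0.5 figures quoted above.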
Underwater visible light communication (UWVLC) is gaining attention as a promising solution for high-speed, low-latency, and energy-efficient data transmission in aquatic environments. To enhance spectral efficiency and support multiple users in bandwidth-limited scenarios, this work investigates the integration of non-orthogonal multiple access (NOMA) with UWVLC systems. The underwater optical channel is modeled using the Gamma-Gamma distribution, which is particularly suitable for capturing both small-scale and large-scale turbulence effects, unlike conventional fading models that address only partial characteristics of underwater optical fading. To ensure practical relevance, impairments such as imperfect channel state information (CSI) and residual successive interference cancellation (SIC) are jointly incorporated into the analysis, and closed-form expressions for the outage probability are derived over Gamma-Gamma fading channels. The proposed analytical framework also unifies weak and strong turbulence conditions through Log-normal and κ-based approximations, providing a more comprehensive performance evaluation. These contributions offer deeper insights that are not addressed in prior NOMA-UWVLC studies and support the practical design of reliable multi-user underwater optical networks. Monte Carlo simulations for a three-user NOMA-UWVLC system are carried out to validate the analytical framework. The influence of key system parameters, including receiver aperture diameter, transmitter divergence angle, and diverse water conditions (coastal, seawater, and river water), is also investigated. These results provide key design insights for optimizing NOMA-enabled UWVLC systems under practical impairments and diverse underwater conditions, supporting future underwater IoT and 6G-driven applications.
Title: "Gamma-gamma turbulence-based performance modeling of underwater NOMA visible light communication systems with imperfect CSI and SIC" by Thaherbasha Shaik, Nageena Parveen Syed, Hareesh Ayanampudi (Measurement, vol. 268, Article 120657; DOI: 10.1016/j.measurement.2026.120657).
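The Monte Carlo validation described above can be sketched generically: Gamma-Gamma turbulence is commonly simulated as the product of two independent unit-mean Gamma variates (large- and small-scale fluctuations), and outage probability is the fraction of samples falling below a threshold. The shape parameters and threshold below are hypothetical, not the paper's link budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_gamma_samples(alpha, beta, n, rng):
    """Gamma-Gamma irradiance samples: product of two independent
    unit-mean Gamma variates with shapes alpha and beta."""
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
    return x * y

def outage_probability(alpha, beta, threshold, n=200_000, rng=rng):
    """Monte Carlo estimate of P(irradiance < threshold)."""
    return float(np.mean(gamma_gamma_samples(alpha, beta, n, rng) < threshold))

# stronger turbulence (smaller alpha, beta) should raise the outage rate
p_weak = outage_probability(alpha=4.0, beta=4.0, threshold=0.3)
p_strong = outage_probability(alpha=1.5, beta=1.0, threshold=0.3)
```

The paper's closed-form expressions replace such simulation in the limit of many samples; in practice the Monte Carlo estimate is what validates them.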
Pub Date: 2026-02-01 | DOI: 10.1016/j.measurement.2026.120656
Yijie Chen, Chongbo Sun, Xinyue Chang, Yixun Xue, Jia Su, Xuan Zhang
Electrochemical gas sensors (ECGSs) suffer severe accuracy degradation below –30 °C, yet prevailing calibration research typically focuses on algorithmic enhancements in isolation, assuming full-precision arithmetic and plentiful compute resources. We address this gap with a hardware–software co-design framework that simultaneously optimizes ADC resolution, feature extraction, model architecture, and numeric precision (full-precision (FP32) vs. 8-bit integer (INT8)). In addition to adopting floating-point operations (FLOPs) as a theoretical cost metric, we conduct an end-to-end deployment evaluation on an STM32F7 microcontroller by measuring inference latency and energy consumption per prediction. Across the low-temperature calibration range (down to –30 °C), support vector regression with a radial basis function kernel achieves the highest accuracy, while a full-precision multilayer perceptron (MLP) matches this performance with 50% fewer FLOPs. Post-training quantization yields INT8 MLPs with only a 4.57 increase in evaluation score, alongside a 23% reduction in inference latency and a 95% reduction in energy consumption relative to FP32 models. Convolutional neural networks (CNNs), by contrast, show negligible latency improvements under INT8 in shallow configurations due to fixed quantization overhead. However, as model depth increases, CNNs scale more efficiently and exhibit smaller accuracy degradation than MLPs. Notably, the energy savings from quantization vary across architectures due to their differing computational characteristics. This comprehensive evaluation addresses the literature's omission of embedded-hardware considerations, demonstrating how low-precision, lightweight neural models can be tailored for ultra-low-power, real-time calibration. It further highlights how different model architectures exhibit distinct latency and energy profiles, guiding architecture-aware deployment.
Title: "A hardware–software co-design calibration framework for two-electrode gas sensors under low-temperature conditions: accuracy, latency, and energy trade-offs" (Measurement, vol. 268, Article 120656).
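The FP32-vs-INT8 trade-off discussed above rests on post-training quantization. A generic symmetric per-tensor sketch follows (not the authors' deployment pipeline; the layer sizes and random data are arbitrary); it quantizes a weight matrix to int8, runs one hypothetical MLP layer both ways, and measures the resulting output error.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8:
    map [-max|w|, +max|w|] onto [-127, 127]."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# one hypothetical MLP layer y = relu(W x + b), FP32 vs weight-quantized INT8
W = rng.normal(size=(32, 16)).astype(np.float32)
b = rng.normal(size=32).astype(np.float32)
x = rng.normal(size=16).astype(np.float32)

Wq, s = quantize_int8(W)
y_fp32 = np.maximum(W @ x + b, 0.0)
y_int8 = np.maximum(dequantize(Wq, s) @ x + b, 0.0)
rel_err = np.linalg.norm(y_fp32 - y_int8) / np.linalg.norm(y_fp32)
```

On real hardware the int8 weights are consumed directly by integer matmul kernels, which is where the latency and energy savings reported above come from; the small `rel_err` here illustrates why accuracy degrades only modestly.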
Pub Date: 2026-02-01 | DOI: 10.1016/j.measurement.2026.120673
Yin Tao, Tao Liu, Peishi Yu, Aoqi Shen, Yuxiang Zhao, Xin Zhang, Junhua Zhao
Real-time dynamic strain monitoring is critical for assessing the structural health of soft and impact-prone intelligent structures, yet it remains challenging due to the difficulty of reliably integrating sensors for internal strain detection. Here, we present an integrated design-to-fabrication strategy based on hybrid 3D printing that combines fused filament fabrication (FFF) and direct ink writing (DIW) to embed strain sensors within soft structures. This strategy enables internal dynamic strain monitoring under impact loading while eliminating adhesive-dependent manual assembly and providing superior physical protection compared with traditional surface-mounted methods. Dynamic strain responses at multiple internal locations are experimentally measured under controlled impact conditions and systematically validated using finite element simulations incorporating a nonlinear visco-hyperelastic constitutive model. The results demonstrate repeatable strain responses under repeated impacts as well as location-dependent strain characteristics that enable impact zone identification. To illustrate the versatility of the proposed framework, several proof-of-concept demonstrations are presented, including impact monitoring in protective helmets, finger-bending detection in wearable devices, hermeticity monitoring in tube furnaces, and multi-zone impact localization. Overall, this work establishes an efficient and mechanically validated framework for integrating sensing functionalities into soft structures under dynamic loading.
Title: "Real-time impact strain monitoring in soft structures via hybrid 3D printing" (Measurement, vol. 268, Article 120673).
Pub Date : 2026-02-01DOI: 10.1016/j.measurement.2026.120676
Jie Li , Longnan Wang , Feng Zhou , Hongbo Liu , Dali Chen , Dongying Wang , Zhencheng Wang , Hangyu Zhou , Yang Xiao , Jinlong Lu , Tao Li , Qingquan Liang , Yumeng Zheng , Yiwen Xie , Jinxin Hu , Yongjie Ouyang , Zhenrong Zhang , Qiang Bian , Yang Yu
With the development of aerospace technology, the demand for high-temperature sensors used to measure temperatures in components such as engines and blades has been continuously increasing. However, these sensors are currently limited by the lack of advanced temperature calibration techniques, making high-precision measurement in the ultra-high-temperature range difficult to achieve. The traditional high-temperature calibration technique based on high-temperature furnaces is typically limited to 1800 °C due to the limitations of resistive heating sources and the limited heat resistance of metal/ceramic materials. Laser heating technology has been proposed to achieve ultra-high temperatures, but it suffers from non-uniform temperature distribution and transient temperature changes. In this paper, we propose a novel static ultra-high-temperature calibration method based on a modified ultra-high-temperature laser heating technology. By optimizing the thermostatic object chamber design and airflow parameters, a long-term stable and uniform ultra-high-temperature environment is created, effectively suppressing oxidation and raising the calibration temperature limit. Simulation and experimental results indicate that, with an argon airflow speed of 20 m/s and a preheating temperature of 300 °C, the system creates a uniform and stable temperature field with temperature fluctuations within ±7 °C at 2100 °C over 20 min. Our method significantly improves the temperature stability and adaptability of ultra-high-temperature calibration systems and provides reliable technical support for the further development of ultra-high-temperature sensors.
{"title":"Design of high-stability laser thermostatic object chamber for ultra-high temperature sensor calibration","authors":"Jie Li , Longnan Wang , Feng Zhou , Hongbo Liu , Dali Chen , Dongying Wang , Zhencheng Wang , Hangyu Zhou , Yang Xiao , Jinlong Lu , Tao Li , Qingquan Liang , Yumeng Zheng , Yiwen Xie , Jinxin Hu , Yongjie Ouyang , Zhenrong Zhang , Qiang Bian , Yang Yu","doi":"10.1016/j.measurement.2026.120676","DOIUrl":"10.1016/j.measurement.2026.120676","url":null,"abstract":"<div><div>With the development of aerospace technology, the demand for high-temperature sensors used to measure temperatures in components such as engines and blades has been continuously increasing. However, these sensors are currently limited by the lack of advanced temperature calibration techniques, making high-precision measurement in the ultra-high-temperature range difficult to achieve. The traditional high-temperature calibration technique based on high-temperature furnaces is typically limited to 1800 °C due to the limitations of resistive heating sources and the limited heat resistance of metal/ceramic materials. Laser heating technology has been proposed to achieve ultra-high temperatures, but it suffers from non-uniform temperature distribution and transient temperature changes. In this paper, we propose a novel static ultra-high-temperature calibration method based on a modified ultra-high-temperature laser heating technology. By optimizing the thermostatic object chamber design and airflow parameters, a long-term stable and uniform ultra-high-temperature environment is created, effectively suppressing oxidation and raising the calibration temperature limit. Simulation and experimental results indicate that, with an argon airflow speed of 20 m/s and a preheating temperature of 300 °C, the system creates a uniform and stable temperature field with temperature fluctuations within ±7 °C at 2100 °C over 20 min. 
Our method significantly improves the temperature stability and adaptability of ultra-high-temperature calibration systems and provides reliable technical support for the further development of ultra-high-temperature sensors.</div></div>","PeriodicalId":18349,"journal":{"name":"Measurement","volume":"268 ","pages":"Article 120676"},"PeriodicalIF":5.6,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146172034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-02-01DOI: 10.1016/j.measurement.2026.120670
Seyedahmad Hosseini , Gabriele Chinello , Gordon Lindsay , Don McGlinchey
Accurate uncertainty quantification (UQ) is essential for deploying machine learning (ML) models in multiphase flow metering, where limited training data, incomplete feature representation, and distribution shifts across operating conditions introduce significant epistemic uncertainty beyond the inherent variability of the sensors and flow dynamics. Using experimental datasets under diverse multiphase conditions, this study evaluates predictive uncertainty across five ML models: deep neural networks (DNN), long short-term memory networks (LSTM), random forests (RF), extreme gradient boosting (XGBoost), and Gaussian process regression (GPR). Conformal prediction (CP) is employed as a model-agnostic framework to generate calibrated prediction intervals (PIs), whereas GPR estimates the predictive variance through its kernel-based structure. The evaluation results show that gas flow rate predictions exhibit high accuracy and well-calibrated intervals across the models, with CP producing fixed-width PIs and GPR achieving the narrowest, dynamically adjusted intervals. Among the CP-based models, RF demonstrated the best balance between high prediction accuracy and low uncertainty. Liquid flow rate predictions exhibited higher epistemic uncertainty across all models; introducing mixture fluid density, an engineered feature derived from the gas volume fraction (GVF) and phase densities, reduced these uncertainties. Explainable AI (XAI) tools were then applied to rank feature importance by predictive strength. The study findings emphasize the importance of both targeted feature engineering and uncertainty-aware modeling, highlighting the practical advantages of CP for model-agnostic UQ and the necessity of XAI tools for transparency. 
Overall, these results support the applicability of explainable, uncertainty-calibrated ML systems for real-time multiphase flow monitoring, with direct implications for metering confidence and operational decision-making.
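The fixed-width intervals that conformal prediction produces, as noted in the abstract above, can be illustrated with a minimal split-CP sketch. The data and model below are synthetic stand-ins for the study's experimental wet-gas dataset and trained models; the feature weights and noise level are arbitrary assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: 4 features and a noisy target playing the role of
# flow-meter inputs and the gas flow rate (not the study's experimental data).
X = rng.uniform(0.0, 1.0, size=(600, 4))
y = X @ np.array([3.0, -2.0, 1.5, 0.5]) + rng.normal(0.0, 0.2, size=600)

# Split into proper training, calibration, and test sets.
X_train, y_train = X[:400], y[:400]
X_cal, y_cal = X[400:500], y[400:500]
X_test, y_test = X[500:], y[500:]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the held-out calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Finite-sample-corrected score quantile for nominal 90% coverage.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Fixed-width prediction intervals: every test point gets the same half-width q.
pred = model.predict(X_test)
lower, upper = pred - q, pred + q
```

The constant half-width `q` is what makes split-CP intervals "fixed", in contrast to GPR, whose posterior variance (and hence interval width) changes with the input point.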
{"title":"A robust uncertainty quantification framework for machine learning–based wet-gas flow metering","authors":"Seyedahmad Hosseini , Gabriele Chinello , Gordon Lindsay , Don McGlinchey","doi":"10.1016/j.measurement.2026.120670","DOIUrl":"10.1016/j.measurement.2026.120670","url":null,"abstract":"<div><div>Accurate uncertainty quantification (UQ) is essential for deploying machine learning (ML) models in multiphase flow metering, where limited training data, incomplete feature representation, and distribution shifts across operating conditions introduce significant epistemic uncertainty beyond the inherent variability of the sensors and flow dynamics. Using experimental datasets under diverse multiphase conditions, this study evaluates predictive uncertainty across five ML models: deep neural networks (DNN), long short-term memory networks (LSTM), random forests (RF), extreme gradient boosting (XGBoost), and Gaussian process regression (GPR). Conformal prediction (CP) is employed as a model-agnostic framework to generate calibrated prediction intervals (PIs), whereas GPR estimates the predictive variance through its kernel-based structure. The evaluation results show that gas flow rate predictions exhibit high accuracy and well-calibrated intervals across the models, with CP producing fixed-width PIs and GPR achieving the narrowest, dynamically adjusted intervals. Among the CP-based models, RF demonstrated the best balance between high prediction accuracy and low uncertainty. Liquid flow rate predictions exhibited higher epistemic uncertainty across all models; introducing mixture fluid density, an engineered feature derived from the gas volume fraction (GVF) and phase densities, reduced these uncertainties. Explainable AI (XAI) tools were then applied to rank feature importance by predictive strength. 
The study findings emphasize the importance of both targeted feature engineering and uncertainty-aware modeling, highlighting practical advantages of CP for model-agnostic UQ and the necessity of XAI tools for transparency. Overall, these results support the applicability of explainable, uncertainty-calibrated ML systems for real-time multiphase flow monitoring, with direct implications for metering confidence and operational decision-making.</div></div>","PeriodicalId":18349,"journal":{"name":"Measurement","volume":"269 ","pages":"Article 120670"},"PeriodicalIF":5.6,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146147329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}