
Latest articles in IEEE Sensors Journal

Method and Compensation Model for Measuring Geometric Errors of Rotary Axis Based on Circular Grating
IF 4.3, CAS Tier 2 (Multidisciplinary), Q1 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2025-10-01. DOI: 10.1109/JSEN.2025.3613795
Jiakun Li;Shuai Han;Bintao Zhao;Qixin He;Kaifeng Hu;Yibin Qian;Qibo Feng
The rotary axis is the basis of rotational motion. At present, error compensation is the main method of improving the motion accuracy of a rotary axis, and the key to error compensation lies in fast, accurate measurement of the rotary axis's geometric errors. Simultaneous measurement of the multi-degree-of-freedom geometric errors and the establishment of an error compensation model are the main means of achieving such measurement. Existing methods suffer from complex error decoupling, the need for a servo rotation system, and incomplete error compensation models. To address these issues, we propose a new method for measuring the four-degree-of-freedom geometric errors of a rotary axis based on a circular grating (CG). Its significant advantage is the ability to perform full-circle, simultaneous, and continuous measurement without requiring a servo rotation system. An error compensation model for the measurement system was then established based on the theory of homogeneous coordinate transformation, and the effects of drift, installation, and crosstalk errors on the results were analyzed in detail. In this process, we utilized a fourth-order transformation matrix and developed the first homogeneous coordinate transformation matrix applicable to CGs. The model was used to compensate the experimental results. After compensation, the radial and tilt error motions are reduced by up to 87%, and the repeatability values of the tilt error motions are reduced by up to 20%. The experimental results verified the effectiveness of the method and the model.
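As a rough illustration of the fourth-order homogeneous-transform idea (this is a generic sketch, not the paper's actual matrix), two small-angle tilt error motions and two radial error motions can be packed into a 4x4 matrix and inverted for compensation:

```python
import numpy as np

def error_transform(eps_x, eps_y, delta_x, delta_y):
    """Hypothetical 4x4 homogeneous matrix for four geometric errors:
    tilt error motions eps_x, eps_y (rad, small-angle approximation)
    and radial error motions delta_x, delta_y (mm). Sketch only."""
    T = np.eye(4)
    T[0, 2], T[2, 0] = eps_y, -eps_y      # small rotation about y
    T[1, 2], T[2, 1] = -eps_x, eps_x      # small rotation about x
    T[0, 3], T[1, 3] = delta_x, delta_y   # radial translations
    return T

# Compensation: apply the inverse transform to a measured point.
p_meas = np.array([100.0, 0.0, 0.0, 1.0])
p_comp = np.linalg.inv(error_transform(1e-4, 2e-4, 0.003, -0.002)) @ p_meas
```

In practice the error parameters would come from the CG measurement at each rotation angle; here they are fixed placeholder values.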
Citations: 0
Marker-to-Object Calibration Using Landmark Touch.
IF 4.3, CAS Tier 2 (Multidisciplinary), Q1 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2025-10-01. Epub Date: 2025-08-28. DOI: 10.1109/jsen.2025.3602006
Letian Ai, Saikat Sengupta, Yue Chen

In image-guided interventions, fiducial markers attached at designated positions are widely used for medical instrument tracking. However, because precise marker placement is difficult, obtaining an accurate marker-to-object transformation remains technically challenging, particularly with customized markers or those with non-standard geometries. To accurately identify the transformation, this study introduces a novel calibration method in which a fixed tip sequentially touches landmarks on the object. An inverse sample consensus filter is proposed to remove potential measurement outliers and improve the robustness of the calibration result. Validation through simulations and experiments under two tracking modalities demonstrated superior translational accuracy and improved robustness compared to conventional methods. Specifically, the experiment conducted under an electromagnetic tracking system demonstrated a translational error of 0.61 ± 0.11 mm and a rotational error of 0.97 ± 0.18°, and the experiment using a magnetic resonance imaging system demonstrated a translational error of 0.60 mm and a rotational error of 2.81°. A use case with an intracerebral hemorrhage evacuation robot further verified the feasibility of integrating the calibration method into an image-guided workflow. The proposed method achieved sub-millimeter calibration accuracy across different scenarios, demonstrating its effectiveness and strong potential for diverse research and clinical applications.
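At its core, landmark-touch calibration reduces to solving a rigid transform from corresponding point pairs (touched landmarks expressed in the tracker/marker frame versus the object model frame). A minimal least-squares (Kabsch/SVD) sketch of that step, standing in for the paper's full method and its inverse sample consensus filter:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) such that Q ~= (R @ P.T).T + t.
    Standard Kabsch/SVD solution; P and Q are Nx3 arrays of corresponding
    points, e.g., landmark positions in two coordinate frames."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

An outlier-rejection wrapper (as the paper's filter provides) would rerun this fit on point subsets and discard inconsistent touches.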

Citations: 0
A Novel LiDAR–Camera Joint Calibration Network Based on Cross-Modal Feature Fusion
IF 4.3, CAS Tier 2 (Multidisciplinary), Q1 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2025-10-01. DOI: 10.1109/JSEN.2025.3613846
Yanhui Xi;Wenxin Zhu;Zhen Ding;Lanlan Liu
In autonomous driving and robotic navigation, the fusion of multimodal data from LiDAR and cameras relies on accurate extrinsic calibration. However, calibration accuracy may degrade under external disturbances such as sensor vibration, temperature fluctuation, and aging. To address this problem, this article presents a novel LiDAR–camera joint calibration network based on cross-modal attention fusion (CMAF) and cross-domain feature extraction (CDFE). The CMAF module is built on region-level matching and pixel-level interaction to improve cross-modal feature alignment and fusion. To address the semantic inconsistency between encoder and decoder features, the CDFE is designed as a U-shaped architecture with multimodal skip connections: it captures large-scale contextual correlations through a spatial-to-frequency-domain transformation, and it maintains semantic consistency by fusing global features with original (residual) features in a dual-path architecture. Experiments on the KITTI odometry and KITTI-360 datasets show that our network not only significantly outperforms mainstream methods but also demonstrates strong generalization and high computational efficiency.
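Cross-modal attention of this kind generally lets one modality's features query the other's. A toy scaled dot-product version (a generic illustration only; the paper's CMAF combines region-level matching and pixel-level interaction and is more elaborate):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(img_tokens, lidar_tokens):
    """Image tokens (Ni x d) attend over LiDAR tokens (Nl x d):
    scores = QK^T / sqrt(d), output = softmax(scores) @ V,
    with queries from the image and keys/values from the LiDAR."""
    d = img_tokens.shape[-1]
    scores = img_tokens @ lidar_tokens.T / np.sqrt(d)   # (Ni, Nl)
    return softmax(scores, axis=-1) @ lidar_tokens      # (Ni, d)
```

In a real network the tokens would first pass through learned query/key/value projections; those are omitted here for brevity.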
Citations: 0
An Adaptive Transition Process Recognition and Modeling Framework for Soft Sensor in Complex Engineering Systems
IF 4.3, CAS Tier 2 (Multidisciplinary), Q1 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2025-09-29. DOI: 10.1109/JSEN.2025.3613587
Chao Ren;Zhen Liu;Zhen Ma;Cunsong Wang
In industrial production processes, variations in operating conditions or task requirements often lead to multimodal characteristics in process data. Mode identification is, therefore, essential for effective process monitoring and soft sensing. However, conventional strategies usually overlook transient modes, which hinders the adaptability of soft sensing models during transitional phases. To address these challenges, this article proposes an adaptive transition process recognition and modeling framework (ATPRMF). The framework consists of two key components: first, a Kullback–Leibler divergence (KLD)-based transitional mode identification method that adaptively detects the onset and termination of transitional modes by analyzing distributional differences; and second, a dynamic model fusion mechanism that integrates predictions from multiple models based on mode credibility, adapting to gradual distributional shifts and ensuring reliable predictions. Experimental validation on a real-world ball mill system and the benchmark Tennessee Eastman (TE) process demonstrates that the proposed framework significantly improves prediction accuracy and robustness compared to conventional approaches.
Citations: 0
Gas Classification Using Time-Series-to-Image Conversion and CNN-Based Analysis on Array Sensor
IF 4.3, CAS Tier 2 (Multidisciplinary), Q1 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2025-09-29. DOI: 10.1109/JSEN.2025.3612971
Chang-Hyun Kim;Daewoong Jung;Seung-Hwan Choi;Sanghun Choi;Suwoong Lee
Gas detection is essential in industrial and domestic environments to ensure safety and prevent hazardous incidents. Traditional single-sensor time-series analysis often suffers from limited accuracy and robustness under environmental variations. To address this issue, we propose an artificial intelligence (AI)-based approach that transforms 1-D time-series data into 2-D image representations and then classifies acetylene (C2H2), ammonia (NH3), and hydrogen (H2) using convolutional neural networks (CNNs). By utilizing image transformation techniques such as recurrence plots (RPs), Gramian angular fields (GAFs), and Markov transition fields (MTFs), our method significantly enhances feature extraction from sensor data. In this study, we utilized sensor array data obtained from ZnO and CuO thin films previously synthesized using a droplet-based hydrothermal method, exploiting the temperature-dependent response characteristics of these sensors to improve classification accuracy. Experimental results indicate that the proposed approach achieves a 6.2% relative improvement in classification accuracy over a conventional LSTM baseline (90.1%) applied directly to the raw time-series data. This study demonstrates that converting time-series data into image representations substantially improves gas detection performance, offering a scalable and efficient solution for various sensor-based applications. Future research will focus on real-time implementation and further optimization of deep learning architectures.
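Of the named transforms, the Gramian angular field is the simplest to sketch: rescale the series to [-1, 1], map values to angles via arccos, and form the summation-field matrix cos(phi_i + phi_j). A minimal version (the paper may use a different rescaling or the difference-field variant):

```python
import numpy as np

def gramian_angular_field(series):
    """Gramian angular summation field (GASF) of a 1-D series.
    Uses cos(a + b) = cos(a)cos(b) - sin(a)sin(b) with cos(phi) = x,
    so the matrix is outer(x, x) - outer(sqrt(1 - x^2), sqrt(1 - x^2))."""
    x = np.asarray(series, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0  # rescale to [-1, 1]
    x = np.clip(x, -1.0, 1.0)
    s = np.sqrt(1.0 - x ** 2)                             # sin(arccos(x))
    return np.outer(x, x) - np.outer(s, s)
```

The resulting N x N matrix is what the CNN would consume as a single-channel image.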
Citations: 0
A Miniaturized and Low-Cost Fingertip Optoacoustic Pretouch Sensor for Near-Distance Ranging and Material/Structure Classification
IF 4.3, CAS Tier 2 (Multidisciplinary), Q1 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2025-09-29. DOI: 10.1109/JSEN.2025.3613561
Edward Bao;Cheng Fang;Dezhen Song
Precise grasping is required for robotic hands to perform useful functions. Current robotic grasping is limited by the inability to precisely set grasping conditions before physical contact, often resulting in crushing or slipping of the object. Integrated pretouch sensors that detect object parameters at near distance are highly useful for addressing this issue. This article reports the first miniaturized and low-cost optoacoustic (OA) pretouch sensor integrated into the fingertip of a human-sized bionic robotic hand. The OA pretouch sensor performs distance ranging and material/structure classification based on OA signals excited on the object surface by laser pulses. The sensor-to-object distance is derived from the time delay, and the object material/structure is determined from the frequency spectra using a machine-learning (ML)-based classifier. The high sensitivity of the OA pretouch sensor allows clean OA signals to be captured from single laser pulses and eliminates the need for signal averaging, allowing data acquisition in real time during continuous finger motion. The simplified and compact design is cost-effective and enables seamless integration of the OA pretouch sensors onto the distal portion of a bionic robot finger. Experimental characterization showed a lateral resolution of 0.5 mm and ranging accuracy within 0.3 mm. Machine learning achieved 100% accuracy in household material/structure classification and 90.4% accuracy in fruit firmness classification. These results confirm that OA pretouch sensors are viable for integration and for improving the grasping of robot hands.
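The time-delay ranging step is conceptually simple: the laser trigger defines t = 0, and the acoustic wave's one-way travel time gives d = c * t. A toy peak-based arrival estimator on a synthetic waveform (the sampling rate, sound speed, and detection scheme here are illustrative assumptions, not the paper's hardware parameters):

```python
import numpy as np

def oa_distance(received, fs, c=343.0):
    """Estimate sensor-to-object distance from the arrival time of an
    optoacoustic signal. Arrival is taken as the sample of maximum
    absolute amplitude; fs in Hz, c in m/s (speed of sound in air)."""
    t_arrival = np.argmax(np.abs(received)) / fs
    return c * t_arrival

# Synthetic example: an impulse arriving ~29 us after the laser trigger,
# corresponding to roughly 10 mm of travel in air.
fs = 1e7                      # 10 MHz sampling (assumed)
sig = np.zeros(1000)
sig[291] = 1.0                # arrival sample
d = oa_distance(sig, fs)      # ~0.01 m
```

A real implementation would use envelope detection or matched filtering rather than a raw argmax, since measured waveforms are oscillatory and noisy.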
Citations: 0
MULFE: A Sensor-Based Multilevel Feature Enhancement Method for Group Activity Recognition
IF 4.3, CAS Tier 2 (Multidisciplinary), Q1 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2025-09-29. DOI: 10.1109/JSEN.2025.3613557
Ruohong Huan;Ke Wang;Junhong Dong;Ji Zhang;Peng Chen;Guodao Sun;Ronghua Liang
The recognition of group activities from sensor data faces challenges in effective feature extraction, especially in expressing the dynamic changes in member actions, the positional relationships, and the collaborative relationships among members. To address this, this article proposes a sensor-based multilevel feature enhancement (MULFE) method for group activity recognition (GAR). MULFE utilizes an individual action feature extraction network (IAFEN) to extract individual action features and constructs a group location-level feature enhancement (GLLFE) module to capture the group location interaction features among individuals. By combining group location interaction features with individual action features through attention-weighted fusion, location-level enhanced group activity features are obtained, strengthening both the representation of multiple individual features within the group and the complex relational features of individuals' spatial locations. Furthermore, a group spatiotemporal-level feature enhancement (GSLFE) module based on a CAMLP-Mixer network is designed, using a multilayer perceptron (MLP) to achieve feature interaction and integration and further obtain group spatiotemporal features. The group spatiotemporal features are combined with the location-level enhanced group activity features to generate multilevel enhanced group activity features, making the model better suited to understanding complex group activities. Experiments are conducted on two self-built datasets, UT-Data-gar and Garsensors, to validate and analyze the performance of MULFE. The experimental results demonstrate that MULFE can effectively recognize group activities, maintaining high accuracy and strong robustness even under random changes in group size.
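An MLP-Mixer-style block, of the kind the CAMLP-Mixer builds on, alternates an MLP applied across the token (member) axis with an MLP applied across the channel axis, each with a residual connection. A toy NumPy sketch (plain ReLU MLPs, no normalization or attention; the paper's CAMLP-Mixer is more elaborate):

```python
import numpy as np

def mlp(x, w1, w2):
    """Two-layer perceptron with a ReLU nonlinearity (toy)."""
    return np.maximum(x @ w1, 0.0) @ w2

def mixer_block(tokens, tw1, tw2, cw1, cw2):
    """Toy mixer block over a (num_tokens x channels) feature matrix:
    first mix information across tokens (via the transpose), then across
    channels, with a residual add after each MLP."""
    y = tokens + mlp(tokens.T, tw1, tw2).T   # token mixing
    return y + mlp(y, cw1, cw2)              # channel mixing
```

In the GAR setting, each token could represent one group member's feature vector, so token mixing is what lets features interact across members.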
Citations: 0
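The attention-weighted fusion that MULFE uses to combine individual action features with group location interaction features can be sketched as a small softmax-weighted sum. The scoring rule, feature shapes, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attention_weighted_fusion(action_feats, location_feats):
    """Hypothetical sketch of attention-weighted fusion.

    action_feats:   (n_members, d) individual action features
    location_feats: (n_members, d) location interaction features
    Returns per-member attention weights and one (d,) fused
    group activity feature.
    """
    # Score each member by agreement between its action and
    # location features, then normalize with a softmax.
    scores = np.sum(action_feats * location_feats, axis=1)
    scores = scores - scores.max()                # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    # Weighted sum of the combined per-member features.
    fused = (weights[:, None] * (action_feats + location_feats)).sum(axis=0)
    return weights, fused
```

A real GLLFE module would learn the scoring function; this sketch only shows how attention weights turn member-level features into one group-level feature.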
MapcQtNet: A Novel Deeply Integrated Hybrid Soft Sensor Modeling Method for Multistage Quality Prediction in Heat Treatment Process
IF 4.3 CAS Tier 2 Multidisciplinary Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-09-26 DOI: 10.1109/JSEN.2025.3612691
Dandan Yao;Yinghua Yang;Xiaozhi Liu;Yunfei Mu
In multistage industrial processes, critical quality variables are often difficult to measure in real time. A common solution is therefore to estimate them indirectly, in a quality prediction manner, through soft sensor methods. Most existing networks for multistage quality prediction focus on the overall transfer of process information between stages but do not consider exactly how the quality information is transferred. In addition, how to introduce mechanisms into data-driven models so as to maximize the use of prior knowledge and improve the models' interpretability remains an open problem. To address these two problems, a novel deeply integrated hybrid soft sensor modeling method is proposed for multistage quality prediction: the mechanism-aware progressive constraint-based quality transfer network (MapcQtNet). MapcQtNet consists of two key blocks: quality transfer units (QTUs) and a mechanism-aware progressive constraint (MAPC). With the addition of two quality transfer gates, the QTUs simulate the flow of quality information between stages in a more detailed way. The MAPC, in turn, integrates the prior mechanism formulas into the network as constraints, which makes the network aware of the process mechanism. As a result, MapcQtNet not only gains interpretability but also becomes able to make unlabeled predictions for the intermediate variable and the uncertain mechanism parameter. A real industrial case of the heat treatment process verifies the validity of the proposed MapcQtNet as an advanced soft sensor modeling method for multistage quality prediction.
{"title":"MapcQtNet: A Novel Deeply Integrated Hybrid Soft Sensor Modeling Method for Multistage Quality Prediction in Heat Treatment Process","authors":"Dandan Yao;Yinghua Yang;Xiaozhi Liu;Yunfei Mu","doi":"10.1109/JSEN.2025.3612691","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612691","url":null,"abstract":"In multistage industrial processes, critical quality variables are often difficult to measure in real time. Thus, a common solution is to measure them indirectly in a quality prediction manner through soft sensor methods. Most existing networks for multistage quality prediction focus on the overall transfer of process information between stages, but do not consider exactly how the quality information is transferred. In addition, how to introduce mechanisms in data-driven models to maximize the use of prior knowledge and improve the models’ interpretability is another problem that needs to be addressed. Considering the above two problems, a novel deeply integrated hybrid soft sensor modeling method is proposed for multistage quality prediction, known as the mechanism-aware progressive constraint-based quality transfer network (MapcQtNet). The MapcQtNet consists of two key blocks: quality transfer units (QTUs) and mechanism-aware progressive constraint (MAPC). With the addition of two quality transfer gates, QTUs can simulate the flow of quality information between stages in more detailed ways. In addition, the MAPC innovatively integrates the prior mechanism formulas into the network in a constrained manner, which helps the network to be aware of the process mechanism. With it, the MapcQtNet can not only enhance its interpretability but also gain the ability to achieve unlabeled predictions for the intermediate variable and uncertain mechanism parameter. 
A real industrial case of the heat treatment process verifies the validity of the proposed MapcQtNet as an advanced soft sensor modeling method for multistage quality prediction.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40913-40928"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
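The two-gate quality transfer unit (QTU) described in the abstract can be pictured as a small gated update: one gate controls how much upstream quality flows into the current stage, the other how much the stage's own measurements contribute. The gate form, shapes, and names below are assumptions for illustration, not MapcQtNet's actual equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quality_transfer_unit(q_prev, x_stage, W_in, W_out):
    """Hypothetical two-gate quality transfer unit.

    q_prev:  (d,) quality state from the previous stage
    x_stage: (d,) process measurements of the current stage
    W_in, W_out: (d, 2d) gate weight matrices
    """
    z = np.concatenate([q_prev, x_stage])
    g_in = sigmoid(W_in @ z)    # how much upstream quality passes through
    g_out = sigmoid(W_out @ z)  # how much this stage contributes
    candidate = np.tanh(x_stage)
    # Gated blend of inherited quality and the stage's own contribution.
    return g_in * q_prev + g_out * candidate
```

Chaining one such unit per stage yields a per-stage quality trajectory, which is the kind of detailed quality flow the abstract contrasts with whole-process information transfer.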
Single-Channel Wearable EEG Using Low-Power Qvar Sensor and Machine Learning for Drowsiness Detection
IF 4.3 CAS Tier 2 Multidisciplinary Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-09-26 DOI: 10.1109/JSEN.2025.3612476
Michele Antonio Gazzanti Pugliese Di Cotrone;Marco Balsi;Nicola Picozzi;Alessandro Zampogna;Soufyane Bouchelaghem;Antonio Suppa;Leonardo Davì;Denise Fabeni;Alessandro Gumiero;Ludovica Ferri;Luigi Della Torre;Patrizia Pulitano;Fernanda Irrera
This work deals with the fabrication and validation of an innovative wearable single-channel electroencephalogram (EEG) system designed for real-time monitoring of specific brain activity. It is based on a low-power sensor (Qvar) integrated into a miniaturized electronic platform and on purpose-built machine learning (ML) algorithms. The study demonstrates the accuracy (ACC) of Qvar in capturing EEG signals through systematic comparison with a gold standard, and comprehensive analyses in the time and frequency domains confirm its reliability across the various EEG frequency bands. The specific application addressed here is drowsiness detection, leveraging ML algorithms trained and validated on public datasets and, at a more preliminary stage, on real-world data collected specifically for this study under the supervision of trained personnel. The results outline the system's promise for domestic and outdoor monitoring of specific neurological conditions and applications, such as fatigue management and cognitive state assessment. Qvar represents a significant step toward accessible and practical wearable EEG technologies, combining portability, accuracy, and low power consumption to enhance user experience, enable massive screening, and broaden the scope of EEG applications.
{"title":"Single-Channel Wearable EEG Using Low-Power Qvar Sensor and Machine Learning for Drowsiness Detection","authors":"Michele Antonio Gazzanti Pugliese Di Cotrone;Marco Balsi;Nicola Picozzi;Alessandro Zampogna;Soufyane Bouchelaghem;Antonio Suppa;Leonardo Davì;Denise Fabeni;Alessandro Gumiero;Ludovica Ferri;Luigi Della Torre;Patrizia Pulitano;Fernanda Irrera","doi":"10.1109/JSEN.2025.3612476","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612476","url":null,"abstract":"This work deals with the fabrication and validation of an innovative wearable single-channel electroencephalogram (EEG) system, designed for real-time monitoring of specific brain activity. It is based on the use of a low-power sensor (Qvar) integrated in a miniaturized electronic platform, and on machine learning (ML) algorithms developed on purpose. The study demonstrates the accuracy (ACC) of Qvar in capturing EEG signals by systematic comparison with a gold standard and the comprehensive analyses in time and frequency domains confirm its reliability across the various EEG frequency bands. In this work, the specific application of drowsiness detection is addressed, leveraging ML algorithms trained and validated on public datasets and, at a more preliminary stage, on real-world data collected specifically for this study under the supervision of trained personnel. The results outline the system’s promise for domestic and outdoor monitoring of specific neurological conditions and applications, such as fatigue management and cognitive state assessment. 
The Qvar represents a significant step toward accessible and practical wearable EEG technologies, combining portability, ACC, and low-power consumption to enhance user experience, enable massive screening, and broaden the scope of EEG applications.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40668-40679"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
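The frequency-domain validation mentioned in the abstract amounts to comparing signal power across the classic EEG bands. A minimal sketch of relative band power for one channel follows; the band edges, sampling rate, and plain-FFT periodogram estimator are generic assumptions, not details taken from the article.

```python
import numpy as np

def band_powers(eeg, fs=256.0):
    """Relative power of a single EEG channel in the classic bands.

    eeg: 1-D array of samples; fs: sampling rate in Hz (assumed).
    Returns each band's share of the total 0.5-30 Hz power.
    """
    bands = {"delta": (0.5, 4), "theta": (4, 8),
             "alpha": (8, 13), "beta": (13, 30)}
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2          # one-sided periodogram
    total = psd[(freqs >= 0.5) & (freqs < 30)].sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}
```

Feature vectors like these, computed over sliding windows, are a common input to drowsiness classifiers, since drowsiness typically shifts power from beta toward alpha and theta.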
Online Monitoring of Membrane Fouling Based on EIT and Deep Learning
IF 4.3 CAS Tier 2 Multidisciplinary Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-09-26 DOI: 10.1109/JSEN.2025.3612498
Xiuyan Li;Yuqi Hou;Shuai Wang;Qi Wang;Jie Wang;Pingjuan Niu
Membrane separation has been demonstrated to be the most efficient and effective method for water treatment and desalination. However, membrane filtration modules inevitably become fouled during the water treatment process, which reduces their filtration efficiency. Electrical impedance tomography (EIT) is an effective method for online monitoring of membrane fouling, but the accuracy of the recovered foulant distribution needs to be improved because of the low resolution of EIT images. In this article, an intelligent monitoring system based on EIT and deep learning is designed to generate the conductivity distribution of the membrane surface and to track the dynamic changes of membrane fouling in real time. A deep-learning architecture for EIT image reconstruction, TransUNet + Root-Net, is proposed. TransUNet combines the global perception ability of the Transformer with the local feature extraction advantages of UNet. To address the cross-domain generalization challenge of training on simulation data and validating on experimental data, an unsupervised dual-domain mapping network (Root-Net) is designed to map simulation data into a form resembling experimental data. By training TransUNet on the large amount of simulation data mapped by Root-Net, the model's ability to represent the dynamic distribution of real membrane fouling is significantly improved. The results show a reconstruction error of 2.72% for the TransUNet + Root-Net method. This approach more accurately characterizes the location and shape of the fouling, enabling real-time monitoring of its spatial distribution and providing insights into the evolution of membrane fouling.
{"title":"Online Monitoring of Membrane Fouling Based on EIT and Deep Learning","authors":"Xiuyan Li;Yuqi Hou;Shuai Wang;Qi Wang;Jie Wang;Pingjuan Niu","doi":"10.1109/JSEN.2025.3612498","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612498","url":null,"abstract":"Membrane separation has been demonstrated to be the most efficient and effective method for water treatment and desalination. However, membrane filtration modules inevitably become contaminated during the water treatment process, which reduces their filtration efficiency. Electrical impedance tomography (EIT) is an effective method for online monitoring of membrane fouling. However, the accuracy of foulant distribution needs to be improved due to the low resolution of EIT images. In this article, an intelligent monitoring system based on EIT and deep learning is designed to generate the conductivity distribution of the membrane surface and track and monitor the dynamic changes of the membrane pollution in real-time. A deep-learning architecture for EIT image reconstruction, namely, TransUNet + Root-Net, is proposed. TransUNet combines the global perception ability of the Transformer with the local feature extraction advantages of UNet. To address the cross-domain generalization challenge encountered when training with simulation data and validating with experimental data, this article designs an unsupervised dual-domain mapping network (Root-Net) to map simulation data into a form resembling experimental data. By using the large amount of simulation data labels mapped by Root-net to train TransUNet, the model’s ability to represent the dynamic distribution of real membrane fouling is significantly improved. The results indicate 2.72% error for the TransUNet + Root-Net method. 
This approach more accurately characterizes the location and shape of the fouling, enabling real-time monitoring of its spatial distribution and providing insights into the evolution of membrane fouling.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40680-40689"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
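A percentage image error of the kind reported for TransUNet + Root-Net (2.72%) is commonly computed as the Frobenius norm of the difference between the true and reconstructed conductivity maps, relative to the norm of the true map. Whether the article uses exactly this metric is an assumption; the sketch below shows the standard form.

```python
import numpy as np

def relative_image_error(sigma_true, sigma_rec):
    """Relative error (%) between a ground-truth conductivity map
    and its reconstruction, using Frobenius norms."""
    return 100.0 * np.linalg.norm(sigma_rec - sigma_true) / np.linalg.norm(sigma_true)
```

Evaluated over a test set of fouling patterns, the mean of this quantity gives a single percentage figure comparable across reconstruction methods.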