
IEEE Sensors Journal: Latest Publications

An Adaptive Transition Process Recognition and Modeling Framework for Soft Sensor in Complex Engineering Systems
IF 4.3 | CAS Zone 2, Multidisciplinary | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-29 | DOI: 10.1109/JSEN.2025.3613587
Chao Ren;Zhen Liu;Zhen Ma;Cunsong Wang
In industrial production processes, variations in operating conditions or task requirements often lead to multimodal characteristics in process data. Mode identification is, therefore, essential for effective process monitoring and soft sensing. However, conventional strategies usually overlook transient modes, which hinder the adaptability of soft sensing models during transitional phases. To address these challenges, this article proposes an adaptive transition process recognition and modeling framework (ATPRMF). The framework consists of two key components: first, a Kullback–Leibler divergence (KLD)-based transitional mode identification method that adaptively detects the onset and termination of transitional modes by analyzing distributional differences; and second, a dynamic model fusion mechanism that integrates predictions from multiple models based on mode credibility, adapting to gradual distributional shifts and ensuring reliable predictions. Experimental validation on a real-world ball mill system and the benchmark Tennessee Eastman (TE) process demonstrates that the proposed framework significantly improves prediction accuracy and robustness compared to conventional approaches.
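The article itself provides no code, but its first component, flagging a transition when the recent data distribution drifts away from a reference window as measured by a Kullback-Leibler divergence, can be illustrated with a minimal sketch. The Gaussian window model, the window lengths, and the threshold below are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def kld_gaussian(mu_p, var_p, mu_q, var_q):
    """KL divergence KL(P || Q) between two 1-D Gaussians."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def detect_transitions(signal, ref_win=200, test_win=50, threshold=0.5):
    """Flag samples where the sliding test window drifts away from the reference window."""
    onsets = []
    for t in range(ref_win + test_win, len(signal), test_win):
        ref = signal[t - ref_win - test_win:t - test_win]
        test = signal[t - test_win:t]
        kld = kld_gaussian(ref.mean(), ref.var() + 1e-8, test.mean(), test.var() + 1e-8)
        if kld > threshold:
            onsets.append(t - test_win)  # candidate start of a transitional mode
    return onsets

# Example: a process variable whose operating level shifts mid-run.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 1000), rng.normal(3.0, 1.0, 1000)])
print(detect_transitions(x))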
IEEE Sensors Journal, vol. 25, no. 21, pp. 40713–40726.
Citations: 0
Gas Classification Using Time-Series-to-Image Conversion and CNN-Based Analysis on Array Sensor
IF 4.3 | CAS Zone 2, Multidisciplinary | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-29 | DOI: 10.1109/JSEN.2025.3612971
Chang-Hyun Kim;Daewoong Jung;Seung-Hwan Choi;Sanghun Choi;Suwoong Lee
Gas detection is essential in industrial and domestic environments to ensure safety and prevent hazardous incidents. Traditional single-sensor time-series analysis often suffers from limitations in accuracy and robustness due to environmental variations. To address this issue, we propose an artificial intelligence (AI)-based approach that transforms 1-D time-series data into 2-D image representations, followed by the classification of acetylene (C2H2), ammonia (NH3), and hydrogen (H2) using convolutional neural networks (CNNs). By utilizing image transformation techniques such as recurrence plots (RPs), Gramian angular fields (GAFs), and Markov transition fields (MTFs), our method significantly enhances feature extraction from sensor data. In this study, we utilized sensor array data obtained from ZnO and CuO thin films previously synthesized using a droplet-based hydrothermal method. By exploiting the temperature-dependent response characteristics of these sensors, we aimed to improve classification accuracy. Experimental results indicate that our proposed approach achieves a 6.2% relative improvement over the LSTM baseline model (90.1%) in classification accuracy compared to the conventional LSTM model applied directly to raw time-series data. This study demonstrates that converting time-series data into image representations substantially improves gas detection performance, offering a scalable and efficient solution for various sensor-based applications. Future research will focus on real-time implementation and further optimization of deep learning architectures.
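As a rough illustration of the time-series-to-image step, the sketch below builds a Gramian angular summation field and a binary recurrence plot from a 1-D trace using only NumPy; libraries such as pyts provide equivalent transforms. The rescaling, the recurrence threshold, and the synthetic trace are illustrative choices, not parameters from the paper.

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian angular summation field of a 1-D series, values encoded as angles."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    x_scaled = 2.0 * (x - x_min) / (x_max - x_min + 1e-12) - 1.0  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])          # GASF image, shape (N, N)

def recurrence_plot(x, eps=0.1):
    """Binary recurrence plot: 1 where two time points are closer than eps."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])
    return (d <= eps).astype(np.uint8)

# A short synthetic sensor trace; each resulting image could then be fed to a small CNN.
t = np.linspace(0, 4 * np.pi, 128)
trace = np.sin(t) + 0.05 * np.random.randn(128)
print(gramian_angular_field(trace).shape, recurrence_plot(trace).shape)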
IEEE Sensors Journal, vol. 25, no. 21, pp. 40690–40702. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11184418
Citations: 0
A Miniaturized and Low-Cost Fingertip Optoacoustic Pretouch Sensor for Near-Distance Ranging and Material/Structure Classification
IF 4.3 | CAS Zone 2, Multidisciplinary | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-29 | DOI: 10.1109/JSEN.2025.3613561
Edward Bao;Cheng Fang;Dezhen Song
Precise grasping is required for robotic hands to perform useful functions. Current robotic grasping is limited by the inability to precisely set grasping conditions before physical contact, often resulting in crushing or slipping of the object. Integrated pretouch sensors that detect object parameters at near-distance are highly useful for addressing this issue. This article reports the first miniaturized and low-cost optoacoustic (OA) pretouch sensor integrated into the fingertip of a human-sized bionic robotic hand. The OA pretouch sensor performs distance ranging and material/structure classifications based on OA signals excited on the object surface by laser pulses. The sensor-to-object distance is derived from the time delay, and the object material/structure is determined from the frequency spectra using a machine-learning (ML)-based classifier. The high sensitivity of the OA pretouch sensor allows clean OA signals to be captured from single laser pulses and eliminates the need for signal averaging, allowing data acquisition in real time during continuous finger motion. The simplified and compact design is cost-effective and enables seamless integration of the OA pretouch sensors onto the distal portion of a bionic robot finger. Experimental characterization showed a lateral resolution of 0.5 mm and ranging accuracy within 0.3 mm. Machine learning achieved 100% accuracy in household material/structure classification and 90.4% accuracy in fruit firmness classification. These results confirm that OA pretouch sensors are viable for integration and improving the grasping of robot hands.
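The article does not disclose its signal-processing chain, so the following sketch only illustrates the two quantities the abstract describes: a time-of-flight distance from the pulse arrival delay, and coarse spectral features for a classifier. Threshold-based arrival detection, air as the propagation medium, and the band-energy features are assumptions made for illustration.

```python
import numpy as np

SPEED_OF_SOUND_AIR = 343.0  # m/s; illustrative medium, not a value from the article

def range_from_delay(signal, fs, fire_index, threshold):
    """Estimate sensor-to-object distance from the arrival time of the acoustic pulse."""
    arrivals = np.flatnonzero(np.abs(signal[fire_index:]) > threshold)
    if arrivals.size == 0:
        return None
    delay = arrivals[0] / fs                # seconds between laser firing and first arrival
    return SPEED_OF_SOUND_AIR * delay       # one-way path length in metres

def spectral_features(signal, fs, n_bands=16):
    """Coarse band energies of the pulse spectrum, usable by an ML classifier."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.sum() for b in bands])

# Synthetic check: one echo sample 600 samples after firing at 1 MHz sampling.
fs = 1.0e6
sig = np.zeros(2000)
sig[600] = 1.0
print(range_from_delay(sig, fs, fire_index=0, threshold=0.5))  # ~0.206 m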
IEEE Sensors Journal, vol. 25, no. 21, pp. 40703–40712.
Citations: 0
MULFE: A Sensor-Based Multilevel Feature Enhancement Method for Group Activity Recognition
IF 4.3 | CAS Zone 2, Multidisciplinary | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-29 | DOI: 10.1109/JSEN.2025.3613557
Ruohong Huan;Ke Wang;Junhong Dong;Ji Zhang;Peng Chen;Guodao Sun;Ronghua Liang
The recognition of group activities from sensor data faces challenges in effective feature extraction, especially the difficulty of expressing the dynamic changes in member actions, the positional relationships, and the collaborative relationships among members. To address this, this article proposes a sensor-based multilevel feature enhancement (MULFE) method for group activity recognition (GAR). MULFE utilizes an individual action feature extraction network (IAFEN) to extract individual action features and constructs a group location-level feature enhancement (GLLFE) module to capture the group location interaction features among individuals. By combining group location interaction features with individual action features using attention-weighted fusion, location-level enhanced group activity features are achieved, with which the representation of multiple individual features within the group and the complex relational features of individual spatial locations are enhanced. Furthermore, a group spatiotemporal-level feature enhancement (GSLFE) module based on the CAMLP-Mixer network is designed, using a multilayer perceptron (MLP) to achieve feature interaction and integration to further obtain group spatiotemporal features. The group spatiotemporal features are combined with the location-level enhanced group activity features to generate the multilevel enhanced group activity features, making the model more suitable for understanding complex group activities. Experiments are conducted on two self-built datasets, UT-Data-gar and Garsensors, to validate and analyze the performance of MULFE. The experimental results demonstrate that MULFE can effectively recognize group activities, particularly maintaining high accuracy and strong robustness in situations with random changes in group size.
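The GLLFE fusion in MULFE is a learned network; the sketch below only conveys the attention-weighted fusion idea, scoring each member's concatenated action and location features and pooling them with softmax weights. The random scoring vector and the feature dimensions are placeholders, not the paper's architecture.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_weighted_fusion(individual_feats, location_feats, w):
    """Fuse per-member action features with group location-interaction features.

    individual_feats: (n_members, d) per-member action features.
    location_feats:   (n_members, d) location-interaction features.
    w:                (2 * d,) scoring vector (random here; learned in the real model).
    """
    concat = np.concatenate([individual_feats, location_feats], axis=1)  # (n, 2d)
    scores = concat @ w                              # one scalar score per member
    alpha = softmax(scores)                          # attention weights over members
    return (alpha[:, None] * concat).sum(axis=0)     # pooled group-level feature

rng = np.random.default_rng(1)
g = attention_weighted_fusion(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)),
                              rng.normal(size=16))
print(g.shape)  # (16,)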
IEEE Sensors Journal, vol. 25, no. 21, pp. 40929–40945.
Citations: 0
MapcQtNet: A Novel Deeply Integrated Hybrid Soft Sensor Modeling Method for Multistage Quality Prediction in Heat Treatment Process
IF 4.3 | CAS Zone 2, Multidisciplinary | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-26 | DOI: 10.1109/JSEN.2025.3612691
Dandan Yao;Yinghua Yang;Xiaozhi Liu;Yunfei Mu
In multistage industrial processes, critical quality variables are often difficult to measure in real time. Thus, a common solution is to measure them indirectly in a quality prediction manner through soft sensor methods. Most existing networks for multistage quality prediction focus on the overall transfer of process information between stages, but do not consider exactly how the quality information is transferred. In addition, how to introduce mechanisms in data-driven models to maximize the use of prior knowledge and improve the models’ interpretability is another problem that needs to be addressed. Considering the above two problems, a novel deeply integrated hybrid soft sensor modeling method is proposed for multistage quality prediction, known as the mechanism-aware progressive constraint-based quality transfer network (MapcQtNet). The MapcQtNet consists of two key blocks: quality transfer units (QTUs) and mechanism-aware progressive constraint (MAPC). With the addition of two quality transfer gates, QTUs can simulate the flow of quality information between stages in more detailed ways. In addition, the MAPC innovatively integrates the prior mechanism formulas into the network in a constrained manner, which helps the network to be aware of the process mechanism. With it, the MapcQtNet can not only enhance its interpretability but also gain the ability to achieve unlabeled predictions for the intermediate variable and uncertain mechanism parameter. A real industrial case of the heat treatment process verifies the validity of the proposed MapcQtNet as an advanced soft sensor modeling method for multistage quality prediction.
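The abstract does not give the heat-treatment mechanism formulas embedded by MAPC, so the sketch below uses a generic first-order law purely as a stand-in to show how a mechanism residual can be added to the data-fit loss as a soft constraint; the law, the gain k, and the weight lam are hypothetical.

```python
import numpy as np

def mechanism_residual(y_pred, x, k):
    """Residual of an assumed first-order heating law y' = k * (x - y).

    The real MAPC block embeds the process's own mechanism formulas; this
    first-order law and the gain k are placeholders for illustration only.
    """
    dy = np.gradient(y_pred)
    return dy - k * (x - y_pred)

def hybrid_loss(y_pred, y_true, x, k, lam=0.1):
    """Data-fit term plus a soft penalty keeping predictions mechanism-consistent."""
    data_term = np.mean((y_pred - y_true) ** 2)
    mech_term = np.mean(mechanism_residual(y_pred, x, k) ** 2)
    return data_term + lam * mech_term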
IEEE Sensors Journal, vol. 25, no. 21, pp. 40913–40928.
Citations: 0
Single-Channel Wearable EEG Using Low-Power Qvar Sensor and Machine Learning for Drowsiness Detection
IF 4.3 | CAS Zone 2, Multidisciplinary | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-26 | DOI: 10.1109/JSEN.2025.3612476
Michele Antonio Gazzanti Pugliese Di Cotrone;Marco Balsi;Nicola Picozzi;Alessandro Zampogna;Soufyane Bouchelaghem;Antonio Suppa;Leonardo Davì;Denise Fabeni;Alessandro Gumiero;Ludovica Ferri;Luigi Della Torre;Patrizia Pulitano;Fernanda Irrera
This work deals with the fabrication and validation of an innovative wearable single-channel electroencephalogram (EEG) system, designed for real-time monitoring of specific brain activity. It is based on a low-power sensor (Qvar) integrated in a miniaturized electronic platform and on purpose-built machine learning (ML) algorithms. The study demonstrates the accuracy (ACC) of Qvar in capturing EEG signals through systematic comparison with a gold standard, and comprehensive analyses in the time and frequency domains confirm its reliability across the various EEG frequency bands. In this work, the specific application of drowsiness detection is addressed, leveraging ML algorithms trained and validated on public datasets and, at a more preliminary stage, on real-world data collected specifically for this study under the supervision of trained personnel. The results outline the system’s promise for domestic and outdoor monitoring of specific neurological conditions and applications, such as fatigue management and cognitive state assessment. The Qvar represents a significant step toward accessible and practical wearable EEG technologies, combining portability, accuracy, and low power consumption to enhance user experience, enable massive screening, and broaden the scope of EEG applications.
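The paper's ML pipeline is not reproduced here; the sketch below computes the classical relative band powers of a single EEG channel and a (theta + alpha) / beta ratio, which are common hand-crafted drowsiness features. The band edges and the ratio are conventional choices, not taken from the article.

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """Relative power in the classical EEG bands for one channel."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg - np.mean(eeg))) ** 2
    total = psd.sum() + 1e-12
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

def drowsiness_index(eeg, fs):
    """(theta + alpha) / beta ratio, a common hand-crafted drowsiness feature."""
    p = band_powers(eeg, fs)
    return (p["theta"] + p["alpha"]) / (p["beta"] + 1e-12)

# Example on a synthetic 10 Hz (alpha-dominated) trace sampled at 250 Hz.
fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
print(drowsiness_index(np.sin(2 * np.pi * 10 * t), fs))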
IEEE Sensors Journal, vol. 25, no. 21, pp. 40668–40679.
Citations: 0
Online Monitoring of Membrane Fouling Based on EIT and Deep Learning
IF 4.3 | CAS Zone 2, Multidisciplinary | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-26 | DOI: 10.1109/JSEN.2025.3612498
Xiuyan Li;Yuqi Hou;Shuai Wang;Qi Wang;Jie Wang;Pingjuan Niu
Membrane separation has been demonstrated to be the most efficient and effective method for water treatment and desalination. However, membrane filtration modules inevitably become contaminated during the water treatment process, which reduces their filtration efficiency. Electrical impedance tomography (EIT) is an effective method for online monitoring of membrane fouling. However, the accuracy of foulant distribution needs to be improved due to the low resolution of EIT images. In this article, an intelligent monitoring system based on EIT and deep learning is designed to generate the conductivity distribution of the membrane surface and to track and monitor the dynamic changes of membrane fouling in real time. A deep-learning architecture for EIT image reconstruction, namely, TransUNet + Root-Net, is proposed. TransUNet combines the global perception ability of the Transformer with the local feature extraction advantages of UNet. To address the cross-domain generalization challenge encountered when training with simulation data and validating with experimental data, this article designs an unsupervised dual-domain mapping network (Root-Net) to map simulation data into a form resembling experimental data. By using the large amount of simulation data labels mapped by Root-Net to train TransUNet, the model’s ability to represent the dynamic distribution of real membrane fouling is significantly improved. The results indicate a 2.72% error for the TransUNet + Root-Net method. This approach more accurately characterizes the location and shape of the fouling, enabling real-time monitoring of its spatial distribution and providing insights into the evolution of membrane fouling.
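The reported 2.72% is an image-reconstruction error; the exact metric is not stated in the abstract, so the snippet below shows one common definition, the relative L2 error between reconstructed and reference conductivity maps, purely for orientation.

```python
import numpy as np

def relative_image_error(sigma_rec, sigma_true):
    """Relative L2 error between a reconstructed and a reference conductivity map."""
    sigma_rec = np.asarray(sigma_rec, dtype=float)
    sigma_true = np.asarray(sigma_true, dtype=float)
    return np.linalg.norm(sigma_rec - sigma_true) / (np.linalg.norm(sigma_true) + 1e-12)

# relative_image_error(recon, truth) == 0.0272 would correspond to a 2.72% error.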
IEEE Sensors Journal, vol. 25, no. 21, pp. 40680–40689.
Citations: 0
Flux-Directional Orthogonal Differential Probe for Low-Frequency Eddy-Current Nondestructive Testing
IF 4.3 | CAS Zone 2, Multidisciplinary | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-25 | DOI: 10.1109/JSEN.2025.3611949
Junmei Tian;Jie Zhang;Wujun Kui;Xiaoguang Cao;Ziqi Liang
Eddy-current probes are widely used to enable noncontact, high-speed detection in the operation and maintenance of metal pipelines and sheets. This study proposes a spatially orthogonal differential eddy-current probe based on the magnetic-flux directional extraction to address the issue of weak detection signals during low-frequency eddy-current testing of millimeter-scale surface defects on metal sheets. A simulation model of the probe is developed using COMSOL Multiphysics to analyze the magnetic-flux distribution and induced electromotive force (EMF) characteristics of both traditional runway-shaped and refined spatially orthogonal differential probes during defect detection. An experimental platform is constructed to compare defect signals of varying sizes at different detection speeds. Simulation results indicate that the induced EMF amplitude in the detection coil of the refined probe is approximately 3.3 times greater than that of the traditional runway-shaped differential eddy-current probe. Experimental findings confirm that the refined probe, operating at 2 m/s, can reliably detect defects with a width and depth of 0.5 mm.
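The probe is characterized in COMSOL rather than in code, but the quantity being compared, the induced EMF of a differential coil pair, follows from Faraday's law and can be post-processed from simulated flux waveforms as sketched below. The turn count, sampling rate, and synthetic flux imbalance are illustrative assumptions, not values from the study.

```python
import numpy as np

def differential_emf(flux_a, flux_b, fs, turns=1):
    """Induced EMF of a differential coil pair from sampled flux linkages (Faraday's law).

    flux_a, flux_b: flux linkage samples (Wb) of the two detection coils.
    fs: sampling rate of the simulation or acquisition in Hz.
    A defect under one coil unbalances the pair, so the differential EMF departs from zero.
    """
    return -turns * np.gradient(flux_a - flux_b, 1.0 / fs)

# Synthetic example: a 2% flux imbalance between the two coils at 50 Hz excitation.
fs = 1.0e5
t = np.arange(0, 0.02, 1.0 / fs)
emf = differential_emf(np.sin(2 * np.pi * 50 * t), 0.98 * np.sin(2 * np.pi * 50 * t), fs)
print(np.abs(emf).max())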
IEEE Sensors Journal, vol. 25, no. 21, pp. 40651–40659.
Citations: 0
Multieye Visual Fusion Encoderless Control With Permanent Magnet Synchronous Machines
IF 4.3 | CAS Zone 2, Multidisciplinary | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-25 | DOI: 10.1109/JSEN.2025.3612050
Jingqi Dong;Le Sun;Longmiao Chen;Kuan Wang
Motion capture (MoCap) technology can be implemented using computer vision (CV)-based target position sensing methods. In modern industrial applications, CV-based position measurement techniques are increasingly emerging as a promising alternative to traditional encoders in servo drives, offering the potential to reduce system costs while maintaining performance. Although MoCap technologies have made significant progress in the past decades, CV-based systems still face challenges related to limited measurement accuracy and delayed real-time responsiveness, especially in cost-sensitive applications where both precise position recognition and real-time response are critical. To resolve these limitations, this article presents a visual-electromechanical (EM) sensing fusion control framework. A color visual wavelet-Transformer (CVWT) network is designed that utilizes color features as input, effectively preserving critical information while reducing training complexity and computational cost. The CVWT network integrates a wavelet transform module with a Transformer module to perform multiscale and multilevel feature extraction and modeling on visual data acquired from dual cameras. In addition, electrical and mechanical models are incorporated into the state estimation framework, and an extended Kalman filter (EKF) is employed to fuse multisource perceptual data. The experimental results demonstrate that under a maximum rotational speed of 25 r/min, the system achieves a position control accuracy of up to 0.47°, validating the effectiveness and feasibility of the proposed method within a low-cost vision-based framework.
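The article fuses visual and electromechanical information with an extended Kalman filter; as a compact stand-in, the sketch below runs one predict/update cycle of a linear Kalman filter on a rotor state of angle and speed, with the camera-derived angle as the measurement. The constant-speed motion model, the noise levels, and the omission of the PMSM electrical model are simplifications for illustration.

```python
import numpy as np

def kf_step(x, P, z_vision, dt, q=1e-4, r=1e-3):
    """One predict/update cycle on rotor state [angle, speed].

    z_vision is the camera-derived angle; the real system also folds in the
    electrical model of the PMSM, which is omitted here for brevity.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-speed rotor motion model
    H = np.array([[1.0, 0.0]])                  # camera observes the angle only
    Q = q * np.eye(2)
    R = np.array([[r]])

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the visual measurement
    y = np.array([z_vision]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One step from rest, with a camera reading of 0.05 rad after 10 ms.
x, P = np.array([0.0, 0.0]), np.eye(2)
x, P = kf_step(x, P, z_vision=0.05, dt=0.01)
print(x)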
IEEE Sensors Journal, vol. 25, no. 21, pp. 40901–40912.
Citations: 0
Sensing Force Dynamics of Prehensile Grip During Object Slippage Using a Slip Inducing Device
IF 4.3 | CAS Zone 2, Multidisciplinary | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-09-25 | DOI: 10.1109/JSEN.2025.3612094
Ayesha Tooba Khan;Deepak Joshi;Biswarup Mukherjee
Understanding the force dynamics during object slippage is crucial for effectively improving manipulation dexterity. Force dynamics during object slippage vary with the characteristics of the mechanical stimuli. This work is the first to explore force dynamics while considering the simultaneous effects of slip direction, distance, and speed variations. We performed the experiment with healthy individuals to explore how hand kinetics are modulated during the reflex and voluntary phases depending on the choice of slip direction, slip distance, and slip speed. Our results reveal that the force dynamics significantly depend on the slip direction. However, we observed that the variation pattern differed depending on the reflex and voluntary phases of the hand kinetics. We also observed that the force dynamics were modulated by significant interactions of slip distance and slip speed in a particular slip direction. The experiment was designed to closely mimic the real-life scenario of object slippage. Thus, the findings can significantly contribute to advanced sensorimotor rehabilitation strategies, haptic feedback systems, and mechatronic devices.
IEEE Sensors Journal, vol. 25, no. 21, pp. 40660–40667.
Citations: 0