
Frontiers in Neurorobotics: Latest Articles

MSGU-Net: a lightweight multi-scale ghost U-Net for image segmentation.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-06 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1480055
Hua Cheng, Yang Zhang, Huangxin Xu, Dingliang Li, Zejian Zhong, Yinchuan Zhao, Zhuo Yan

U-Net and its variants have been widely used in the field of image segmentation. In this paper, a lightweight multi-scale Ghost U-Net (MSGU-Net) network architecture is proposed, which processes image segmentation tasks efficiently and quickly while generating a high-quality mask for each object. The pyramid-structure (SPP-Inception) module and the ghost module are seamlessly integrated in a lightweight manner and, equipped with an efficient local attention (ELA) mechanism and an attention-gate mechanism, are designed to accurately identify the region of interest (ROI). The SPP-Inception module and ghost module work in tandem to effectively merge multi-scale information derived from low-level features, high-level features, and decoder masks at each stage. Comparative experiments were conducted between the proposed MSGU-Net and state-of-the-art networks on the ISIC2017 and ISIC2018 datasets. In short, compared to the baseline U-Net, our model achieves superior segmentation performance while reducing parameter and computation costs by 96.08% and 92.59%, respectively. Moreover, MSGU-Net can serve as a lightweight deep neural network suitable for deployment across a range of intelligent devices and mobile platforms, offering considerable potential for widespread adoption.
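The abstract does not spell out the ghost-module configuration, but the parameter savings it reports come from the GhostNet idea of replacing most output channels of a convolution with cheap depthwise "ghost" features. A minimal sketch of that accounting, with an illustrative ratio `s` and cheap-kernel size `d` (assumptions, not the paper's settings):

```python
def conv_params(c_in, c_out, k):
    # Parameters of a standard k x k convolution (bias omitted).
    return k * k * c_in * c_out

def ghost_params(c_in, c_out, k, s=2, d=3):
    # Ghost module: a primary conv produces c_out/s "intrinsic" channels,
    # then (s-1) cheap d x d depthwise ops generate the remaining ones.
    intrinsic = c_out // s
    primary = k * k * c_in * intrinsic
    cheap = (s - 1) * intrinsic * d * d
    return primary + cheap

std = conv_params(64, 128, 3)     # 73728 parameters
ghost = ghost_params(64, 128, 3)  # 36864 + 576 = 37440 parameters
print(f"reduction: {1 - ghost / std:.1%}")  # → reduction: 49.2%
```

With deeper layers and larger `s`, the savings compound, which is how reductions of the order reported above become plausible.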

Citations: 0
Architectural planning robot driven by unsupervised learning for space optimization.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-03 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1517960
Zhe Zhang, Yuchun Zheng

Introduction: Space optimization in architectural planning is a crucial task for maximizing functionality and improving user experience in built environments. Traditional approaches often rely on manual planning or supervised learning techniques, which can be limited by the availability of labeled data and may not adapt well to complex spatial requirements.

Methods: To address these limitations, this paper presents a novel architectural planning robot driven by unsupervised learning for automatic space optimization. The proposed framework integrates spatial attention, clustering, and state refinement mechanisms to autonomously learn and optimize spatial configurations without the need for labeled training data. The spatial attention mechanism focuses the model on key areas within the architectural space, clustering identifies functional zones, and state refinement iteratively improves the spatial layout by adjusting based on learned patterns. Experiments conducted on multiple 3D datasets demonstrate the effectiveness of the proposed approach in achieving optimized space layouts with reduced computational requirements.
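The abstract says clustering is used to identify functional zones but does not name the algorithm; a minimal k-means sketch on 2D locations illustrates the idea (the point data and choice of k-means are assumptions for illustration):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group 2D points (e.g. sampled room locations)
    into k functional zones by nearest-centroid assignment."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        # Recompute each centroid as the mean of its assigned points.
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of locations -> two zones of three points each.
pts = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, zones = kmeans(pts, 2)
print(sorted(len(z) for z in zones))  # → [3, 3]
```

In the paper's framework this step would operate on learned spatial-attention features rather than raw coordinates, with state refinement iterating on the resulting layout.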

Results and discussion: The results show significant improvements in layout efficiency and processing time compared to traditional methods, indicating the potential for real-world applications in automated architectural planning and dynamic space management. This work contributes to the field by providing a scalable solution for architectural space optimization that adapts to diverse spatial requirements through unsupervised learning.

Citations: 0
EEG-powered cerebral transformer for athletic performance.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-20 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1499734
Qikai Sun

Introduction: In recent years, with advancements in wearable devices and biosignal analysis technologies, sports performance analysis has become an increasingly popular research field, particularly due to the growing demand for real-time monitoring of athletes' conditions in sports training and competitive events. Traditional methods of sports performance analysis typically rely on video data or sensor data for motion recognition. However, unimodal data often fails to fully capture the neural state of athletes, leading to limitations in accuracy and real-time performance when dealing with complex movement patterns. Moreover, these methods struggle with multimodal data fusion, making it difficult to fully leverage the deep information from electroencephalogram (EEG) signals.

Methods: To address these challenges, this paper proposes a "Cerebral Transformer" model based on EEG signals and video data. By employing an adaptive attention mechanism and cross-modal fusion, the model effectively combines EEG signals and video streams to achieve precise recognition and analysis of athletes' movements. The model's effectiveness was validated through experiments on four datasets: SEED, DEAP, eSports Sensors, and MODA. The results show that the proposed model outperforms existing mainstream methods in terms of accuracy, recall, and F1 score, while also demonstrating high computational efficiency.
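The exact fusion architecture is not given in the abstract, but the described "adaptive attention" combination of EEG and video streams is typically a cross-attention step in which one modality queries the other. A generic NumPy sketch (token counts and dimensions are illustrative assumptions):

```python
import numpy as np

def cross_attention(eeg_feats, video_feats):
    """Generic cross-modal attention: EEG tokens query video tokens.
    eeg_feats: (n_eeg, d), video_feats: (n_vid, d). Returns (n_eeg, d)."""
    d = eeg_feats.shape[-1]
    scores = eeg_feats @ video_feats.T / np.sqrt(d)  # (n_eeg, n_vid)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ video_feats                     # EEG tokens enriched with video context

rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 8))   # 4 EEG tokens, dim 8
vid = rng.standard_normal((6, 8))   # 6 video tokens, dim 8
out = cross_attention(eeg, vid)
print(out.shape)  # → (4, 8)
```

A full model would learn query/key/value projections per modality and stack such blocks; this sketch only shows the fusion mechanism itself.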

Results and discussion: The significance of this study lies in providing a more comprehensive and efficient solution for sports performance analysis. Through cross-modal data fusion, it not only improves the accuracy of complex movement recognition but also provides technical support for monitoring athletes' neural states, offering important applications in sports training and medical rehabilitation.

Citations: 0
Edge-guided feature fusion network for RGB-T salient object detection.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-17 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1489658
Yuanlin Chen, Zengbao Sun, Cheng Yan, Ming Zhao

Introduction: RGB-T Salient Object Detection (SOD) aims to accurately segment salient regions in both visible light and thermal infrared images. However, many existing methods overlook the critical complementarity between these modalities, which can enhance detection accuracy.

Methods: We propose the Edge-Guided Feature Fusion Network (EGFF-Net), which consists of cross-modal feature extraction, edge-guided feature fusion, and salience map prediction. Firstly, the cross-modal feature extraction module captures and aggregates united and intersecting information in each local region of RGB and thermal images. Then, the edge-guided feature fusion module enhances the edge features of salient regions, considering that edge information is very helpful in refining significant area details. Moreover, a layer-by-layer decoding structure integrates multi-level features and generates the prediction of salience maps.
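As a rough illustration of edge-guided fusion, the sketch below weights a naive average of RGB and thermal feature maps by a Sobel edge prior; the specific weighting scheme is an assumption for illustration, not EGFF-Net's actual module:

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude of a 2D array (borders left at zero)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (kx * patch).sum()
            gy[i, j] = (ky * patch).sum()
    return np.hypot(gx, gy)

def edge_guided_fuse(rgb_feat, thermal_feat):
    """Average the two modalities, then boost responses near edges,
    since edge information helps refine salient-region boundaries."""
    edges = sobel_edges(rgb_feat) + sobel_edges(thermal_feat)
    w = edges / (edges.max() + 1e-8)      # normalise edge prior to [0, 1]
    fused = 0.5 * (rgb_feat + thermal_feat)
    return fused * (1.0 + w)

img = np.zeros((8, 8)); img[:, 4:] = 1.0  # vertical step edge
fused = edge_guided_fuse(img, img)
print(fused.shape)  # → (8, 8)
```

In the actual network the edge prior would come from learned features at each decoder stage rather than a fixed Sobel filter.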

Results: We conduct extensive experiments on three benchmark datasets and compare EGFF-Net with state-of-the-art methods. Our approach achieves superior performance, demonstrating the effectiveness of the proposed modules in improving both detection accuracy and boundary refinement.

Discussion: The results highlight the importance of integrating cross-modal information and edge-guided fusion in RGB-T SOD. Our method outperforms existing techniques and provides a robust framework for future developments in multi-modal saliency detection.

Citations: 0
Cross-attention swin-transformer for detailed segmentation of ancient architectural color patterns.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-13 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1513488
Lv Yongyin, Yu Caixia

Introduction: Segmentation tasks in computer vision play a crucial role in various applications, ranging from object detection to medical imaging and cultural heritage preservation. Traditional approaches, including convolutional neural networks (CNNs) and standard transformer-based models, have achieved significant success; however, they often face challenges in capturing fine-grained details and maintaining efficiency across diverse datasets. These methods struggle with balancing precision and computational efficiency, especially when dealing with complex patterns and high-resolution images.

Methods: To address these limitations, we propose a novel segmentation model that integrates a hierarchical vision transformer backbone with multi-scale self-attention, cascaded attention decoding, and diffusion-based robustness enhancement. Our approach aims to capture both local details and global contexts effectively while maintaining lower computational overhead.
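Hierarchical vision-transformer backbones of the kind described build their multi-scale pyramid by merging patches between stages. A minimal sketch of Swin-style patch merging (the linear 4C-to-2C projection that normally follows is omitted):

```python
import numpy as np

def patch_merging(x):
    """Swin-style patch merging: concatenate each 2x2 neighbourhood of
    tokens along the channel axis, halving spatial resolution.
    x: (H, W, C) -> (H/2, W/2, 4C)."""
    H, W, C = x.shape
    assert H % 2 == 0 and W % 2 == 0, "spatial dims must be even"
    return np.concatenate(
        [x[0::2, 0::2], x[1::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]],
        axis=-1,
    )

tokens = np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3)
merged = patch_merging(tokens)
print(merged.shape)  # → (2, 2, 12)
```

Stacking such merges between attention stages is what lets the backbone capture local detail early and global context late, at decreasing spatial cost.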

Results and discussion: Experiments conducted on four diverse datasets, including Ancient Architecture, MS COCO, Cityscapes, and ScanNet, demonstrate that our model outperforms state-of-the-art methods in accuracy, recall, and computational efficiency. The results highlight the model's ability to generalize well across different tasks and provide robust segmentation, even in challenging scenarios. Our work paves the way for more efficient and precise segmentation techniques, making it valuable for applications where both detail and speed are critical.

Citations: 0
3D convolutional neural network based on spatial-spectral feature pictures learning for decoding motor imagery EEG signal.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-10 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1485640
Xiaoguang Li, Yaqi Chu, Xuejian Wu

Non-invasive brain-computer interfaces (BCIs) hold great promise in the field of neurorehabilitation: they are easy to use and require no surgery, which is particularly attractive in the area of motor imagery electroencephalography (EEG). However, motor imagery EEG signals often have a low signal-to-noise ratio and limited spatial and temporal resolution. Traditional deep neural networks typically focus only on the spatial and temporal features of EEG, resulting in relatively low decoding accuracy for motor imagery tasks. To address these challenges, this paper proposes a 3D Convolutional Neural Network (P-3DCNN) decoding method that jointly learns spatial-frequency feature maps from the frequency and spatial domains of the EEG signals. First, the Welch method is used to calculate the frequency-band power spectrum of the EEG, and a 2D matrix representing the spatial topology of the electrodes is constructed. These spatial-frequency representations are then generated through cubic interpolation of the temporal EEG data. Next, the paper designs a 3DCNN network with 1D and 2D convolutional layers in series to optimize the convolutional kernel parameters and effectively learn the spatial-frequency features of the EEG. Batch normalization and dropout are also applied to improve the training speed and classification performance of the network. Finally, the proposed method is compared experimentally to various classic machine learning and deep learning techniques. The results show an average decoding accuracy of 86.69%, surpassing other advanced networks. This demonstrates the effectiveness of our approach in decoding motor imagery EEG and offers valuable insights for the development of BCIs.
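The Welch step the abstract describes can be sketched directly: average the periodograms of overlapping Hann-windowed segments, then integrate the band of interest per electrode. The sampling rate, segment length, and the use of the 8-13 Hz mu band below are illustrative assumptions:

```python
import numpy as np

def welch_psd(x, fs, nperseg=256, overlap=0.5):
    """Welch power spectral density: average the periodograms of
    overlapping Hann-windowed segments of signal x."""
    step = int(nperseg * (1 - overlap))
    win = np.hanning(nperseg)
    norm = fs * (win ** 2).sum()
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(s * win)) ** 2 / norm for s in segs],
                  axis=0)
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    return freqs, psd

def band_power(freqs, psd, lo, hi):
    """Sum the PSD over the band [lo, hi) Hz (e.g. the 8-13 Hz mu band)."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

fs = 250.0                                # assumed EEG sampling rate
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)          # synthetic pure 10 Hz "mu rhythm"
f, p = welch_psd(eeg, fs)
print(band_power(f, p, 8, 13) > band_power(f, p, 20, 30))  # → True
```

In the paper's pipeline, one such band-power value per electrode would be placed into the 2D electrode-topology matrix before cubic interpolation.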

Citations: 0
An improved graph factorization machine based on solving unbalanced game perception.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-04 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1481297
Xiaoxia Xie, Yuan Jia, Tiande Ma

User perception of mobile games is crucial for improving user experience and thus enhancing game profitability, but the sparse data captured in games can lead to sporadic model performance. This paper proposes a new method, the balanced graph factorization machine (BGFM), based on existing algorithms, considering the data imbalance and important high-dimensional features. The data categories are first balanced by Borderline-SMOTE oversampling, and then features are represented naturally in a graph-structured way. A highlight is that the BGFM contains interaction mechanisms for aggregating beneficial features, with the results represented as edges in the graph. Next, BGFM combines factorization machine (FM) and graph neural network strategies to concatenate sequential feature interactions in the graph with an attention mechanism that assigns inter-feature weights. Experiments were conducted on the collected game-perception dataset. The performance of the proposed BGFM was compared with eight state-of-the-art models; it significantly surpassed all of them on AUC, precision, recall, and F-measure.
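The FM component the abstract builds on has a well-known closed form: pairwise interactions weighted by dot products of latent factors, computable in O(kn) rather than O(n²). A minimal sketch, with random data standing in for game-perception features, verified against the naive pairwise sum:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Factorization machine:
    y = w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j,
    using the O(k n) identity
    sum_{i<j} <v_i,v_j> x_i x_j
      = 0.5 * sum_f [(sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2]."""
    s = V.T @ x                    # (k,) per-factor weighted sums
    s2 = (V ** 2).T @ (x ** 2)     # (k,) per-factor sums of squares
    return w0 + w @ x + 0.5 * (s @ s - s2.sum())

def fm_predict_naive(x, w0, w, V):
    # Direct pairwise sum, for verification only.
    n = len(x)
    pair = sum(V[i] @ V[j] * x[i] * x[j]
               for i in range(n) for j in range(i + 1, n))
    return w0 + w @ x + pair

rng = np.random.default_rng(1)
x = rng.standard_normal(5)          # one feature vector
w = rng.standard_normal(5)          # linear weights
V = rng.standard_normal((5, 3))     # latent factors, k = 3
print(np.isclose(fm_predict(x, 0.3, w, V),
                 fm_predict_naive(x, 0.3, w, V)))  # → True
```

BGFM replaces the flat feature vector with graph-structured features and learned attention weights, but this dot-product interaction term is the FM core it extends.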

Citations: 0
Unmanned aerial vehicles for human detection and recognition using neural-network model.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-04 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1443678
Yawar Abbas, Naif Al Mudawi, Bayan Alabdullah, Touseef Sadiq, Asaad Algarni, Hameedur Rahman, Ahmad Jalal

Introduction: Recognizing human actions is crucial for allowing machines to understand human behavior, with applications spanning video-based surveillance systems, human-robot collaboration, sports analysis systems, and entertainment. The immense diversity in human movement and appearance poses a significant challenge in this field, especially when dealing with drone-recorded (RGB) videos. Factors such as dynamic backgrounds, motion blur, occlusions, varying video capture angles, and exposure issues greatly complicate recognition tasks.

Methods: In this study, we suggest a method that addresses these challenges in RGB videos captured by drones. Our approach begins by segmenting the video into individual frames, followed by preprocessing steps applied to these RGB frames. The preprocessing aims to reduce computational costs, optimize image quality, and enhance foreground objects while removing the background.

Result: This results in improved visibility of foreground objects while eliminating background noise. Next, we employ the YOLOv9 detection algorithm to identify human bodies within the images. From the grayscale silhouette, we extract the human skeleton and identify 15 key locations: the head, neck, belly button, and the left and right shoulders, elbows, wrists, hips, knees, and ankles. From these points we extract specific positions, angular and distance relationships between them, as well as 3D point clouds and fiducial points. Subsequently, we optimize this data using the kernel discriminant analysis (KDA) optimizer, followed by classification using a deep convolutional neural network (CNN). To validate our system, we conducted experiments on three benchmark datasets: UAV-Human, UCF, and Drone-Action.

Discussion: On these datasets, our suggested model produced corresponding action recognition accuracies of 0.68, 0.75, and 0.83.
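The angular and distance relationships between skeleton keypoints described in the Methods can be sketched with plain numpy. The keypoint names and coordinates below are hypothetical stand-ins, not the authors' feature-extraction code:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def pairwise_distances(points):
    """Full matrix of Euclidean distances between all keypoints."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

# Hypothetical 2D keypoints: shoulder, elbow, wrist of one arm
shoulder = np.array([0.0, 0.0])
elbow = np.array([1.0, 0.0])
wrist = np.array([1.0, 1.0])

# The elbow here forms a right angle between upper arm and forearm
assert np.isclose(joint_angle(shoulder, elbow, wrist), 90.0)
dists = pairwise_distances(np.stack([shoulder, elbow, wrist]))
```

Feature vectors built from such joint angles and inter-point distances are a common, compact skeleton representation; the study additionally uses 3D point clouds and fiducial points.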

Yawar Abbas, Naif Al Mudawi, Bayan Alabdullah, Touseef Sadiq, Asaad Algarni, Hameedur Rahman, Ahmad Jalal. "Unmanned aerial vehicles for human detection and recognition using neural-network model." Frontiers in Neurorobotics, vol. 18 (2024-12-04): 1443678. DOI: 10.3389/fnbot.2024.1443678. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11652500/pdf/
Citations: 0
Posture-invariant myoelectric control with self-calibrating random forests.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-04 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1462023
Xinyu Jiang, Chenfei Ma, Kianoush Nazarpour

Introduction: Myoelectric control systems translate different patterns of electromyographic (EMG) signals into control commands for diverse human-machine interfaces via hand gesture recognition, enabling intuitive control of prostheses and immersive interactions in the metaverse. Arm position is a confounding factor that leads to variability in EMG characteristics. Developing a model whose characteristics and performance are invariant across postures could greatly promote the translation of myoelectric control into real-world practice.

Methods: Here we propose a self-calibrating random forest (RF) model which can (1) be pre-trained on data from many users, then one-shot calibrated on a new user and (2) self-calibrate in an unsupervised and autonomous way to adapt to varying arm positions.

Results: Analyses on data from 86 participants (66 for pre-training and 20 in real-time evaluation experiments) demonstrate the high generalisability of the proposed RF architecture to varying arm positions.

Discussion: Our work promotes the use of a simple, explainable, efficient, and parallelisable model for posture-invariant myoelectric control.
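As a rough illustration of the pre-train-then-calibrate workflow (this is a generic scikit-learn warm-start pattern, not the paper's self-calibration algorithm), a forest trained on pooled multi-user data can grow additional trees on a new user's small calibration set. All data below is synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Pooled "many-user" EMG-like feature data (synthetic stand-in)
X_pool = rng.normal(size=(400, 8))
y_pool = (X_pool[:, 0] + 0.1 * rng.normal(size=400) > 0).astype(int)

# New-user calibration data with a shifted feature distribution
X_new = rng.normal(loc=0.5, size=(40, 8))
y_new = (X_new[:, 0] - 0.5 > 0).astype(int)

# Pre-train on pooled data, then grow extra trees on the new user's data
clf = RandomForestClassifier(n_estimators=50, warm_start=True, random_state=0)
clf.fit(X_pool, y_pool)
clf.n_estimators += 25   # with warm_start=True, fit() adds trees
clf.fit(X_new, y_new)

assert len(clf.estimators_) == 75
```

The ensemble then mixes trees fit on pooled data with trees fit on the new user, which is one simple way to bias a pre-trained forest toward a new distribution without retraining from scratch.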

Xinyu Jiang, Chenfei Ma, Kianoush Nazarpour. "Posture-invariant myoelectric control with self-calibrating random forests." Frontiers in Neurorobotics, vol. 18 (2024-12-04): 1462023. DOI: 10.3389/fnbot.2024.1462023. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11652494/pdf/
Citations: 0
EEG-based action anticipation in human-robot interaction: a comparative pilot study.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-03 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1491721
Rodrigo Vieira, Plinio Moreno, Athanasios Vourvopoulos

As robots become integral to various sectors, improving human-robot collaboration is crucial, particularly in anticipating human actions to enhance safety and efficiency. Electroencephalographic (EEG) signals offer a promising solution, as they can detect brain activity preceding movement by over a second, enabling predictive capabilities in robots. This study explores how EEG can be used for action anticipation in human-robot interaction (HRI), leveraging its high temporal resolution and modern deep learning techniques. We evaluated multiple Deep Learning classification models on a motor imagery (MI) dataset, achieving up to 80.90% accuracy. These results were further validated in a pilot experiment, where actions were accurately predicted several hundred milliseconds before execution. This research demonstrates the potential of combining EEG with deep learning to enhance real-time collaborative tasks, paving the way for safer and more efficient human-robot interactions.
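A common lightweight baseline for motor-imagery classification of the kind benchmarked here is log-variance (band-power-style) features fed to a linear classifier. The sketch below is that generic baseline on synthetic epochs, not the deep models evaluated in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def logvar_features(epochs):
    """Log-variance per channel: a classic motor-imagery EEG feature."""
    return np.log(epochs.var(axis=-1))

# Synthetic epochs with shape (trials, channels, samples); class 1 has
# higher amplitude on channel 0, mimicking an ERD/ERS power difference.
n_trials, n_channels, n_samples = 80, 4, 256
y = np.repeat([0, 1], n_trials // 2)
scale = np.ones((n_trials, n_channels, 1))
scale[y == 1, 0] = 2.0
epochs = rng.normal(size=(n_trials, n_channels, n_samples)) * scale

X = logvar_features(epochs)
clf = LogisticRegression().fit(X, y)
acc = clf.score(X, y)
```

Because the class difference is encoded as a channel-wise power change, the log-variance features separate the two classes almost perfectly; real MI data is far noisier, which is where the deep models compared in the study come in.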

Rodrigo Vieira, Plinio Moreno, Athanasios Vourvopoulos. "EEG-based action anticipation in human-robot interaction: a comparative pilot study." Frontiers in Neurorobotics, vol. 18 (2024-12-03): 1491721. DOI: 10.3389/fnbot.2024.1491721. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11649676/pdf/
Citations: 0