
Latest publications in Frontiers in Neurorobotics

EEG-powered cerebral transformer for athletic performance.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-20 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1499734
Qikai Sun

Introduction: In recent years, with advancements in wearable devices and biosignal analysis technologies, sports performance analysis has become an increasingly popular research field, particularly due to the growing demand for real-time monitoring of athletes' conditions in sports training and competitive events. Traditional methods of sports performance analysis typically rely on video data or sensor data for motion recognition. However, unimodal data often fails to fully capture the neural state of athletes, leading to limitations in accuracy and real-time performance when dealing with complex movement patterns. Moreover, these methods struggle with multimodal data fusion, making it difficult to fully leverage the deep information from electroencephalogram (EEG) signals.

Methods: To address these challenges, this paper proposes a "Cerebral Transformer" model based on EEG signals and video data. By employing an adaptive attention mechanism and cross-modal fusion, the model effectively combines EEG signals and video streams to achieve precise recognition and analysis of athletes' movements. The model's effectiveness was validated through experiments on four datasets: SEED, DEAP, eSports Sensors, and MODA. The results show that the proposed model outperforms existing mainstream methods in terms of accuracy, recall, and F1 score, while also demonstrating high computational efficiency.
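As a rough illustration (not the authors' implementation), the cross-modal fusion step can be sketched as scaled dot-product cross-attention in NumPy, with EEG tokens querying video tokens; all shapes and the random projection matrices below are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(eeg, video, d_k=16, seed=0):
    """EEG tokens attend to video tokens; projections are random stand-ins."""
    rng = np.random.default_rng(seed)
    Wq = rng.normal(size=(eeg.shape[-1], d_k))
    Wk = rng.normal(size=(video.shape[-1], d_k))
    Wv = rng.normal(size=(video.shape[-1], d_k))
    Q, K, V = eeg @ Wq, video @ Wk, video @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)  # rows sum to 1
    return attn @ V  # EEG tokens enriched with video context

eeg = np.random.default_rng(1).normal(size=(10, 32))    # 10 EEG tokens, 32-dim
video = np.random.default_rng(2).normal(size=(20, 64))  # 20 video tokens, 64-dim
fused = cross_modal_attention(eeg, video)
print(fused.shape)  # (10, 16)
```

Applying a symmetric layer in the other direction (video querying EEG) and stacking such blocks gives a fusion scheme similar in spirit to the adaptive attention mechanism described above.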

Results and discussion: The significance of this study lies in providing a more comprehensive and efficient solution for sports performance analysis. Through cross-modal data fusion, it not only improves the accuracy of complex movement recognition but also provides technical support for monitoring athletes' neural states, offering important applications in sports training and medical rehabilitation.

Citations: 0
Edge-guided feature fusion network for RGB-T salient object detection.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-17 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1489658
Yuanlin Chen, Zengbao Sun, Cheng Yan, Ming Zhao

Introduction: RGB-T Salient Object Detection (SOD) aims to accurately segment salient regions in both visible light and thermal infrared images. However, many existing methods overlook the critical complementarity between these modalities, which can enhance detection accuracy.

Methods: We propose the Edge-Guided Feature Fusion Network (EGFF-Net), which consists of cross-modal feature extraction, edge-guided feature fusion, and salience map prediction. Firstly, the cross-modal feature extraction module captures and aggregates united and intersecting information in each local region of RGB and thermal images. Then, the edge-guided feature fusion module enhances the edge features of salient regions, considering that edge information is very helpful in refining significant area details. Moreover, a layer-by-layer decoding structure integrates multi-level features and generates the prediction of salience maps.
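The idea of letting edge information guide fusion can be sketched in plain NumPy: derive an edge map from both modalities with a Sobel operator, then use it to boost fused responses near object boundaries. This is a simplified stand-in for the paper's learned edge-guided fusion module, not its actual architecture:

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude over the interior of a 2D map."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    gx, gy = np.zeros_like(img), np.zeros_like(img)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def edge_guided_fuse(rgb_feat, thermal_feat):
    """Average the two modalities, then amplify responses near edges."""
    edges = sobel_edges(rgb_feat) + sobel_edges(thermal_feat)
    w = edges / (edges.max() + 1e-8)      # edge-derived weight in [0, 1]
    fused = 0.5 * (rgb_feat + thermal_feat)
    return fused * (1.0 + w)              # boost boundary regions
```

A learned variant would replace the fixed Sobel kernels and the hand-set boosting rule with convolutional layers trained end to end.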

Results: We conduct extensive experiments on three benchmark datasets and compare EGFF-Net with state-of-the-art methods. Our approach achieves superior performance, demonstrating the effectiveness of the proposed modules in improving both detection accuracy and boundary refinement.

Discussion: The results highlight the importance of integrating cross-modal information and edge-guided fusion in RGB-T SOD. Our method outperforms existing techniques and provides a robust framework for future developments in multi-modal saliency detection.

Citations: 0
Cross-attention swin-transformer for detailed segmentation of ancient architectural color patterns.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-13 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1513488
Lv Yongyin, Yu Caixia

Introduction: Segmentation tasks in computer vision play a crucial role in various applications, ranging from object detection to medical imaging and cultural heritage preservation. Traditional approaches, including convolutional neural networks (CNNs) and standard transformer-based models, have achieved significant success; however, they often face challenges in capturing fine-grained details and maintaining efficiency across diverse datasets. These methods struggle with balancing precision and computational efficiency, especially when dealing with complex patterns and high-resolution images.

Methods: To address these limitations, we propose a novel segmentation model that integrates a hierarchical vision transformer backbone with multi-scale self-attention, cascaded attention decoding, and diffusion-based robustness enhancement. Our approach aims to capture both local details and global contexts effectively while maintaining lower computational overhead.
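A minimal sketch of the multi-scale idea: pool a feature map at several resolutions and flatten each level into one token sequence, which a self-attention layer could then process jointly. The scales and shapes below are illustrative, not the paper's actual configuration:

```python
import numpy as np

def avg_pool2d(x, k):
    """Non-overlapping k×k average pooling over the spatial dims of (H, W, C)."""
    H, W, C = x.shape
    Hk, Wk = H // k, W // k
    return x[:Hk * k, :Wk * k].reshape(Hk, k, Wk, k, C).mean(axis=(1, 3))

def multi_scale_tokens(feat, scales=(1, 2, 4)):
    """Flatten the map at several resolutions into a single token sequence."""
    return np.vstack([avg_pool2d(feat, s).reshape(-1, feat.shape[-1])
                      for s in scales])

feat = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)  # toy feature map
tokens = multi_scale_tokens(feat)
print(tokens.shape)  # (84, 3): 64 fine + 16 mid + 4 coarse tokens
```

Attending over this joint sequence lets fine tokens borrow context from coarse ones, which is the intuition behind hierarchical multi-scale self-attention.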

Results and discussion: Experiments conducted on four diverse datasets, including Ancient Architecture, MS COCO, Cityscapes, and ScanNet, demonstrate that our model outperforms state-of-the-art methods in accuracy, recall, and computational efficiency. The results highlight the model's ability to generalize well across different tasks and provide robust segmentation, even in challenging scenarios. Our work paves the way for more efficient and precise segmentation techniques, making it valuable for applications where both detail and speed are critical.

Citations: 0
3D convolutional neural network based on spatial-spectral feature pictures learning for decoding motor imagery EEG signal.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-10 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1485640
Xiaoguang Li, Yaqi Chu, Xuejian Wu

Non-invasive brain-computer interfaces (BCIs) hold great promise in the field of neurorehabilitation, particularly those based on motor imagery electroencephalography (EEG): they are easy to use and require no surgery. However, motor imagery EEG signals often have a low signal-to-noise ratio and limited spatial and temporal resolution. Traditional deep neural networks typically focus only on the spatial and temporal features of EEG, resulting in relatively low decoding accuracy for motor imagery tasks. To address these challenges, this paper proposes a 3D Convolutional Neural Network (P-3DCNN) decoding method that jointly learns spatial-frequency feature maps from the frequency and spatial domains of the EEG signals. First, the Welch method is used to calculate the frequency-band power spectrum of the EEG, and a 2D matrix representing the spatial topology of the electrodes is constructed. These spatial-frequency representations are then generated through cubic interpolation of the temporal EEG data. Next, the paper designs a 3DCNN network with 1D and 2D convolutional layers in series to optimize the convolutional kernel parameters and effectively learn the spatial-frequency features of the EEG. Batch normalization and dropout are also applied to improve the training speed and classification performance of the network. Finally, the proposed method is compared experimentally to a range of classic machine learning and deep learning techniques. The results show an average decoding accuracy of 86.69%, surpassing other advanced networks. This demonstrates the effectiveness of our approach in decoding motor imagery EEG and offers valuable insights for the development of BCIs.
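The first step, Welch-style band-power estimation, can be sketched in NumPy without any EEG toolkit: average the periodograms of Hann-windowed, half-overlapping segments, then integrate the spectrum over a frequency band. The sampling rate and synthetic signal below are illustrative, not taken from the paper:

```python
import numpy as np

def welch_band_power(x, fs, band, nperseg=256):
    """Welch-style band power: average periodograms of Hann-windowed,
    half-overlapping segments, then sum the spectrum over `band` (Hz)."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    psds = [np.abs(np.fft.rfft(x[s:s + nperseg] * win)) ** 2
            for s in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean(psds, axis=0)
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum()

fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)            # synthetic 10 Hz alpha rhythm
alpha = welch_band_power(eeg, fs, (8, 13))  # alpha band
beta = welch_band_power(eeg, fs, (13, 30))  # beta band
print(alpha > beta)                         # alpha dominates for a 10 Hz signal
```

Repeating this per electrode and per band, and arranging the values on the 2D electrode-topology grid, yields the kind of spatial-frequency pictures the network consumes.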

Citations: 0
An improved graph factorization machine based on solving unbalanced game perception.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-04 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1481297
Xiaoxia Xie, Yuan Jia, Tiande Ma

User perception of mobile games is crucial for improving user experience and thus enhancing game profitability. The sparse data captured in games, however, can lead to erratic model performance. This paper proposes a new method, the balanced graph factorization machine (BGFM), which builds on existing algorithms while accounting for data imbalance and important high-dimensional features. The data categories are first balanced by Borderline-SMOTE oversampling, and the features are then represented naturally in a graph-structured way. Notably, BGFM contains an interaction mechanism for aggregating beneficial features, whose results are represented as edges in the graph. Next, BGFM combines factorization machine (FM) and graph neural network strategies to model sequential feature interactions in the graph, using an attention mechanism that assigns inter-feature weights. Experiments were conducted on the collected game-perception dataset. The performance of the proposed BGFM was compared with eight state-of-the-art models, all of which it significantly surpassed on AUC, precision, recall, and F-measure.
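The factorization-machine component can be sketched exactly, since the FM prediction rule is standard: linear terms plus factorized pairwise interactions, computed with the usual O(k·d) identity rather than an explicit double loop:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Factorization machine: w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j,
    using the identity
    sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [(sum_i V_if x_i)^2
                                                - sum_i V_if^2 x_i^2]."""
    linear = w0 + x @ w
    s = V.T @ x                    # (k,) per-factor weighted sums
    s2 = (V ** 2).T @ (x ** 2)     # (k,) per-factor squared sums
    return linear + 0.5 * np.sum(s ** 2 - s2)

# toy example with random parameters (purely illustrative)
rng = np.random.default_rng(0)
d, k = 6, 3
x = rng.normal(size=d)
w0, w = 0.1, rng.normal(size=d)
V = rng.normal(size=(d, k))
y_hat = fm_predict(x, w0, w, V)
```

BGFM extends this core by arranging features as graph nodes and letting a graph neural network with attention reweight which pairwise interactions matter.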

Citations: 0
Unmanned aerial vehicles for human detection and recognition using neural-network model.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-04 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1443678
Yawar Abbas, Naif Al Mudawi, Bayan Alabdullah, Touseef Sadiq, Asaad Algarni, Hameedur Rahman, Ahmad Jalal

Introduction: Recognizing human actions is crucial for allowing machines to understand and recognize human behavior, with applications spanning video-based surveillance systems, human-robot collaboration, sports analysis systems, and entertainment. The immense diversity in human movement and appearance poses a significant challenge in this field, especially when dealing with drone-recorded (RGB) videos. Factors such as dynamic backgrounds, motion blur, occlusions, varying video capture angles, and exposure issues greatly complicate recognition tasks.

Methods: In this study, we suggest a method that addresses these challenges in RGB videos captured by drones. Our approach begins by segmenting the video into individual frames, followed by preprocessing steps applied to these RGB frames. The preprocessing aims to reduce computational costs, optimize image quality, and enhance foreground objects while removing the background.

Result: This results in improved visibility of foreground objects while eliminating background noise. Next, we employ the YOLOv9 detection algorithm to identify human bodies within the images. From the grayscale silhouette, we extract the human skeleton and identify 15 key locations: the head, neck, belly button, and the left and right shoulders, elbows, wrists, hips, knees, and ankles. Using these points, we extract specific positions, the angular and distance relationships between them, as well as 3D point clouds and fiducial points. Subsequently, we optimize these data using the kernel discriminant analysis (KDA) optimizer, followed by classification with a convolutional neural network (CNN). To validate our system, we conducted experiments on three benchmark datasets: UAV-Human, UCF, and Drone-Action.
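One of the angular relationships mentioned above can be computed directly: the angle at a joint is the angle between the two limb segments meeting there. The keypoint coordinates below are hypothetical, purely for illustration:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) between segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    u, v = a - b, c - b
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# hypothetical 2D keypoints: shoulder, elbow, wrist
shoulder, elbow, wrist = (0, 0), (1, 0), (1, 1)
print(round(joint_angle(shoulder, elbow, wrist)))  # 90
```

Pairwise distances between the 15 keypoints follow the same pattern with `np.linalg.norm`, and together these angles and distances form the geometric feature vector fed to the optimizer.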

Discussion: On these datasets, our suggested model produced corresponding action recognition accuracies of 0.68, 0.75, and 0.83.

Citations: 0
Posture-invariant myoelectric control with self-calibrating random forests.
IF 2.6 CAS Region 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-04 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1462023
Xinyu Jiang, Chenfei Ma, Kianoush Nazarpour

Introduction: Myoelectric control systems translate different patterns of electromyographic (EMG) signals into control commands for diverse human-machine interfaces via hand gesture recognition, enabling intuitive control of prostheses and immersive interactions in the metaverse. Arm position is a confounding factor that contributes to the variability of EMG characteristics. Developing a model whose characteristics and performance remain invariant across postures could greatly accelerate the translation of myoelectric control into real-world practice.

Methods: Here we propose a self-calibrating random forest (RF) model which can (1) be pre-trained on data from many users, then one-shot calibrated on a new user and (2) self-calibrate in an unsupervised and autonomous way to adapt to varying arm positions.
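The unsupervised self-calibration loop can be illustrated with a deliberately simple stand-in classifier (a nearest-centroid model, not the paper's random forest): pseudo-label the new-posture data, then shift each class prototype toward its pseudo-labeled samples. All data and the blending factor are illustrative assumptions:

```python
import numpy as np

class NearestCentroid:
    """Toy classifier standing in for the RF: one prototype per class."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None] - self.centroids_[None], axis=-1)
        return self.classes_[d.argmin(axis=1)]

def self_calibrate(clf, X_new, blend=0.5):
    """Unsupervised update: pseudo-label new-posture data, then pull each
    class prototype toward the mean of its pseudo-labeled samples."""
    pseudo = clf.predict(X_new)
    for i, c in enumerate(clf.classes_):
        if (pseudo == c).any():
            target = X_new[pseudo == c].mean(axis=0)
            clf.centroids_[i] = (1 - blend) * clf.centroids_[i] + blend * target
    return clf

X = np.array([[0.0], [0.2], [-0.1], [5.0], [5.2], [4.9]])  # two gesture classes
y = np.array([0, 0, 0, 1, 1, 1])
clf = NearestCentroid().fit(X, y)
X_shift = X + 1.0        # simulated posture-induced feature drift
self_calibrate(clf, X_shift)
print((clf.predict(X_shift) == y).all())  # True
```

The same loop applies to a pre-trained forest: pseudo-labels from confident predictions drive leaf or threshold updates, with no labels collected from the new posture.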

Results: Analyses on data from 86 participants (66 for pre-training and 20 in real-time evaluation experiments) demonstrate the high generalisability of the proposed RF architecture to varying arm positions.

Discussion: Our work promotes the use of a simple, explainable, efficient, and parallelisable model for posture-invariant myoelectric control.

{"title":"Posture-invariant myoelectric control with self-calibrating random forests.","authors":"Xinyu Jiang, Chenfei Ma, Kianoush Nazarpour","doi":"10.3389/fnbot.2024.1462023","DOIUrl":"10.3389/fnbot.2024.1462023","url":null,"abstract":"<p><strong>Introduction: </strong>Myoelectric control systems translate different patterns of electromyographic (EMG) signals into the control commands of diverse human-machine interfaces via hand gesture recognition, enabling intuitive control of prosthesis and immersive interactions in the metaverse. The effect of arm position is a confounding factor leading to the variability of EMG characteristics. Developing a model with its characteristics and performance invariant across postures, could largely promote the translation of myoelectric control into real world practice.</p><p><strong>Methods: </strong>Here we propose a self-calibrating random forest (RF) model which can (1) be pre-trained on data from many users, then one-shot calibrated on a new user and (2) self-calibrate in an unsupervised and autonomous way to adapt to varying arm positions.</p><p><strong>Results: </strong>Analyses on data from 86 participants (66 for pre-training and 20 in real-time evaluation experiments) demonstrate the high generalisability of the proposed RF architecture to varying arm positions.</p><p><strong>Discussion: </strong>Our work promotes the use of simple, explainable, efficient and parallelisable model for posture-invariant myoelectric control.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1462023"},"PeriodicalIF":2.6,"publicationDate":"2024-12-04","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11652494/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142853908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
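The three-stage recipe in the Methods (pre-train on pooled users, one-shot calibrate on a new user, then self-calibrate without labels for a new arm position) can be sketched with an off-the-shelf random forest and confidence-filtered pseudo-labels. This is a minimal illustration on synthetic features under an invented signal model; the feature extraction, confidence threshold, and retraining scheme are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_user(shift, n=120):
    """Synthetic gesture features: class signal in dims 0-2, a posture/user
    offset in dims 4-7 (a toy stand-in for real EMG feature drift)."""
    y = rng.integers(0, 3, n)
    X = rng.normal(0, 0.3, (n, 8))
    X[np.arange(n), y] += 1.0        # gesture-dependent bump
    X[:, 4:] += shift                # posture-dependent offset
    return X, y

# 1) Pre-train on pooled data from many users.
users = [make_user(rng.normal(0, 0.2)) for _ in range(10)]
X_pool = np.vstack([u[0] for u in users])
y_pool = np.concatenate([u[1] for u in users])
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_pool, y_pool)

# 2) One-shot calibration: a single labelled batch from the new user.
X_cal, y_cal = make_user(shift=0.8, n=30)
rf.fit(np.vstack([X_pool, X_cal]), np.concatenate([y_pool, y_cal]))

# 3) Unsupervised self-calibration for a new arm position:
#    keep only high-confidence pseudo-labels and retrain on them.
X_new, y_new = make_user(shift=1.2, n=200)   # y_new is never shown to the model
proba = rf.predict_proba(X_new)
confident = proba.max(axis=1) > 0.7
rf.fit(np.vstack([X_cal, X_new[confident]]),
       np.concatenate([y_cal, proba.argmax(axis=1)[confident]]))

acc = (rf.predict(X_new) == y_new).mean()
print(f"accuracy after self-calibration: {acc:.2f}")
```

The pseudo-label step is a common unsupervised stand-in for the paper's autonomous self-calibration: only samples the calibrated forest already classifies with high confidence are fed back as training data for the new posture.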
Citations: 0
EEG-based action anticipation in human-robot interaction: a comparative pilot study.
IF 2.6 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-12-03 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1491721
Rodrigo Vieira, Plinio Moreno, Athanasios Vourvopoulos

As robots become integral to various sectors, improving human-robot collaboration is crucial, particularly in anticipating human actions to enhance safety and efficiency. Electroencephalographic (EEG) signals offer a promising solution, as they can detect brain activity preceding movement by over a second, enabling predictive capabilities in robots. This study explores how EEG can be used for action anticipation in human-robot interaction (HRI), leveraging its high temporal resolution and modern deep learning techniques. We evaluated multiple Deep Learning classification models on a motor imagery (MI) dataset, achieving up to 80.90% accuracy. These results were further validated in a pilot experiment, where actions were accurately predicted several hundred milliseconds before execution. This research demonstrates the potential of combining EEG with deep learning to enhance real-time collaborative tasks, paving the way for safer and more efficient human-robot interactions.
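The pipeline reported above (epoched EEG, a classifier predicting the imagined action before execution) can be caricatured on synthetic data. The deep models are replaced here by a band-power plus logistic-regression baseline, a classical motor-imagery approach; the sampling rate, channel count, and lateralised mu-rhythm signal model are all invented for illustration and are not the study's dataset or models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
FS = 128                     # Hz, assumed sampling rate
N_CH, EPOCH_S = 8, 2.0       # channels, seconds per epoch

def synth_epoch(is_left):
    """Synthetic two-class MI epoch: 10 Hz mu-band power lateralised by class."""
    t = np.arange(int(FS * EPOCH_S)) / FS
    x = rng.normal(0, 1.0, (N_CH, t.size))
    mu = np.sin(2 * np.pi * 10 * t)
    # mu power is suppressed contralaterally; channels 0-3 ~ left hemisphere
    gain = np.r_[np.full(4, 0.3 if is_left else 1.0),
                 np.full(4, 1.0 if is_left else 0.3)]
    return x + gain[:, None] * mu

def band_power(x):
    """Log variance per channel, a standard proxy for band power."""
    return np.log(x.var(axis=1))

y = rng.integers(0, 2, 200)
X = np.stack([band_power(synth_epoch(lbl)) for lbl in y])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(f"held-out accuracy: {acc:.2f}")
```

In a real anticipation setting the epoch would end before movement onset, so the classifier's output is available several hundred milliseconds ahead of the action, which is what makes the predictive use in HRI possible.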

Rodrigo Vieira, Plinio Moreno, Athanasios Vourvopoulos. "EEG-based action anticipation in human-robot interaction: a comparative pilot study." Frontiers in Neurorobotics 18 (2024): 1491721. DOI: 10.3389/fnbot.2024.1491721. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11649676/pdf/
Citations: 0
Immersive interfaces for clinical applications: current status and future perspective.
IF 2.6 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-27 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1362444
Naïg Chenais, Arno Görgen

Digital immersive technologies have become increasingly prominent in clinical research and practice, including medical communication and technical education, serious games for health, psychotherapy, and interfaces for neurorehabilitation. The worldwide enthusiasm for digital health and digital therapeutics has prompted the development and testing of numerous applications and interaction methods. Nevertheless, the lack of consistency in the approaches and the peculiarity of the constructed environments contribute to an increasing disparity between the eagerness for new immersive designs and the long-term clinical adoption of these technologies. Several challenges emerge in aligning the different priorities of virtual environment designers and clinicians. This article seeks to examine the utilization and mechanics of medical immersive interfaces based on extended reality and highlight specific design challenges. The transfer of skills from virtual to clinical environments is often confounded by perceptual and attractiveness factors. We argue that a multidisciplinary approach to development and testing, along with a comprehensive acknowledgement of the shared mechanisms that underlie immersive training, are essential for the sustainable integration of extended reality into clinical settings. The present review discusses the application of a multilevel sensory framework to extended reality design, with the aim of developing brain-centered immersive interfaces tailored for therapeutic and educational purposes. Such a framework must include broader design questions, such as the integration of digital technologies into psychosocial care models, clinical validation, and related ethical concerns. We propose that efforts to bridge the virtual gap should include mixed methodologies and neurodesign approaches, integrating user behavioral and physiological feedback into iterative design phases.

Naïg Chenais, Arno Görgen. "Immersive interfaces for clinical applications: current status and future perspective." Frontiers in Neurorobotics 18 (2024): 1362444. DOI: 10.3389/fnbot.2024.1362444. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11631914/pdf/
Citations: 0
A multimodal travel route recommendation system leveraging visual Transformers and self-attention mechanisms.
IF 2.6 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-26 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1439195
Zhang Juan, Jing Zhang, Ming Gao

Introduction: With the rapid development of the tourism industry, the demand for accurate and personalized travel route recommendations has significantly increased. However, traditional methods often fail to effectively integrate visual and sequential information, leading to recommendations that are both less accurate and less personalized.

Methods: This paper introduces SelfAM-Vtrans, a novel algorithm that leverages multimodal data, combining visual Transformers, LSTMs, and self-attention mechanisms, to enhance the accuracy and personalization of travel route recommendations. SelfAM-Vtrans integrates visual and sequential information by employing a visual Transformer to extract features from travel images, thereby capturing spatial relationships within them. Concurrently, a Long Short-Term Memory (LSTM) network encodes sequential data to capture the temporal dependencies within travel sequences. To effectively merge these two modalities, a self-attention mechanism fuses the visual features and sequential encodings, thoroughly accounting for their interdependencies. Based on this fused representation, a classification or regression model is trained using real travel datasets to recommend optimal travel routes.

Results and discussion: The algorithm was rigorously evaluated through experiments conducted on real-world travel datasets, and its performance was benchmarked against other route recommendation methods. The results demonstrate that SelfAM-Vtrans significantly outperforms traditional approaches in terms of both recommendation accuracy and personalization. By comprehensively incorporating both visual and sequential data, this method offers travelers more tailored and precise route suggestions, thereby enriching the overall travel experience.
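Under assumed dimensions, the fusion stage described in the Methods (visual-Transformer features and LSTM encodings merged by self-attention) reduces to scaled dot-product attention over the concatenated modality tokens. The encoders themselves are stubbed out with random features here; this is an illustrative sketch, not the SelfAM-Vtrans implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 16                                    # shared embedding width (assumed)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a token matrix."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    att = softmax(Q @ K.T / np.sqrt(K.shape[1]))
    return att @ V, att

# Stand-ins for the two modality encoders' outputs:
vis_tokens = rng.normal(size=(4, D))      # e.g. pooled visual-Transformer patch features
seq_tokens = rng.normal(size=(6, D))      # e.g. LSTM hidden states, one per travel step

tokens = np.vstack([vis_tokens, seq_tokens])           # concatenate the modalities
Wq, Wk, Wv = (rng.normal(scale=D**-0.5, size=(D, D)) for _ in range(3))
fused, att = self_attention(tokens, Wq, Wk, Wv)

route_repr = fused.mean(axis=0)           # pooled fused representation for the route head
print(fused.shape, att.shape)             # (10, 16) (10, 10)
```

Because every token attends to every other, the attention matrix contains cross-modal entries (visual rows attending to sequence columns and vice versa), which is what lets the fused representation account for the interdependencies between the two modalities.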

Zhang Juan, Jing Zhang, Ming Gao. "A multimodal travel route recommendation system leveraging visual Transformers and self-attention mechanisms." Frontiers in Neurorobotics 18 (2024): 1439195. DOI: 10.3389/fnbot.2024.1439195. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11628496/pdf/
Citations: 0