
Latest publications in Frontiers in Neurorobotics

NeuroVI-based wave compensation system control for offshore wind turbines.
IF 2.8 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-30 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1648713
Fengshuang Ma, Xiangyong Liu, Zhiqiang Xu, Tianhong Ding

In deep-sea areas, the hoisting operation of offshore wind turbines is severely affected by waves, and secondary impacts are prone to occur between the turbine and the pile foundation. To address this issue, this study proposes an integrated wave compensation system for offshore wind turbines based on a neuromorphic vision (NeuroVI) camera. The system employs a NeuroVI camera to achieve non-contact, high-precision, low-latency displacement detection of hydraulic cylinders, overcoming the limitations of traditional magnetostrictive displacement sensors, which respond slowly and are susceptible to interference in harsh marine conditions. A dynamic simulation model was developed using AMESim-Simulink co-simulation to analyze the compensation performance of the NeuroVI-based system under step and sinusoidal wave disturbances. Comparative results demonstrate that the NeuroVI feedback system achieves faster response times and superior stability over conventional sensors. Laboratory-scale model tests and a real-world application in the installation of a 5.2 MW offshore wind turbine validated the system's feasibility and robustness, enabling real-time collaborative control of turbine and cylinder displacement to effectively mitigate multi-impact risks. This research provides an innovative approach for deploying neural perception technology in complex marine scenarios and advances the development of neuro-robotic systems in ocean engineering.
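As a toy illustration of why sensor latency matters for wave compensation (a hypothetical feedforward sketch, not the paper's controller): if the cylinder is commanded to the negative of the wave displacement measured `delay_steps` samples ago, the residual heave is simply how much the wave moved during that delay, so lower-latency sensing directly shrinks the residual motion.

```python
import math

def mean_residual(delay_steps, dt=0.01, freq=0.2, amp=0.5, steps=2000):
    # Feedforward cancellation: cylinder = -wave as measured `delay_steps`
    # samples ago, so the residual heave is wave[k] - wave[k - delay_steps].
    residuals = []
    for k in range(delay_steps, steps):
        w_now = amp * math.sin(2 * math.pi * freq * k * dt)
        w_meas = amp * math.sin(2 * math.pi * freq * (k - delay_steps) * dt)
        residuals.append(abs(w_now - w_meas))
    return sum(residuals) / len(residuals)
```

With zero delay the residual vanishes; a 1-sample lag leaves far less residual motion than a 20-sample lag, mirroring the low-latency advantage claimed for the NeuroVI feedback path.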

Citations: 0
Pre-training, personalization, and self-calibration: all a neural network-based myoelectric decoder needs.
IF 2.8 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-28 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1604453
Chenfei Ma, Xinyu Jiang, Kianoush Nazarpour

Myoelectric control systems translate electromyographic (EMG) signals from muscles into movement intentions, allowing control over various interfaces such as prosthetics, wearable devices, and robotics. However, a major challenge lies in enhancing the system's ability to generalize, personalize, and adapt to the high variability of EMG signals. Artificial intelligence, particularly neural networks, has shown promising decoding performance when applied to large datasets. Highly parameterized deep neural networks, however, usually require extensive user-specific data with ground-truth labels to learn each individual's unique EMG patterns. Moreover, the characteristics of the EMG signal can change significantly over time, even for the same user, leading to performance degradation during extended use. In this work, we propose an innovative three-stage neural network training scheme designed to progressively develop an adaptive workflow, improving and maintaining network performance on 28 subjects over 2 days. Experiments demonstrate the importance and necessity of each stage in the proposed framework.
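The three stages can be caricatured with a toy nearest-centroid decoder standing in for a deep network (everything below, including the synthetic two-channel "EMG" features, class geometry, and learning rates, is invented for illustration): pre-train on pooled data, fine-tune on a little labeled target-user data, then self-calibrate on unlabeled drifted data using the decoder's own predictions as pseudo-labels.

```python
import random

class CentroidDecoder:
    """Toy stand-in for a neural decoder: one centroid per gesture class."""
    def __init__(self, n_classes, dim):
        self.centroids = [[0.0] * dim for _ in range(n_classes)]

    def update(self, x, y, lr=0.2):
        # Move centroid y a little toward sample x (exponential averaging).
        for i, xi in enumerate(x):
            self.centroids[y][i] += lr * (xi - self.centroids[y][i])

    def predict(self, x):
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, x))
        return min(range(len(self.centroids)),
                   key=lambda k: dist(self.centroids[k]))

def samples(shift, n, seed):
    # Two synthetic gesture classes in a 2-D feature space; `shift` mimics
    # cross-user / cross-session drift of the EMG statistics.
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        y = rng.randrange(2)
        out.append(([y * 2.0 + shift + rng.gauss(0, 0.3),
                     -y * 2.0 + shift + rng.gauss(0, 0.3)], y))
    return out

dec = CentroidDecoder(2, 2)
for x, y in samples(0.0, 200, seed=1):      # stage 1: pre-train on pooled users
    dec.update(x, y)
for x, y in samples(0.2, 30, seed=2):       # stage 2: personalize on labeled data
    dec.update(x, y)
for x, _ in samples(0.4, 30, seed=3):       # stage 3: self-calibrate on unlabeled,
    dec.update(x, dec.predict(x), lr=0.05)  # drifted data via pseudo-labels
```

After the three stages the decoder still separates the two (drifted) classes without any new ground-truth labels in stage 3, which is the point of self-calibration.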

Citations: 0
Design and analysis of combined discrete-time zeroing neural network for solving time-varying nonlinear equation with robot application.
IF 2.8 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-11 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1576473
Zhisheng Ma, Shaobin Huang

The zeroing neural network (ZNN) is viewed as an effective approach to solving the time-varying nonlinear equation (TVNE). In this paper, this line of work is extended by proposing a novel combined discrete-time ZNN (CDTZNN) model for solving the TVNE. Specifically, a new difference formula, called the Taylor difference formula, is constructed for first-order derivative approximation by following the Taylor series expansion. The Taylor difference formula is then used to discretize the continuous-time ZNN model from the previous study. The resulting DTZNN model requires direct Jacobian matrix inversion, which is time-consuming. Another DTZNN model for computing the inverse of the Jacobian matrix is therefore established to overcome this limitation. The novel CDTZNN model for solving the TVNE is then developed by combining the two models. Theoretical analysis and numerical results demonstrate the efficacy of the proposed CDTZNN model. Its applicability is further illustrated by applying it to the motion planning of robot manipulators.
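For intuition, here is a minimal discrete-time ZNN iteration for a scalar TVNE f(x, t) = x^2 - (2 + sin t) = 0, using a plain Euler discretization of the ZNN design dynamics x' = -(gamma * f + f_t) / f_x (the paper's Taylor difference formula is a higher-order refinement of this derivative approximation; the example equation and gains below are invented for illustration):

```python
import math

def dtznn_track(gamma=100.0, tau=0.001, steps=5000, x0=1.5):
    # Track the time-varying root x*(t) = sqrt(2 + sin t) of
    # f(x, t) = x^2 - (2 + sin t) = 0.
    x = x0
    for k in range(steps):
        t = k * tau
        f = x * x - (2.0 + math.sin(t))   # residual to be zeroed
        f_t = -math.cos(t)                # partial f / partial t (feedforward)
        f_x = 2.0 * x                     # partial f / partial x (scalar "Jacobian")
        x -= tau * (gamma * f + f_t) / f_x
    t_end = steps * tau
    return x, math.sqrt(2.0 + math.sin(t_end))

x_final, x_true = dtznn_track()
```

Despite starting away from the root, the iterate locks onto the moving solution; replacing the Euler step with a Taylor-type difference formula, as in the paper, reduces the residual tracking error order.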

Citations: 0
A robust and effective framework for 3D scene reconstruction and high-quality rendering in nasal endoscopy surgery.
IF 2.6 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-27 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1630728
Xueqin Ji, Shuting Zhao, Di Liu, Feng Wang, Xinrong Chen

In nasal endoscopic surgery, the narrow nasal cavity restricts the surgical field of view and the manipulation of surgical instruments. Precise real-time intraoperative navigation, which can provide accurate 3D information, therefore plays a crucial role in avoiding critical areas dense with blood vessels and nerves. Although significant progress has been made in endoscopic 3D reconstruction methods, their application to nasal scenarios still faces numerous challenges. On the one hand, high-quality, annotated nasal endoscopy datasets are lacking. On the other hand, issues such as motion blur and soft-tissue deformation complicate the reconstruction process. To tackle these challenges, a series of nasal endoscopy examination videos are collected, and the pose information for each frame is recorded. Additionally, a novel model named Mip-EndoGS is proposed, which integrates 3D Gaussian Splatting for reconstruction and rendering with a diffusion module that reduces image blurring in endoscopic data. Meanwhile, by incorporating an adaptive low-pass filter into the rendering pipeline, the aliasing artifacts (jagged edges) that occur during rendering are mitigated. Extensive quantitative and visual experiments show that the proposed model is capable of reconstructing 3D scenes within the nasal cavity in real time, thereby offering surgeons more detailed and precise information about the surgical scene. Moreover, the proposed approach holds great potential for integration with AR-based surgical navigation systems to enhance intraoperative guidance.
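The low-pass filtering idea can be illustrated in 1D (a generic mip-style sketch, not the paper's exact filter): a Gaussian splat narrower than a pixel can fall between sample centers and alias away, while convolving it with the pixel footprint, i.e. adding the footprint variance before sampling, preserves its mass.

```python
import math

def sample_gaussian(mu, sigma, n_pixels=11, footprint=0.0):
    # Sample a unit-mass 1-D Gaussian at integer pixel centers 0..n_pixels-1.
    # `footprint` is the pixel filter's std dev; adding its variance to the
    # splat's variance is the low-pass (anti-aliasing) step.
    s2 = sigma * sigma + footprint * footprint
    s = math.sqrt(s2)
    return sum(math.exp(-0.5 * (i - mu) ** 2 / s2) / (s * math.sqrt(2 * math.pi))
               for i in range(n_pixels))

naive = sample_gaussian(mu=5.5, sigma=0.1)               # splat falls between pixels
filtered = sample_gaussian(mu=5.5, sigma=0.1, footprint=0.5)
```

The naive sampling nearly loses the splat entirely (its sampled mass is close to 0 instead of 1), while the footprint-filtered version recovers almost all of it, which is why thin structures render without jagged dropouts.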

Citations: 0
Understanding human co-manipulation via motion and haptic information to enable future physical human-robotic collaborations.
IF 2.6 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-19 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1480399
Kody Shaw, John L Salmon, Marc D Killpack

Human teams intuitively and effectively collaborate to move large, heavy, or unwieldy objects. However, understanding of this interaction in the literature is limited, which is especially problematic given our goal of enabling human-robot teams to work together. Therefore, to better understand how human teams work together and eventually enable intuitive human-robot interaction, in this paper we examine four sub-components of collaborative manipulation (co-manipulation) using motion and haptics. We define co-manipulation as a group of two or more agents collaboratively moving an object. We present a study in which participants co-manipulate a large object while we vary the number of participants (two or three), their roles (leaders or followers), and the degrees of freedom necessary to complete the defined motion for the object. In analyzing the results, we focus on four key components related to motion and haptics. First, we define and examine a static or rest state to demonstrate a method of detecting transitions between the static state and an active state, in which one or more agents are moving toward an intended goal. Second, we analyze a variety of signals (e.g., force, acceleration) during movements in each of the six rigid-body degrees of freedom of the co-manipulated object. These data allow us to identify the signals that best correlate with the desired motion of the team. Third, we examine the completion percentage of each task, which can be used to determine which motion objectives can be communicated via haptic feedback. Finally, we define a metric to determine whether participants divide two-degree-of-freedom tasks into separate degrees of freedom or take the most direct path. These four components contribute the necessary groundwork for advancing intuitive human-robot interaction.
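The rest-to-active transition detection described first can be sketched as a simple hysteresis detector on a motion or force magnitude signal (the thresholds and the signal below are invented for illustration; the paper derives its criteria from the recorded data):

```python
def detect_transitions(signal, on_thresh=1.0, off_thresh=0.3):
    # Hysteresis avoids chattering near the threshold: enter "active" when
    # |s| exceeds on_thresh, return to "rest" only when |s| falls below the
    # lower off_thresh.
    state, transitions = "rest", []
    for i, s in enumerate(signal):
        if state == "rest" and abs(s) > on_thresh:
            state = "active"
            transitions.append((i, "active"))
        elif state == "active" and abs(s) < off_thresh:
            state = "rest"
            transitions.append((i, "rest"))
    return transitions

events = detect_transitions([0.0, 0.1, 1.5, 2.0, 0.5, 0.2, 0.0])
```

On this toy trace the detector reports one onset at sample 2 and one return to rest at sample 5; the intermediate value 0.5 does not end the active state because it sits between the two thresholds.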

Citations: 0
Multimodal fusion image enhancement technique and CFEC-YOLOv7 for underwater target detection algorithm research.
IF 2.6 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-19 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1616919
Xiaorong Qiu, Yingzhong Shi

The underwater environment is more complex than that on land, resulting in severe static and dynamic blurring in underwater images, which reduces the recognition accuracy of underwater targets and fails to meet the needs of underwater environment detection. First, for the static blurring problem, we propose an adaptive color compensation algorithm and an improved MSR algorithm. Second, for the dynamic blurring problem, we adopt the Restormer network to eliminate the dynamic blur caused by the combined effects of camera shake, camera defocus, relative motion displacement, and similar factors. Then, through qualitative analysis, quantitative analysis, and underwater target detection on the enhanced dataset, the feasibility of our underwater enhancement method is verified. Finally, we propose a target recognition network suited to the complex underwater environment. Local and global information is fused through the CCBC module and the ECLOU loss function to improve positioning accuracy. The FasterNet module is introduced to reduce redundant computation and the parameter count. The experimental results show that the proposed CFEC-YOLOv7 model and underwater image enhancement method exhibit excellent performance, adapt well to the underwater target recognition task, and have good application prospects.
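One common form of adaptive color compensation boosts the attenuated red channel using the better-preserved green channel (an Ancuti-style formula shown here as a hedged sketch; the paper's exact variant may differ): the correction is largest where red is weak and green carries information.

```python
def compensate_red(red, green, alpha=1.0):
    # Red-channel compensation sketch. `red` and `green` are per-pixel
    # intensities in [0, 1]; alpha scales the compensation strength.
    mr = sum(red) / len(red)      # channel means
    mg = sum(green) / len(green)
    # Boost red proportionally to the green-red mean gap, weighted by the
    # local green value and by how far the red pixel is from saturation.
    return [r + alpha * (mg - mr) * g * (1.0 - r) for r, g in zip(red, green)]

out = compensate_red([0.1, 0.2], [0.6, 0.7])
```

For the two sample pixels the red values rise from (0.1, 0.2) to (0.37, 0.48), restoring red before any Retinex-style (MSR) enhancement is applied.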

Citations: 0
User recommendation method integrating hierarchical graph attention network with multimodal knowledge graph.
IF 2.6 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-18 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1587973
Xiaofei Han, Xin Dou

In common graph neural networks (GNNs), although incorporating social network information effectively exploits interactions between users, the deeper semantic relationships between items are often overlooked, and visual and textual feature information is not integrated. This limitation can restrict the diversity and accuracy of recommendation results. To address this, the present study combines a knowledge graph, a GNN, and multimodal information to enhance the feature representations of both users and items. Including the knowledge graph not only provides a better understanding of the underlying logic behind user interests and preferences but also helps address the cold-start problem for new users and items. Moreover, to improve recommendation accuracy, visual and textual features of items are incorporated as supplementary information. A user recommendation model is therefore proposed that integrates a hierarchical graph attention network with a multimodal knowledge graph. The model consists of four key components: a collaborative knowledge-graph neural layer, an image feature extraction layer, a text feature extraction layer, and a prediction layer. The first three layers extract user and item features, and the recommendation is completed in the prediction layer. Experimental results on two public datasets demonstrate that the proposed model significantly outperforms existing recommendation methods in terms of recommendation performance.
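The fuse-then-score pattern of the prediction layer can be sketched generically (attention-weighted fusion of modality embeddings followed by an inner product; the dimensions and vectors below are invented, and the real model learns its weights end to end):

```python
import math

def attention_fuse(modalities, query):
    # Weight each modality embedding by softmax(dot(query, embedding)),
    # then return the weighted sum as the fused item representation.
    scores = [sum(q * m for q, m in zip(query, vec)) for vec in modalities]
    mx = max(scores)
    weights = [math.exp(s - mx) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    dim = len(modalities[0])
    return [sum(w * vec[i] for w, vec in zip(weights, modalities))
            for i in range(dim)]

def score(user, item):
    # Recommendation score = inner product of user and fused item embeddings.
    return sum(u * v for u, v in zip(user, item))

user = [1.0, 0.0]
text_emb, image_emb = [0.9, 0.1], [0.0, 1.0]
item = attention_fuse([text_emb, image_emb], query=user)
```

Because the user embedding aligns with the text modality, attention puts more weight on the text embedding, and the fused item vector leans toward it.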

Citations: 0
Context-Aware Enhanced Feature Refinement for small object detection with Deformable DETR.
IF 2.6 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-10 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1588565
Donghao Shi, Cunbin Zhao, Jianwen Shao, Minjie Feng, Lei Luo, Bing Ouyang, Jiamin Huang

Small object detection is a critical task in applications like autonomous driving and ship black smoke detection. While Deformable DETR has advanced small object detection, it faces limitations due to its reliance on CNNs for feature extraction, which restricts global context understanding and results in suboptimal feature representation. Additionally, it struggles with detecting small objects that occupy only a few pixels due to significant size disparities. To overcome these challenges, we propose the Context-Aware Enhanced Feature Refinement Deformable DETR, an improved Deformable DETR network. Our approach introduces Mask Attention in the backbone to improve feature extraction while effectively suppressing irrelevant background information. Furthermore, we propose a Context-Aware Enhanced Feature Refinement Encoder to address the issue of small objects with limited pixel representation. Experimental results demonstrate that our method outperforms the baseline, achieving a 2.1% improvement in mAP.
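The core mechanism behind Mask Attention, suppressing background positions so attention concentrates on foreground features, can be sketched generically (hypothetical shapes and scores; the actual module operates on backbone feature maps):

```python
import math

def masked_softmax(scores, mask):
    # mask[i] == 0 marks a background position: its score is driven to -inf
    # so it receives zero attention weight after the softmax.
    masked = [s if m else float("-inf") for s, m in zip(scores, mask)]
    mx = max(masked)
    exps = [math.exp(s - mx) if s != float("-inf") else 0.0 for s in masked]
    z = sum(exps)
    return [e / z for e in exps]

# Position 2 has the highest raw score but is masked as background.
weights = masked_softmax([2.0, 1.0, 3.0, 0.5], mask=[1, 1, 0, 1])
```

The masked position contributes nothing, and the remaining weights renormalize over foreground positions only, which is how irrelevant background is kept from diluting small-object features.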

Citations: 0
Depth-aware unpaired image-to-image translation for autonomous driving test scenario generation using a dual-branch GAN.
IF 2.6 Zone 4, Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-30 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1603964
Donghao Shi, Chenxin Zhao, Cunbin Zhao, Zhou Fang, Chonghao Yu, Jian Li, Minjie Feng

Reliable visual perception is essential for autonomous driving test scenario generation, yet adverse weather and lighting variations pose significant challenges to simulation robustness and generalization. Traditional unpaired image-to-image translation methods primarily rely on RGB-based transformations, often resulting in geometric distortions and loss of structural consistency, which can negatively impact the realism and accuracy of generated test scenarios. To address these limitations, we propose a Depth-Aware Dual-Branch Generative Adversarial Network (DAB-GAN) that explicitly incorporates depth information to preserve spatial structures during scenario generation. The dual-branch generator processes both RGB and depth inputs, ensuring geometric fidelity, while a self-attention mechanism enhances spatial dependencies and local detail refinement. This enables the creation of realistic and structure-preserving test environments that are crucial for evaluating autonomous driving perception systems, especially under adverse weather conditions. Experimental results demonstrate that DAB-GAN outperforms existing unpaired image-to-image translation methods, achieving superior visual fidelity and maintaining depth-aware structural integrity. This approach provides a robust framework for generating diverse and challenging test scenarios, enhancing the development and validation of autonomous driving systems under various real-world conditions.
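The abstract does not detail DAB-GAN's generator, so purely as an illustration of the dual-branch idea — separate RGB and depth branches whose per-pixel features are concatenated and fused — here is a toy NumPy sketch. All layer shapes and weights are invented for the example; a real generator would use convolutional blocks.

```python
import numpy as np

rng = np.random.default_rng(1)

def branch(x, w):
    """Toy per-pixel feature extractor: linear map followed by ReLU."""
    return np.maximum(x @ w, 0.0)

H, W = 4, 4
rgb = rng.random((H, W, 3))     # RGB input image
depth = rng.random((H, W, 1))   # spatially aligned depth map

w_rgb = rng.normal(size=(3, 16))    # RGB-branch weights
w_depth = rng.normal(size=(1, 16))  # depth-branch weights
w_fuse = rng.normal(size=(32, 8))   # fusion weights over concatenated features

f_rgb = branch(rgb, w_rgb)       # (H, W, 16) appearance features
f_depth = branch(depth, w_depth) # (H, W, 16) geometry features
fused = np.concatenate([f_rgb, f_depth], axis=-1) @ w_fuse  # (H, W, 8)
print(fused.shape)  # (4, 4, 8)
```

Keeping depth in its own branch lets geometric structure survive the translation instead of being entangled with appearance from the start.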

Citations: 0
Gait analysis system for assessing abnormal patterns in individuals with hemiparetic stroke during robot-assisted gait training: a criterion-related validity study in healthy adults.
IF 2.6 Zone 4, Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-21 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1558009
Issei Nakashima, Daisuke Imoto, Satoshi Hirano, Hitoshi Konosu, Yohei Otaka

Introduction: Gait robots have the potential to analyze gait characteristics during gait training using mounted sensors, in addition to providing robotic assistance of the individual's movements. However, no systems have been proposed to analyze gait performance during robot-assisted gait training. Our newly developed gait robot, "Welwalk WW-2000 (WW-2000)," is equipped with a gait analysis system to analyze abnormal gait patterns during robot-assisted gait training. We previously investigated the validity of the index values for nine abnormal gait patterns. Here, we proposed new index values for four abnormal gait patterns: anterior trunk tilt, excessive trunk shift over the affected side, excessive knee joint flexion, and swing difficulty. We investigated the criterion validity of the WW-2000 gait analysis system in healthy adults for these new index values.

Methods: Twelve healthy participants simulated four abnormal gait patterns manifested in individuals with hemiparetic stroke while wearing the robot. Each participant was instructed to perform 16 gait trials, with four grades of severity for each of the four abnormal gait patterns. Twenty strides were recorded for each gait trial using a gait analysis system in the WW-2000 and video cameras. Abnormal gait patterns were assessed using the two parameters: the index values calculated for each stride from the WW-2000 gait analysis system, and assessor's severity scores for each stride. The correlation of the index values between the two methods was evaluated using the Spearman rank correlation coefficient for each gait pattern in each participant.
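The Spearman rank correlation used to relate the system's index values to the assessor's severity scores is the Pearson correlation computed on ranks (with average ranks for ties). A minimal sketch with hypothetical per-stride numbers — in practice `scipy.stats.spearmanr` gives the same result:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Tied values receive the average of their ranks, matching the
    conventional definition.
    """
    def ranks(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)
        for val in np.unique(a):      # average ranks over ties
            idx = a == val
            r[idx] = r[idx].mean()
        return r

    rx = ranks(np.asarray(x, dtype=float))
    ry = ranks(np.asarray(y, dtype=float))
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical per-stride data: system index values vs. assessor severity scores
index_vals = [0.12, 0.30, 0.28, 0.55, 0.61, 0.90]
severity = [0, 1, 1, 2, 2, 3]
print(round(spearman_rho(index_vals, severity), 3))  # ≈ 0.971
```

Because only ranks matter, the coefficient is insensitive to the index values' scale, which is convenient when comparing a sensor-derived measure against an ordinal severity grading.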

Results: The median (minimum to maximum) values of Spearman rank correlation coefficient among the 12 participants between the index value calculated using the WW-2000 gait analysis system and the assessor's severity scores for anterior trunk tilt, excessive trunk shifts over the affected side, excessive knee joint flexion, and swing difficulty were 0.892 (0.749-0.969), 0.859 (0.439-0.923), 0.920 (0.738-0.969), and 0.681 (0.391-0.889), respectively.

Discussion: The WW-2000 gait analysis system captured, with high validity, four new abnormal gait patterns observed in individuals with hemiparetic stroke, in addition to the nine previously validated abnormal gait patterns. Assessing abnormal gait patterns is important, as improving them contributes to stroke rehabilitation.

Clinical trial registration: https://jrct.niph.go.jp, identifier jRCT 042190109.

Citations: 0