
Frontiers in Neurorobotics: Latest Publications

Brain-inspired semantic data augmentation for multi-style images
IF 3.1 | CAS Q4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-04 | DOI: 10.3389/fnbot.2024.1382406
Wei Wang, Zhaowei Shang, Chengxing Li

Data augmentation is an effective technique for automatically expanding training data in deep learning. Brain-inspired methods are approaches that draw inspiration from the functionality and structure of the human brain and apply these mechanisms and principles to artificial intelligence and computer science. When there is a large style difference between training data and testing data, common data augmentation methods cannot effectively enhance the generalization performance of the deep model. To solve this problem, we improve modeling Domain Shifts with Uncertainty (DSU) and propose a new brain-inspired computer vision image data augmentation method that consists of two key components, namely, using Robust statistics and controlling the Coefficient of variance for DSU (RCDSU) and Feature Data Augmentation (FeatureDA). RCDSU calculates feature statistics (mean and standard deviation) with robust statistics to weaken the influence of outliers, making the statistics close to the real values and improving the robustness of deep learning models. By controlling the coefficient of variance, RCDSU makes the feature statistics shift with semantic preservation and increases the shift range. FeatureDA controls the coefficient of variance similarly to generate augmented features with unchanged semantics and to increase the coverage of the augmented features. RCDSU and FeatureDA are proposed to perform style transfer and content transfer in the feature space, improving the generalization ability of the model at the style and content levels, respectively. On the Photo, Art Painting, Cartoon, and Sketch (PACS) multi-style classification task, RCDSU plus FeatureDA achieves competitive accuracy. After adding Gaussian noise to the PACS dataset, RCDSU plus FeatureDA shows strong robustness against outliers. FeatureDA achieves excellent results on the CIFAR-100 image classification task. RCDSU plus FeatureDA can be applied as a novel brain-inspired semantic data augmentation method with implicit robot automation, suitable for datasets with large style differences between training and testing data.
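The abstract does not spell out the formulation, but since RCDSU builds on DSU-style feature-statistics perturbation, a minimal NumPy sketch of that mechanism is given below; the median/MAD estimators, the cv_cap bound, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def robust_channel_stats(feat, eps=1e-6):
    # feat: (N, C, H, W). Median/MAD are one plausible choice of robust
    # statistics; the abstract does not fix the exact estimators.
    mu = np.median(feat, axis=(2, 3), keepdims=True)
    mad = np.median(np.abs(feat - mu), axis=(2, 3), keepdims=True)
    sigma = 1.4826 * mad + eps      # rescale MAD to a std-comparable value
    return mu, sigma

def dsu_style_augment(feat, cv_cap=0.5, rng=None):
    """Resample channel-wise feature statistics from a Gaussian whose spread
    is capped through a coefficient-of-variation bound (cv_cap, assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    mu, sigma = robust_channel_stats(feat)              # (N, C, 1, 1) each
    # Batch-level uncertainty of the statistics, as in the original DSU.
    unc_mu = mu.std(axis=0, keepdims=True)
    unc_sigma = sigma.std(axis=0, keepdims=True)
    # Keep the relative spread of the sampled shift below cv_cap.
    unc_mu = np.minimum(unc_mu, cv_cap * np.abs(mu).mean(axis=0, keepdims=True))
    unc_sigma = np.minimum(unc_sigma, cv_cap * sigma.mean(axis=0, keepdims=True))
    new_mu = mu + rng.standard_normal(mu.shape) * unc_mu
    new_sigma = np.abs(sigma + rng.standard_normal(sigma.shape) * unc_sigma)
    # Re-normalize with the original statistics, re-style with the sampled ones.
    return (feat - mu) / sigma * new_sigma + new_mu
```

Applied to an intermediate feature map during training, such a perturbation changes style-like statistics while leaving the normalized content untouched, which is the semantic-preservation property the abstract emphasizes.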

Citations: 0
Multi-channel high-order network representation learning research
IF 3.1 | CAS Q4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-29 | DOI: 10.3389/fnbot.2024.1340462
Zhonglin Ye, Yanlong Tang, Haixing Zhao, Zhaoyang Wang, Ying Ji
The existing network representation learning algorithms mainly model the relationship between network nodes based on the structural features of the network, or use text features, hierarchical features, and other external attributes to realize joint network representation learning. Capturing global features of the network allows the obtained node vectors to retain more comprehensive feature information during training, thereby enhancing the quality of the embeddings. In order to preserve the global structural features of the network in the training results, we employed a multi-channel learning approach to perform high-order feature modeling on the network. We propose a novel algorithm for multi-channel high-order network representation learning, referred to as the Multi-Channel High-Order Network Representation (MHNR) algorithm. This algorithm initially constructs high-order network features from the original network structure, thereby transforming the single-channel network representation learning process into a multi-channel high-order network representation learning process. Then, for each single-channel network representation learning process, a novel graph assimilation mechanism is introduced in the algorithm, so as to realize high-order network structure modeling within single-channel network representation learning. Finally, the algorithm integrates the multi-channel and single-channel mechanisms for joint modeling of the high-order network structure, realizing efficient use of network structure features and sufficient modeling. Experimental results show that the node classification performance of the proposed MHNR algorithm reaches a good level on the Citeseer, Cora, and DBLP datasets, and is better than that of the comparison algorithms used in this paper. In addition, when the vector length is optimized, the average node classification accuracy of the proposed algorithm is up to 12.24% higher than that of the DeepWalk algorithm. Therefore, the node classification performance of the proposed algorithm can reach the current optimal level based only on the structural features of the network, without supplementary modeling of external features.
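The abstract describes building multi-channel high-order features from the original structure before per-channel representation learning; a generic sketch of that first step (not MHNR's exact construction, and with illustrative names) could look like this:

```python
import numpy as np

def high_order_channels(adj, max_order=3):
    """One proximity matrix per order, each treated as a separate channel."""
    adj = np.asarray(adj, dtype=float)
    transition = adj / (adj.sum(axis=1, keepdims=True) + 1e-12)  # row-normalized
    channels, power = [], np.eye(adj.shape[0])
    for _ in range(max_order):
        power = power @ transition      # k-step transition probabilities
        channels.append(power.copy())
    return channels                     # list of (n, n) matrices

# Each channel could then be embedded separately (e.g., by a skip-gram style
# learner or truncated SVD) and the per-channel embeddings fused into the
# final node representation.
```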
Citations: 0
Deep learning-based control framework for dynamic contact processes in humanoid grasping
IF 3.1 | CAS Q4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-28 | DOI: 10.3389/fnbot.2024.1349752
Shaowen Cheng, Yongbin Jin, Hongtao Wang
Humanoid grasping is a critical ability for an anthropomorphic hand and plays a significant role in the development of humanoid robots. In this article, we present a deep learning-based control framework for humanoid grasping, incorporating the dynamic contact process among the anthropomorphic hand, the object, and the environment. This method efficiently eliminates the constraints imposed by inaccessible grasping points on both the contact surface of the object and the table surface. To mimic human-like grasping movements, an underactuated anthropomorphic hand is utilized, which is designed based on human hand data. The utilization of hand gestures, rather than controlling each motor separately, significantly decreases the control dimensionality. Additionally, a deep learning framework is used to select gestures and grasp actions. Our methodology, proven both in simulation and on a real robot, exceeds the performance of static analysis-based methods, as measured by the standard grasp metric Q1. It expands the range of objects the system can handle, effectively grasping thin items such as cards on tables, a task beyond the capabilities of previous methodologies.
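The abstract reports results in terms of the standard grasp metric Q1; as background, one common way to compute a Q1-style score (the radius of the largest origin-centered ball inside the convex hull of the contact wrenches) is sketched below, assuming SciPy is available. This is generic background, not the paper's evaluation code.

```python
import numpy as np
from scipy.spatial import ConvexHull

def grasp_quality_q1(wrenches):
    """wrenches: (k, d) array of contact wrenches spanning the wrench space."""
    hull = ConvexHull(np.asarray(wrenches, dtype=float))
    # hull.equations rows are [unit normal, offset]; interior points satisfy
    # normal @ x + offset <= 0.
    offsets = hull.equations[:, -1]
    if np.any(offsets > 0):            # origin outside the hull: no force closure
        return 0.0
    return float(np.min(-offsets))     # distance from the origin to the nearest facet
```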
Citations: 0
HiDeS: a higher-order-derivative-supervised neural ordinary differential equation for multi-robot systems and opinion dynamics
IF 3.1 | CAS Q4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-26 | DOI: 10.3389/fnbot.2024.1382305
Meng Li, Wenyu Bian, Liangxiong Chen, Mei Liu

This paper addresses the limitations of current neural ordinary differential equations (NODEs) in modeling and predicting complex dynamics by introducing a novel framework called higher-order-derivative-supervised (HiDeS) NODE. This method extends traditional NODE frameworks by incorporating higher-order derivatives and their interactions into the modeling process, thereby enabling the capture of intricate system behaviors. In addition, the HiDeS NODE employs both the state vector and its higher-order derivatives as supervised signals, which is different from conventional NODEs that utilize only the state vector as a supervised signal. This approach is designed to enhance the predicting capability of NODEs. Through extensive experiments in the complex fields of multi-robot systems and opinion dynamics, the HiDeS NODE demonstrates improved modeling and predicting capabilities over existing models. This research not only proposes an expressive and predictive framework for dynamic systems but also marks the first application of NODEs to the fields of multi-robot systems and opinion dynamics, suggesting broad potential for future interdisciplinary work. The code is available at https://github.com/MengLi-Thea/HiDeS-A-Higher-Order-Derivative-Supervised-Neural-Ordinary-Differential-Equation.
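The authors' implementation is at the linked repository; the abstract does not give the training objective. One way to read "the state vector and its higher-order derivatives as supervised signals" is a loss that penalizes errors in both the first and second time derivatives. The PyTorch sketch below uses finite differences and an assumed two-layer vector field, purely as an illustration.

```python
import torch

class ODEFunc(torch.nn.Module):
    """Learned vector field f_theta(x) for dx/dt = f_theta(x)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim))

    def forward(self, x):
        return self.net(x)

def higher_order_supervised_loss(f, traj, dt):
    # traj: (T, dim) states sampled uniformly with step dt.
    x = traj[:-1]
    dx_obs = (traj[1:] - traj[:-1]) / dt          # observed first derivative
    dx_pred = f(x)
    d2x_obs = (dx_obs[1:] - dx_obs[:-1]) / dt     # observed second derivative
    d2x_pred = (dx_pred[1:] - dx_pred[:-1]) / dt  # derivative of the predicted field
    return torch.mean((dx_pred - dx_obs) ** 2) + torch.mean((d2x_pred - d2x_obs) ** 2)
```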

Citations: 0
A data-driven acceleration-level scheme for image-based visual servoing of manipulators with unknown structure
IF 3.1 | CAS Q4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-22 | DOI: 10.3389/fnbot.2024.1380430
Liuyi Wen, Zhengtai Xie

Research on acceleration-level visual servoing of manipulators is crucial yet insufficient, which restricts the potential application range of visual servoing. To address this issue, this paper proposes a quadratic programming-based acceleration-level image-based visual servoing (AIVS) scheme, which takes joint constraints into account. In addition, to handle unknown structural information in visual servoing systems, a data-driven learning algorithm is proposed to facilitate estimating that information. Building upon this foundation, a data-driven acceleration-level image-based visual servoing (DAIVS) scheme is proposed, integrating learning and control capabilities. Subsequently, a recurrent neural network (RNN) is developed to tackle the DAIVS scheme, followed by theoretical analyses substantiating its stability. Afterwards, simulations and experiments on a Franka Emika Panda manipulator with an eye-in-hand structure, together with comparisons among existing methods, are provided. The obtained results demonstrate the feasibility and practicality of the proposed schemes and highlight the superior learning and control ability of the proposed RNN. This method is particularly well-suited for visual servoing applications of manipulators with unknown structure.
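For orientation only, a generic acceleration-level IBVS step can be posed as a small QP over joint accelerations; the sketch below uses cvxpy with an off-the-shelf solver rather than the paper's recurrent neural network, and all symbols (the image Jacobian J, the Jdot*qdot term, the bounds) are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

def aivs_step(J, e_dd_desired, Jdot_qdot, qdd_min, qdd_max):
    """One acceleration-level IBVS step: track a desired image-feature
    acceleration subject to joint-acceleration limits."""
    qdd = cp.Variable(J.shape[1])
    # Image-feature acceleration model: e_ddot = J * qddot + Jdot * qdot.
    cost = cp.sum_squares(J @ qdd + Jdot_qdot - e_dd_desired)
    constraints = [qdd >= qdd_min, qdd <= qdd_max]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return qdd.value

# e.g. aivs_step(np.random.randn(2, 7), np.zeros(2), np.zeros(2), -5.0, 5.0)
```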

Citations: 0
Editorial: Sensing and control for efficient human-robot collaboration.
IF 3.1 | CAS Q4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-20 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1370415
Jing Luo, Chao Zeng, Zhenyu Lu, Wen Qi
Citations: 0
Assessment and analysis of accents in air traffic control speech: a fusion of deep learning and information theory
IF 3.1 | CAS Q4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-19 | DOI: 10.3389/fnbot.2024.1360094
Weijun Pan, Jian Zhang, Yumei Zhang, Peiyuan Jiang, Shuai Han
Introduction

Enhancing the generalization and reliability of speech recognition models in the field of air traffic control (ATC) is a challenging task. This is due to the limited storage, difficulty in acquisition, and high labeling costs of ATC speech data, which may result in data sample bias and class imbalance, leading to uncertainty and inaccuracy in speech recognition results. This study investigates a method for assessing the quality of ATC speech based on accents. Different combinations of data quality categories are selected according to the requirements of different model application scenarios to address the aforementioned issues effectively.

Methods

The impact of accents on the performance of speech recognition models is analyzed, and a fusion feature phoneme recognition model based on prior text information is constructed to identify phonemes of speech uttered by speakers. This model includes an audio encoding module, a prior text encoding module, a feature fusion module, and fully connected layers. The model takes speech and its corresponding prior text as input and outputs a predicted phoneme sequence of the speech. The model recognizes accented speech as phonemes that do not match the transcribed phoneme sequence of the actual speech text and quantitatively evaluates the accents in ATC communication by calculating the differences between the recognized phoneme sequence and the transcribed phoneme sequence of the actual speech text. Additionally, different levels of accents are input into different types of speech recognition models to analyze and compare the recognition accuracy of the models.

Result

Experimental results show that, under the same experimental conditions, the highest impact of different levels of accents on speech recognition accuracy in ATC communication is 26.37%.

Discussion

This further demonstrates that accents affect the accuracy of speech recognition models in ATC communication and can be considered as one of the metrics for evaluating the quality of ATC speech.
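The Methods section above quantifies accent by comparing the recognized phoneme sequence with the transcribed phoneme sequence of the actual speech text. The exact metric is not stated here, but a standard choice for such a comparison is a normalized edit distance (phoneme error rate), sketched below with illustrative phoneme symbols.

```python
def phoneme_error_rate(recognized, reference):
    """Levenshtein distance between phoneme sequences, normalized by the
    reference length."""
    n, m = len(reference), len(recognized)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == recognized[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[n][m] / max(n, 1)

# e.g. phoneme_error_rate(["t", "uh", "r", "n"], ["t", "er", "n"]) == 2 / 3
```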

Citations: 0
Keeping social distance in a classroom while interacting via a telepresence robot: a pilot study
IF 3.1 | CAS Q4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-14 | DOI: 10.3389/fnbot.2024.1339000
Kristel Marmor, Janika Leoste, Mati Heidmets, Katrin Kangur, Martin Rebane, Jaanus Pöial, Tiina Kasuk
Introduction

The use of various telecommunication tools has grown significantly. However, many of these tools (e.g., computer-based teleconferencing) are problematic in relaying non-verbal human communication. Telepresence robots (TPRs) are seen as telecommunication tools that can support non-verbal communication.

Methods

In this paper, we examine the usability of TPRs and communication-distance-related behavioral realism in communication situations between physically present persons and a TPR-mediated person. Twenty-four participants, who played out 36 communication situations with TPRs, were observed and interviewed.

Results

The results indicate that TPR-mediated people, especially women, choose shorter than normal communication distances. The type of the robot did not influence the choice of communication distance. The participants perceived the use of TPRs positively as a feasible telecommunication method.

Discussion

When introducing TPRs, situations with greater interpersonal distances require more practice compared to scenarios where a physically present person communicates with a telepresent individual in the audience. In the latter situation, the robot-mediated person could be perceived as "behaviorally realistic" much faster than in the reverse communication situation.
Citations: 0
Design and assessment of a reconfigurable behavioral assistive robot: a pilot study
IF 3.1 | CAS Q4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-14 | DOI: 10.3389/fnbot.2024.1332721
Enming Shi, Wenzhuo Zhi, Wanxin Chen, Yuhang Han, Bi Zhang, Xingang Zhao
Introduction

For patients with functional motor disorders of the lower limbs due to brain damage or accidental injury, restoring the ability to stand and walk plays an important role in clinical rehabilitation. Lower limb exoskeleton robots generally require patients to convert themselves to a standing position for use, while being wearable devices with limited movement distance.

Methods

This paper proposes a reconfigurable behavioral assistive robot that integrates the functions of an exoskeleton robot and an assistive standing wheelchair through a novel mechanism. The new mechanism is based on a four-bar linkage, and through simple and stable conformal transformations, the robot can switch between an exoskeleton state, a sit-to-stand support state, and a wheelchair state. This enables the robot to achieve the functions of assisted walking, assisted standing up, supported standing, and wheelchair mobility, respectively, thereby meeting the daily activity needs of sit-to-stand transitions and gait training. The configuration transformation module controls seamless switching between different configurations through an industrial computer. Experimental protocols have been developed for wearable testing of the robotic prototype not only with healthy subjects but also with simulated hemiplegic patients.

Results

The experimental results indicate that the gait tracking effect during robot-assisted walking is satisfactory, and there are no sudden speed changes during the assisted standing-up process, providing smooth support to the wearer. Meanwhile, the activation of the main force-generating muscles of the legs and the plantar pressure decrease significantly in healthy subjects and simulated hemiplegic patients wearing the robot for assisted walking and assisted standing-up, compared to the situation when the robot is not worn.

Discussion

These experimental findings demonstrate that the reconfigurable behavioral assistive robot prototype of this study is effective, reducing the muscular burden on the wearer during walking and standing up and providing effective support for the subject's body. The experimental results objectively and comprehensively showcase the effectiveness and potential of the reconfigurable behavioral assistive robot in the realms of behavioral assistance and rehabilitation training.
Citations: 0
Human-robot planar co-manipulation of extended objects: data-driven models and control from human-human dyads
IF 3.1 | CAS Q4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-12 | DOI: 10.3389/fnbot.2024.1291694
Erich Mielke, Eric Townsend, David Wingate, John L. Salmon, Marc D. Killpack
Human teams are able to easily perform collaborative manipulation tasks. However, simultaneously manipulating a large extended object for a robot and human is a difficult task due to the inherent ambiguity in the desired motion. Our approach in this paper is to leverage data from human-human dyad experiments to determine motion intent for a physical human-robot co-manipulation task. We do this by showing that the human-human dyad data exhibits distinct torque triggers for a lateral movement. As an alternative intent estimation method, we also develop a deep neural network based on motion data from human-human trials to predict future trajectories based on past object motion. We then show how force and motion data can be used to determine robot control in a human-robot dyad. Finally, we compare human-human dyad performance to the performance of two controllers that we developed for human-robot co-manipulation. We evaluate these controllers in three-degree-of-freedom planar motion where determining if the task involves rotation or translation is ambiguous.
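As a rough illustration of the intent-estimation component described above (a network that maps past object motion to predicted future trajectories), a minimal PyTorch sketch follows; the LSTM architecture, pose dimensionality, and prediction horizon are assumptions, not the authors' model.

```python
import torch

class MotionPredictor(torch.nn.Module):
    """Past planar object poses in, a short horizon of future poses out."""
    def __init__(self, pose_dim=3, hidden=128, horizon=10):
        super().__init__()
        self.horizon, self.pose_dim = horizon, pose_dim
        self.encoder = torch.nn.LSTM(pose_dim, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, horizon * pose_dim)

    def forward(self, past):                # past: (batch, T, pose_dim)
        _, (h, _) = self.encoder(past)      # final hidden state summarizes the motion
        return self.head(h[-1]).view(-1, self.horizon, self.pose_dim)

# e.g. MotionPredictor()(torch.randn(8, 50, 3)).shape == (8, 10, 3)
```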
Citations: 0