
Latest publications in Biomimetic Intelligence and Robotics

Computer vision-based six layered ConvNeural network to recognize sign language for both numeral and alphabet signs
Pub Date : 2023-12-09 DOI: 10.1016/j.birob.2023.100141
Muhammad Aminur Rahaman , Kabiratun Ummi Oyshe , Prothoma Khan Chowdhury , Tanoy Debnath , Anichur Rahman , Md. Saikat Islam Khan

People who have trouble communicating verbally often depend on sign language, which most people find difficult to understand, making interaction with them challenging. A Sign Language Recognition (SLR) system takes an input expression from a hearing- or speech-impaired person and outputs it in the form of text or voice to a hearing person. Existing studies on sign language recognition have drawbacks such as a lack of large datasets and of datasets covering a range of backgrounds, skin tones, and ages. This research focuses on sign language recognition to overcome those limitations. Most importantly, we train on our datasets with our proposed Convolutional Neural Network (CNN) model, “ConvNeural”. Additionally, we develop our own datasets, “BdSL_OPSA22_STATIC1” and “BdSL_OPSA22_STATIC2”, both of which have ambiguous backgrounds. “BdSL_OPSA22_STATIC1” and “BdSL_OPSA22_STATIC2” contain images of Bangla characters and numerals, totaling 24,615 and 8,437 images, respectively. The “ConvNeural” model outperforms the pre-trained models, with accuracies of 98.38% on “BdSL_OPSA22_STATIC1” and 92.78% on “BdSL_OPSA22_STATIC2”. On the “BdSL_OPSA22_STATIC1” dataset, we obtain precision, recall, F1-score, sensitivity, and specificity of 96%, 95%, 95%, 99.31%, and 95.78%, respectively. On the “BdSL_OPSA22_STATIC2” dataset, we achieve precision, recall, F1-score, sensitivity, and specificity of 90%, 88%, 88%, 100%, and 100%, respectively.
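
As a purely illustrative companion to this abstract, the sketch below shows what a six-layer convolutional classifier for static sign images could look like in PyTorch. The layer sizes, 64 × 64 input resolution, and class count are assumptions for the example and do not describe the authors' “ConvNeural” architecture.

```python
# Hypothetical six-layer CNN for static sign images (not the authors' model).
# Assumes 64x64 RGB inputs; the number of classes is a placeholder.
import torch
import torch.nn as nn

class SixLayerSignNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(                                    # four conv layers
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2) # 8 -> 4
        )
        self.classifier = nn.Sequential(                                  # two dense layers
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SixLayerSignNet(num_classes=46)          # class count is a placeholder
logits = model(torch.randn(1, 3, 64, 64))
print(logits.shape)                              # torch.Size([1, 46])
```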

Citations: 0
Image format pipeline and instrument diagram recognition method based on deep learning
Pub Date : 2023-12-08 DOI: 10.1016/j.birob.2023.100142
Guanqun Su , Shuai Zhao , Tao Li , Shengyong Liu , Yaqi Li , Guanglong Zhao , Zhongtao Li

In this study, we propose a recognition method based on deep artificial neural networks to identify the various elements of pipeline and instrumentation diagrams (P&ID) in image format, such as symbols, text, and pipelines. At present, image-format P&IDs are interpreted manually, with a high recognition error rate; automating this process is therefore an important issue in the processing plant industry. The China National Offshore Petrochemical Engineering Co. provided the image set used in this study, which contains 51 P&ID drawings in PDF format. We converted the PDF P&ID drawings to PNG images with an image size of 8410 × 5940. We then annotated the images with labeling software, divided the dataset into training and test sets in a 3:1 ratio, and deployed a deep neural network for recognition. The proposed method consists of three steps. The first step segments the images and recognizes symbols using YOLOv5 + SE. The second step locates text regions using character region awareness for text detection and performs character recognition within those regions using optical character recognition. The third step recognizes pipelines using YOLOv5 + SE. Symbol recognition accuracy was 94.52% with a recall of 93.27%; text positioning accuracy was 97.26% with a recall of 90.27%; character recognition accuracy was 90.03% with a recall of 91.87%; and pipeline identification accuracy was 92.9% with a recall of 90.36%.
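
The abstract couples YOLOv5 with SE attention but gives no implementation details. The following PyTorch sketch of a generic squeeze-and-excitation (SE) channel-attention block, of the kind commonly inserted into detector backbones, is illustrative only; the channel count and reduction ratio are assumed.

```python
# Generic squeeze-and-excitation (SE) block; how it is attached to YOLOv5 in
# the paper is not described, so this is illustrative only.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global average pooling
        self.fc = nn.Sequential(                  # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid()
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                              # channel-wise reweighting

feat = torch.randn(2, 256, 20, 20)                # stand-in backbone feature map
print(SEBlock(256)(feat).shape)                   # torch.Size([2, 256, 20, 20])
```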

Citations: 0
LiDAR-based estimation of bounding box coordinates using Gaussian process regression and particle swarm optimization
Pub Date : 2023-11-27 DOI: 10.1016/j.birob.2023.100140
Vinodha K., E.S. Gopi, Tushar Agnibhoj

Camera-based object tracking systems in a closed environment raise privacy and confidentiality concerns. In this study, light detection and ranging (LiDAR) was applied to track objects in a closed environment, similar to camera-based tracking but with privacy and confidentiality preserved. The primary objective was to demonstrate the efficacy of the proposed technique through carefully designed experiments covering two scenarios. In Scenario I, the study illustrates the capability of the proposed technique to detect the locations of multiple objects positioned on a flat surface by analyzing LiDAR data collected from several locations within the closed environment. Scenario II demonstrates the effectiveness of the proposed technique in detecting multiple objects using LiDAR data obtained from a single, fixed location. Real-time experiments are conducted with human subjects navigating predefined paths: three individuals move within an environment while a LiDAR fixed at the center dynamically tracks and identifies their locations at multiple instants. The results demonstrate that a single, strategically positioned LiDAR can adeptly detect objects moving around it. Furthermore, this study compares various regression techniques for predicting bounding box coordinates. Gaussian process regression (GPR) combined with particle swarm optimization (PSO) for prediction achieves the lowest prediction mean square error of all the regression techniques examined, at 0.01. Tuning the GPR hyperparameters with PSO significantly reduces the regression error. The results pave the way for extension to various real-time applications such as crowd management in malls, surveillance systems, and various Internet of Things scenarios.
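
As a hedged illustration of the GPR-plus-PSO idea described above (not the paper's code or data), the sketch below fits a scikit-learn Gaussian process regressor to synthetic stand-in data and uses a minimal particle swarm to tune the RBF length scale against validation mean square error; all data, bounds, and swarm settings are placeholders.

```python
# GPR with a PSO-tuned RBF length scale on synthetic stand-in data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(200, 2))                # stand-in LiDAR features
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)     # stand-in box coordinate
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

def val_mse(length_scale):
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale), alpha=1e-2,
                                   optimizer=None)   # PSO replaces the built-in tuner
    gpr.fit(X_tr, y_tr)
    return mean_squared_error(y_val, gpr.predict(X_val))

# Minimal particle swarm over the 1-D length-scale search space.
n_particles, iters, lo, hi = 10, 20, 0.1, 10.0
pos = rng.uniform(lo, hi, n_particles)
vel = np.zeros(n_particles)
pbest, pbest_f = pos.copy(), np.array([val_mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]
for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([val_mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]
print(f"best length scale {gbest:.3f}, validation MSE {pbest_f.min():.4f}")
```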

Citations: 0
Computer-controlled ultra high voltage amplifier for dielectric elastomer actuators
Pub Date : 2023-11-23 DOI: 10.1016/j.birob.2023.100139
Ardi Wiranata , Zebing Mao , Yu Kuwajima , Yuya Yamaguchi , Muhammad Akhsin Muflikhun , Hiroki Shigemune , Naoki Hosoya , Shingo Maeda

Soft robotics is a breakthrough technology for supporting human–robot interaction. The soft structure of a soft robot can increase safety during human–robot interaction. One of the most promising soft actuators for soft robotics is the dielectric elastomer actuator (DEA). DEAs operate silently and have an excellent energy density, and their simple structure makes soft actuators easy to fabricate. This simplicity, combined with silent operation and high energy density, makes DEAs attractive to soft robotics researchers. DEA actuation follows the Maxwell pressure principle: the pressure produced depends largely on the applied voltage, so common DEAs require high voltage to actuate. Since the power consumption of DEAs is in the milliwatt range, the current needed to operate them is negligible. Several commercially available DC-DC converters can step voltages up to the kilovolt range, but a reliable converter reaching 2–3 kV can be pricey for each device. This hinders education on soft actuators, especially for laboratories new to soft electric actuators. This paper introduces an entirely do-it-yourself (DIY) ultra-high-voltage amplifier (UHV-Amp) for education in soft robotics. The UHV-Amp can amplify 12 V to a maximum of 4 kV DC. As a demonstration, we used the UHV-Amp to test a single layer of powder-based DEAs. The strategy for building this educational UHV-Amp was to use a Cockcroft–Walton circuit structure to multiply the voltage into the kilovolt range. In its current state, the UHV-Amp can reach approximately 4 kV. We also created a simple platform to control the UHV-Amp from a personal computer. In the near future, we expect this easy control of the UHV-Amp to contribute to education on soft electric actuators.
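
For readers unfamiliar with the Cockcroft–Walton topology mentioned above, a back-of-the-envelope estimate of an n-stage multiplier's DC output is the ideal value 2·n·V_peak minus the usual load-droop term. The numbers in the sketch below (drive amplitude, stage count, capacitance, frequency, load current) are assumptions for illustration and are not the UHV-Amp's design values.

```python
# Rough Cockcroft-Walton output estimate with the textbook droop approximation.
def cw_output(v_peak, stages, i_load, freq, cap):
    ideal = 2 * stages * v_peak                               # no-load DC output
    droop = (i_load / (freq * cap)) * (2 * stages**3 / 3
                                       + stages**2 / 2 - stages / 6)
    return ideal - droop

# Example: 12 V first stepped up to an assumed ~500 V peak drive, then multiplied.
v_out = cw_output(v_peak=500.0, stages=4, i_load=50e-6, freq=20e3, cap=10e-9)
print(f"estimated DC output = {v_out:.0f} V")                 # close to 4 kV
```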

Citations: 0
Aye-aye middle finger kinematic modeling and motion tracking during tap-scanning
Pub Date : 2023-11-14 DOI: 10.1016/j.birob.2023.100134
Nihar Masurkar , Jiming Kang , Hamidreza Nemati , Ehsan Dehghan-Niri

The aye-aye (Daubentonia madagascariensis) is a nocturnal lemur native to the island of Madagascar with a uniquely thin middle finger. Its slender third digit has a remarkably specific adaptation that allows the animal to perform tap-scanning to locate small cavities beneath tree bark and extract wood-boring larvae from them. As an exceptional active acoustic actuator, this finger makes the aye-aye's biological system an attractive model for pioneering Nondestructive Evaluation (NDE) methods and robotic systems. Despite the importance of this finger to the aye-aye's unique foraging behavior and its potential contribution to engineered sensing, little is known about the mechanism and dynamics of this unique digit. This paper applies a motion-tracking approach to the aye-aye's middle finger using simultaneous videographic capture. To mimic the motion, a two-link robot arm model is designed to reproduce the trajectory. Kinematic formulations based on the Lagrangian method are proposed to derive the motion of the middle finger. In addition, a hardware model was developed to simulate the aye-aye's finger motion. To validate the model, different motion states, such as trajectory paths and joint angles, were compared. The simulation results indicate that the kinematics of the model are consistent with the actual finger movement. This model is used to understand the aye-aye's unique tap-scanning process and to pioneer new tap-testing NDE strategies for various inspection applications.
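
The forward kinematics of the two-link planar arm used to reproduce the fingertip trajectory can be sketched as follows; the link lengths and joint angles are placeholders rather than measured aye-aye finger dimensions.

```python
# Planar two-link forward kinematics; dimensions are illustrative only.
import numpy as np

def two_link_fk(theta1, theta2, l1, l2):
    """Return the (x, y) tip position for joint angles theta1, theta2 in radians."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

l1, l2 = 0.03, 0.02                          # assumed link lengths in metres
for t2 in np.linspace(0.0, np.pi / 3, 5):    # sweep the distal joint
    print(two_link_fk(np.pi / 4, t2, l1, l2))
```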

Citations: 0
A framework to develop and test a model-free motion control system for a forestry crane
Pub Date : 2023-11-10 DOI: 10.1016/j.birob.2023.100133
Pedro La Hera , Omar Mendoza-Trejo , Håkan Lideskog , Daniel Ortíz Morales

This article presents our method for developing and testing a motion control system for a heavy-duty, hydraulically actuated manipulator that is part of a newly developed prototype of a fully autonomous unmanned forestry machine. The control algorithm is based on functional analysis and differential algebra, following the concepts of a new approach known as model-free intelligent PID control (iPID). Because testing this form of control directly on real hardware can be unsafe, our main contribution is a framework for developing and testing control software. The framework incorporates a desktop-size mockup crane, equipped with hardware comparable to the real machine, which we designed and manufactured using 3D printing. This downscaled mechatronic system allows control software to be safely tested on real-time hardware directly at our desks before testing on the real machine. The results demonstrate that this development framework is useful for safely testing control software for heavy-duty systems, and it helped us present the first experiments with the world's first unmanned forestry machine capable of performing fully autonomous forestry tasks.
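
As a rough illustration of the model-free iPID idea (not the authors' controller), the sketch below runs a first-order “intelligent proportional” loop built on the ultra-local model dy/dt ≈ F + α·u, with the unknown term F estimated from the latest sample; the simulated plant, α, and gain are invented for the example.

```python
# Minimal "intelligent proportional" (iP) loop on an invented first-order plant.
import numpy as np

dt, alpha, kp = 0.01, 2.0, 5.0
y, y_prev, u_prev = 0.0, 0.0, 0.0
target = 1.0                                   # constant position reference

for k in range(500):
    F_hat = (y - y_prev) / dt - alpha * u_prev # estimate the unknown dynamics F
    e = target - y
    u = (-F_hat + kp * e) / alpha              # iP law (reference derivative = 0)
    y_prev = y
    # Stand-in plant: dy/dt = -0.5*y + 1.5*u + slow disturbance
    y += dt * (-0.5 * y + 1.5 * u + 0.2 * np.sin(0.05 * k))
    u_prev = u

print(f"position after 5 s: {y:.3f} (target {target})")
```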

Citations: 0
Heterogeneous multi-agent task allocation based on graph neural network ant colony optimization algorithms
Pub Date : 2023-10-31 DOI: 10.20517/ir.2023.33
Ziyuan Ma, Huajun Gong
Heterogeneous multi-agent task allocation is a key optimization problem widely used in fields such as drone swarms and multi-robot coordination. This paper proposes a new paradigm that combines graph neural networks and ant colony optimization to solve the assignment problem for heterogeneous multi-agent systems, introducing a Graph-based Heterogeneous Neural Network Ant Colony Optimization (GHNN-ACO) algorithm for heterogeneous multi-agent scenarios. The multi-agent system is composed of unmanned aerial vehicles, unmanned ships, and unmanned vehicles that work together to respond effectively to emergencies. The method uses graph neural networks to learn the relationships between tasks and agents, forming a graph representation that is then integrated into the ant colony optimization algorithm to guide the ants' search process. Firstly, the algorithm constructs heterogeneous graph data containing different types of agents and their relationships, and uses it to classify agent nodes and predict links between them. Secondly, the GHNN-ACO algorithm performs effectively in heterogeneous multi-agent scenarios, providing an effective solution for node classification and link prediction tasks in intelligent agent systems. Thirdly, the algorithm achieves an accuracy of 95.31% in assigning multiple tasks to multiple agents. It holds potential for application in emergency response and provides a new idea for multi-agent system cooperation.
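
To make the coupling of learned scores and ant colony search concrete, the toy sketch below runs an ant-colony assignment in which the heuristic matrix stands in for task–agent affinity scores that a graph neural network might produce; the problem size, pheromone parameters, and random “learned” scores are placeholders, not the GHNN-ACO settings.

```python
# Toy ant-colony assignment guided by a stand-in "learned" affinity matrix.
import numpy as np

rng = np.random.default_rng(1)
n_agents = n_tasks = 6
eta = rng.uniform(0.1, 1.0, (n_agents, n_tasks))     # stand-in GNN affinity scores
tau = np.ones((n_agents, n_tasks))                   # pheromone trails
alpha, beta, rho, n_ants, iters = 1.0, 2.0, 0.1, 20, 50
best_assign, best_value = None, -np.inf

for _ in range(iters):
    for _ant in range(n_ants):
        available = list(range(n_tasks))
        assign = np.empty(n_agents, dtype=int)
        for i in range(n_agents):                    # build one full assignment
            w = (tau[i, available] ** alpha) * (eta[i, available] ** beta)
            j = rng.choice(available, p=w / w.sum())
            assign[i] = j
            available.remove(j)
        value = eta[np.arange(n_agents), assign].sum()
        if value > best_value:
            best_assign, best_value = assign.copy(), value
    tau *= (1 - rho)                                      # pheromone evaporation
    tau[np.arange(n_agents), best_assign] += best_value   # reinforce the best assignment

print("best assignment:", best_assign, "total score:", round(best_value, 3))
```
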
Citations: 0
FPC-BTB detection and positioning system based on optimized YOLOv5
Pub Date : 2023-10-31 DOI: 10.1016/j.birob.2023.100132
Changyu Jing , Tianyu Fu , Fengming Li , Ligang Jin , Rui Song

To address the visual positioning of board-to-board (BTB) jacks during the automatic assembly of flexible printed circuits (FPC) in mobile phones, an FPC-BTB jack detection method based on an optimized You Only Look Once, version 5 (YOLOv5) deep learning algorithm is proposed in this study. An FPC-BTB jack real-time detection and positioning system was developed for real-time target detection and synchronized pose output of the BTB jack. On that basis, a visual positioning experimental platform integrating a UR5e manipulator arm and a Hikvision industrial camera was built for BTB jack detection and positioning experiments. The experimental results show that the developed system achieved a success rate of 99.677% in BTB target recognition and positioning, with an average detection accuracy of 99.341%, an average confidence of 91% for detected targets, a detection and positioning speed of 31.25 frames per second, and a positioning deviation of less than 0.93 mm, which meets the practical requirements of the FPC assembly process.
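
The abstract reports pose output to the manipulator but not the geometry used. One common way to turn a detected bounding box into a metric position, assuming a calibrated pinhole camera and a known working distance to the jack plane, is back-projection as in the hypothetical sketch below; the intrinsics and depth are made-up values.

```python
# Hypothetical pixel-to-camera-frame conversion for a detected box centre.
import numpy as np

fx, fy, cx, cy = 2400.0, 2400.0, 960.0, 600.0    # assumed intrinsics (pixels)
Z = 0.25                                          # assumed working distance (m)

def box_centre_to_camera_xyz(box):
    """box = (x1, y1, x2, y2) in pixels -> (X, Y, Z) in metres, camera frame."""
    u = 0.5 * (box[0] + box[2])
    v = 0.5 * (box[1] + box[3])
    return np.array([(u - cx) * Z / fx, (v - cy) * Z / fy, Z])

print(box_centre_to_camera_xyz((940.0, 580.0, 990.0, 630.0)))
```
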

Citations: 0
Path planning with obstacle avoidance for soft robots based on improved particle swarm optimization algorithm
Pub Date : 2023-10-29 DOI: 10.20517/ir.2023.31
Hongwei Liu, Yang Jiang, Manlu Liu, Xinbin Zhang, Jianwen Huo, Haoxiang Su
Soft-bodied robots have the advantages of high flexibility and many degrees of freedom, and they have promising applications in exploring complex unstructured environments. When planning motions for a soft robot in a cluttered space, kinematic coupling exists between the soft arm segments, the inverse kinematics may have multiple solutions or none at all, and obstacle-avoidance control is difficult to achieve, among other problems. In this paper, we use the segment-wise constant-curvature assumption to derive the forward and inverse kinematic relationships and design a tip self-growth algorithm that reduces the difficulty of solving the inverse-kinematics parameters and avoids kinematic coupling. Finally, an improved particle swarm optimization algorithm is used to optimize the paths, further accelerating convergence and improving solution accuracy. Simulation results show that the method can successfully move the soft robot through complex spaces with high computational efficiency and high accuracy, verifying the effectiveness of the research.
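
For reference, the single-segment forward kinematics implied by the constant-curvature assumption maps a curvature κ, bending-plane angle φ, and arc length s to a tip position; the sketch below is a generic version of this mapping with illustrative numbers, not the paper's implementation.

```python
# Single-segment piecewise-constant-curvature (PCC) forward kinematics.
import numpy as np

def pcc_tip_position(kappa, phi, s):
    """Tip position of one constant-curvature segment in its base frame."""
    if abs(kappa) < 1e-9:                         # straight-segment limit
        in_plane = np.array([0.0, 0.0, s])
    else:
        in_plane = np.array([(1 - np.cos(kappa * s)) / kappa,
                             0.0,
                             np.sin(kappa * s) / kappa])
    c, si = np.cos(phi), np.sin(phi)
    Rz = np.array([[c, -si, 0.0], [si, c, 0.0], [0.0, 0.0, 1.0]])
    return Rz @ in_plane                          # rotate into the bending plane

print(pcc_tip_position(kappa=5.0, phi=np.pi / 6, s=0.2))   # one 0.2 m segment
```
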
Citations: 0
Deep learning approaches for object recognition in plant diseases: a review
Pub Date : 2023-10-28 DOI: 10.20517/ir.2023.29
Zimo Zhou, Yue Zhang, Zhaohui Gu, Simon X. Yang
Plant diseases pose a significant threat to the economic viability of agriculture and the normal functioning of trees in forests. Accurate detection and identification of plant diseases are crucial for smart agricultural and forestry management. In recent years, the intersection of agriculture and artificial intelligence has become a popular research topic. Researchers have been experimenting with object recognition algorithms, specifically convolutional neural networks, to identify diseases in plant images. The goal is to reduce labor and improve detection efficiency. This article reviews the application of object detection methods for detecting common plant diseases, such as tomato, citrus, maize, and pine trees. It introduces various object detection models, ranging from basic to modern and sophisticated networks, and compares the innovative aspects and drawbacks of commonly used neural network models. Furthermore, the article discusses current challenges in plant disease detection and object detection methods and suggests promising directions for future work in learning-based plant disease detection systems.
Citations: 0