
International Journal of Intelligent Robotics and Applications: Latest Publications

A strong and fast millimeter-sized soft pneumatic actuator based on alternative pole water electrolysis
IF 1.7 Q3 ROBOTICS Pub Date : 2024-01-12 DOI: 10.1007/s41315-023-00307-w
Hadi Kolivand, Azita Souri, Arash Ahmadi

A new soft pneumatic microactuator based on alternative pole water electrolysis has recently been proposed. In these actuators, a water-based electrolyte is electrolyzed under an alternating current, generating hydrogen and oxygen nano- and microbubbles. These bubbles expand the electrolyte, displacing the actuator membrane. Such actuators stand out for their lightweight design, cost-effectiveness, high performance, and versatility across applications. In this paper, a strong and fast millimeter-sized actuator based on alternative pole water electrolysis is proposed. The proposed actuator, its electronic driver circuits, and the measurement systems are implemented, and experiments are conducted to investigate the actuator’s performance under different conditions, including input variables such as voltage, time, temperature, and mass load. Our experimental results and comparisons with other actuators demonstrate that the proposed actuator exhibits favorable properties in terms of response time, output mechanical force, reliability, scalability, and ease of manufacturing. The versatility of this actuator makes it suitable for a wide range of soft robotics applications, including limb movement and manipulation. It also has potential medical applications, such as microrobots navigating narrow body channels for diagnosis, sampling, drug delivery, and surgery.
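
As a rough back-of-the-envelope illustration of the electrolytic actuation principle described above (not taken from the paper), Faraday's law relates the charge passed through the electrolyte to the volume of gas generated, which in turn bounds the achievable membrane displacement. The drive current, pulse duration, and membrane area below are hypothetical values chosen only for the example.

```python
# Rough estimate of gas generation in an electrolytic actuator via Faraday's law.
# All numeric inputs are illustrative assumptions, not values from the paper.

F = 96485.0        # Faraday constant, C/mol
V_M = 24.5e-3      # molar volume of an ideal gas at ~25 degC, m^3/mol

def gas_volume(current_a: float, time_s: float) -> float:
    """Total H2 + O2 volume (m^3) produced by electrolysis of water.

    H2 needs 2 electrons per molecule (n = Q / 2F); O2 needs 4 (n = Q / 4F).
    Ignores bubble dissolution and back-pressure, so this is an upper bound.
    """
    q = current_a * time_s          # charge passed, C
    n_h2 = q / (2 * F)              # mol of hydrogen
    n_o2 = q / (4 * F)              # mol of oxygen
    return (n_h2 + n_o2) * V_M

if __name__ == "__main__":
    current = 10e-3                 # 10 mA drive current (assumed)
    duration = 1.0                  # 1 s actuation pulse (assumed)
    membrane_area = 1e-6            # 1 mm^2 membrane area (assumed)
    vol = gas_volume(current, duration)
    print(f"gas volume: {vol * 1e9:.2f} mm^3")
    print(f"ideal membrane stroke: {vol / membrane_area * 1e3:.2f} mm")
```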


Citations: 0
A review on quadrotor attitude control strategies
IF 1.7 Q3 ROBOTICS Pub Date : 2024-01-10 DOI: 10.1007/s41315-023-00308-9

Abstract

Quadrotors are used increasingly in areas ranging from aerial photography to drug delivery in medical emergencies. Their high maneuverability makes them suitable for missions that humans could not carry out due to physical constraints, and they can operate in inhospitable environments where human health and safety would be compromised. However, quadrotors are highly nonlinear, multivariable systems whose dynamics are strongly coupled, which makes attitude control design a complex task. Furthermore, in practice the controller has to cope with uncertainties and exogenous disturbances, intensifying the difficulty of the control problem. A quadrotor attitude controller must therefore combine high robustness and fast response without compromising global stability. Aiming to gather solutions to this control problem, this article provides a detailed and in-depth discussion of quadrotor attitude control strategies for flight control designers, including angular representation, controller stability, fault tolerance, actuator saturation, and strategies for exogenous disturbance rejection.
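
As a minimal, self-contained sketch of the simplest class of attitude controller the review covers, the snippet below implements a plain PD law on Euler-angle error under a small-angle assumption. The gains, inertia values, and decoupled dynamics are illustrative assumptions, not taken from the article, which surveys far more robust schemes (disturbance rejection, fault tolerance, saturation handling).

```python
import numpy as np

# Minimal PD attitude controller sketch for a quadrotor (small-angle assumption).
# Gains and inertia are illustrative placeholders, not values from the review.

J = np.diag([0.02, 0.02, 0.04])    # body inertia matrix, kg*m^2 (assumed)
KP = np.diag([6.0, 6.0, 3.0])      # proportional gains on roll/pitch/yaw error
KD = np.diag([1.5, 1.5, 0.8])      # derivative gains on body angular rates

def attitude_torque(att: np.ndarray, rate: np.ndarray, att_ref: np.ndarray) -> np.ndarray:
    """Body torques [tau_x, tau_y, tau_z] from attitude error.

    att, att_ref: roll/pitch/yaw in radians; rate: body angular rates in rad/s.
    Feedforward and disturbance-rejection terms are deliberately omitted.
    """
    err = att_ref - att
    return KP @ err - KD @ rate

def step(att, rate, att_ref, dt=0.002):
    """One Euler-integration step of the simplified, decoupled rotational dynamics."""
    tau = attitude_torque(att, rate, att_ref)
    rate = rate + dt * np.linalg.solve(J, tau)   # ignores gyroscopic coupling
    att = att + dt * rate
    return att, rate
```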

Citations: 0
Multi-robot system for inspection of underwater pipelines in shallow waters
IF 1.7 Q3 ROBOTICS Pub Date : 2024-01-10 DOI: 10.1007/s41315-023-00309-8
Sahejad Patel, Fadl Abdellatif, Mohammed Alsheikh, Hassane Trigui, Ali Outa, Ayman Amer, Mohammed Sarraj, Ahmed Al Brahim, Yazeed Alnumay, Amjad Felemban, Ali Alrasheed, Abdulwahab Halawani, Hesham Jifri, Hassan Jaleel, Jeff Shamma

The Shallow Water Inspection & Monitoring Robot (SWIM-R) is designed to quickly and safely inspect oil and gas pipelines in extremely shallow waters. Today, divers clean and inspect pipeline joints, but diving operations are slow in shallow waters because diving support ships cannot access shallow depths. Remotely operated vehicles (ROVs) capable of cleaning and inspection are typically suited to deeper regions and are too large for the smaller boats that navigate shallow areas. To resolve this challenge, two SWIM-R vehicles and a companion Autonomous Surface Vehicle (ASV) were developed as a multi-robot system that minimizes the reliance on divers for pipeline inspection. A unique mission architecture is presented that offers three operating modes depending on the depth: direct control from the shore, relayed control via the ASV, and direct control from a small zodiac. The mission architecture includes two ROVs: a Cleaning SWIM-R fitted with a water-jet nozzle to clean marine growth from the surface to be inspected, and an Inspection SWIM-R fitted with a neutrally buoyant multi-functional robotic arm to inspect the surface and crawling tracks to traverse the seafloor. This multi-robot system was field tested, which proved its efficacy in inspecting oil and gas assets in shallow waters.
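
A minimal sketch of how the depth-dependent mode selection described above might be encoded. The depth and distance thresholds, and the exact mapping from conditions to modes, are assumptions made only for illustration; the paper does not publish such values.

```python
from enum import Enum, auto

class ControlMode(Enum):
    SHORE_DIRECT = auto()   # tethered control directly from the shore
    ASV_RELAY = auto()      # commands relayed through the autonomous surface vehicle
    ZODIAC_DIRECT = auto()  # direct control from a small zodiac

def select_mode(depth_m: float, distance_from_shore_m: float) -> ControlMode:
    """Pick an operating mode from depth and shore distance (thresholds are assumed)."""
    if distance_from_shore_m < 200.0 and depth_m < 3.0:
        return ControlMode.SHORE_DIRECT
    if depth_m < 10.0:
        return ControlMode.ASV_RELAY
    return ControlMode.ZODIAC_DIRECT
```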

Citations: 0
Automated face recognition system for smart attendance application using convolutional neural networks
IF 1.7 Q3 ROBOTICS Pub Date : 2024-01-09 DOI: 10.1007/s41315-023-00310-1
Lakshmi Narayana Thalluri, Kiranmai Babburu, Aravind Kumar Madam, K. V. V. Kumar, G. V. Ganesh, Konari Rajasekhar, Koushik Guha, Md. Baig Mohammad, S. S. Kiran, Addepalli V. S. Y. Narayana Sarma, Vegesna Venkatasiva Naga Yaswanth

In this paper, a touchless automated face recognition system for a smart attendance application was designed using a convolutional neural network (CNN). The touchless smart attendance system is useful for office and college attendance applications and helps restrict the spread of COVID-19-type viruses. The CNN was trained on a dedicated database of 1890 face images with different illumination levels and rotation angles covering 30 target classes. A CNN performance analysis was carried out for 9-layer and 11-layer networks with different activation functions (step, sigmoid, tanh, softmax, and ReLU). An 11-layer CNN with the ReLU activation function achieves an accuracy of 96.2% on the designed face database. The system can detect multiple faces in test images using the Viola-Jones algorithm. Finally, a web application was designed to monitor attendance and generate reports.
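
A minimal sketch of the kind of pipeline the abstract describes: Viola-Jones face detection via OpenCV's bundled Haar cascade, followed by a small CNN classifier with ReLU activations and a softmax head over 30 identity classes. The layer count, filter sizes, and input resolution below are illustrative assumptions, not the authors' exact 11-layer architecture.

```python
import cv2
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 30           # number of enrolled identities (from the abstract)
INPUT_SHAPE = (64, 64, 1)  # input resolution is an assumption

def build_cnn() -> keras.Model:
    """Small CNN face classifier with ReLU activations and a softmax output layer."""
    model = keras.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def detect_faces(gray_image: np.ndarray):
    """Viola-Jones face detection; returns bounding boxes (x, y, w, h)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
```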

Citations: 0
A review of robotic charging for electric vehicles
IF 1.7 Q3 ROBOTICS Pub Date : 2023-12-03 DOI: 10.1007/s41315-023-00306-x
Hendri Maja Saputra, Nur Safwati Mohd Nor, Estiko Rijanto, Mohd Zarhamdy Md Zain, Intan Zaurah Mat Darus, Edwar Yazid

This paper reviews the technical aspects of robotic charging for electric vehicles (EVs), aiming to identify research trends, methods, and challenges. It follows the Systematic Literature Review (SLR) methodology: formulating the research question; searching and collecting articles from databases including Web of Science, Scopus, Dimensions, and Lens; selecting articles; and extracting data. We reviewed articles published from 2012 to 2022 and found that the number of publications increased exponentially. The top five keywords were electric vehicle, robotic, automatic charging, pose estimation, and computer vision. We then conducted an in-depth review from the points of view of autonomous docking, charging-socket detection and pose estimation, plug insertion, and robot manipulators. No article that used a camera, lidar, or laser as the sensor reported successful autonomous docking without position error. Furthermore, we identified two problems with using computer vision for socket pose estimation and plug insertion: low robustness against different socket shapes and lighting conditions, and the inability to monitor excessive plugging force. Using infrared to locate the socket is more robust, but it requires modification of the socket on the vehicle. A few articles used a camera and force/torque sensors to control plug insertion based on different control approaches: model-based control and data-driven machine learning. The challenges were to increase the success rate and shorten the insertion time. Most researchers used commercial 6-DOF robot manipulators, whereas a few designed lower-DOF manipulators. Another research challenge is developing a 4-DOF robot manipulator with compliance that ensures a 100% success rate of plug insertion.
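
A minimal sketch of the camera-based socket pose estimation step that the review identifies as a common building block: given the known 3D geometry of the charging socket and its detected 2D keypoints in the image, a PnP solve recovers the socket pose relative to the camera. The socket keypoint layout and camera intrinsics below are hypothetical, and the keypoint detector itself is assumed to exist upstream.

```python
import numpy as np
import cv2

# Hypothetical 3D model of four fiducial points on the charging socket (meters),
# expressed in the socket's own coordinate frame.
SOCKET_MODEL_POINTS = np.array([
    [-0.03, -0.03, 0.0],
    [ 0.03, -0.03, 0.0],
    [ 0.03,  0.03, 0.0],
    [-0.03,  0.03, 0.0],
], dtype=np.float64)

# Hypothetical pinhole camera intrinsics and zero lens distortion.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
DIST = np.zeros(5)

def estimate_socket_pose(image_points: np.ndarray):
    """Recover the socket pose (rotation vector, translation vector) from
    detected 2D keypoints of shape (4, 2) using OpenCV's PnP solver."""
    ok, rvec, tvec = cv2.solvePnP(SOCKET_MODEL_POINTS,
                                  image_points.astype(np.float64),
                                  K, DIST, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed; check the keypoint detections")
    return rvec, tvec   # pose of the socket frame in the camera frame
```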

Citations: 0
Experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces
IF 1.7 Q3 ROBOTICS Pub Date : 2023-11-25 DOI: 10.1007/s41315-023-00305-y
Md Fahim Shahoriar Titu, S. M. Rezwanul Haque, Rifad Islam, Akram Hossain, Mohammad Abdul Qayum, Riasat Khan

Automation and human-robot collaboration are increasing in modern workplaces such as industrial manufacturing. Humans now rely heavily on advanced robotic devices to perform tasks quickly and accurately, and modern robots equipped with computer vision and artificial intelligence are rapidly gaining attention and popularity. This paper demonstrates how a robot can automatically detect an object’s shape, color, and size using computer vision techniques and act on that information. In this work, a computational model has been developed that distinguishes an object’s shape, size, and color in real time with high accuracy and can then direct a robotic arm to pick a specific object. A dataset of 6558 images of monochromatic objects was developed for the research, covering three colors against a white background and five shapes. The designed detection system achieved 99.8% success in shape detection and 100% success in color and size detection with the OpenCV image processing framework. The prototype robotic system based on a Raspberry Pi 4B achieved 80.7% accuracy for geometric shape detection, and 81.07% and 59.77% accuracy for color recognition and distance measurement, respectively. Moreover, the system guided a robotic arm to pick up an object based on its color and shape with a mean response time of 19 seconds. The idea is to simulate a workplace environment in which a worker asks the robotic system to perform a task on a specific object. Our robotic system can identify the object’s attributes accurately (up to 100%) and performs the task reliably (81%); reliability could be improved further by using a more powerful computing system than the robotic prototype. The article’s contribution is to use a modern computer vision technique to detect and categorize objects with the help of a small private dataset, shortening the training time and enabling the suggested system to adapt to components that may be needed for creating a new industrial product in a shorter period. The source code and images of the collected dataset can be found at: https://github.com/TituShahoriar/cse499B_Hardware_Proposed_System.
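
A minimal sketch of the classical OpenCV pipeline the abstract describes for shape, color, and size detection: HSV thresholding for color, contour extraction and polygon approximation for shape, and contour area as a size proxy. The HSV ranges, noise threshold, and vertex-count-to-shape mapping are illustrative assumptions, not the authors' exact parameters.

```python
import cv2
import numpy as np

# Illustrative HSV ranges for three colors (tune per camera and lighting).
COLOR_RANGES = {
    "red":   ((0, 120, 70),  (10, 255, 255)),
    "green": ((40, 70, 70),  (80, 255, 255)),
    "blue":  ((100, 150, 0), (140, 255, 255)),
}

def classify_shape(contour: np.ndarray) -> str:
    """Classify a contour by the vertex count of its approximating polygon."""
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.04 * peri, True)
    vertices = len(approx)
    if vertices == 3:
        return "triangle"
    if vertices == 4:
        x, y, w, h = cv2.boundingRect(approx)
        return "square" if 0.95 <= w / float(h) <= 1.05 else "rectangle"
    if vertices == 5:
        return "pentagon"
    return "circle"   # many vertices: treat as round

def detect_objects(bgr_image: np.ndarray):
    """Return (color, shape, area_px) for each detected monochromatic object."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    results = []
    for color, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            area = cv2.contourArea(c)
            if area < 500:        # ignore small noise blobs
                continue
            results.append((color, classify_shape(c), area))
    return results
```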

Citations: 0
Survey on learning-based scene extrapolation in robotics
IF 1.7 Q3 ROBOTICS Pub Date : 2023-11-22 DOI: 10.1007/s41315-023-00303-0
Selma Güzel, Sırma Yavuz

Humans’ imagination lets them reason about unseen parts of an environment, and improving this capability in robots would yield better mapping, planning, navigation, and exploration in the fields where robots are deployed, such as the military, disaster response, and industry. The task of completing a partial scene by estimating the unobserved parts from the known information is called scene extrapolation. It improves performance and provides a valid approximation of unseen content even when that content is impossible or hard to obtain due to security, environmental, or other constraints. In this survey paper, studies on learning-based scene extrapolation in robotics are presented and evaluated, taking the efficiency and limitations of the methods into account, to give researchers in this field a general overview of the task and to encourage them to improve on current studies. In addition, methods that use common datasets and metrics are compared. To the best of our knowledge, there is no prior survey on this essential topic, and we hope this survey fills that gap.
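
A minimal sketch of the scene-extrapolation task setup as defined above, using a 2D occupancy grid for concreteness: a learned model sees only the observed region and is trained to predict the masked, unobserved cells. The grid size, the half-scene mask, and the choice of a 2D grid are illustrative assumptions; the surveyed methods operate on a variety of scene representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Full "ground-truth" scene: a 64x64 occupancy grid (1 = occupied), random here for illustration.
full_scene = (rng.random((64, 64)) > 0.8).astype(np.float32)

# Observation mask: the robot has only seen the left half of the scene.
observed_mask = np.zeros_like(full_scene)
observed_mask[:, :32] = 1.0

# Model input: observed occupancy plus the mask channel; target: the unobserved cells.
model_input = np.stack([full_scene * observed_mask, observed_mask], axis=0)  # shape (2, 64, 64)
target = full_scene * (1.0 - observed_mask)

# A learning-based extrapolator f(model_input) -> predicted occupancy would be trained to
# minimize, e.g., a cross-entropy loss evaluated over the unobserved region only.
```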

Citations: 0
Voice controlled humanoid robot
Q3 ROBOTICS Pub Date : 2023-11-14 DOI: 10.1007/s41315-023-00304-z
Bisma Naeem, Wasey Kareem, Saeed-Ul-Hassan, Naureen Naeem, Roha Naeem
{"title":"Voice controlled humanoid robot","authors":"Bisma Naeem, Wasey Kareem, None Saeed-Ul-Hassan, Naureen Naeem, Roha Naeem","doi":"10.1007/s41315-023-00304-z","DOIUrl":"https://doi.org/10.1007/s41315-023-00304-z","url":null,"abstract":"","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":"79 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134900730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
3D LiDAR-based obstacle detection and tracking for autonomous navigation in dynamic environments
Q3 ROBOTICS Pub Date : 2023-11-14 DOI: 10.1007/s41315-023-00302-1
Arindam Saha, Bibhas Chandra Dhara
{"title":"3D LiDAR-based obstacle detection and tracking for autonomous navigation in dynamic environments","authors":"Arindam Saha, Bibhas Chandra Dhara","doi":"10.1007/s41315-023-00302-1","DOIUrl":"https://doi.org/10.1007/s41315-023-00302-1","url":null,"abstract":"","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":"38 38","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134953639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Constraint-free discretized manifold-based path planner
Q3 ROBOTICS Pub Date : 2023-10-14 DOI: 10.1007/s41315-023-00300-3
Sindhu Radhakrishnan, Wail Gueaieb
{"title":"Constraint-free discretized manifold-based path planner","authors":"Sindhu Radhakrishnan, Wail Gueaieb","doi":"10.1007/s41315-023-00300-3","DOIUrl":"https://doi.org/10.1007/s41315-023-00300-3","url":null,"abstract":"","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135803734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0