[This corrects the article DOI: 10.3389/frobt.2024.1337380.].
Introduction: Robotics uptake in the aerospace industry is low, mainly due to the low-volume/high-accuracy production that aerospace manufacturers require. Furthermore, aerospace manufacturing and assembly sites are often unstructured environments not specifically suited for robots to operate in. Methods: This paper introduces a robotic visual inspection system, built from off-the-shelf components, that can inspect the mounting holes for wing slat actuators without fixed-coordinate programming; the part only needs to be left within reach of the robot. Our system sets one of the opposed pairs of mounting holes as a reference (the "datum") and then compares the tilt of every other pair of mounting holes against it. Under the assumption that any deviation in mounting-hole tilt is not systematic but due to normal manufacturing tolerances, our system will either guarantee the correct alignment of all mounting holes or flag the existence of misaligned holes. Results and Discussion: Computer-vision tilt measurements are performed with an error below 0.03°, using custom optimization for the sub-pixel determination of the center and radius of the mounting holes. The error introduced by the robot's motion from the datum to each of the remaining hole pairs is compensated for by moving back to the datum and re-fixing the orientation before moving on to inspect the next hole pair. This error is estimated at approximately 0.05°, bringing the total tilt-error estimate for any mounting hole pair to 0.08° with respect to the datum. This was confirmed by manually measuring the tilt of the hole pairs with a clock gauge on a calibrated table (not used during normal operation).
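The datum-relative comparison described above reduces, geometrically, to measuring the angle between the axis joining a pair of opposed hole centers and the axis of the datum pair. A minimal sketch of that computation, assuming hypothetical sub-pixel hole centers already expressed in a common robot coordinate frame (the coordinates and function names below are illustrative, not from the paper):

```python
import math

def axis_tilt_deg(pair, datum):
    """Angle in degrees between a hole-pair axis and the datum axis.

    Each argument is a pair of (x, y, z) hole centers; the axis is the
    unit vector joining the two opposed mounting-hole centers.
    """
    def unit(a, b):
        v = [b[i] - a[i] for i in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]

    u = unit(*pair)
    w = unit(*datum)
    # Clamp the dot product to guard acos against rounding error.
    dot = max(-1.0, min(1.0, sum(ui * wi for ui, wi in zip(u, w))))
    return math.degrees(math.acos(dot))

# Illustrative values: datum axis along x; test pair deviates by
# 0.14 mm over a 100 mm span, i.e. roughly the 0.08 deg error bound.
datum = ((0.0, 0.0, 0.0), (100.0, 0.0, 0.0))
pair = ((0.0, 0.0, 0.0), (100.0, 0.0, 0.14))
tilt = axis_tilt_deg(pair, datum)
print(f"tilt = {tilt:.3f} deg")
```

With the reported error budget (0.03° vision plus 0.05° motion), a measured tilt at or below 0.08° would be indistinguishable from normal tolerance, while larger values would flag a misaligned pair.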
Introduction: Many countries are facing a shortage of healthcare workers. Furthermore, healthcare workers are experiencing many stressors, resulting in psychological issues, impaired health, and increased intentions to leave the workplace. In recent years, different technologies, such as electronic patient files, have been implemented to lighten the workload of healthcare workers. Robotic solutions are still rather uncommon. To promote acceptance and actual use of robots, their functionalities should correspond to the users' needs.
Method: In the pilot study Care4All-Initial, we developed and field-tested applications for a mobile service robot in a psychosocial, multimodal group therapy for people with dementia. To guide the process and assess possible facilitators and barriers, we conducted a recurring focus group, including people with dementia, therapists, professional caregivers, and researchers from different disciplines, following a user-centered design approach. The focus group suggested and reviewed applications and discussed ethical implications. We recorded the focus group discussions in writing and analyzed them using content analysis.
Results: The focus group discussed 15 different topics regarding ethical concerns, which we used as a framework for the research project. Ethical facilitators were respect for the autonomy of the people with dementia and their proxies regarding participation and data sharing. Furthermore, the robot had to be useful for the therapists and attendees. Ethical barriers were the deception and possible harm of the people with dementia or therapists. The focus group suggested 32 different applications. We implemented 13 applications that centered on the robot interacting with the people with dementia and lightening the therapists' workload. The implemented applications were facilitated by utilizing existing hardware and software and by building on existing applications. Barriers to implementation arose when hardware, software, or applications did not fit the scope of the project.
Discussion: To prevent barriers to robot deployment in group therapy for people with dementia, the robot's applications must be developed sufficiently for flawless and safe use; the use of the robot should not cause irritation or agitation, but rather be meaningful and useful to its users. Sufficient time, money, expertise, and planning are essential to facilitate this development.
In this study, we address the critical need for enhanced situational awareness and victim detection capabilities in Search and Rescue (SAR) operations amidst disasters. Traditional unmanned ground vehicles (UGVs) often struggle in such chaotic environments due to their limited manoeuvrability and the challenge of distinguishing victims from debris. Recognising these gaps, our research introduces a novel technological framework that integrates advanced gesture recognition with cutting-edge deep learning for camera-based victim identification, specifically designed to empower UGVs in disaster scenarios. At the core of our methodology is the development and implementation of the Meerkat Optimization Algorithm-Stacked Convolutional Neural Network-Bi-Long Short Term Memory-Gated Recurrent Unit (MOA-SConv-Bi-LSTM-GRU) model, which sets a new benchmark for hand gesture detection, with accuracy, precision, recall, and F1-score each approximately 0.9866. This model enables intuitive, real-time control of UGVs through hand gestures, allowing for precise navigation in confined and obstacle-ridden spaces, which is vital for effective SAR operations. Furthermore, we leverage the capabilities of the latest YOLOv8 deep learning model, trained on specialised datasets to accurately detect human victims under a wide range of challenging conditions, such as varying occlusions, lighting, and perspectives. Our comprehensive testing in simulated emergency scenarios validates the effectiveness of our integrated approach. The system demonstrated exceptional proficiency in navigating through obstructions and rapidly locating victims, even in environments with visual obstructions such as smoke, clutter, and poor lighting. Our study not only highlights the critical gaps in current SAR response capabilities but also offers a pioneering solution through a synergistic blend of gesture-based control, deep learning, and purpose-built robotics.
The key findings underscore the potential of our integrated technological framework to significantly enhance UGV performance in disaster scenarios, thereby optimising life-saving outcomes when time is of the essence. This research paves the way for future advancements in SAR technology, with the promise of more efficient and reliable rescue operations in the face of disaster.
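The gesture-based control loop described above ultimately maps each recognised gesture label to a UGV motion command, rejecting low-confidence classifications so the vehicle fails safe. A minimal sketch of that mapping stage; the gesture labels, velocity values, and confidence threshold below are illustrative assumptions, not taken from the study:

```python
# Hypothetical mapping from recognised gesture labels to UGV motion
# commands expressed as (linear m/s, angular rad/s) pairs.
GESTURE_COMMANDS = {
    "open_palm": (0.0, 0.0),     # stop
    "point_up": (0.3, 0.0),      # forward
    "point_left": (0.0, 0.5),    # rotate left
    "point_right": (0.0, -0.5),  # rotate right
    "fist": (-0.2, 0.0),         # reverse
}

def gesture_to_command(label, confidence, threshold=0.9):
    """Return a (linear, angular) command for a classified gesture.

    Unknown labels or classifications below the confidence threshold
    halt the UGV rather than risk an unintended motion.
    """
    if confidence < threshold or label not in GESTURE_COMMANDS:
        return (0.0, 0.0)  # fail safe: stop
    return GESTURE_COMMANDS[label]

print(gesture_to_command("point_up", 0.99))  # confident -> forward
print(gesture_to_command("point_up", 0.50))  # uncertain -> stop
```

In a full pipeline, the classifier's output (here assumed to be a label plus a confidence score) would feed this mapping on every frame, and the resulting velocity pair would be published to the UGV's drive controller.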