Multi-objective optimization of flight schedules to maximize constraint tolerance by local search and archive mechanisms
Pub Date: 2025-04-20 DOI: 10.1007/s10015-025-01021-5 (Artificial Life and Robotics 30(2), 289–302)
Tomoki Ishizuka, Akinori Murata, Hiroyuki Sato, Keiki Takadama
To introduce the concept of “constraint tolerance” (i.e., the feasibility of solutions) into the flight scheduling problem, this paper proposes an optimization method that finds feasible flight schedules by optimizing the original objective function while maximizing the constraint tolerance. The proposed method is further improved by integrating local search and archive mechanisms to obtain a wide range of Pareto-optimal solutions with high constraint tolerance. A comparison between the proposed method and conventional methods, with and without an added objective function that maximizes the constraint tolerance, shows the statistical superiority of the proposed method.
{"title":"Multi-objective optimization of flight schedules to maximize constraint tolerance by local search and archive mechanisms","authors":"Tomoki Ishizuka, Akinori Murata, Hiroyuki Sato, Keiki Takadama","doi":"10.1007/s10015-025-01021-5","DOIUrl":"10.1007/s10015-025-01021-5","url":null,"abstract":"<div><p>To introduce the concept of the “constraint tolerance” (i.e., a feasibility of solutions) in the flight scheduling problem, this paper proposes the optimization method that can find the feasible flight schedules by optimizing the original objective function while maximizing the constraint tolerance as much as possible. The proposed method further is improved by integrating it with the local search and archive mechanisms to obtain a wide range of Pareto-optimal solutions with a high constraint tolerance. A comparison between the proposed method and the conventional methods with or without adding a new objective function to maximize the constraint tolerance shows the statistical superiority of the proposed method.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"289 - 302"},"PeriodicalIF":0.8,"publicationDate":"2025-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stable dynamic patterns generated by retrograde model
Pub Date: 2025-04-07 DOI: 10.1007/s10015-025-01017-1 (Artificial Life and Robotics 30(2), 236–244)
Mari Nakamura
A heterogeneous boid is a multi-agent system composed of several types of agents that communicate locally. It forms diverse patterns of agent groups through various interactions. With appropriately tuned interactions, it forms stable patterns of a unified cluster with symmetrical structures that reflect the local interactions. This ensures that these patterns remain stable regardless of the number of agents (i.e., scalability). Prior research introduced the retrograde model, in which two agent types exhibit reverse movement while a third type forms a unified cluster. By tuning the interaction, this model formed stable dynamic patterns. With a large number of agents, however, long-lasting metastable states emerge even under appropriate interactions, making them difficult to distinguish from stable patterns. In this study, by focusing on large-scale structures (cluster shape and agent flow), we reclassified three stable dynamic patterns formed by the retrograde model, removing the metastable states. We also identify a new stable dynamic pattern, named the irregular-oscillating pattern, by focusing on clusters of specific shapes.
{"title":"Stable dynamic patterns generated by retrograde model","authors":"Mari Nakamura","doi":"10.1007/s10015-025-01017-1","DOIUrl":"10.1007/s10015-025-01017-1","url":null,"abstract":"<div><p>A heterogeneous boid is a multi-agent system comprised of several types of agents that communicate locally. It forms diverse patterns of agent groups through various interactions. With appropriately tuned interactions, it forms stable patterns of a unified cluster with symmetrical structures that reflect local interactions. This ensures that these patterns remain stable, regardless of the number of agents (i.e., scalability). Prior research introduced the retrograde model, where two agent types exhibited reverse movement while a third type formed a unified cluster. By tuning the interaction, this model formed stable dynamic patterns. With a large number of agents, even under appropriate interactions, long-lasting metastable states emerge, making it difficult to distinguish them from stable patterns. In this study, by focusing on large-scale structures (cluster shape and agent flow), we reclassified three stable dynamic patterns formed by the retrograde model, removing the metastable states. We identify a new dynamic stable pattern, named as an irregular-oscillating pattern, by focusing on a cluster of specific shapes.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"236 - 244"},"PeriodicalIF":0.8,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards flexible swarms: comparison of flocking models with varying complexity
Pub Date: 2025-04-07 DOI: 10.1007/s10015-025-01016-2 (Artificial Life and Robotics 30(2), 219–226)
Lauritz Keysberg, Naoki Wakamiya
One remarkable feat of biological swarms is their ability to work under very different environmental circumstances and disturbances. They exhibit a flexible kind of robustness, accommodating external events rather than holding rigid positions. Based on the observation that conventionally robust flocking models can be very complex and use information unavailable to biological swarms, we undertook a wide investigation into the properties of existing flocking models such as Boid, Couzin, Vicsek, and Cucker–Smale, to see whether a similar “natural” flexibility could be observed in flocking models of lower complexity. We established a toolset of three metrics that allows a comprehensive evaluation of different flocking models. These metrics measure general model performance, robustness under noise, and a naive complexity of the model itself. Our results show a general trend for divergence between performance and robustness, and the most robust models had medium-to-high complexity. While our results show no clear relation between robustness and low complexity, we discuss examples of robust behavior arising from simple rules.
{"title":"Towards flexible swarms: comparison of flocking models with varying complexity","authors":"Lauritz Keysberg, Naoki Wakamiya","doi":"10.1007/s10015-025-01016-2","DOIUrl":"10.1007/s10015-025-01016-2","url":null,"abstract":"<div><p>One remarkable feat of biological swarms is their ability to work under very different environmental circumstances and disturbances. They exhibit a flexible kind of robustness, which accommodates external events without staying on rigid positions. Based on the observation that conventionally robust flocking models can be very complex and use information unavailable to biological swarm, we undertook a wide investigation into the properties of existing flocking models such as Boid, Couzin, Vicsek, and Cucker–Smale. That is, to see if a similar “natural” flexibility could be observed in flocking models with lower complexity. We established a toolset of three metrics which allows for a comprehensive evaluation of different flocking models. These metrics measure general model performance, robustness under noise, as well as a naive complexity of the model itself. Our results show a general trend for divergence between performance and robustness. The most robust models had a medium–high complexity. While our results show no clear relation between robustness and low complexity, we discuss examples for robust behavior with simple rules.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"219 - 226"},"PeriodicalIF":0.8,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-025-01016-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
6DOF localization with AUKF based on triple RTK-GNSS
Pub Date: 2025-04-04 DOI: 10.1007/s10015-025-01018-0 (Artificial Life and Robotics 30(3), 493–501)
Takahiro Shimizu, Shoichi Maeyama
SLAM (simultaneous localization and mapping) plays a crucial role in autonomous navigation. In a previous study, SLAM based on the AUKF (augmented unscented Kalman filter), called AUKF-SLAM, was proposed, and it was demonstrated that simultaneous estimation of kinematic parameters improves the accuracy of 2D SLAM in an indoor environment. We currently aim to develop 3D AUKF-SLAM for outdoor use, and this paper presents 6DOF localization based on the AUKF as a preliminary step. To extend 2D (3DOF) localization to 6DOF localization, we adopted quaternions for attitude representation. However, estimating each element of the four-dimensional quaternion vector as a state variable is not ideal, because the elements do not vary independently. As a solution to this problem, the idea of representing the attitude error with a three-dimensional parameterization called GRPs (generalized Rodrigues parameters) was proposed. In addition, it was reported that simultaneously estimating the attitude error represented as GRPs together with the states not represented as errors is effective for estimating the motion of spacecraft. We therefore applied this method to wheeled mobile robots to address the problem and realize 6DOF localization based on the AUKF. We implemented the system on ROS (Robot Operating System) and conducted experiments in simulated and real environments. The results demonstrate that the system can perform 6DOF localization and estimate the wheel radius simultaneously.
{"title":"6DOF localization with AUKF based on triple RTK-GNSS","authors":"Takahiro Shimizu, Shoichi Maeyama","doi":"10.1007/s10015-025-01018-0","DOIUrl":"10.1007/s10015-025-01018-0","url":null,"abstract":"<div><p>SLAM (simultaneous localization and mapping) plays a crucial role in autonomous navigation. In a previous study, SLAM based on AUKF (augmented unscented Kalman filter), called AUKF-SLAM, was proposed. This study demonstrated that simultaneous estimation of kinematic parameters improves the accuracy on 2D SLAM in an indoor environment. We currently aim to develop the 3D AUKF-SLAM for outdoor use, and this paper presents the 6DOF localization based on AUKF as a preliminary step. To expand 2D (3DOF) localization to 6DOF localization, we adopted quaternion for attitude representation. However, it is not the best way to estimate each element of the four-dimensional vector of quaternion as state variables because they do not vary independently. As a solution to this problem, the idea of estimating the attitude error represented as the three-dimensional parameter called GRPs (generalized Rodrigues parameters) was proposed. In addition, it was reported that simultaneous estimation of the attitude error represented as GRPs and states not represented as errors is effective for the estimation of the motion of space crafts. Therefore, we applied this method to wheeled mobile robots to address the problem and realize the 6DOF localization based on AUKF. We implemented this system on ROS (robot operating system) and experimented in simulational and real environments. As a result, we demonstrated that it could perform 6DOF localization and estimation of the wheel radius simultaneously.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 3","pages":"493 - 501"},"PeriodicalIF":0.8,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145161656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discrimination of stress coping responses on dimensionality-reduced facial thermal image space
Pub Date: 2025-04-03 DOI: 10.1007/s10015-025-01022-4 (Artificial Life and Robotics 30(3), 424–431)
Masahito Takano, Shiori Oyama, Kent Nagumo, Akio Nozawa
This study investigates the use of facial skin temperature, measured through non-invasive facial thermal imaging, to classify stress-coping responses. While previous methods like Convolutional Neural Networks (CNN) and sparse coding have shown promise, capturing continuous changes in stress-coping states remains challenging. To address this limitation, we focus on t-SNE for dimensionality reduction, which compresses high-dimensional facial thermal data while preserving both local and global structure. Our findings show that facial thermal images from the same stress-coping response cluster together in the reduced space, allowing continuous monitoring of facial skin temperature changes. Additionally, the behavior of the data in the reduced space revealed a time lag between hemodynamic parameter variations and facial skin temperature distribution changes. These insights contribute to developing models that can continuously track stress-coping state changes.
{"title":"Discrimination of stress coping responses on dimensionality-reduced facial thermal image space","authors":"Masahito Takano, Shiori Oyama, Kent Nagumo, Akio Nozawa","doi":"10.1007/s10015-025-01022-4","DOIUrl":"10.1007/s10015-025-01022-4","url":null,"abstract":"<div><p>This study investigates the use of facial skin temperature, measured through non-invasive facial thermal imaging, to classify stress-coping responses. While previous methods like Convolutional Neural Networks (CNN) and sparse coding have shown promise, capturing continuous changes in stress-coping states remains challenging. To address this limitation, we focus on t-SNE for dimensionality reduction, which compresses high-dimensional facial thermal data while preserving both local and global structure. Our findings show that facial thermal images from the same stress-coping response cluster together in the reduced space, allowing continuous monitoring of facial skin temperature changes. Additionally, the behavior of the data in the reduced space revealed a time lag between hemodynamic parameter variations and facial skin temperature distribution changes. These insights contribute to developing models that can continuously track stress-coping state changes.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 3","pages":"424 - 431"},"PeriodicalIF":0.8,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-025-01022-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145160968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neuroevolution for vision-based seeking behavior in 3D soft voxel robots
Pub Date: 2025-03-29 DOI: 10.1007/s10015-025-01019-z (Artificial Life and Robotics 30(3), 502–511)
Christian Hahm
This paper details a simple experiment that tests two genetic encodings, NEAT and HyperNEAT, for the evolution of vision-based food-seeking behavior in neural-controlled 3D soft voxel robots. The evolution of food-seeking behavior is a preliminary step towards ecosystems of advanced artificial animals, in which the animals seek both food and mates. Two environments were tested: with and without deadly obstacles. Traditional evolutionary search was used, with an objective-based fitness function, and both NEAT and HyperNEAT encodings were tested for the evolution of the robot neural controllers. The results showed that the NEAT encoding produced increasingly effective food-seeking behavior over time, whereas experiments with the HyperNEAT encoding did not achieve the desired behavior. This suggests that NEAT, at least, is a viable algorithm for evolving neural networks for vision-based object-seeking in complex robots and warrants further experimentation. HyperNEAT, on the other hand, struggled with the task. This could be for a number of reasons, including a common issue such as the evolutionary algorithm becoming stuck in local optima, or because the encoding may struggle to evolve and represent the irregular structures the task requires.
{"title":"Neuroevolution for vision-based seeking behavior in 3D soft voxel robots","authors":"Christian Hahm","doi":"10.1007/s10015-025-01019-z","DOIUrl":"10.1007/s10015-025-01019-z","url":null,"abstract":"<div><p>This paper details a simple experiment that tests two genetic encodings, NEAT and HyperNEAT, for the evolution of vision-based food-seeking behavior in neural-controlled 3D soft voxel robots. The evolution of food-seeking behavior is a preliminary step towards ecosystems of advanced artificial animals, in which the animals seek both food and mates. Two environments were tested: with and without deadly obstacles. Traditional evolutionary search was used, with an objective-based fitness function. Both NEAT and HyperNEAT encodings were tested for the evolution of robot neural controllers. The results of the experiment showed the NEAT encoding resulted in increasingly effective food-seeking behavior over time, whereas experiments with the HyperNEAT encoding did not achieve the desired behavior. This suggests that NEAT at least is a viable algorithm to evolve neural networks for the task of vision-based object-seeking in complex robots, and warrants further experimentation. On the other hand, HyperNEAT struggled with this task. This could be due to a number of reasons, including a common issue like EA being stuck in local optima, or because the encoding might struggle to evolve and represent irregular structures required for the task.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 3","pages":"502 - 511"},"PeriodicalIF":0.8,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-025-01019-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145171559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimation of floc condition in a dewatering process by image analysis using machine learning
Pub Date: 2025-03-24 DOI: 10.1007/s10015-025-01014-4 (Artificial Life and Robotics 30(3), 439–448)
Atsuki Fukasawa, Shinya Watanabe
Dewatering is a crucial process in sludge treatment plants, and appropriate mixing of polymer and sludge is an important factor in achieving efficient dewatering. This study focused on the condition of flocs produced by mixing sludge and polymer, and estimated the floc condition through visual analysis of images. The estimation of floc condition was cast as a classification problem over mixer speeds, and validation was conducted by classifying the appropriate speed from the images. The proposed methodology involved the development of a machine learning model characterized by high accuracy and transparency. The model was formulated using two features extracted from the images, namely the gaps between flocs and their texture, which are the parameters human operators use to estimate floc condition. An Explainable Boosting Machine was used as the machine learning model, which allows the model's contents to be interpreted and can be applied easily. The classification accuracy of the model was validated using both interpolated and extrapolated data, yielding accuracies exceeding 95% in both scenarios. Furthermore, a comparative analysis was performed between the proposed transparent-box model and a conventional Convolutional Neural Network (CNN) model; despite its transparent nature, the proposed approach demonstrated accuracy comparable to the CNN model.
{"title":"Estimation of floc condition in a dewatering process by image analysis using machine learning","authors":"Atsuki Fukasawa, Shinya Watanabe","doi":"10.1007/s10015-025-01014-4","DOIUrl":"10.1007/s10015-025-01014-4","url":null,"abstract":"<div><p>Dewatering is a crucial process in sludge treatment plants, and appropriate mixing of polymer and sludge is an important factor in achieving efficient dewatering. This study focused on the condition of flocs produced by mixing sludge and polymer, and estimated the floc condition through visual analysis of images. In this study, the estimation of floc condition was assumed to be a classification problem of mixer speed, and validation was conducted to classify the appropriate speed based on the images. The proposed methodology involved the development of a machine learning model characterized by high accuracy and transparency. This model was formulated using two features extracted from the images, i.e., the gaps between flocs and their texture, which are the parameters used by human operators to estimate floc condition. Explainable Boosting Machine was used as the machine learning model, which allows interpretation of the model’s contents and can be applied easily. The classification accuracy of this model was validated using both interpolated and extrapolated data, yielding accuracies exceeding 95% in both scenarios. Furthermore, comparative analysis was performed between the proposed transparent box model and a conventional Convolutional Neural Network (CNN) model. Despite its transparent box nature, the proposed approach demonstrated a comparable level of accuracy to the CNN model in this comparative study.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 3","pages":"439 - 448"},"PeriodicalIF":0.8,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145168380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reinforcement learning-based autonomous driving control for efficient road utilization in lane-less environments
Pub Date: 2025-03-22 DOI: 10.1007/s10015-025-01013-5 (Artificial Life and Robotics 30(2), 276–288)
Mao Tobisawa, Kenji Matsuda, Tenta Suzuki, Tomohiro Harada, Junya Hoshino, Yuki Itoh, Kaito Kumagae, Johei Matsuoka, Kiyohiko Hattori
In recent years, research on autonomous driving using reinforcement learning has been attracting attention. Much of the current research focuses on simply replacing human driving with autonomous driving. Compared to conventional human-driven vehicles, however, autonomous vehicles can utilize a wide variety of sensor measurements and share information with nearby vehicles through vehicle-to-vehicle communication for driving control. By actively utilizing these capabilities, we can consider overall optimal control through coordination of groups of autonomous vehicles, which is completely different from human driving control. One example is adaptive vehicle control in an environment that assumes neither lane separation nor directional separation (Single Carriageway Environment). In this study, we construct a simulation environment and focus on the efficient use of a Single Carriageway Environment, aiming to develop driving control strategies using reinforcement learning. In an environment with a road width equivalent to four lanes, without lane or directional separation, we acquire adaptive vehicle control through reinforcement learning using information obtained from sensors and vehicle-to-vehicle communication. To verify the effectiveness of the proposed method, we construct two types of environments: a Single Carriageway Environment and a conventional road environment with directional separation (Dual Carriageway Environment). We evaluate road utilization effectiveness by measuring the number of vehicles passing through and the average number of vehicles present on the road. The results show that, in the Single Carriageway Environment, our method adapted to road congestion and effectively utilized the available road space. Furthermore, both the number of vehicles passing through and the average number of vehicles present on the road improved.
{"title":"Reinforcement learning-based autonomous driving control for efficient road utilization in lane-less environments","authors":"Mao Tobisawa, Kenji Matsuda, Tenta Suzuki, Tomohiro Harada, Junya Hoshino, Yuki Itoh, Kaito Kumagae, Johei Matsuoka, Kiyohiko Hattori","doi":"10.1007/s10015-025-01013-5","DOIUrl":"10.1007/s10015-025-01013-5","url":null,"abstract":"<div><p>In recent years, research on autonomous driving using reinforcement learning has been attracting attention. Much of the current research focuses on simply replacing human driving with autonomous driving. Compared to conventional human-driven vehicles, autonomous vehicles can utilize a wide variety of sensor measurements and share information with nearby vehicles through vehicle-to-vehicle communication for driving control. By actively utilizing these capabilities, we can consider overall optimal control through coordination of groups of autonomous vehicles, which is completely different from human driving control. One example is adaptive vehicle control in an environment that does not assume lane separation or directional separation (Single Carriageway Environment). In this study, we construct a simulation environment and focus on the efficient use of a Single Carriageway Environment, aiming to develop driving control strategies using reinforcement learning. In an environment with a road width equivalent to four lanes, without lane or directional separation, we acquire adaptive vehicle control through reinforcement learning using information obtained from sensors and vehicle-to-vehicle communication. To verify the effectiveness of the proposed method, we construct two types of environments: a Single Carriageway Environment and a conventional road environment with directional separation (Dual Carriageway Environment). We evaluate road utilization effectiveness by measuring the number of vehicles passing through and the average number of vehicles present on the road. The result of the evaluation shows that, in the Single Carriageway Environment, our method has adapted to road congestion and was seen to effectively utilize the available road space. Furthermore, both the number of vehicles passing through and the average number of vehicles present have also improved.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"276 - 288"},"PeriodicalIF":0.8,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance evaluation of blood pressure estimation using an RGB-NIR camera
Pub Date: 2025-03-22 DOI: 10.1007/s10015-025-01015-3 (Artificial Life and Robotics 30(3), 484–492)
Sae Kawasaki, Masaya Kinefuchi, Yuta Hino, Atsushi Kobayashi, Shoji Kawahito, Masato Takahashi, Norimichi Tsumura
In this study, we evaluated the performance of blood pressure estimation using an RGB-NIR camera. While non-contact blood pressure measurement methods using RGB images are available, they prove ineffective under low-light conditions. In addition, visible light penetrates only to the skin's capillaries, failing to reach deeper vessels such as arteries. In contrast, near-infrared (NIR) light penetrates deeper into the skin, reaching the arterial layer. By integrating visible and NIR light, we can capture information from both capillaries and arteries. This research proposes a method that combines visible and NIR light to improve blood pressure measurement accuracy. The performance of the combined approach was compared with methods that rely solely on either visible or NIR light. For our experiments, an RGB-NIR camera capable of simultaneously capturing both RGB and NIR video images was utilized.
{"title":"Performance evaluation of blood pressure estimation using an RGB-NIR camera","authors":"Sae Kawasaki, Masaya Kinefuchi, Yuta Hino, Atsushi Kobayashi, Shoji Kawahito, Masato Takahashi, Norimichi Tsumura","doi":"10.1007/s10015-025-01015-3","DOIUrl":"10.1007/s10015-025-01015-3","url":null,"abstract":"<div><p>In this study, we evaluated the performance of blood pressure estimation using an RGB-NIR camera. While non- contact blood pressure measurement methods using RGB images are available, they prove ineffective under low-light conditions. In addition, Visible light penetrates only the skin’s capillaries, failing to reach deeper vessels like arteries. In contrast, near-infrared (NIR) light penetrates deeper into the skin, reaching the arterial layer. By integrating visible and NIR light, we can capture information from both capillaries and arteries. This research proposes a method that combines visible and NIR light to improve blood pressure measurement accuracy. The performance of this combined approach was compared with methods that rely solely on either visible or NIR light. For our experiments, an RGB-NIR camera, capable of simultaneously capturing both RGB and NIR video images, was utilized.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 3","pages":"484 - 492"},"PeriodicalIF":0.8,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-025-01015-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145168652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of smart navigation robot for the visually impaired
Pub Date: 2025-03-08 DOI: 10.1007/s10015-025-01012-6 (Artificial Life and Robotics 30(2), 265–275)
Jin Yien Lee, Taiga Eguchi, Wen Liang Yeoh, Hiroshi Okumura, Osamu Fukuda
Individuals with visual impairments often rely on assistive tools such as white canes and guide dogs to navigate their environments. While these tools provide a certain level of support, their effectiveness is frequently constrained in complex or dynamically changing environments, even with extensive user training. To address these limitations, we have developed a smart navigation robot that integrates artificial intelligence for object detection, offering a viable alternative to traditional assistive tools. The robot is designed to provide real-time assistance through auditory alerts, all while allowing the user to maintain full control over the robot's direction according to their intentions. The robot's effectiveness was evaluated through an experimental study in which participants navigated diverse environments using both the smart navigation robot and a white cane. Participant perceptions of the robot's usability, reliability, safety, and interaction quality were evaluated using the Godspeed Questionnaire Series. The comparative analysis revealed that the smart navigation robot outperformed the white cane, particularly in dynamic scenarios. These findings suggest that the robot has the potential to substantially improve the quality of life and independence of individuals with visual impairments, offering a greater degree of freedom than was previously attainable.
{"title":"Development of smart navigation robot for the visually impaired","authors":"Jin Yien Lee, Taiga Eguchi, Wen Liang Yeoh, Hiroshi Okumura, Osamu Fukuda","doi":"10.1007/s10015-025-01012-6","DOIUrl":"10.1007/s10015-025-01012-6","url":null,"abstract":"<div><p>Individuals with visual impairments often rely on assistive tools such as white canes and guide dogs to navigate their environments. While these tools provide a certain level of support, their effectiveness is frequently constrained in complex or dynamically changing environments, even with extensive user training. To address these limitations, we have developed a smart navigation robot that integrates artificial intelligence for object detection, offering a viable alternative to traditional assistive tools. The robot is designed to provide real-time assistance through auditory alerts, all while allowing the user to maintain full control over the robot’s direction according to their intentions. The robot’s effectiveness was evaluated through an experimental study in which participants navigated diverse environments using both the smart navigation robot and a white cane. Participant perceptions of the robot’s usability, reliability, safety, and interaction quality were evaluated using the Godspeed Questionnaire Series. The comparative analysis revealed that the smart navigation robot outperformed the white cane, particularly in dynamic scenarios. These findings suggest that the robot has the potential to substantially improve the quality of life and independence of individuals with visual impairments, offering a greater degree of freedom than was previously attainable.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"265 - 275"},"PeriodicalIF":0.8,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}