Pub Date: 2023-07-11 DOI: 10.1007/s10015-023-00883-x
Abhijeet Ravankar, Arpit Rawankar, Ankit A. Ravankar
Field robots equipped with visual sensors have been used to automate several services. In many scenarios, these robots are tele-operated by a remote operator who controls the robot's motion based on a live video feed from the robot's cameras. In other cases, such as surveillance and monitoring applications, the video recorded by the robot is later analyzed or inspected manually. Shaky video is produced when the robot traverses uneven terrain; it can also be caused by a loose, vibrating mechanical frame on which the camera is mounted. Jitters and shakes in these videos are undesirable for tele-operation and degrade the quality of service. In this paper, we present an algorithm that stabilizes the undesired jitters in a shaky video using only the camera information, for different areas of a vineyard classified by terrain profile. The algorithm works by tracking robust feature points across successive camera frames, smoothing the resulting trajectory, and generating the transformations needed to output a stabilized video. We tested the algorithm on actual field robots on uneven agricultural terrain and found that it produces good results.
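The pipeline the abstract describes (track feature points, accumulate the camera trajectory, smooth it, re-derive corrected transforms) can be sketched for the smoothing step alone. This is an illustrative sketch, not the authors' algorithm: the (dx, dy, d_angle) motion model and the moving-average radius are assumptions, and the per-frame motion estimates would come from feature tracking (e.g. OpenCV optical flow), which is omitted here.

```python
import numpy as np

def smooth_trajectory(transforms, radius=5):
    """Smooth per-frame motion (dx, dy, d_angle) with a moving average.

    transforms: (N, 3) array of frame-to-frame motion estimates, e.g. obtained
    by tracking feature points between successive frames.
    Returns corrected per-frame transforms that follow the smoothed path.
    """
    trajectory = np.cumsum(transforms, axis=0)          # accumulated camera path
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(trajectory, ((radius, radius), (0, 0)), mode="edge")
    smoothed = np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(3)],
        axis=1,
    )
    # Nudge each raw transform by the gap between the smooth and raw paths,
    # so warping each frame by the result yields the stabilized video.
    return transforms + (smoothed - trajectory)
```

Each output row would then be turned into a warp (e.g. a rigid transform) applied to the corresponding frame.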
{"title":"Video stabilization algorithm for field robots in uneven terrain","authors":"Abhijeet Ravankar, Arpit Rawankar, Ankit A. Ravankar","doi":"10.1007/s10015-023-00883-x","DOIUrl":"10.1007/s10015-023-00883-x","url":null,"abstract":"<div><p>Field robots equipped with visual sensors have been used to automate several services. In many scenarios, these robots are tele-operated by a remote operator who controls the robot motion based on a live video feed from the robot’s cameras. In other cases, like surveillance and monitoring applications, the video recorded by the robot is later analyzed or inspected manually. A shaky video is produced on an uneven terrain. It could also be caused due to loose and vibrating mechanical frame on which the camera has been mounted. Jitters or shakes in these videos are undesired for tele-operation, and to maintain desired quality of service. In this paper, we present an algorithm to stabilize the undesired jitters in a shaky video using only the camera information for different areas of vineyard based on terrain profile. The algorithm works by tracking robust feature points in the successive frames of the camera, smoothing the trajectory, and generating desired transformations to output a stabilized video. 
We have tested the algorithm in actual field robots in uneven terrains used for agriculture, and found the algorithm to produce good results.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"28 3","pages":"502 - 508"},"PeriodicalIF":0.9,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48999699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-08 DOI: 10.1007/s10015-023-00885-9
Koki Arima, Fusaomi Nagata, Tatsuki Shimizu, Akimasa Otsuka, Hirohisa Kato, Keigo Watanabe, Maki K. Habib
Recently, CNNs (Convolutional Neural Networks) and Grad-CAM (Gradient-weighted Class Activation Mapping) have been applied to many kinds of defect detection and position recognition for industrial products. However, training a CNN model requires a large amount of image data to achieve the desired generalization ability. In addition, it is not easy for Grad-CAM to clearly identify the defect area that is predicted as the basis of a classification result. Moreover, when deployed on an actual production line, the two computations for the CNN and Grad-CAM must be called sequentially for defect detection and position recognition, so processing time is a concern. In this paper, the authors apply YOLOv2 (You Only Look Once) to defect detection and its visualization so that both are processed at once. In general, a YOLOv2 model can be built with fewer training images; however, a complicated labeling process is required to prepare ground-truth data for training. A data set for training a YOLOv2 model consists of image files and a corresponding ground-truth data file named gTruth, which holds the names of all the image files and their label information, such as label names and box dimensions. YOLOv2 therefore requires data set augmentation not only of the images but also of the gTruth data. The target products dealt with in this paper are produced in many varieties and small quantities, and defects occur infrequently. Moreover, because the production line is fixed indoors, the only valid image augmentation is the horizontal flip. This paper proposes a data set augmentation method that efficiently generates training data for YOLOv2 even under such production conditions and consequently enhances the performance of defect detection and its visualization. Its effectiveness is shown through experiments.
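Since the horizontal flip is the only valid augmentation, the gTruth boxes must be mirrored together with the images. A minimal sketch of that arithmetic; the [x, y, w, h] box layout (x from the left edge) and the "_flip" file-naming are assumptions for illustration, not the paper's actual gTruth format.

```python
def hflip_boxes(boxes, image_width):
    """Mirror [x, y, w, h] boxes (pixel coordinates, x measured from the
    left edge) so they match a horizontally flipped image."""
    return [[image_width - x - w, y, w, h] for x, y, w, h in boxes]

def augment_gtruth(gtruth, image_width):
    """Double a gTruth-style table {image_name: boxes} by adding a flipped
    copy of every image's labels (the flipped image file itself is assumed
    to be written separately under a '_flip' suffix)."""
    out = dict(gtruth)
    for name, boxes in gtruth.items():
        stem, dot, ext = name.rpartition(".")
        flip_name = f"{stem}_flip.{ext}" if dot else f"{name}_flip"
        out[flip_name] = hflip_boxes(boxes, image_width)
    return out
```

Flipping twice returns the original boxes, which makes the transform easy to sanity-check.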
{"title":"Improvements of detection accuracy and its confidence of defective areas by YOLOv2 using a data set augmentation method","authors":"Koki Arima, Fusaomi Nagata, Tatsuki Shimizu, Akimasa Otsuka, Hirohisa Kato, Keigo Watanabe, Maki K. Habib","doi":"10.1007/s10015-023-00885-9","DOIUrl":"10.1007/s10015-023-00885-9","url":null,"abstract":"<div><p>Recently, CNN (Convolutional Neural Network) and Grad-CAM (Gradient-weighted Class Activation Map) are being applied to various kinds of defect detection and position recognition for industrial products. However, in training process of a CNN model, a large amount of image data are required to acquire a desired generalization ability. In addition, it is not easy for Grad-CAM to clearly identify the defect area which is predicted as the basis of a classification result. Moreover, when they are deployed in an actual production line, two calculation processes for CNN and Grad-CAM have to be sequentially called for defect detection and position recognition, so that the processing time is concerned. In this paper, the authors try to apply YOLOv2 (You Only Look Once) to defect detection and its visualization to process them at once. In general, a YOLOv2 model can be built with less training images; however, a complicated labeling process is required to prepare ground truth data for training. A data set for training a YOLOv2 model has to be composed of image files and the corresponding ground truth data file named gTruth. The gTruth file has names of all the image files and their labeled information, such as label names and box dimensions. Therefore, YOLOv2 requires complex data set augmentation for not only images but also gTruth data. Actually, target products dealt with in this paper are produced with various kinds and small quantity, and also the frequency of occurrence of the defect is infrequent. 
Moreover, due to the fixed indoor production line, the valid image augmentation to be applied is limited to the horizontal flip. In this paper, a data set augmentation method is proposed to efficiently generate training data for YOLOv2 even in such a production situation and to consequently enhance the performance of defect detection and its visualization. The effectiveness is shown through experiments.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"28 3","pages":"625 - 631"},"PeriodicalIF":0.9,"publicationDate":"2023-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49479986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-06 DOI: 10.1007/s10015-023-00882-y
Abhijeet Ravankar, Arpit Rawankar, Ankit A. Ravankar
In recent years, many countries, including Japan, have faced an aging population and a labor shortage. This has increased demand for automating tasks with robots and artificial intelligence in the agriculture, production, and healthcare sectors. With an aging population, an increasing number of people are expected to be admitted to nursing homes and rehabilitation centers in the coming years, where they receive proper care and attention. In such a scenario, it will become increasingly difficult to monitor each patient accurately, which calls for automated detection of patients' activities. To this end, this paper proposes using computer vision to automatically detect a patient's behavior. The proposed system first detects the patient's pose with a Convolutional Neural Network. Next, the coordinates of the different body parts are extracted. These coordinates are input to a decision-generation layer that uses the relationships between them to predict the person's actions. This paper focuses on detecting important activities such as sudden falls, sitting, eating, sleeping, exercise, and computer usage. Whereas previous work on behavior detection focused on detecting a single activity, the proposed system detects multiple activities in real time. We verify the proposed system through experiments in a real environment with actual sensors. The experimental results show that the system accurately detects the patient's activities in the room. Critical events such as a sudden fall are detected and an alarm is raised for immediate support. Moreover, the patient's privacy is preserved through an ID-based method in which only the detected activities are stored chronologically in the database.
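The decision-generation layer reasons over body-part coordinates. A toy rule for the sudden-fall case might look like the following; the keypoint names and thresholds are illustrative assumptions, not the paper's trained model.

```python
def detect_fall(keypoints):
    """keypoints: dict of body-part name -> (x, y) in image coordinates
    (y grows downward). Flags a fall when the head is roughly level with the
    ankles and the body's bounding box is wider than it is tall."""
    xs = [p[0] for p in keypoints.values()]
    ys = [p[1] for p in keypoints.values()]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    head_y = keypoints["head"][1]
    ankle_y = (keypoints["left_ankle"][1] + keypoints["right_ankle"][1]) / 2
    lying = abs(head_y - ankle_y) < 0.3 * max(width, 1)  # head ~ ankle height
    return lying and width > height                       # horizontal posture
```

In a full system, one such predicate per activity (sitting, eating, sleeping, ...) would be evaluated on every frame, and the fall predicate would trigger the alarm.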
{"title":"Real-time monitoring of elderly people through computer vision","authors":"Abhijeet Ravankar, Arpit Rawankar, Ankit A. Ravankar","doi":"10.1007/s10015-023-00882-y","DOIUrl":"10.1007/s10015-023-00882-y","url":null,"abstract":"<div><p>In recent years, many countries including Japan are facing the problems of increasing old-age population and shortage of labor. This has increased the demands of automating several tasks using robots and artificial intelligence in agriculture, production, and healthcare sectors. With increasing old-age population, an increasing number of people are expected to be admitted in old-age home and rehabilitation centers in the coming years where they receive proper care and attention. In such a scenario, it can be foreseen that it will be increasingly difficult to accurately monitor each patient. This requires an automation of patient’s activity detection. To this end, this paper proposes to use computer vision for automatic detection of patient’s behavior. The proposed work first detects the pose of the patient through a Convolution Neural Network. Next, the coordinates of the different body parts are detected. These coordinates are input in the decision generation layer which uses the relationship between the coordinates to predict the person’s actions. This paper focuses on the detection of important activities like: sudden fall, sitting, eating, sleeping, exercise, and computer usage. Although previous works in behavior detection focused only on detecting a particular activity, the proposed work can detect multiple activities in real-time. We verify the proposed system thorough experiments in real environment with actual sensors. The experimental results shows that the proposed system can accurately detect the activities of the patient in the room. Critical scenarios like sudden fall are detected and an alarm is raised for immediate support. 
Moreover, the the privacy of the patient is preserved though an ID based method in which only the detected activities are chronologically stored in the database.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"28 3","pages":"496 - 501"},"PeriodicalIF":0.9,"publicationDate":"2023-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-023-00882-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48188938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-05 DOI: 10.1007/s10015-023-00884-w
Aiko Miyamoto, Mitsuharu Matsumoto
Inspired by the molting behavior of living organisms, this paper describes a molting robot structure with a self-repair function. In previous robot self-repair methods, the strength after repair was usually lower than before the repair. To realize a robot that can repeatedly repair its exterior while maintaining its quality, a replacement exterior that becomes the new outer skin is folded like origami and stored inside the robot. During repair, the old exterior is replaced by extracting the replacement exterior from inside the robot. A prototype of the proposed molting structure was tested experimentally and its proper operation was confirmed. In addition, a honeycomb structure was combined with a bellows structure to improve the strength of the outer skin.
{"title":"Development of an origami-based robot molting structure","authors":"Aiko Miyamoto, Mitsuharu Matsumoto","doi":"10.1007/s10015-023-00884-w","DOIUrl":"10.1007/s10015-023-00884-w","url":null,"abstract":"<div><p>Inspired by the molting behavior of living organisms, this paper describes a molting robot structure with a self-repair function. In past robot self-repair methods, the strength after repair was usually lower than before the repair. To realize a robot that can repeatedly repair its exterior while maintaining its quality, the replacement exterior that becomes the new outer skin is folded like origami and enclosed inside the robot. During the repair, the outer exterior can be replaced by extracting the replacement exterior from inside the robot. A prototype of the proposed molting structure was experimentally tested and its proper operation was confirmed. In addition, a honeycomb structure was combined with a bellows structure to improve the strength of the outer skin.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"28 4","pages":"645 - 651"},"PeriodicalIF":0.9,"publicationDate":"2023-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48918063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-03 DOI: 10.1007/s10015-023-00881-z
Toya Yamada, Hiroshi Kinjo, Kunihiko Nakazono, Naoki Oshiro, Eiho Uezato
Marine robots play a crucial role in exploring and investigating underwater and seafloor environments, organisms, structures, and resources. In this study, we developed a control system for a small marine robot and conducted simulation experiments to evaluate its performance. The control system is based on fuzzy control, which resembles human control by defining rules, quantifying them through membership functions, and determining the appropriate manipulation level. Moreover, a genetic algorithm was employed to optimize the coefficients of the function used by the proposed controller in the defuzzification process to establish the operating parameters. In simulations with this control system, the marine robot successfully reached a desired position within the specified time frame.
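As a rough illustration of the rule-evaluation and defuzzification steps: the membership breakpoints and per-rule thrust values below are invented for the sketch (in the paper, a genetic algorithm tunes such coefficients rather than hand-picking them).

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_thrust(position_error):
    """Map a position error (m) to a thrust command via three rules and
    weighted-average defuzzification."""
    rules = [  # (firing strength of the rule, thrust the rule suggests)
        (tri(position_error, -2.0, -1.0, 0.0), -1.0),  # error negative -> reverse
        (tri(position_error, -1.0,  0.0, 1.0),  0.0),  # error small    -> hold
        (tri(position_error,  0.0,  1.0, 2.0),  1.0),  # error positive -> forward
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

A genetic algorithm would encode the breakpoints (a, b, c) and output levels as a chromosome and score each candidate by the simulated robot's tracking performance.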
{"title":"Fuzzy controller for AUV robots based on machine learning and genetic algorithm","authors":"Toya Yamada, Hiroshi Kinjo, Kunihiko Nakazono, Naoki Oshiro, Eiho Uezato","doi":"10.1007/s10015-023-00881-z","DOIUrl":"10.1007/s10015-023-00881-z","url":null,"abstract":"<div><p>Marine robots play a crucial role in exploring and investigating underwater and seafloor environments, organisms, structures, and resources. In this study, we developed a control system for a small marine robot and conducted simulation experiments to evaluate its performance. The control system is based on fuzzy control, which resembles human control by defining rules, quantifying them through membership functions, and determining the appropriate manipulation level. Moreover, a genetic algorithm was employed to optimize the coefficients of a function utilized by the proposed controller in the non-fuzzification process to establish the operating parameters. When implementing this control system during simulations, the marine robot successfully reached a desired position within a specified time frame.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"28 3","pages":"632 - 641"},"PeriodicalIF":0.9,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42661840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-24 DOI: 10.1007/s10015-023-00880-0
Omar M. T. Abdel Deen, Wei-Horng Jean, Shou-Zen Fan, Maysam F. Abbod, Jiann-Shing Shieh
Pain monitoring is crucial for providing proper healthcare to patients during general anesthesia (GA). In this study, photoplethysmographic waveform amplitude (PPGA), heartbeat interval (HBI), and the surgical pleth index (SPI) are used to predict pain scores during GA based on expert medical doctors' assessments (EMDAs). Time-series features are fed into different long short-term memory (LSTM) models with different hyperparameters. The models' performance is evaluated using mean absolute error (MAE), standard deviation (SD), and correlation (Corr). Three models are compared: the first yields an overall MAE, SD, and Corr of 6.9271 ± 1.913, 9.4635 ± 2.456, and 0.5955 ± 0.069, respectively; the second yields 3.418 ± 0.715, 3.847 ± 0.557, and 0.634 ± 0.068; and the third yields 3.4009 ± 0.648, 3.909 ± 0.548, and 0.6197 ± 0.0625. The second model is selected as the best on the basis of this performance, and 5-fold cross-validation is applied for verification. The statistics are quite similar: 4.722 ± 0.742, 3.922 ± 0.672, and 0.597 ± 0.053 for MAE, SD, and Corr, respectively. In conclusion, the SPI effectively predicts the pain score based on the EMDA: not only is the evaluation performance good, but the trend of the EMDA is replicated, which can be interpreted as a relation between SPI and EMDA. However, further improvements in data consistency are needed to validate the results and obtain better performance, and additional signal features could be considered alongside the SPI.
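The three reported metrics can be computed as follows; this is a generic sketch of MAE, SD of the absolute error, and Pearson correlation, not the authors' evaluation code.

```python
import numpy as np

def evaluate(pred, true):
    """Return (MAE, SD of absolute error, Pearson correlation) for a set of
    predicted vs. reference pain scores."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    err = np.abs(pred - true)
    mae = err.mean()                       # mean absolute error
    sd = err.std()                         # spread of the absolute errors
    corr = np.corrcoef(pred, true)[0, 1]   # Pearson correlation coefficient
    return mae, sd, corr
```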
{"title":"Pain scores estimation using surgical pleth index and long short-term memory neural networks","authors":"Omar M. T. Abdel Deen, Wei-Horng Jean, Shou-Zen Fan, Maysam F. Abbod, Jiann-Shing Shieh","doi":"10.1007/s10015-023-00880-0","DOIUrl":"10.1007/s10015-023-00880-0","url":null,"abstract":"<div><p>Pain monitoring is crucial to provide proper healthcare for patients during general anesthesia (GA). In this study, photoplethysmographic waveform amplitude (PPGA), heartbeat interval (HBI), and surgical pleth index (SPI) are utilized for predicting pain scores during GA based on expert medical doctors’ assessments (EMDAs). Time series features are fed into different long short-term memory (LSTM) models, with different hyperparameters. The models’ performance is evaluated using mean absolute error (MAE), standard deviation (SD), and correlation (Corr). Three different models are used, the first model resulted in 6.9271 ± 1.913, 9.4635 ± 2.456, and 0.5955 0.069 for an overall MAE, SD, and Corr, respectively. The second model resulted in 3.418 ± 0.715, 3.847 ± 0.557, and 0.634 ± 0.068 for an overall MAE, SD, and Corr, respectively. In contrast, the third model resulted in 3.4009 ± 0.648, 3.909 ± 0.548, and 0.6197 ± 0.0625 for an overall MAE, SD, and Corr, respectively. The second model is selected as the best model based on its performance and applied 5-fold cross-validation for verification. Statistical results are quite similar: 4.722 ± 0.742, 3.922 ± 0.672, and 0.597 ± 0.053 for MAE, SD, and Corr, respectively. In conclusion, the SPI effectively predicted pain score based on EMDA, not only on good evaluation performance, but the trend of EMDA is replicated, which can be interpreted as a relation between SPI and EMDA; however, further improvements on data consistency are also needed to validate the results and obtain better performance. 
Furthermore, the usage of further signal features could be considered along with SPI.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"28 3","pages":"600 - 608"},"PeriodicalIF":0.9,"publicationDate":"2023-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45684306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-17 DOI: 10.1007/s10015-023-00878-8
Shoma Abe, Jun Ogawa, Yosuke Watanabe, MD Nahin Islam Shiblee, Masaru Kawakami, Hidemitsu Furukawa
Soft modular robotics combines soft materials and modular mechanisms. We are developing a vacuum-driven actuator module, MORI-A, which combines a 3D-printed flexible parallel cross structure with a cube-shaped hollow silicone. The MORI-A module has five deformation modes: no deformation, uniform contraction, uniaxial contraction, flexion, and shear. By combining these modules, soft robots with a variety of deformabilities can be constructed. However, assembling MORI-A requires predicting the deformation from the posture and mode of the modules, making assembly difficult. To overcome this problem, this study aims to construct a system called “MORI-A CPS,” which can predict the motion of a soft robot composed of MORI-A modules by simply arranging cubes in a virtual space. This paper evaluates how well the motion of virtual MORI-A modules, defined as a combination of swelling and shrinking voxels, approximates real-world motion. Then, it shows that the deformations of virtual soft robots constructed via MORI-A CPS are similar to those of real robots.
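One way to picture the contraction modes is as per-axis scale factors on each cube, which is how a voxel-style simulator could compose module deformations. A toy sketch under that assumption: the 20% contraction figure is invented, and flexion and shear would need rotations that this scalar model cannot express.

```python
# Nominal per-axis (x, y, z) scale factors for three of the MORI-A modes;
# the 0.8 contraction ratio is an illustrative assumption, not from the paper.
MODES = {
    "none":     (1.0, 1.0, 1.0),  # no deformation
    "uniform":  (0.8, 0.8, 0.8),  # uniform contraction
    "uniaxial": (1.0, 1.0, 0.8),  # contraction along z only
}

def stacked_height(modes, module_edge=1.0):
    """Predicted height of a vertical stack of modules, one mode per module."""
    return sum(module_edge * MODES[m][2] for m in modes)
```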
{"title":"MORI-A CPS: 3D printed soft actuators with 4D assembly simulation","authors":"Shoma Abe, Jun Ogawa, Yosuke Watanabe, MD Nahin Islam Shiblee, Masaru Kawakami, Hidemitsu Furukawa","doi":"10.1007/s10015-023-00878-8","DOIUrl":"10.1007/s10015-023-00878-8","url":null,"abstract":"<div><p>Soft modular robotics combines soft materials and modular mechanisms. We are developing a vacuum-driven actuator module, MORI-A, which combines a 3D-printed flexible parallel cross structure with a cube-shaped hollow silicone. The MORI-A module has five deformation modes: no deformation, uniform contraction, uniaxial contraction, flexion, and shear. By combining these modules, soft robots with a variety of deformabilities can be constructed. However, assembling MORI-A requires predicting the deformation from the posture and mode of the modules, making assembly difficult. To overcome this problem, this study aims to construct a system called “MORI-A CPS,” which can predict the motion of a soft robot composed of MORI-A modules by simply arranging cubes in a virtual space. This paper evaluates how well the motion of virtual MORI-A modules, defined as a combination of swelling and shrinking voxels, approximates real-world motion. Then, it shows that the deformations of virtual soft robots constructed via MORI-A CPS are similar to those of real robots.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"28 3","pages":"609 - 617"},"PeriodicalIF":0.9,"publicationDate":"2023-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45212581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-15 DOI: 10.1007/s10015-023-00879-7
Yu Zhang, Wenjing Cao, Hanqing Zhao, Shuang Gao
In this study, we consider the electric power delivery problem that arises when electric vehicles (EVs) serve multiple households in a remote region or a region isolated by disasters. Two optimization problems are formulated and compared; they yield the optimal routes that minimize, respectively, the overall traveling distance of the EVs and their overall electric power consumption. We assume that the number of households requiring power delivery and the number of EVs used for delivery in the region are given constants. The households are divided into groups, and each group is assigned to one EV. Each EV must return to its initial position after delivering electric power to all the households in its assigned group. In the first, benchmark method, the optimal route that minimizes the overall traveling distance of all the EVs is determined by dynamic programming. However, owing to traffic congestion on the roads, the path that minimizes the overall traveling distance does not necessarily minimize the overall electric power consumption. To directly minimize the overall electric power consumption of all the EVs, we therefore propose a second optimization method that accounts for traffic congestion. The electric power consumed during each EV's travel is calculated as a function of the length of each road section and the nominal average vehicle speed on that section. A case study in which four EVs deliver electric power to eight households is conducted to validate the proposed method, and its results are compared with those of the distance-minimizing benchmark. The comparison shows that the optimal solution of the proposed method reduces the overall electric power consumption of all the EVs by 236.5 kWh (9.4%) relative to the benchmark. The proposed method is therefore preferable for reducing the overall electric power consumption of EVs.
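The dynamic-programming route search for one EV can be sketched as a Held-Karp recursion over household subsets; swapping a distance matrix for an energy matrix (section length times a congestion-dependent consumption rate) switches between the benchmark and the proposed objective. This is an illustrative sketch, not the paper's implementation, and the cost matrix is an assumption.

```python
from itertools import combinations

def min_cost_tour(cost):
    """Held-Karp dynamic program: minimal-cost cycle starting and ending at
    the depot (node 0) and visiting every other node exactly once.
    cost[i][j] is the cost of road section i -> j: distance for the benchmark,
    or distance x congestion-dependent consumption rate for the energy variant.
    """
    n = len(cost)
    # dp[(S, j)]: cheapest way to leave 0, visit exactly the set S, end at j.
    dp = {(frozenset([j]), j): cost[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                rest = fs - {j}
                dp[(fs, j)] = min(dp[(rest, k)] + cost[k][j] for k in rest)
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + cost[j][0] for j in range(1, n))
```

With four EVs and eight households, each EV would solve one such tour over its assigned group, so the subset dimension stays tiny.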
{"title":"Route planning algorithm based on dynamic programming for electric vehicles delivering electric power to a region isolated from power grid","authors":"Yu Zhang, Wenjing Cao, Hanqing Zhao, Shuang Gao","doi":"10.1007/s10015-023-00879-7","DOIUrl":"10.1007/s10015-023-00879-7","url":null,"abstract":"<div><p>In this study, we considered the electric power delivery problem when using electric vehicles (EVs) for multiple households located in a remote region or a region isolated by disasters. Two optimization problems are formulated and compared; they yield the optimal routes that minimize the overall traveling distance of the EVs and their overall electric power consumption, respectively. We assume that the number of households requiring power delivery and the number of EVs used for power delivery in the region are given constants. Subsequently, we divide the households into groups and assign the households in each group to one EV. Each EV is required to return to its initial position after delivering electric power to all the households in the assigned group. In the first method, the benchmark method, the optimal route that minimizes the overall traveling distance of all the EVs is determined using the dynamic programming method. However, owing to traffic congestion on the roads, the optimal path that minimizes the overall traveling distance of all the EVs does not necessarily yield their minimum overall electric power consumption. In this study, to directly minimize the overall electric power consumption of all the considered EVs, we propose an optimization method that considers traffic congestion. Therefore, a second method is proposed, which minimizes the overall electric power consumption considering traffic congestion. The electric power consumed during the travel of each EV is calculated as a function of the length of each road section and the nominal average speed of vehicles on the road section. 
A case study in which four EVs are assigned to deliver electric power to serve eight households is conducted to validate the proposed method. To verify the effectiveness of the proposed method, the calculation results considering traffic congestion are compared with the benchmark method results, which minimizes the traveling distance. The comparison of the results from the two different methods shows that the optimal solution for the proposed method reduces the overall electric power consumption of all the EVs by 236.5(kWh) (9.4%) compared with the benchmark method. Therefore, the proposed method is preferable for the reduction of the overall electric power consumption of EVs.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"28 3","pages":"583 - 590"},"PeriodicalIF":0.9,"publicationDate":"2023-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47019896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-13 DOI: 10.1007/s10015-023-00877-9
Kento Tsuchiya, Ryo Hatano, Hiroyuki Nishiyama
{"title":"Correction to: Detecting deception using machine learning with facial expressions and pulse rate","authors":"Kento Tsuchiya, Ryo Hatano, Hiroyuki Nishiyama","doi":"10.1007/s10015-023-00877-9","DOIUrl":"10.1007/s10015-023-00877-9","url":null,"abstract":"","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"28 3","pages":"643 - 643"},"PeriodicalIF":0.9,"publicationDate":"2023-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-023-00877-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50477760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-12 DOI: 10.1007/s10015-023-00875-x
Zihao Yu, Mark Christian S. G. Guinto, Brian Godwin S. Lim, Renzo Roel P. Tan, Junichiro Yoshimoto, Kazushi Ikeda, Yasumi Ohta, Jun Ohta
Toward the goal of uncovering the inner workings of the brain, various imaging techniques have been the subject of research. Among the prominent technologies are devices that exploit the ability of transgenic animals to signal neuronal activity through fluorescent indicators. This paper investigates the utility of an original ultra-lightweight needle-type device in fluorescence neuroimaging. A generalizable data processing pipeline is proposed to compensate for the reduced image resolution of the lensless device. In particular, a modular solution centered on baseline-induced noise reduction and principal component analysis is designed as a stand-in for physical lenses in the aggregation and quasi-reconstruction of neuronal activity. Data-driven evidence backing the identification of regions of interest is then demonstrated, establishing the relative superiority of the method over neuroscience conventions within comparable contexts.
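The principal-component step of such a pipeline can be sketched with a plain SVD over the frame stack. This is a generic sketch, not the authors' pipeline: mean-centering here stands in for their baseline-induced noise reduction, and the ROI criterion (large loadings on the first component) is an assumption.

```python
import numpy as np

def roi_scores(frames):
    """Score each pixel by its loading on the first principal component of
    the recording (frames: T x P matrix, one row per frame, one column per
    pixel). Pixels that co-vary strongly score high and suggest a candidate
    neuronal-cluster region of interest."""
    centered = frames - frames.mean(axis=0)            # per-pixel baseline removal
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return np.abs(vt[0])                               # first-PC loading per pixel
```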
{"title":"Engineering a data processing pipeline for an ultra-lightweight lensless fluorescence imaging device with neuronal-cluster resolution","authors":"Zihao Yu, Mark Christian S. G. Guinto, Brian Godwin S. Lim, Renzo Roel P. Tan, Junichiro Yoshimoto, Kazushi Ikeda, Yasumi Ohta, Jun Ohta","doi":"10.1007/s10015-023-00875-x","DOIUrl":"10.1007/s10015-023-00875-x","url":null,"abstract":"<div><p>In working toward the goal of uncovering the inner workings of the brain, various imaging techniques have been the subject of research. Among the prominent technologies are devices that are based on the ability of transgenic animals to signal neuronal activity through fluorescent indicators. This paper investigates the utility of an original ultra-lightweight needle-type device in fluorescence neuroimaging. A generalizable data processing pipeline is proposed to compensate for the reduced image resolution of the lensless device. In particular, a modular solution centered on baseline-induced noise reduction and principal component analysis is designed as a stand-in for physical lenses in the aggregation and quasi-reconstruction of neuronal activity. Data-driven evidence backing the identification of regions of interest is then demonstrated, establishing the relative superiority of the method over neuroscience conventions within comparable contexts.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"28 3","pages":"483 - 495"},"PeriodicalIF":0.9,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41513555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}