With the aim of reducing the mental and physical burden on physicians and patients in endoscopic treatment, an endoscope-connected microrobot actuator and a self-propelled wheeled microrobot that uses a Reuleaux triangle as the wheel shape are described, both driven by medical carbon dioxide gas. A turbine-type actuator measuring 5.17 mm (length) × 5.13 mm (width) × 1.96 mm (thickness) with a mass of 0.15 g showed rotational speeds of 26,784 rpm, 56,250 rpm, and 57,690 rpm at pressures of 0.1 MPa, 0.2 MPa, and 0.3 MPa, respectively, at a flow rate of 1.0 L/min. The traveling microrobot, with wheels attached to the actuator, measured 7.59 mm (length) × 6.49 mm (width) × 7.59 mm (height) (excluding the brass tube) with a mass of 0.25 g. The robot ran at 73 mm/s at a flow rate of 1.0 L/min and 0.3 MPa, and at 56 mm/s at a flow rate of 0.9 L/min. These results confirmed that the robot can run at a flow rate of 0.9 L/min at a pressure of 0.3 MPa.
{"title":"Actuator for endoscope-connected microrobot driven by compressed gas","authors":"Takamichi Funakoshi, Yuya Niki, Koki Takasumi, Chise Takeshita, Minami Kaneko, Fumio Uchikoba","doi":"10.1007/s10015-024-00994-z","DOIUrl":"10.1007/s10015-024-00994-z","url":null,"abstract":"<div><p>With the aim of reducing the mental and physical burden on physicians and patients in endoscopic treatment, an endoscope-connected microrobot actuator and a self-propelled wheeled microrobot that uses a Reuleaux triangle as the wheel shape are described, both driven by medical carbon dioxide gas. A turbine-type actuator measuring 5.17 mm (length) × 5.13 mm (width) × 1.96 mm (thickness) with a mass of 0.15 g showed rotational speeds of 26,784 rpm, 56,250 rpm, and 57,690 rpm at pressures of 0.1 MPa, 0.2 MPa, and 0.3 MPa, respectively, at a flow rate of 1.0 L/min. The traveling microrobot, with wheels attached to the actuator, measured 7.59 mm (length) × 6.49 mm (width) × 7.59 mm (height) (excluding the brass tube) with a mass of 0.25 g. The robot ran at 73 mm/s at a flow rate of 1.0 L/min and 0.3 MPa, and at 56 mm/s at a flow rate of 0.9 L/min. These results confirmed that the robot can run at a flow rate of 0.9 L/min at a pressure of 0.3 MPa.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"63 - 71"},"PeriodicalIF":0.8,"publicationDate":"2024-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
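As background for relating wheel rotation to travel speed: a Reuleaux triangle is a curve of constant width, and by Barbier's theorem its perimeter is π times its width, so one full rotation rolling without slipping advances the vehicle by π·w, the same as a circle of diameter w. A minimal sketch; the wheel width and rpm below are hypothetical values, not figures from the paper:

```python
import math

# A Reuleaux triangle of constant width w has perimeter pi * w
# (Barbier's theorem), so one rotation rolling without slipping
# advances the vehicle by pi * w, like a circle of diameter w.
def speed_mm_per_s(wheel_rpm, width_mm):
    return wheel_rpm / 60.0 * math.pi * width_mm

# Hypothetical wheel: 7 mm constant width turning at 180 rpm.
v = speed_mm_per_s(180, 7.0)
```

Comparing such a figure with the turbine speeds reported above suggests a substantial reduction between turbine rotation and wheel rotation, though the paper's actual drivetrain ratio is not stated in the abstract.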
Pub Date : 2024-12-19DOI: 10.1007/s10015-024-00999-8
Kodai Ochi, Mitsuharu Matsumoto
Robot skin plays a crucial role in shaping both the appearance and physical properties of robots. While various types of robot skins have been developed in recent years, their physical performance tends to degrade with use despite being optimal during manufacture. In contrast, plants and animals naturally adapt and change their physical properties as they grow. In this research, we explore a novel concept of robot skin by incorporating plants and leveraging their growth capabilities. We focused on the rapid growth of sprouts, cultivating them hydroponically on soft materials. Through experiments using a compression tester on composite samples of the grown sprouts and soft materials, we observed an increase in compressive stress due to plant growth. Our findings demonstrate that plant-symbiotic skin has the potential to enhance rigidity through specific plant growth. Furthermore, we examined the relationship between the number of plants and Young’s modulus, which was calculated by linearly approximating the compression curve, and discovered that plant roots significantly affect Young’s modulus, particularly in the later stages of compression.
{"title":"Enhancing the rigidity of robot skin through the incorporation of plant growth","authors":"Kodai Ochi, Mitsuharu Matsumoto","doi":"10.1007/s10015-024-00999-8","DOIUrl":"10.1007/s10015-024-00999-8","url":null,"abstract":"<div><p>Robot skin plays a crucial role in shaping both the appearance and physical properties of robots. While various types of robot skins have been developed in recent years, their physical performance tends to degrade with use despite being optimal during manufacture. In contrast, plants and animals naturally adapt and change their physical properties as they grow. In this research, we explore a novel concept of robot skin by incorporating plants and leveraging their growth capabilities. We focused on the rapid growth of sprouts, cultivating them hydroponically on soft materials. Through experiments using a compression tester on composite samples of the grown sprouts and soft materials, we observed an increase in compressive stress due to plant growth. Our findings demonstrate that plant-symbiotic skin has the potential to enhance rigidity through specific plant growth. Furthermore, we examined the relationship between the number of plants and Young’s modulus, which was calculated by linearly approximating the compression curve, and discovered that plant roots significantly affect Young’s modulus, particularly in the later stages of compression.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"208 - 218"},"PeriodicalIF":0.8,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
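The Young's modulus evaluation mentioned above, a linear approximation of the compression curve, amounts to a least-squares slope of stress over strain. A minimal sketch with illustrative stress-strain values (not data from the paper):

```python
# Estimate Young's modulus as the least-squares slope of the
# linear-elastic region of a compression (stress-strain) curve.
def youngs_modulus(strains, stresses):
    n = len(strains)
    mean_e = sum(strains) / n
    mean_s = sum(stresses) / n
    num = sum((e - mean_e) * (s - mean_s) for e, s in zip(strains, stresses))
    den = sum((e - mean_e) ** 2 for e in strains)
    return num / den  # slope, in the same units as stress (e.g. kPa)

# Hypothetical compression data: stress rising roughly linearly with strain.
strains  = [0.00, 0.05, 0.10, 0.15, 0.20]
stresses = [0.0, 2.1, 4.0, 6.2, 8.1]   # kPa, illustrative only
E = youngs_modulus(strains, stresses)
```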
Pub Date : 2024-12-19DOI: 10.1007/s10015-024-00992-1
Jumpei Yamasaki, Shuxin Lyu, Katsuyuki Morishita, Ken Saito
Some researchers expect quadruped robots to serve as a labor force because of their ability to move stably over uneven terrain. However, their control requires a significant computational cost. The authors have therefore been studying neuromorphic circuits, which mimic biological neurons with analog electronic circuits, to bring the flexibility of biological control to robots. We have previously shown that the gait of a normal-type quadruped robot equipped with neuromorphic circuits changes depending on the mechanical structure of the robot. In this study, we conducted walking experiments on a normal-type quadruped robot and a camel-type quadruped robot implemented with neuromorphic integrated circuits. The results showed that the normal-type quadruped robot generated a trot gait, whereas the camel-type quadruped robot generated both trot and pace gaits. We also analyzed how the movement costs of the two robot types and the two gait types change with movement speed. The analysis revealed that the camel-type quadruped robot generates gaits over a wider range of speeds than the normal-type quadruped robot, but that at the same speed its cost of transport is higher. Comparing the two gaits of the camel-type quadruped robot, the pace gait had a slightly lower movement cost at the same speed.
{"title":"Gait and cost of transport analysis for quadruped robot with neuromorphic integrated circuit","authors":"Jumpei Yamasaki, Shuxin Lyu, Katsuyuki Morishita, Ken Saito","doi":"10.1007/s10015-024-00992-1","DOIUrl":"10.1007/s10015-024-00992-1","url":null,"abstract":"<div><p>Some researchers expect quadruped robots to serve as a labor force because of their ability to move stably over uneven terrain. However, their control requires a significant computational cost. The authors have therefore been studying neuromorphic circuits, which mimic biological neurons with analog electronic circuits, to bring the flexibility of biological control to robots. We have previously shown that the gait of a normal-type quadruped robot equipped with neuromorphic circuits changes depending on the mechanical structure of the robot. In this study, we conducted walking experiments on a normal-type quadruped robot and a camel-type quadruped robot implemented with neuromorphic integrated circuits. The results showed that the normal-type quadruped robot generated a trot gait, whereas the camel-type quadruped robot generated both trot and pace gaits. We also analyzed how the movement costs of the two robot types and the two gait types change with movement speed. The analysis revealed that the camel-type quadruped robot generates gaits over a wider range of speeds than the normal-type quadruped robot, but that at the same speed its cost of transport is higher. Comparing the two gaits of the camel-type quadruped robot, the pace gait had a slightly lower movement cost at the same speed.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"227 - 235"},"PeriodicalIF":0.8,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
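The cost of transport compared above is conventionally the dimensionless ratio CoT = P/(mgv), where P is input power, m mass, and v speed. A minimal sketch with hypothetical power, mass, and speed values (not figures from the paper):

```python
# Dimensionless cost of transport: CoT = P / (m * g * v),
# with P input power [W], m mass [kg], v speed [m/s].
G = 9.81  # gravitational acceleration, m/s^2

def cost_of_transport(power_w, mass_kg, speed_ms):
    return power_w / (mass_kg * G * speed_ms)

# Hypothetical numbers comparing two gaits at the same speed:
trot = cost_of_transport(power_w=6.0, mass_kg=1.2, speed_ms=0.15)
pace = cost_of_transport(power_w=5.5, mass_kg=1.2, speed_ms=0.15)
```

At equal mass and speed, the gait drawing less power has the lower CoT, which is the comparison the abstract reports between pace and trot.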
Pub Date : 2024-12-16DOI: 10.1007/s10015-024-00995-y
Tingting Wang, Yunlong Zhao, Kui Li, Yanyun Bi
Circularly symmetric targets are widely used in industry; therefore, accurately identifying, locating, and grasping circularly symmetric structures is an important issue in the field of industrial robots. This paper proposes a more general visual servoing solution for circularly symmetric targets. The proposed scheme not only compensates for the limitation that ellipse features can control only 5-DOF (degrees of freedom) of the manipulator, but also solves the problem of slow convergence of image moment features when approaching the desired pose. An adaptive linear controller that combines ellipse features and image moment features is further proposed, achieving rapid convergence of all six degrees of freedom of the manipulator. Experimental results verify the effectiveness of the proposed method.
{"title":"Research on adaptive visual servo method for circular symmetrical objects","authors":"Tingting Wang, Yunlong Zhao, Kui Li, Yanyun Bi","doi":"10.1007/s10015-024-00995-y","DOIUrl":"10.1007/s10015-024-00995-y","url":null,"abstract":"<div><p>Circularly symmetric targets are widely used in industry; therefore, accurately identifying, locating, and grasping circularly symmetric structures is an important issue in the field of industrial robots. This paper proposes a more general visual servoing solution for circularly symmetric targets. The proposed scheme not only compensates for the limitation that ellipse features can control only 5-DOF (degrees of freedom) of the manipulator, but also solves the problem of slow convergence of image moment features when approaching the desired pose. An adaptive linear controller that combines ellipse features and image moment features is further proposed, achieving rapid convergence of all six degrees of freedom of the manipulator. Experimental results verify the effectiveness of the proposed method.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"98 - 106"},"PeriodicalIF":0.8,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
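The convergence issue at stake can be illustrated with the classic proportional visual-servoing law de/dt = -λe: a fixed gain gives exponential decay that stalls near the goal. The adaptive-gain schedule below is a generic illustration of why raising the gain as the feature error shrinks speeds up late convergence; it is not the authors' controller, and its parameters are arbitrary:

```python
# Toy 1-D visual-servoing simulation: drive a feature error e to zero
# with de/dt = -lambda * e, where lambda grows as |e| shrinks so the
# exponential decay does not stall near the desired pose.
def servo(e0, lam_min=0.5, lam_max=4.0, dt=0.01, steps=1000):
    e = e0
    for _ in range(steps):
        lam = lam_min + (lam_max - lam_min) / (1.0 + abs(e))  # adaptive gain (illustrative)
        e += -lam * e * dt  # forward-Euler step of de/dt = -lam * e
    return e

residual = servo(e0=10.0)  # adaptive gain
```

Setting `lam_max = lam_min` recovers the constant-gain case, which leaves a larger residual error after the same number of steps.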
Pub Date : 2024-12-14DOI: 10.1007/s10015-024-00990-3
Kazuma Nagashima, Jumpei Nishikawa, Junya Morita
Immersion in a task is a prerequisite for creativity. However, excessive arousal in a single task has drawbacks, such as overlooking events outside the task. To examine this negative aspect, this study constructs a computational model of arousal dynamics in which excessively increased arousal makes task transitions difficult. The model was developed using functions integrated into the cognitive architecture Adaptive Control of Thought-Rational (ACT-R). Within this framework, arousal is treated as a coefficient affecting the overall activation level in the model. In our simulations, we set up two conditions demanding low and high arousal, replicating corresponding human experiments. For each simulation condition, two sets of ACT-R parameters were assumed from different interpretations of the human experimental settings. The results showed consistent behavior between humans and models in both simulation settings. This result supports the validity of our assumptions and has implications for controlling arousal in daily life.
{"title":"Modeling task immersion based on goal activation mechanism","authors":"Kazuma Nagashima, Jumpei Nishikawa, Junya Morita","doi":"10.1007/s10015-024-00990-3","DOIUrl":"10.1007/s10015-024-00990-3","url":null,"abstract":"<div><p>Immersion in a task is a prerequisite for creativity. However, excessive arousal in a single task has drawbacks, such as overlooking events outside the task. To examine this negative aspect, this study constructs a computational model of arousal dynamics in which excessively increased arousal makes task transitions difficult. The model was developed using functions integrated into the cognitive architecture Adaptive Control of Thought-Rational (ACT-R). Within this framework, arousal is treated as a coefficient affecting the overall activation level in the model. In our simulations, we set up two conditions demanding low and high arousal, replicating corresponding human experiments. For each simulation condition, two sets of ACT-R parameters were assumed from different interpretations of the human experimental settings. The results showed consistent behavior between humans and models in both simulation settings. This result supports the validity of our assumptions and has implications for controlling arousal in daily life.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"72 - 87"},"PeriodicalIF":0.8,"publicationDate":"2024-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
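One way to picture "arousal as a coefficient on activation" (our reading of the abstract, not ACT-R's actual retrieval equations): scale all chunk activations by the arousal level and pass them through a softmax-style retrieval rule. Higher arousal then sharpens the preference for the highest-activation (current-goal) chunk, which is the lock-in effect the model describes. All values below are illustrative:

```python
import math

def top_retrieval_prob(activations, arousal, noise=0.4):
    # Softmax over arousal-scaled activations; returns the probability
    # of retrieving the highest-activation (current-goal) chunk.
    scaled = [arousal * a / noise for a in activations]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    return max(exps) / sum(exps)

# Same three competing goal chunks under low and high arousal:
low  = top_retrieval_prob([1.0, 0.8, 0.6], arousal=0.5)
high = top_retrieval_prob([1.0, 0.8, 0.6], arousal=2.0)
```

Because `high > low`, the dominant chunk wins more often under high arousal, so switching to another task's goal chunk becomes less likely.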
Pub Date : 2024-12-11DOI: 10.1007/s10015-024-00993-0
Xiaochuan Tian, Hironori Hiraishi
An improved crowd counting algorithm based on CSRNet is proposed in this study to reduce the long training and convergence times. Three points were changed from the original CSRNet: (i) the first 12 layers of VGG19 were adopted in the front-end to enhance feature-extraction capacity; (ii) the dilated convolutional network in the back-end was replaced with a standard convolutional network to speed up processing; and (iii) dense connections were applied in the back-end to reuse the outputs of the convolutional layers and achieve faster convergence. The ShanghaiTech dataset was used to verify the improved CSRNet. For high-density images, the accuracy was very close to that of the original CSRNet, while the average training time per sample was three times faster and the average testing time per image was six times faster. For low-density images, the accuracy did not reach that of the original CSRNet, but the training time was 10 times faster and the testing time six times faster; moreover, by dividing the image, the predicted count came close to the real count. These experimental results show that the improved CSRNet performs well: although it is slightly less accurate than the original CSRNet, its processing time is much faster because it does not use dilated convolution, making it better suited to real-time detection. A system using the improved CSRNet for counting people in real time has also been designed in this study.
{"title":"Design of crowd counting system based on improved CSRNet","authors":"Xiaochuan Tian, Hironori Hiraishi","doi":"10.1007/s10015-024-00993-0","DOIUrl":"10.1007/s10015-024-00993-0","url":null,"abstract":"<div><p>An improved crowd counting algorithm based on CSRNet is proposed in this study to reduce the long training and convergence times. Three points were changed from the original CSRNet: (i) the first 12 layers of VGG19 were adopted in the front-end to enhance feature-extraction capacity; (ii) the dilated convolutional network in the back-end was replaced with a standard convolutional network to speed up processing; and (iii) dense connections were applied in the back-end to reuse the outputs of the convolutional layers and achieve faster convergence. The ShanghaiTech dataset was used to verify the improved CSRNet. For high-density images, the accuracy was very close to that of the original CSRNet, while the average training time per sample was three times faster and the average testing time per image was six times faster. For low-density images, the accuracy did not reach that of the original CSRNet, but the training time was 10 times faster and the testing time six times faster; moreover, by dividing the image, the predicted count came close to the real count. These experimental results show that the improved CSRNet performs well: although it is slightly less accurate than the original CSRNet, its processing time is much faster because it does not use dilated convolution, making it better suited to real-time detection. A system using the improved CSRNet for counting people in real time has also been designed in this study.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"3 - 11"},"PeriodicalIF":0.8,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
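The trade-off behind change (ii) can be made concrete with the standard receptive-field recurrence for stride-1 stacked convolutions: each k×k layer with dilation d adds (k-1)·d pixels of receptive field. A sketch (the back-end depth of six layers is an illustrative assumption):

```python
# Receptive-field growth for a stack of stride-1 convolutions:
# each k x k layer with dilation d adds (k - 1) * d pixels.
def receptive_field(layers, kernel=3, dilation=1):
    rf = 1
    for _ in range(layers):
        rf += (kernel - 1) * dilation
    return rf

standard = receptive_field(6, dilation=1)  # plain 3x3 back-end
dilated  = receptive_field(6, dilation=2)  # CSRNet-style dilated back-end
```

Dilation d=2 doubles the receptive-field growth per layer at no extra parameters, which is why dropping it trades some context (and accuracy on dense scenes) for faster execution.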
Conventional bipedal robots are mainly controlled by motors using central processing units (CPUs) and software, and they are being developed with control methods and mechanisms that are different from those used by humans. Humans generate basic movement patterns using a central pattern generator (CPG) localized in the spinal cord and create complex and efficient movements through muscle synergies that coordinate multiple muscles. For a robot to mimic the human musculoskeletal structure and reproduce walking movements, muscle parameters are required. In this paper, inverse dynamics analysis is used to determine the muscle displacements and forces required for walking in a musculoskeletal humanoid model, and forward dynamics analysis is used to investigate these values.
{"title":"Muscle displacement and force related to walking by dynamics studies of musculoskeletal humanoid robot","authors":"Kentaro Yamazaki, Tatsumi Goto, Yugo Kokubun, Minami Kaneko, Fumio Uchikoba","doi":"10.1007/s10015-024-00986-z","DOIUrl":"10.1007/s10015-024-00986-z","url":null,"abstract":"<div><p>Conventional bipedal robots are mainly controlled by motors using central processing units (CPUs) and software, and they are being developed with control methods and mechanisms that are different from those used by humans. Humans generate basic movement patterns using a central pattern generator (CPG) localized in the spinal cord and create complex and efficient movements through muscle synergies that coordinate multiple muscles. For a robot to mimic the human musculoskeletal structure and reproduce walking movements, muscle parameters are required. In this paper, inverse dynamics analysis is used to determine the muscle displacements and forces required for walking in a musculoskeletal humanoid model, and forward dynamics analysis is used to investigate these values.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"88 - 97"},"PeriodicalIF":0.8,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We have developed a point cloud processing system within the Unreal Engine to analyze changes in large time-series point cloud data collected by laser scanners and to extract structured information. Currently, human interaction is required to create CAD data associated with the time-series point cloud data. The Unreal Engine, known for its 3D visualization capabilities, was chosen for its suitability for data visualization and automation. Our system features a user interface that automates update procedures with a single button press, allowing efficient evaluation of the interface’s effectiveness. The system visualizes structural changes by extracting differences between pre- and post-change data, recognizing shape variations, and meshing the data. Difference extraction isolates only the added or deleted point clouds between the two datasets using the K-D tree method. Subsequent shape recognition uses pre-prepared training data for pipes and tanks, improving accuracy through classification into nine types and leveraging PointNet++ for deep-learning recognition. Meshing of the shape-recognized point clouds, particularly those to be added, employs the ball pivoting algorithm (BPA), which proved effective. Finally, the updated structural data are visualized within the Unreal Engine by color-coding added and deleted data in red and blue, respectively. Although processing time increases with the number of points, downsampling prior to difference extraction significantly reduces the automatic update time, enhancing overall efficiency.
{"title":"Development of time-series point cloud data changes and automatic structure recognition system using Unreal Engine","authors":"Toru Kato, Hiroki Takahashi, Meguru Yamashita, Akio Doi, Takashi Imabuchi","doi":"10.1007/s10015-024-00983-2","DOIUrl":"10.1007/s10015-024-00983-2","url":null,"abstract":"<div><p>We have developed a point cloud processing system within the Unreal Engine to analyze changes in large time-series point cloud data collected by laser scanners and to extract structured information. Currently, human interaction is required to create CAD data associated with the time-series point cloud data. The Unreal Engine, known for its 3D visualization capabilities, was chosen for its suitability for data visualization and automation. Our system features a user interface that automates update procedures with a single button press, allowing efficient evaluation of the interface’s effectiveness. The system visualizes structural changes by extracting differences between pre- and post-change data, recognizing shape variations, and meshing the data. Difference extraction isolates only the added or deleted point clouds between the two datasets using the K-D tree method. Subsequent shape recognition uses pre-prepared training data for pipes and tanks, improving accuracy through classification into nine types and leveraging PointNet++ for deep-learning recognition. Meshing of the shape-recognized point clouds, particularly those to be added, employs the ball pivoting algorithm (BPA), which proved effective. Finally, the updated structural data are visualized within the Unreal Engine by color-coding added and deleted data in red and blue, respectively. Although processing time increases with the number of points, downsampling prior to difference extraction significantly reduces the automatic update time, enhancing overall efficiency.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"126 - 135"},"PeriodicalIF":0.8,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
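The difference-extraction step described above can be sketched as a nearest-neighbour test: a point counts as added (or deleted) if no point in the other dataset lies within a distance threshold. Brute-force search stands in here for the K-D tree, which accelerates the same query without changing its result; points and threshold are illustrative:

```python
import math

# A point is "changed" if its nearest neighbour in the reference
# cloud is farther than a threshold. Brute force stands in for the
# K-D tree used in the system; the query is identical.
def diff_points(cloud, reference, threshold=0.05):
    def nearest_dist(p):
        return min(math.dist(p, q) for q in reference)
    return [p for p in cloud if nearest_dist(p) > threshold]

before = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
after  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 0.5, 0.5)]  # one new point
added   = diff_points(after, before)   # present now, absent before -> red
removed = diff_points(before, after)   # present before, absent now -> blue
```

Downsampling both clouds before this step shrinks the number of pairwise queries, which is where the reported speed-up of the automatic update comes from.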
Pub Date : 2024-11-13DOI: 10.1007/s10015-024-00985-0
Yafei Fan, Lijuan Liang
The demand for data on indoor scenes has increased. However, indoor scene model construction is relatively complex, and current scenes contain many measurement and positional deviations. Therefore, virtual reality technology and deep learning algorithms are used to build indoor scenes. A deep neural network and a multi-point perspective imaging algorithm are used to analyze the image pixels of the scene, reduce the noise in current scene image recognition, and construct the three-dimensional model of indoor scenes. The research results indicated that the new method improved the accuracy of indoor 3D scenes by eliminating noise in the 3D scene data while constructing the image data. The accuracy of the new method for item recognition was above 93%, and it was able to complete the construction of 3D scenes. The accuracy of the new method was 3.00% higher than that of the CNN algorithm and 4.00% higher than that of the SVO algorithm, and the error was stable within the range of 0.2–0.3. The loss function value of the proposed algorithm was also relatively small, and its performance was more stable. The new method can therefore construct scenes accurately, which has research value for indoor 3D scene construction.
{"title":"A 3D interactive scene construction method for interior design based on virtual reality","authors":"Yafei Fan, Lijuan Liang","doi":"10.1007/s10015-024-00985-0","DOIUrl":"10.1007/s10015-024-00985-0","url":null,"abstract":"<div><p>The demand for data on indoor scenes has increased. However, indoor scene model construction is relatively complex, and current scenes contain many measurement and positional deviations. Therefore, virtual reality technology and deep learning algorithms are used to build indoor scenes. A deep neural network and a multi-point perspective imaging algorithm are used to analyze the image pixels of the scene, reduce the noise in current scene image recognition, and construct the three-dimensional model of indoor scenes. The research results indicated that the new method improved the accuracy of indoor 3D scenes by eliminating noise in the 3D scene data while constructing the image data. The accuracy of the new method for item recognition was above 93%, and it was able to complete the construction of 3D scenes. The accuracy of the new method was 3.00% higher than that of the CNN algorithm and 4.00% higher than that of the SVO algorithm, and the error was stable within the range of 0.2–0.3. The loss function value of the proposed algorithm was also relatively small, and its performance was more stable. The new method can therefore construct scenes accurately, which has research value for indoor 3D scene construction.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 1","pages":"173 - 183"},"PeriodicalIF":0.8,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-11DOI: 10.1007/s10015-024-00982-3
Ayaka Fujimoto, Yuta Miyama, Toru Moriyama
The pill bug is an arthropod of about 1 cm in length that lives under fallen leaves and stones. When it turns in an L-shaped passage and then encounters a T-maze, it mostly turns in the direction opposite to its turn in the L-shaped passage. This reaction is called turn alternation. In this paper, we report experiments investigating whether pill bugs maintain turn alternation in a pathway where the distance between the L-shaped passage (forced turn point) and the T-maze (free choice point) is long. Our results suggest that some pill bugs tend to decrease turn alternation, i.e., increase turn repetition, as the distance between the forced turn and free choice points becomes longer. In nature, these pill bugs may use turn alternation in places with many obstacles, such as stones and dead leaves, and turn repetition in open, sunlit sandy areas with few obstacles.
{"title":"Turn repetition in pill bugs","authors":"Ayaka Fujimoto, Yuta Miyama, Toru Moriyama","doi":"10.1007/s10015-024-00982-3","DOIUrl":"10.1007/s10015-024-00982-3","url":null,"abstract":"<div><p>The pill bug is an arthropod of about 1 cm in length that lives under fallen leaves and stones. When it turns in an L-shaped passage and then encounters a T-maze, it mostly turns in the direction opposite to its turn in the L-shaped passage. This reaction is called turn alternation. In this paper, we report experiments investigating whether pill bugs maintain turn alternation in a pathway where the distance between the L-shaped passage (forced turn point) and the T-maze (free choice point) is long. Our results suggest that some pill bugs tend to decrease turn alternation, i.e., increase turn repetition, as the distance between the forced turn and free choice points becomes longer. In nature, these pill bugs may use turn alternation in places with many obstacles, such as stones and dead leaves, and turn repetition in open, sunlit sandy areas with few obstacles.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"30 2","pages":"260 - 264"},"PeriodicalIF":0.8,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}