
Frontiers in Robotics and AI: Latest Publications

Pig tongue soft robot mimicking intrinsic tongue muscle structure.
IF 2.9 Q2 ROBOTICS Pub Date: 2025-01-09 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1511422
Yuta Ishikawa, Hiroyuki Nabae, Megu Gunji, Gen Endo, Koichi Suzumori

Animal muscles have complex, three-dimensional structures with fibers oriented in various directions. The tongue, in particular, features a highly intricate muscular system composed of four intrinsic muscles and several types of extrinsic muscles, enabling flexible and diverse movements essential for feeding, swallowing, and speech production. Replicating these structures could lead to the development of multifunctional manipulators and advanced platforms for studying muscle-motion relationships. In this study, we developed a pig tongue soft robot that focuses on replicating the intrinsic muscles using thin McKibben artificial muscles, silicone rubber, and gel. We began by performing three-dimensional scans and sectional observations in the coronal and sagittal planes to examine the arrangement and orientation of the intrinsic muscles in the actual pig tongue. Additionally, we used the diffusible iodine-based contrast-enhanced computed tomography (diceCT) technique to observe the three-dimensional course of the muscle pathways. Based on these observations, we constructed a three-dimensional model and molded the pig tongue shape with silicone rubber and gel, embedding artificial muscles into the robot body. We conducted experiments to assess both the motion of the tongue robot's tip and its stiffness during muscle contractions. The results confirmed characteristic tongue motions, such as tip extension, flexion, and lateral bending, as well as stiffness changes during actuation, suggesting the potential for this soft robot to serve as a platform for academic and engineering studies.

{"title":"Pig tongue soft robot mimicking intrinsic tongue muscle structure.","authors":"Yuta Ishikawa, Hiroyuki Nabae, Megu Gunji, Gen Endo, Koichi Suzumori","doi":"10.3389/frobt.2024.1511422","DOIUrl":"10.3389/frobt.2024.1511422","url":null,"abstract":"<p><p>Animal muscles have complex, three-dimensional structures with fibers oriented in various directions. The tongue, in particular, features a highly intricate muscular system composed of four intrinsic muscles and several types of extrinsic muscles, enabling flexible and diverse movements essential for feeding, swallowing, and speech production. Replicating these structures could lead to the development of multifunctional manipulators and advanced platforms for studying muscle-motion relationships. In this study, we developed a pig tongue soft robot that focuses on replicating the intrinsic muscles using thin McKibben artificial muscles, silicone rubber, and gel. We began by performing three-dimensional scans and sectional observations in the coronal and sagittal planes to examine the arrangement and orientation of the intrinsic muscles in the actual pig tongue. Additionally, we used the diffusible iodine-based contrast-enhanced computed tomography (Dice-CT) technique to observe the three-dimensional flow of muscle pathways. Based on these observations, we constructed a three-dimensional model and molded the pig tongue shape with silicone rubber and gel, embedding artificial muscles into the robot body. We conducted experiments to assess both the motion of the tongue robot's tip and its stiffness during muscle contractions. The results confirmed characteristic tongue motions, such as tip extension, flexion, and lateral bending, as well as stiffness changes during actuation, suggesting the potential for this soft robot to serve as a platform for academic and engineering studies.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1511422"},"PeriodicalIF":2.9,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754050/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143029950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semantic segmentation using synthetic images of underwater marine-growth.
IF 2.9 Q2 ROBOTICS Pub Date: 2025-01-08 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1459570
Christian Mai, Jesper Liniger, Simon Pedersen

Introduction: Subsea applications have recently received increasing attention due to the global expansion of offshore energy, seabed infrastructure, and maritime activities; complex inspection, maintenance, and repair tasks in this domain are regularly solved with pilot-controlled, tethered remotely operated vehicles to reduce the use of human divers. However, collecting and precisely labeling submerged data is challenging due to uncontrollable and harsh environmental factors. Synthetic environments, by contrast, offer cost-effective, controlled alternatives to real-world operations, with access to detailed ground-truth data. This study investigates the potential of synthetic underwater environments by rendering detailed, labeled datasets and applying them to machine learning.

Methods: Two synthetic datasets, each with over 1,000 rendered images, were used to train DeepLabV3+ neural networks with an Xception backbone. The datasets include environmental classes such as seawater and seafloor, offshore-structure components, ship hulls, and several marine-growth classes. The models were trained using transfer learning and data augmentation techniques.
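
A minimal PyTorch sketch of the training setup this entry describes, not the authors' code: torchvision's DeepLabV3 with a ResNet-50 backbone stands in for the paper's DeepLabV3+/Xception (which torchvision does not ship), and the class count, augmentations, and hyperparameters are assumptions.

```python
# Sketch only: ResNet-50-backed DeepLabV3 stands in for the paper's
# DeepLabV3+/Xception; NUM_CLASSES and the augmentations are assumptions.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50
from torchvision.transforms import v2

NUM_CLASSES = 8  # hypothetical: seawater, seafloor, structure parts, hull, growth types

# Transfer learning: start from pretrained weights, then replace the final
# classification layer of the segmentation head with one sized to our classes.
model = deeplabv3_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

# Augmentation pipeline; geometric transforms must be applied jointly to
# image and mask (torchvision's v2 transforms accept such pairs).
augment = v2.Compose([
    v2.RandomHorizontalFlip(p=0.5),
    v2.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """One optimization step: images (B,3,H,W) float, masks (B,H,W) long."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]     # (B, NUM_CLASSES, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```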

Results: Testing showed high accuracy in segmenting synthetic images. Testing on real-world imagery, in contrast, yielded promising results for two of the three studied cases, though challenges in distinguishing some classes persist.

Discussion: This study demonstrates the efficiency of synthetic environments for training subsea machine-learning models but also highlights important limitations in certain cases. Improvements can be pursued by introducing layered species into the synthetic environments and by raising the quality of real-world optical information: better color representation, reduced compression artifacts, and minimized motion blur are key focus areas. Future work involves more extensive evaluation against expert-labeled datasets to validate and enhance real-world accuracy.

{"title":"Semantic segmentation using synthetic images of underwater marine-growth.","authors":"Christian Mai, Jesper Liniger, Simon Pedersen","doi":"10.3389/frobt.2024.1459570","DOIUrl":"10.3389/frobt.2024.1459570","url":null,"abstract":"<p><strong>Introduction: </strong>Subsea applications recently received increasing attention due to the global expansion of offshore energy, seabed infrastructure, and maritime activities; complex inspection, maintenance, and repair tasks in this domain are regularly solved with pilot-controlled, tethered remote-operated vehicles to reduce the use of human divers. However, collecting and precisely labeling submerged data is challenging due to uncontrollable and harsh environmental factors. As an alternative, synthetic environments offer cost-effective, controlled alternatives to real-world operations, with access to detailed ground-truth data. This study investigates the potential of synthetic underwater environments to offer cost-effective, controlled alternatives to real-world operations, by rendering detailed labeled datasets and their application to machine-learning.</p><p><strong>Methods: </strong>Two synthetic datasets with over 1000 rendered images each were used to train DeepLabV3+ neural networks with an Xception backbone. The dataset includes environmental classes like seawater and seafloor, offshore structures components, ship hulls, and several marine growth classes. The machine-learning models were trained using transfer learning and data augmentation techniques.</p><p><strong>Results: </strong>Testing showed high accuracy in segmenting synthetic images. In contrast, testing on real-world imagery yielded promising results for two out of three of the studied cases, though challenges in distinguishing some classes persist.</p><p><strong>Discussion: </strong>This study demonstrates the efficiency of synthetic environments for training subsea machine learning models but also highlights some important limitations in certain cases. Improvements can be pursued by introducing layered species into synthetic environments and improving real-world optical information quality-better color representation, reduced compression artifacts, and minimized motion blur-are key focus areas. Future work involves more extensive validation with expert-labeled datasets to validate and enhance real-world application accuracy.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1459570"},"PeriodicalIF":2.9,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11751705/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143025221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A fast monocular 6D pose estimation method for textureless objects based on perceptual hashing and template matching.
IF 2.9 Q2 ROBOTICS Pub Date: 2025-01-08 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1424036
Jose Moises Araya-Martinez, Vinicius Soares Matthiesen, Simon Bøgh, Jens Lambrecht, Rui Pimentel de Figueiredo

Object pose estimation is essential for computer vision applications such as quality inspection, robotic bin picking, and warehouse logistics. However, this task often requires expensive equipment such as 3D cameras or Lidar sensors, as well as significant computational resources. Many state-of-the-art methods for 6D pose estimation depend on deep neural networks, which are computationally demanding and require GPUs for real-time performance. Moreover, they usually involve the collection and labeling of large training datasets, which is costly and time-consuming. In this study, we propose a template-based matching algorithm that utilizes a novel perceptual hashing method for binary images, enabling fast and robust pose estimation. This approach allows the automatic preselection of a subset of templates, significantly reducing inference time while maintaining similar accuracy. Our solution runs efficiently on multiple devices without GPU support, offering reduced runtime and high accuracy on cost-effective hardware. We benchmarked our proposed approach on a body-in-white automotive part and a widely used publicly available dataset. Our experiments on a synthetically generated dataset reveal a trade-off between accuracy and computation time superior to that of previous work on the same automotive-production use case. Additionally, our algorithm efficiently utilizes all CPU cores and includes adjustable parameters for balancing computation time and accuracy, making it suitable for a wide range of applications where hardware cost and power efficiency are critical. For instance, with a rotation step of 10° in the template database, we achieve an average rotation error of 10°, matching the template quantization level, and an average translation error of 14% of the object's size, with an average processing time of 0.3 s per image on a small form-factor NVIDIA AGX Orin device. We also evaluate robustness under partial occlusions (up to 10%) and noisy inputs (signal-to-noise ratios [SNRs] up to 10 dB), with only minor losses in accuracy. Additionally, we compare our method to state-of-the-art deep learning models on a public dataset. Although our algorithm does not outperform them in absolute accuracy, it provides a more favorable trade-off between accuracy and processing time, which is especially relevant for applications using resource-constrained devices.
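
The hashing scheme itself is the paper's novel contribution and is not reproduced in this listing; the sketch below illustrates the general preselection idea with a generic block-mean hash over binary silhouettes. All names and the subset size k are illustrative.

```python
# Illustrative sketch of template preselection via perceptual hashing of
# binary images; a generic block-mean hash with Hamming-distance ranking
# stands in for the paper's novel hash.
import numpy as np

def block_mean_hash(binary_img: np.ndarray, grid: int = 16) -> np.ndarray:
    """Hash a binary mask into grid*grid bits: a bit is 1 if the cell's
    mean foreground coverage exceeds the image's global mean coverage."""
    h, w = binary_img.shape
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    cells = np.array([[binary_img[ys[i]:ys[i+1], xs[j]:xs[j+1]].mean()
                       for j in range(grid)] for i in range(grid)])
    return (cells > binary_img.mean()).astype(np.uint8).ravel()

def preselect_templates(query_hash: np.ndarray,
                        template_hashes: np.ndarray, k: int = 32):
    """Return indices of the k templates closest in Hamming distance;
    only these proceed to full template matching and pose refinement."""
    dists = np.count_nonzero(template_hashes != query_hash, axis=1)
    return np.argsort(dists)[:k]

# Usage: hash every rendered template offline; at runtime, hash the
# segmented query silhouette and match only the preselected subset.
```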

{"title":"A fast monocular 6D pose estimation method for textureless objects based on perceptual hashing and template matching.","authors":"Jose Moises Araya-Martinez, Vinicius Soares Matthiesen, Simon Bøgh, Jens Lambrecht, Rui Pimentel de Figueiredo","doi":"10.3389/frobt.2024.1424036","DOIUrl":"10.3389/frobt.2024.1424036","url":null,"abstract":"<p><p>Object pose estimation is essential for computer vision applications such as quality inspection, robotic bin picking, and warehouse logistics. However, this task often requires expensive equipment such as 3D cameras or Lidar sensors, as well as significant computational resources. Many state-of-the-art methods for 6D pose estimation depend on deep neural networks, which are computationally demanding and require GPUs for real-time performance. Moreover, they usually involve the collection and labeling of large training datasets, which is costly and time-consuming. In this study, we propose a template-based matching algorithm that utilizes a novel perceptual hashing method for binary images, enabling fast and robust pose estimation. This approach allows the automatic preselection of a subset of templates, significantly reducing inference time while maintaining similar accuracy. Our solution runs efficiently on multiple devices without GPU support, offering reduced runtime and high accuracy on cost-effective hardware. We benchmarked our proposed approach on a body-in-white automotive part and a widely used publicly available dataset. Our set of experiments on a synthetically generated dataset reveals a trade-off between accuracy and computation time superior to a previous work on the same automotive-production use case. Additionally, our algorithm efficiently utilizes all CPU cores and includes adjustable parameters for balancing computation time and accuracy, making it suitable for a wide range of applications where hardware cost and power efficiency are critical. For instance, with a rotation step of 10° in the template database, we achieve an average rotation error of <math><mrow><mn>10</mn> <mo>°</mo></mrow> </math> , matching the template quantization level, and an average translation error of 14% of the object's size, with an average processing time of <math><mrow><mn>0.3</mn> <mi>s</mi></mrow> </math> per image on a small form-factor NVIDIA AGX Orin device. We also evaluate robustness under partial occlusions (up to 10% occlusion) and noisy inputs (signal-to-noise ratios [SNRs] up to 10 dB), with only minor losses in accuracy. Additionally, we compare our method to state-of-the-art deep learning models on a public dataset. Although our algorithm does not outperform them in absolute accuracy, it provides a more favorable trade-off between accuracy and processing time, which is especially relevant to applications using resource-constrained devices.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1424036"},"PeriodicalIF":2.9,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11750840/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143025200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A comparative psychological evaluation of a robotic avatar in Dubai and Japan.
IF 2.9 Q2 ROBOTICS Pub Date: 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1426717
Hiroko Kamide, Yukiko Horikawa, Moe Sato, Atsushi Toyoda, Kurima Sakai, Takashi Minato, Takahiro Miyashita, Hiroshi Ishiguro

Introduction: This study focused on the psychological evaluation of an avatar robot in two distinct regions, Dubai in the Middle East and Japan in the Far East. Dubai has experienced remarkable development in advanced technology, while Japan boasts a culture that embraces robotics. These regions are distinctively characterized by their respective relationships with robotics. In addition, the use of robots as avatars is anticipated to increase, and this research aimed to compare the psychological impressions of people from these regions when interacting with an avatar as opposed to a human.

Methods: Considering that avatars can be presented on screens or as physical robots, two methodologies were employed: a video presentation survey (Study 1, Dubai: n = 120, Japan: n = 120) and an experiment involving live interactions with a physical robot avatar (Study 2, Dubai: n = 28, Japan: n = 30).

Results and discussion: Results from the video presentations indicated that participants from Dubai experienced significantly lower levels of discomfort towards the avatar compared to their Japanese counterparts. In contrast, during live interactions, Japanese participants showed a notably positive evaluation towards a Japanese human operator. The findings suggest that screen-presented avatars may be more readily accepted in Dubai, while humans were generally preferred over avatars in terms of positive evaluations when physical robots were used as avatars. The study also discusses the implications of these findings for the appropriate tasks for avatars and the relationship between cultural backgrounds and avatar evaluations.

{"title":"A comparative psychological evaluation of a robotic avatar in Dubai and Japan.","authors":"Hiroko Kamide, Yukiko Horikawa, Moe Sato, Atsushi Toyoda, Kurima Sakai, Takashi Minato, Takahiro Miyashita, Hiroshi Ishiguro","doi":"10.3389/frobt.2024.1426717","DOIUrl":"10.3389/frobt.2024.1426717","url":null,"abstract":"<p><strong>Introduction: </strong>This study focused on the psychological evaluation of an avatar robot in two distinct regions, Dubai in the Middle East and Japan in the Far East. Dubai has experienced remarkable development in advanced technology, while Japan boasts a culture that embraces robotics. These regions are distinctively characterized by their respective relationships with robotics. In addition, the use of robots as avatars is anticipated to increase, and this research aimed to compare the psychological impressions of people from these regions when interacting with an avatar as opposed to a human.</p><p><strong>Methods: </strong>Considering that avatars can be presented on screens or as physical robots, two methodologies were employed: a video presentation survey (Study 1, Dubai: n = 120, Japan: n = 120) and an experiment involving live interactions with a physical robot avatar (Study 2, Dubai: n = 28, Japan: n = 30).</p><p><strong>Results and discussion: </strong>Results from the video presentations indicated that participants from Dubai experienced significantly lower levels of discomfort towards the avatar compared to their Japanese counterparts. In contrast, during live interactions, Japanese participants showed a notably positive evaluation towards a Japanese human operator. The findings suggest that screen-presented avatars may be more readily accepted in Dubai, while humans were generally preferred over avatars in terms of positive evaluations when physical robots were used as avatars. The study also discusses the implications of these findings for the appropriate tasks for avatars and the relationship between cultural backgrounds and avatar evaluations.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1426717"},"PeriodicalIF":2.9,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11746044/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reliable and robust robotic handling of microplates via computer vision and touch feedback.
IF 2.9 Q2 ROBOTICS Pub Date: 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1462717
Vincenzo Scamarcio, Jasper Tan, Francesco Stellacci, Josie Hughes

Laboratory automation requires reliable and precise handling of microplates, but existing robotic systems often struggle to achieve this, particularly amid the dynamic and variable conditions of laboratory environments. This work introduces a novel method integrating simultaneous localization and mapping (SLAM), computer vision, and tactile feedback for the precise and autonomous placement of microplates. Implemented on a bi-manual mobile robot, the method achieves fine-positioning accuracies of ±1.2 mm and ±0.4°. The approach was validated through experiments using both mockup and real laboratory instruments, demonstrating at least a 95% success rate across varied conditions and robust performance in a multi-stage protocol. Compared to existing methods, our framework generalizes effectively to different instruments without compromising efficiency. These findings highlight the potential for enhanced robotic manipulation in laboratory automation, paving the way for more reliable and reproducible experimental workflows.
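
As a purely hypothetical sketch of how such a staged pipeline can be structured — the Robot interface, thresholds, and step sizes below are invented for illustration and are not the paper's API:

```python
# Hypothetical staged placement: coarse SLAM navigation, camera-based fine
# alignment, then touch-confirmed release. The `robot` object and all
# thresholds are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float      # meters
    y: float      # meters
    theta: float  # radians

def place_microplate(robot, target: Pose2D,
                     pos_tol: float = 0.0012,    # ~1.2 mm band
                     force_thresh: float = 2.0): # Newtons, invented
    # Stage 1: SLAM-based coarse approach to the instrument.
    robot.navigate_to(target)

    # Stage 2: visual fine alignment until the detected slot offset
    # falls within the position tolerance.
    while True:
        offset = robot.detect_slot_offset()   # computer-vision estimate
        if abs(offset.x) < pos_tol and abs(offset.y) < pos_tol:
            break
        robot.jog(dx=-offset.x, dy=-offset.y)

    # Stage 3: lower the plate until tactile feedback confirms contact,
    # then release only once the contact force is plausible.
    while robot.gripper_force() < force_thresh:
        robot.move_down(0.001)                # 1 mm increments
    robot.release()
```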

{"title":"Reliable and robust robotic handling of microplates via computer vision and touch feedback.","authors":"Vincenzo Scamarcio, Jasper Tan, Francesco Stellacci, Josie Hughes","doi":"10.3389/frobt.2024.1462717","DOIUrl":"10.3389/frobt.2024.1462717","url":null,"abstract":"<p><p>Laboratory automation requires reliable and precise handling of microplates, but existing robotic systems often struggle to achieve this, particularly when navigating around the dynamic and variable nature of laboratory environments. This work introduces a novel method integrating simultaneous localization and mapping (SLAM), computer vision, and tactile feedback for the precise and autonomous placement of microplates. Implemented on a bi-manual mobile robot, the method achieves fine-positioning accuracies of <math><mrow><mo>±</mo></mrow> </math> 1.2 mm and <math><mrow><mo>±</mo></mrow> </math> 0.4°. The approach was validated through experiments using both mockup and real laboratory instruments, demonstrating at least a 95% success rate across varied conditions and robust performance in a multi-stage protocol. Compared to existing methods, our framework effectively generalizes to different instruments without compromising efficiency. These findings highlight the potential for enhanced robotic manipulation in laboratory automation, paving the way for more reliable and reproducible experimental workflows.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1462717"},"PeriodicalIF":2.9,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11752899/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143025205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Android avatar improves educational effects by embodied anthropomorphization.
IF 2.9 Q2 ROBOTICS Pub Date: 2025-01-06 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1469626
Naoki Kodani, Takahisa Uchida, Nahoko Kameo, Kurima Sakai, Tomo Funayama, Takashi Minato, Akane Kikuchi, Hiroshi Ishiguro

After the COVID-19 pandemic, the adoption of distance learning accelerated in educational institutions across multiple countries. In addition to videoconferencing systems with camera images, avatars can also be used for remote classes. In particular, an android avatar with a sense of presence has the potential to provide higher-quality education than a video-recorded lecture. To investigate the specific educational effects of android avatars, we used a Geminoid, an android with the appearance of a specific individual, and conducted both a laboratory experiment and a large-scale field experiment. The first compared an android-avatar lecture with a videoconferencing system. We found that using the android avatar for the lecture led to significantly stronger subjective feelings of being seen, being motivated, and being focused on the lecture compared with the video lecture. We then conducted a large-scale field experiment with the android avatar to clarify what contributes to these educational effects. The results suggest that students' perception of the android's anthropomorphism and competence has a positive impact on the subjective experience of educational effect, while discomfort has a negative impact. These results indicate the role of embodied anthropomorphization in positive educational experiences. A key point of this study is that both a laboratory experiment and a large-scale experiment were conducted to clarify the educational effects of androids. These results support several related studies and are discussed in detail. Based on these results, the potential for the future use of androids in education is discussed.

{"title":"Android avatar improves educational effects by embodied anthropomorphization.","authors":"Naoki Kodani, Takahisa Uchida, Nahoko Kameo, Kurima Sakai, Tomo Funayama, Takashi Minato, Akane Kikuchi, Hiroshi Ishiguro","doi":"10.3389/frobt.2024.1469626","DOIUrl":"https://doi.org/10.3389/frobt.2024.1469626","url":null,"abstract":"<p><p>After the COVID-19 pandemic, the adoption of distance learning has been accelerated in educational institutions in multiple countries. In addition to using a videoconferencing system with camera images, avatars can also be used for remote classes. In particular, an android avatar with a sense of presence has the potential to provide higher quality education than a video-recorded lecture. To investigate the specific educational effects of android avatars, we used a Geminoid. an android with the appearance of a specific individual, and conducted both laboratory experiment and large-scale field experiment. The first compared the android avatar lecture with a videoconferencing system. We found that the use of an android avatar for the lecture led to the significantly higher subjective feelings of being seen, feeling more motivated, and focused on the lecture compared to the video lecture. We further conducted a large-scale field experiment with an android avatar to clarify what contributes to such educational effects. The results suggest that the students' perception of android's anthroppomorphism and competence has a positive impact, and discomfort has a negative impact on the subjective experence of educational effect. These results indicate the role of embodied anthropomorphization in positive educational experience. The important point of this study is that both the laboratory experiment and the large-scale experiment were conducted to clarify the educational effects of androids. These results support several related studies and are clarified in detail. Based on these results, the potential for the future usage of androids in education is discussed.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1469626"},"PeriodicalIF":2.9,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11743274/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bicycle-inspired simple balance control method for quadruped robots in high-speed running.
IF 2.9 Q2 ROBOTICS Pub Date: 2025-01-06 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1473628
Shoei Hattori, Shura Suzuki, Akira Fukuhara, Takeshi Kano, Akio Ishiguro

This paper explores the applicability of bicycle-inspired balance control in a quadruped robot model. Bicycles maintain stability and change direction through intuitive steering of the handlebars, which induces yaw motion in the body frame and generates an inertial effect that supports balance. Inspired by this balancing strategy, we implemented a similar mechanism in a quadruped robot model, introducing a yaw trunk joint analogous to a bicycle's steering handlebars. Simulation results demonstrate that the proposed model achieves stable high-speed locomotion with robustness against external disturbances and maneuverability that allows directional changes with only slight speed reduction. These findings suggest that exploiting centrifugal force plays a critical role in agile locomotion, aligning with the movement strategies of cursorial animals. This study underscores the potential of bicycle balance control as an effective and straightforward approach for enhancing the agility and stability of quadruped robots, while potentially offering insights into the motor control mechanisms underlying agile animal locomotion.
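
A toy numerical illustration of the underlying idea, with invented gains and a simplified bicycle-model turning radius rather than the paper's controller:

```python
# Toy illustration of bicycle-style balance: steer the trunk yaw joint
# toward the fall to generate a corrective centrifugal force. Gains,
# wheelbase, and the PD law are hypothetical, not the paper's design.
import math

def steering_command(roll: float, roll_rate: float,
                     k_p: float = 4.0, k_d: float = 0.8) -> float:
    """Map roll error to a yaw-joint angle, as a handlebar would."""
    return k_p * roll + k_d * roll_rate

def centrifugal_accel(speed: float, yaw_angle: float,
                      wheelbase: float = 0.5) -> float:
    """Lateral acceleration v^2/r induced by the commanded turn, with the
    turning radius r = L / tan(delta) from the kinematic bicycle model."""
    if abs(yaw_angle) < 1e-6:
        return 0.0
    radius = wheelbase / math.tan(yaw_angle)
    return speed ** 2 / radius

# At higher speeds a small yaw command already yields a large corrective
# acceleration, which is why the strategy suits high-speed running.
speed, roll, roll_rate = 3.0, 0.05, 0.1   # m/s, rad, rad/s
delta = steering_command(roll, roll_rate)
print(f"yaw cmd {delta:.3f} rad -> lateral accel "
      f"{centrifugal_accel(speed, delta):.2f} m/s^2")
```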

{"title":"Bicycle-inspired simple balance control method for quadruped robots in high-speed running.","authors":"Shoei Hattori, Shura Suzuki, Akira Fukuhara, Takeshi Kano, Akio Ishiguro","doi":"10.3389/frobt.2024.1473628","DOIUrl":"https://doi.org/10.3389/frobt.2024.1473628","url":null,"abstract":"<p><p>This paper explores the applicability of bicycle-inspired balance control in a quadruped robot model. Bicycles maintain stability and change direction by intuitively steering the handle, which induces yaw motion in the body frame and generates an inertial effect to support balance. Inspired by this balancing strategy, we implemented a similar mechanism in a quadruped robot model, introducing a yaw trunk joint analogous to a bicycle's steering handle. Simulation results demonstrate that the proposed model achieves stable high-speed locomotion with robustness against external disturbances and maneuverability that allows directional changes with only slight speed reduction. These findings suggest that utilizing centrifugal force plays a critical role in agile locomotion, aligning with the movement strategies of cursorial animals. This study underscores the potential of bicycle balance control as an effective and straightforward control approach for enhancing the agility and stability of quadruped robots as well as potentially offering insights into animal motor control mechanisms for agile locomotion.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1473628"},"PeriodicalIF":2.9,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11743184/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Versatile graceful degradation framework for bio-inspired proprioception with redundant soft sensors.
IF 2.9 Q2 ROBOTICS Pub Date: 2025-01-06 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1504651
Taku Sugiyama, Kyo Kutsuzawa, Dai Owaki, Elijah Almanzor, Fumiya Iida, Mitsuhiro Hayashibe

Reliable proprioception and feedback from soft sensors are crucial for enabling soft robots to function intelligently in real-world environments. Nevertheless, soft sensors are fragile and susceptible to various damage sources in such environments. Some researchers have utilized redundant configurations, in which healthy sensors compensate instantaneously for lost ones to maintain proprioception accuracy. However, achieving consistently reliable proprioception under diverse sensor degradation remains a challenge. This paper proposes a novel framework for graceful degradation in redundant soft sensor systems, incorporating a stochastic Long Short-Term Memory (LSTM) network and a Time-Delay Feedforward Neural Network (TDFNN). The LSTM estimates readings from healthy sensors and compares them with actual data; statistically abnormal readings are then zeroed out. The TDFNN receives the processed sensor readings to perform proprioception. Simulation experiments with a musculoskeletal leg containing 40 nonlinear soft sensors demonstrate the effectiveness of the proposed framework. Results show that knee-angle proprioception accuracy is retained across four distinct degradation scenarios. Notably, the mean proprioception error increases by less than 1.91° (1.36%) when 30% of the sensors are degraded. These results suggest that the proposed framework enhances the reliability of soft sensor proprioception, thereby improving the robustness of soft robots in real-world applications.
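
A minimal PyTorch sketch of the pipeline this entry describes, under stated assumptions: layer sizes, the 3-sigma gate, and a single knee-angle output are illustrative, not the paper's architecture.

```python
# Sketch of the described pipeline: an LSTM predicts what healthy sensors
# should read, statistically abnormal channels are zeroed, and a
# time-delay feedforward network estimates the joint angle.
import torch
import torch.nn as nn

N_SENSORS, DELAY, HIDDEN = 40, 5, 128   # 40 sensors as in the paper; rest assumed

class SensorPredictor(nn.Module):
    """LSTM that predicts the next sensor vector from recent history."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_SENSORS, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, N_SENSORS)

    def forward(self, history):           # (B, T, N_SENSORS)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])      # (B, N_SENSORS)

class TDFNN(nn.Module):
    """Feedforward net over a tapped delay line of sensor readings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SENSORS * DELAY, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 1))         # knee angle estimate

    def forward(self, window):            # (B, DELAY, N_SENSORS)
        return self.net(window.flatten(1))

def gate_anomalies(readings, predicted, sigma, k: float = 3.0):
    """Zero out channels whose residual exceeds k standard deviations,
    so downstream proprioception sees only plausible sensors."""
    residual = (readings - predicted).abs()
    return torch.where(residual > k * sigma,
                       torch.zeros_like(readings), readings)
```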

{"title":"Versatile graceful degradation framework for bio-inspired proprioception with redundant soft sensors.","authors":"Taku Sugiyama, Kyo Kutsuzawa, Dai Owaki, Elijah Almanzor, Fumiya Iida, Mitsuhiro Hayashibe","doi":"10.3389/frobt.2024.1504651","DOIUrl":"https://doi.org/10.3389/frobt.2024.1504651","url":null,"abstract":"<p><p>Reliable proprioception and feedback from soft sensors are crucial for enabling soft robots to function intelligently in real-world environments. Nevertheless, soft sensors are fragile and are susceptible to various damage sources in such environments. Some researchers have utilized redundant configuration, where healthy sensors compensate instantaneously for lost ones to maintain proprioception accuracy. However, achieving consistently reliable proprioception under diverse sensor degradation remains a challenge. This paper proposes a novel framework for graceful degradation in redundant soft sensor systems, incorporating a stochastic Long Short-Term Memory (LSTM) and a Time-Delay Feedforward Neural Network (TDFNN). The LSTM estimates readings from healthy sensors to compare them with actual data. Then, statistically abnormal readings are zeroed out. The TDFNN receives the processed sensor readings to perform proprioception. Simulation experiments with a musculoskeletal leg that contains 40 nonlinear soft sensors demonstrate the effectiveness of the proposed framework. Results show that the knee angle proprioception accuracy is retained across four distinct degradation scenarios. Notably, the mean proprioception error increases by less than 1.91°(1.36%) when <math><mrow><mn>30</mn> <mi>%</mi></mrow> </math> of the sensors are degraded. These results suggest that the proposed framework enhances the reliability of soft sensor proprioception, thereby improving the robustness of soft robots in real-world applications.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1504651"},"PeriodicalIF":2.9,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11743178/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
WearMoCap: multimodal pose tracking for ubiquitous robot control using a smartwatch.
IF 2.9 Q2 ROBOTICS Pub Date: 2025-01-03 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1478016
Fabian C Weigend, Neelesh Kumar, Oya Aran, Heni Ben Amor

We present WearMoCap, an open-source library that tracks human pose from smartwatch sensor data and leverages the pose predictions for ubiquitous robot control. WearMoCap operates in three modes: 1) a Watch Only mode, which uses a smartwatch alone; 2) a novel Upper Arm mode, which adds a smartphone strapped to the upper arm; and 3) a Pocket mode, which determines body orientation from a smartphone in any pocket. We evaluate all modes on large-scale datasets consisting of recordings from up to 8 human subjects using a range of consumer-grade devices. Further, we discuss real-robot applications of the underlying methods and evaluate WearMoCap in handover and teleoperation tasks, achieving performance within 2 cm of the accuracy of a gold-standard motion capture system. The Upper Arm mode provides the most accurate wrist-position estimates, with a root-mean-squared prediction error of 6.79 cm. To enable evaluation of WearMoCap in more scenarios and investigation of strategies to mitigate sensor drift, we publish the WearMoCap system with thorough documentation as open source. The system is designed to foster future research in smartwatch-based motion capture for robotics applications where ubiquity matters: www.github.com/wearable-motion-capture.
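
To see why wrist-worn orientation data can localize the wrist at all, consider the simplified forward-kinematics argument below. This is a generic illustration with an assumed fixed elbow and forearm length, not WearMoCap's actual API or model.

```python
# Generic illustration (not WearMoCap's code): with the forearm orientation
# from the watch IMU and an assumed elbow position, forward kinematics
# gives a wrist position. Segment length and fixed elbow are simplifications.
import numpy as np

def quat_rotate(q: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return 2*np.dot(u, v)*u + (w*w - np.dot(u, u))*v + 2*w*np.cross(u, v)

def wrist_position(elbow_pos: np.ndarray, watch_quat: np.ndarray,
                   forearm_len: float = 0.26) -> np.ndarray:
    """Wrist = elbow + forearm direction (from watch IMU) * segment length."""
    forearm_dir = quat_rotate(watch_quat, np.array([1.0, 0.0, 0.0]))
    return elbow_pos + forearm_len * forearm_dir

# An Upper Arm-style mode adds a second IMU on the upper arm, so the elbow
# itself can be located from the shoulder, reducing ambiguity.
elbow = np.array([0.0, 0.0, 1.0])
q_identity = np.array([1.0, 0.0, 0.0, 0.0])
print(wrist_position(elbow, q_identity))   # -> [0.26, 0.0, 1.0]
```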

{"title":"WearMoCap: multimodal pose tracking for ubiquitous robot control using a smartwatch.","authors":"Fabian C Weigend, Neelesh Kumar, Oya Aran, Heni Ben Amor","doi":"10.3389/frobt.2024.1478016","DOIUrl":"https://doi.org/10.3389/frobt.2024.1478016","url":null,"abstract":"<p><p>We present WearMoCap, an open-source library to track the human pose from smartwatch sensor data and leveraging pose predictions for ubiquitous robot control. WearMoCap operates in three modes: 1) a Watch Only mode, which uses a smartwatch only, 2) a novel Upper Arm mode, which utilizes the smartphone strapped onto the upper arm and 3) a Pocket mode, which determines body orientation from a smartphone in any pocket. We evaluate all modes on large-scale datasets consisting of recordings from up to 8 human subjects using a range of consumer-grade devices. Further, we discuss real-robot applications of underlying works and evaluate WearMoCap in handover and teleoperation tasks, resulting in performances that are within 2 cm of the accuracy of the gold-standard motion capture system. Our Upper Arm mode provides the most accurate wrist position estimates with a Root Mean Squared prediction error of 6.79 cm. To evaluate WearMoCap in more scenarios and investigate strategies to mitigate sensor drift, we publish the WearMoCap system with thorough documentation as open source. The system is designed to foster future research in smartwatch-based motion capture for robotics applications where ubiquity matters. www.github.com/wearable-motion-capture.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1478016"},"PeriodicalIF":2.9,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11738771/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EzSkiROS: enhancing robot skill composition with embedded DSL for early error detection.
IF 2.9 Q2 ROBOTICS Pub Date: 2025-01-03 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1363443
Momina Rizwan, Christoph Reichenbach, Ricardo Caldas, Matthias Mayr, Volker Krueger

When developing general-purpose robot software components, we often lack complete knowledge of the specific contexts in which they will be executed. This limits our ability to make predictions, including our ability to detect program bugs statically. Since running a robot is an expensive task, finding errors at runtime can prolong the debugging loop or even cause safety hazards. This paper proposes an approach that helps developers catch these errors as soon as some context is available (typically at pre-launch time) with minimal additional effort. We use embedded domain-specific language (DSL) techniques to enforce early checks. We describe design patterns suitable for robot programming and show how to use them for DSL embedding in Python, through two case studies on SkiROS2, an open-source platform designed for the composition of robot skills. These two case studies illustrate DSL embedding at two abstraction levels: the high-level skill description, which focuses on what the robot can do and under what circumstances, and the lower-level decision-making and execution flow of tasks. Using our DSL, EzSkiROS, we show how these design patterns enable robotics software platforms to detect bugs in the high-level contracts between the robot's capabilities and its understanding of the world. We also apply the same techniques to detect bugs in lower-level implementation code, such as the behavior trees (BTs) that control the robot's behavior based on its capabilities. We perform consistency checks during the code deployment phase, significantly earlier than typical runtime checks. This enhances overall safety by identifying potential issues with skill execution before they can impact robot behavior. An initial study with SkiROS2 developers shows that our DSL-based approach is useful for finding bugs early, thereby improving the maintainability of the code.
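
A small illustration of the early-check idea in plain Python: a metaclass validates a skill's declared contract at class-definition time, so a mismatch surfaces at import, long before launch. The Skill, Precondition, and params names are invented for this sketch and are not the SkiROS2 or EzSkiROS API.

```python
# Hedged sketch of an embedded-DSL early check: contracts are expressed as
# class attributes and verified when the class is defined, not at runtime.
class SkillMeta(type):
    """Metaclass that checks skill contracts when the class is defined."""
    def __new__(mcs, name, bases, ns):
        cls = super().__new__(mcs, name, bases, ns)
        if bases:  # skip the abstract base itself
            declared = set(ns.get("params", {}))
            for pre in ns.get("preconditions", []):
                missing = set(pre.required_params) - declared
                if missing:
                    raise TypeError(
                        f"{name}: precondition {pre.name} references "
                        f"undeclared params {sorted(missing)}")
        return cls

class Precondition:
    def __init__(self, name, required_params):
        self.name, self.required_params = name, required_params

class Skill(metaclass=SkillMeta):
    params: dict = {}
    preconditions: list = []

# This fails at class-definition time, long before launch, because the
# precondition mentions a parameter the skill never declared:
try:
    class PickPlate(Skill):
        params = {"target": str}
        preconditions = [Precondition("Holding", ["gripper"])]
except TypeError as e:
    print("caught early:", e)
```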

{"title":"EzSkiROS: enhancing robot skill composition with embedded DSL for early error detection.","authors":"Momina Rizwan, Christoph Reichenbach, Ricardo Caldas, Matthias Mayr, Volker Krueger","doi":"10.3389/frobt.2024.1363443","DOIUrl":"https://doi.org/10.3389/frobt.2024.1363443","url":null,"abstract":"<p><p>When developing general-purpose robot software components, we often lack complete knowledge of the specific contexts in which they will be executed. This limits our ability to make predictions, including our ability to detect program bugs statically. Since running a robot is an expensive task, finding errors at runtime can prolong the debugging loop or even cause safety hazards. This paper proposes an approach to help developers catch these errors as soon as we have some context (typically at pre-launch time) with minimal additional efforts. We use embedded domain-specific language (DSL) techniques to enforce early checks. We describe design patterns suitable for robot programming and show how to use these design patterns for DSL embedding in Python, using two case studies on an open-source robot skill platform SkiROS2, designed for the composition of robot skills. These two case studies help us understand how to use DSL embedding on two abstraction levels: the high-level skill description that focuses on what the robot can do and under what circumstances and the lower-level decision-making and execution flow of tasks. Using our DSL EzSkiROS, we show how our design patterns enable robotics software platforms to detect bugs in the high-level contracts between the robot's capabilities and the robot's understanding of the world. We also apply the same techniques to detect bugs in the lower-level implementation code, such as writing behavior trees (BTs), to control the robot's behavior based on its capabilities. We perform consistency checks during the code deployment phase, significantly earlier than the typical runtime checks. This enhances the overall safety by identifying potential issues with the skill execution before they can impact robot behavior. An initial study with SkiROS2 developers shows that our DSL-based approach is useful for finding bugs early and thus improving the maintainability of the code.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1363443"},"PeriodicalIF":2.9,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11738934/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0