
Latest publications in Frontiers in Robotics and AI

Collective predictive coding hypothesis: symbol emergence as decentralized Bayesian inference
Pub Date : 2024-07-23 DOI: 10.3389/frobt.2024.1353870
Tadahiro Taniguchi
Understanding the emergence of symbol systems, especially language, requires the construction of a computational model that reproduces both the developmental learning process in everyday life and the evolutionary dynamics of symbol emergence throughout history. This study introduces the collective predictive coding (CPC) hypothesis, which emphasizes and models the interdependence between forming internal representations through physical interactions with the environment and sharing and utilizing meanings through social semiotic interactions within a symbol emergence system. The total system dynamics is theorized from the perspective of predictive coding. The hypothesis draws inspiration from computational studies grounded in probabilistic generative models and language games, including the Metropolis–Hastings naming game. Thus, playing such games among agents in a distributed manner can be interpreted as a decentralized Bayesian inference of representations shared by a multi-agent system. Moreover, this study explores the potential link between the CPC hypothesis and the free-energy principle, positing that symbol emergence adheres to the society-wide free-energy principle. Furthermore, this paper provides a new explanation for why large language models appear to possess knowledge about the world based on experience, even though they have neither sensory organs nor bodies. This paper reviews past approaches to symbol emergence systems, offers a comprehensive survey of related prior studies, and presents a discussion on CPC-based generalizations. Future challenges and potential cross-disciplinary research avenues are highlighted.
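The decentralized-inference idea behind the Metropolis–Hastings naming game can be sketched in a few lines: a speaker proposes a sign drawn from its own internal model, and the listener accepts or rejects it using a Metropolis–Hastings ratio computed from its own beliefs, so the shared sign ends up sampled from a distribution reflecting both agents. The sketch below is a minimal illustration under assumed categorical beliefs over signs, not the paper's implementation:

```python
import random

def mh_naming_game(speaker_probs, listener_probs, current_sign,
                   rng=random.Random(0)):
    """One round of a Metropolis-Hastings naming game (illustrative sketch).

    speaker_probs / listener_probs: dicts mapping sign -> probability under
    each agent's internal model (assumed categorical beliefs).
    current_sign: the sign currently shared for the observed category.
    """
    signs = list(speaker_probs)
    # Speaker proposes a sign according to its own internal model.
    proposal = rng.choices(signs, weights=[speaker_probs[s] for s in signs])[0]
    # Listener applies the MH acceptance rule using *its* beliefs, so no
    # agent ever needs access to the other's internal representations.
    accept_ratio = min(1.0, listener_probs[proposal] / listener_probs[current_sign])
    return proposal if rng.random() < accept_ratio else current_sign
```

Iterating this exchange over many observations is what, per the CPC hypothesis, amounts to decentralized Bayesian inference of a shared representation.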
Citations: 4
Adaptive satellite attitude control for varying masses using deep reinforcement learning
Pub Date : 2024-07-23 DOI: 10.3389/frobt.2024.1402846
Wiebke Retagne, Jonas Dauer, Günther Waxenegger-Wilfing
Traditional spacecraft attitude control often relies heavily on the dimension and mass information of the spacecraft. In active debris removal scenarios, these characteristics cannot be known beforehand because the debris can take any shape or mass. Additionally, it is not possible to measure the mass of the combined system of satellite and debris object in orbit. Therefore, it is crucial to develop an adaptive satellite attitude control that can extract mass information about the satellite system from other measurements. The authors propose using deep reinforcement learning (DRL) algorithms, employing stacked observations to handle widely varying masses. The satellite is simulated in Basilisk software, and the control performance is assessed using Monte Carlo simulations. The results demonstrate the benefits of DRL with stacked observations compared to a classical proportional–integral–derivative (PID) controller for the spacecraft attitude control. The algorithm is able to adapt, especially in scenarios with changing physical properties.
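Stacked observations can be implemented as a simple frame-stacking buffer that concatenates the last k observations into one policy input, letting a memoryless controller infer hidden dynamics parameters (such as mass) from the recent state/response history. The sketch below is a generic illustration; the stack depth and list-based observation format are assumptions, not the paper's Basilisk/DRL pipeline:

```python
from collections import deque

class StackedObservations:
    """Maintain the last k observations and expose their concatenation
    as the policy input (a generic frame-stacking sketch)."""

    def __init__(self, k):
        self.k = k
        self.buffer = deque(maxlen=k)

    def reset(self, obs):
        # At episode start, fill the stack with copies of the first observation.
        self.buffer.clear()
        for _ in range(self.k):
            self.buffer.append(list(obs))
        return self.observe()

    def step(self, obs):
        # The newest observation evicts the oldest (deque maxlen behavior).
        self.buffer.append(list(obs))
        return self.observe()

    def observe(self):
        # Flatten the k stacked observations into a single vector.
        return [x for frame in self.buffer for x in frame]
```

The RL policy then consumes `observe()` instead of the raw state, which is what allows it to adapt to widely varying masses without measuring them directly.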
Citations: 0
Towards reconciling usability and usefulness of policy explanations for sequential decision-making systems
Pub Date : 2024-07-22 DOI: 10.3389/frobt.2024.1375490
Pradyumna Tambwekar, Matthew C. Gombolay
Safety-critical domains often employ autonomous agents which follow a sequential decision-making setup, whereby the agent follows a policy to dictate the appropriate action at each step. AI practitioners often employ reinforcement learning algorithms to allow an agent to find the best policy. However, sequential systems often lack clear and immediate signs of wrong actions, with consequences visible only in hindsight, making it difficult for humans to understand system failure. In reinforcement learning, this is referred to as the credit assignment problem. To effectively collaborate with an autonomous system, particularly in a safety-critical setting, explanations should enable a user to better understand the policy of the agent and predict system behavior so that users are cognizant of potential failures and these failures can be diagnosed and mitigated. However, humans are diverse and have innate biases or preferences which may enhance or impair the utility of a policy explanation of a sequential agent. Therefore, in this paper, we designed and conducted a human-subjects experiment to identify the factors which influence the perceived usability and the objective usefulness of policy explanations for reinforcement learning agents in a sequential setting. Our study had two factors: the modality of policy explanation shown to the user (Tree, Text, Modified Text, and Programs) and the “first impression” of the agent, i.e., whether the user saw the agent succeed or fail in the introductory calibration video. Our findings characterize a preference-performance tradeoff wherein participants perceived language-based policy explanations to be significantly more usable; however, participants were better able to objectively predict the agent’s behavior when provided an explanation in the form of a decision tree.
Our results demonstrate that user-specific factors, such as computer science experience (p < 0.05), and situational factors, such as watching the agent crash (p < 0.05), can significantly impact the perception and usefulness of the explanation. This research provides key insights to alleviate prevalent issues regarding inappropriate compliance and reliance, which are exponentially more detrimental in safety-critical settings, providing a path forward for XAI developers for future work on policy explanations.
Citations: 0
Semantic learning from keyframe demonstration using object attribute constraints
Pub Date : 2024-07-18 DOI: 10.3389/frobt.2024.1340334
Busra Sen, Jos Elfring, Elena Torta, René van de Molengraft
Learning from demonstration is an approach that allows users to personalize a robot’s tasks. While demonstrations often focus on conveying the robot’s motion or task plans, they can also communicate user intentions through object attributes in manipulation tasks. For instance, users might want to teach a robot to sort fruits and vegetables into separate boxes or to place cups next to plates of matching colors. This paper introduces a novel method that enables robots to learn the semantics of user demonstrations, with a particular emphasis on the relationships between object attributes. In our approach, users demonstrate essential task steps by manually guiding the robot through the necessary sequence of poses. We reduce the amount of data by utilizing only robot poses instead of trajectories, allowing us to focus on the task’s goals, specifically the objects related to these goals. At each step, known as a keyframe, we record the end-effector pose, object poses, and object attributes. However, the number of keyframes saved in each demonstration can vary due to the user’s decisions. This variability in each demonstration can lead to inconsistencies in the significance of keyframes, complicating keyframe alignment to generalize the robot’s motion and the user’s intention. Our method addresses this issue by focusing on teaching the higher-level goals of the task using only the required keyframes and relevant objects. It aims to teach the rationale behind object selection for a task and generalize this reasoning to environments with previously unseen objects. We validate our proposed method by conducting three manipulation tasks aiming at different object attribute constraints. In the reproduction phase, we demonstrate that even when the robot encounters previously unseen objects, it can generalize the user’s intention and execute the task.
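The per-keyframe record described above (end-effector pose, object poses, and object attributes, with no trajectories stored) can be sketched as a small data structure. Field names and the tuple pose format below are hypothetical choices for illustration, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    """One recorded step of a kinesthetic demonstration: the end-effector
    pose plus the poses and attributes of the objects in the scene."""
    end_effector_pose: tuple                 # e.g., (x, y, z, qx, qy, qz, qw)
    object_poses: dict = field(default_factory=dict)       # name -> pose tuple
    object_attributes: dict = field(default_factory=dict)  # name -> {"color": ...}

def record_demonstration(steps):
    """Build the keyframe-only demonstration: one Keyframe per user-chosen
    step, mirroring the paper's data-reduction idea of storing poses
    rather than full trajectories."""
    return [Keyframe(*s) for s in steps]
```

Learning the semantics then amounts to mining relations between the `object_attributes` entries of the objects involved in each keyframe (e.g., cup color equals plate color).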
Citations: 0
Gaze detection as a social cue to initiate natural human-robot collaboration in an assembly task
Pub Date : 2024-07-17 DOI: 10.3389/frobt.2024.1394379
Matteo Lavit Nicora, Pooja Prajod, Marta Mondellini, Giovanni Tauro, Rocco Vertechy, Elisabeth André, Matteo Malosio
Introduction: In this work we explore a potential approach to improve the human-robot collaboration experience by adapting cobot behavior based on natural cues from the operator. Methods: Inspired by the literature on human-human interactions, we conducted a Wizard-of-Oz study to examine whether a gaze towards the cobot can serve as a trigger for initiating joint activities in collaborative sessions. In this study, 37 participants engaged in an assembly task while their gaze behavior was analyzed. We employed a gaze-based attention recognition model to identify when the participants look at the cobot. Results: Our results indicate that in most cases (83.74%), the joint activity is preceded by a gaze towards the cobot. Furthermore, during the entire assembly cycle, the participants tend to look at the cobot mostly around the time of the joint activity. Given the above results, a fully integrated system triggering joint action only when the gaze is directed towards the cobot was piloted with 10 volunteers, one of whom was characterized by high-functioning Autism Spectrum Disorder.
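A gaze-based trigger of the kind described can be sketched as a simple debouncing rule over the attention model's per-frame labels: fire only after the operator has looked at the cobot for several consecutive frames, to avoid reacting to stray glances. The `hold_frames` parameter and the label strings below are assumptions for illustration, not details from the paper:

```python
def gaze_trigger(gaze_labels, hold_frames=5):
    """Return the index of the frame at which joint action is triggered,
    i.e., the first frame completing hold_frames consecutive 'cobot'
    labels from the attention recognition model, or None if never."""
    run = 0
    for i, label in enumerate(gaze_labels):
        run = run + 1 if label == "cobot" else 0  # reset on any other target
        if run >= hold_frames:
            return i
    return None
```

In a live system this would consume the attention model's output stream and hand the trigger frame to the cobot's task controller.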
Citations: 0
Distributed safe formation tracking control of multiquadcopter systems using barrier Lyapunov function
Pub Date : 2024-07-15 DOI: 10.3389/frobt.2024.1370104
Nargess Sadeghzadeh-Nokhodberiz, Mohammad Reza Sadeghi, Rohollah Barzamini, Allahyar Montazeri
Coordinating the movements of a robotic fleet using consensus-based techniques is an important problem in achieving the desired goal of a specific task. Although most available techniques developed for consensus-based control ignore the collision of robots in the transient phase, they are either computationally expensive or cannot be applied in environments with dynamic obstacles. Therefore, we propose a new distributed collision-free formation tracking control scheme for multiquadcopter systems by exploiting the properties of the barrier Lyapunov function (BLF). Accordingly, the problem is formulated in a backstepping setting, and a distributed control law that guarantees collision-free formation tracking of the quads is derived. In other words, the problems of both tracking and interagent collision avoidance with a predefined accuracy are formulated using the proposed BLF for position subsystems, and the controllers are designed through augmentation of a quadratic Lyapunov function. Owing to the underactuated nature of the quadcopter system, virtual control inputs are considered for the translational (x and y axes) subsystems that are then used to generate the desired values for the roll and pitch angles for the attitude control subsystem. This provides a hierarchical controller structure for each quadcopter. The attitude controller is designed for each quadcopter locally by taking into account a predetermined error limit by another BLF. Finally, simulation results from the MATLAB-Simulink environment are provided to show the accuracy of the proposed method. A numerical comparison with an optimization-based technique is also provided to prove the superiority of the proposed method in terms of the computational cost, steady-state error, and response time.
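A log-type barrier Lyapunov function of the kind used to encode such constraints can be written down directly: V(e) = ½ log(k_b² / (k_b² − e²)) is finite only while the error satisfies |e| < k_b, and grows without bound as the error approaches the barrier, so keeping V bounded along trajectories guarantees the tracking-error (or inter-agent distance) constraint is never violated. The scalar form below is a textbook illustration, not the paper's multi-quadcopter backstepping design:

```python
import math

def barrier_lyapunov(e, kb):
    """Symmetric log-type barrier Lyapunov function
    V(e) = 0.5 * log(kb^2 / (kb^2 - e^2)), defined for |e| < kb.
    V -> infinity as |e| -> kb, which is how the BLF encodes a hard
    bound on the tracking error within a Lyapunov-based design."""
    if abs(e) >= kb:
        raise ValueError("error outside the barrier region |e| < kb")
    return 0.5 * math.log(kb**2 / (kb**2 - e**2))
```

In a backstepping design, the controller is chosen so that the time derivative of V is negative semi-definite, which keeps V (and hence the error) inside the barrier for all time.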
Citations: 0
Enhancing emotional expression in cat-like robots: strategies for utilizing tail movements with human-like gazes
Pub Date : 2024-07-15 DOI: 10.3389/frobt.2024.1399012
Xinxiang Wang, Zihan Li, Songyang Wang, Yiming Yang, Yibo Peng, Changzeng Fu
In recent years, there has been significant growth in research on emotion expression in the field of human-robot interaction. In the process of human-robot interaction, the effect of the robot’s emotional expression determines the user’s experience and acceptance. Gaze is widely accepted as an important medium for expressing emotions in human-human interaction. However, it has been found that users have difficulty effectively recognizing emotions such as happiness and anger expressed by animaloid robots that use eye contact alone. In addition, in real interaction, effective nonverbal expression includes not only eye contact but also physical expression. However, current animaloid social robots consider human-like eyes the main emotion expression pathway, which results in a dysfunctional robot appearance and behavioral approach, affecting the quality of emotional expression. While retaining the effectiveness of the eyes for emotional communication, we added a mechanical tail as a physical expression channel to enhance the robot’s emotional expression in concert with the eyes. The results show that the collaboration between the mechanical tail and the bionic eye enhances emotional expression in all four emotions. Furthermore, we found that the mechanical tail can enhance the expression of specific emotions with different parameters. This work helps strengthen the robot’s emotional expression ability in human-robot interaction and improve the user’s interaction experience.
Enhancing Buoyant force learning through a visuo-haptic environment: a case study
Pub Date: 2024-07-12 DOI: 10.3389/frobt.2024.1276027
L. Neri, J. Noguez, David Escobar-Castillejos, Víctor Robledo-Rella, R. García-Castelán, Andres González-Nucamendi, Alejandra J. Magana, Bedrich Benes
Introduction: This study aimed to develop, implement, and test a visuo-haptic simulator designed to explore the buoyancy phenomenon for freshman engineering students enrolled in physics courses. The primary goal was to enhance students’ understanding of physical concepts through an immersive learning tool. Methods: The visuo-haptic simulator was created using the VIS-HAPT methodology, which provides high-quality visualization and reduces development time. A total of 182 undergraduate students were randomly assigned to either an experimental group that used the simulator or a control group that received an equivalent learning experience in terms of duration and content. Data were collected through pre- and post-tests and an exit-perception questionnaire. Results: Data analysis revealed that the experimental group achieved higher learning gains than the control group (p = 0.079). Additionally, students in the experimental group expressed strong enthusiasm for the simulator, noting its positive impact on their understanding of physical concepts. The VIS-HAPT methodology also reduced the average development time compared to similar visuo-haptic simulators. Discussion: The results demonstrate the efficacy of the buoyancy visuo-haptic simulator in improving students’ learning experiences and validate the utility of the VIS-HAPT method for creating immersive educational tools in physics.
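The learning-gain comparison reported in this abstract can be illustrated with a minimal sketch. Hake’s normalized gain is one conventional way to compare pre/post test scores in physics education research; the formula choice and all score values below are illustrative assumptions, not data or methods from the study itself.

```python
# Hake's normalized gain: g = (post - pre) / (max_score - pre),
# i.e., the fraction of the possible improvement actually achieved.
# All numbers below are made-up illustrative data (assumption).

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of the available headroom gained between pre- and post-test."""
    if max_score == pre:
        raise ValueError("pre-test score already at ceiling")
    return (post - pre) / (max_score - pre)

def mean_gain(pairs):
    """Average normalized gain over a list of (pre, post) score pairs."""
    gains = [normalized_gain(pre, post) for pre, post in pairs]
    return sum(gains) / len(gains)

# Hypothetical pre/post scores for two groups (0-100 scale).
experimental = [(40, 70), (55, 80), (30, 65)]
control = [(45, 60), (50, 65), (35, 50)]

print(mean_gain(experimental))  # larger mean gain for the experimental group
print(mean_gain(control))
```

Whether a gain difference like this is statistically reliable would then be assessed with a significance test, as the reported p-value suggests the authors did.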
Evaluation of a passive wearable arm ExoNET
Pub Date: 2024-07-10 DOI: 10.3389/frobt.2024.1387177
P. Ryali, Valentino Wilson, C. Celian, Adith V. Srivatsa, Yaseen Ghani, Jeremy Lentz, James L. Patton
ExoNETs offer a novel wearable solution to support and facilitate upper-extremity gravity compensation in healthy, unimpaired individuals. In this study, we investigated the safety and feasibility of gravity-compensating ExoNETs on 10 healthy, unimpaired individuals across a series of tasks, including activities of daily living and resistance exercises. The direct muscle-activity and kinematic effects of gravity compensation were compared to a sham control and a no-device control. Mixed-effects analysis revealed significant reductions in muscle activity at the biceps, triceps, and medial deltoids, with effect sizes of −3.6%, −4.5%, and −7.2% rmsMVC, respectively, during gravity support. There were no significant changes in movement kinematics, as evidenced by minimal change in coverage metrics at the wrist. These findings reveal the ExoNET’s potential to serve as an alternative to existing bulky and encumbering devices in post-stroke rehabilitation settings and pave the way for future clinical trials.
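The rmsMVC metric in this abstract is the root-mean-square EMG amplitude during a task expressed as a percentage of the RMS amplitude recorded during a maximum voluntary contraction (MVC). A minimal sketch of that normalization follows; the sample values and single-window processing are illustrative assumptions, not the study’s actual EMG pipeline.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of an EMG window."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def percent_mvc(task_window, mvc_window):
    """Task EMG amplitude as a percentage of MVC amplitude (%MVC)."""
    return 100.0 * rms(task_window) / rms(mvc_window)

# Made-up rectified EMG samples (arbitrary units) for illustration.
mvc = [1.0, 0.9, 1.1, 1.0]     # maximum voluntary contraction window
task = [0.30, 0.25, 0.35, 0.30]  # task (e.g., lifting) window

print(round(percent_mvc(task, mvc), 1))  # → 30.1
```

A reported effect size of −3.6% rmsMVC would then mean the task value computed this way dropped by 3.6 percentage points with the device on.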
Robotont 3–an accessible 3D-printable ROS-supported open-source mobile robot for education and research
Pub Date: 2024-07-10 DOI: 10.3389/frobt.2024.1406645
Eva Mõtshärg, V. Vunder, Renno Raudmäe, Marko Muro, Ingvar Drikkit, Leonid Tšigrinski, Raimo Köidam, A. Aabloo, Karl Kruusamäe
Educational robots offer a platform for training aspiring engineers and building trust in technology that is envisioned to shape how we work and live. In education, accessibility and modularity are significant in the choice of such a technological platform. In order to foster continuous development of the robots as well as to improve student engagement in the design and fabrication process, safe production methods with low accessibility barriers should be chosen. In this paper, we present Robotont 3, an open-source mobile robot that leverages Fused Deposition Modeling (FDM) 3D-printing for manufacturing the chassis and a single dedicated system board that can be ordered from online printed circuit board (PCB) assembly services. To promote accessibility, the project follows open hardware practices, such as design transparency, permissive licensing, accessibility in manufacturing methods, and comprehensive documentation. Semantic Versioning was incorporated to improve maintainability in development. Compared to the earlier versions, Robotont 3 maintains all the technical capabilities, while featuring an improved hardware setup to enhance the ease of fabrication and assembly, and modularity. The improvements increase the accessibility, scalability and flexibility of the platform in an educational setting.
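Semantic Versioning, which the abstract says was adopted for maintainability, encodes compatibility in a MAJOR.MINOR.PATCH triple. A minimal sketch of parsing and comparing such versions follows; it deliberately ignores pre-release and build metadata from the full SemVer 2.0.0 specification, and the function names are illustrative, not part of the Robotont codebase.

```python
def parse_semver(version: str) -> tuple:
    """Parse 'MAJOR.MINOR.PATCH' into a tuple of ints.

    Python's tuple comparison then orders versions correctly.
    Pre-release tags and build metadata (full SemVer 2.0.0) are
    deliberately ignored in this sketch.
    """
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def is_breaking_upgrade(old: str, new: str) -> bool:
    """A MAJOR-version bump signals backwards-incompatible changes."""
    return parse_semver(new)[0] > parse_semver(old)[0]

print(parse_semver("3.1.2") > parse_semver("3.0.9"))  # → True
print(is_breaking_upgrade("2.4.0", "3.0.0"))          # → True
```

Under this convention, a MINOR or PATCH bump (e.g., 3.0.0 to 3.2.1) promises that existing hardware and software interfaces keep working, which is what makes the scheme useful for a continuously evolving educational platform.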