
Artificial Life and Robotics: Latest Publications

AI-driven enhancement of EEG-based detection of neurocognitive disorders
IF 0.8 Q4 ROBOTICS Pub Date : 2025-07-21 DOI: 10.1007/s10015-025-01046-w
Kusum Tara, Ruimin Wang, Yoshitaka Matsuda, Satoru Goto, Takako Mitsudo, Takao Yamasaki, Takenao Sugi

Early detection and accurate diagnosis of neurocognitive disorders (NCDs) are essential for enabling timely interventions to slow disease progression, preserve cognitive function, and enhance quality of life. This study introduces an EEG-based classification framework utilizing four AI-driven deep learning models—ResNet50-V2, EfficientNetB0, NasNetMobile, and MobileNetV2—to classify four neurocognitive groups: healthy controls (HC), mild cognitive impairment (MCI), Alzheimer’s disease (AD), and epilepsy (Ep) under eyes-closed (EC) conditions. To reduce complexity and improve accuracy, Fisher’s score was used to select 16 significant EEG channels from frontal, parietal, occipital, temporal, and central regions. Phase-amplitude coupling (PAC) images were generated from EC EEG signals to capture cross-frequency interactions, specifically how delta (0.5–4 Hz) and theta (4–8 Hz) phases modulate alpha (8–12 Hz) and beta (12–35 Hz) amplitudes—revealing functional brain connectivity. HC exhibited strong delta-alpha and reduced theta-beta coupling, while MCI and AD showed weakened delta-alpha and increased theta-beta interactions, reflecting cognitive decline. In contrast, Ep displayed elevated delta-alpha and reduced delta-beta coupling, linked to neural hyperexcitability. Among all models, MobileNetV2 achieved the best performance with 98.25% accuracy and 98.40% F-score, attributed to its lightweight design and effective image feature extraction. This study’s novelty lies in its PAC image-based approach for NCD classification using MobileNetV2, providing valuable insights into non-linear hemispheric dynamics related to cognitive decline.
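The slow-phase/fast-amplitude coupling the abstract describes can be sketched with a standard mean-vector-length PAC estimator (Canolty-style): band-pass the signal, take the phase of the slow band and the envelope of the fast band via the Hilbert transform, and measure their coupling. This is a generic illustration using SciPy, not the authors' PAC-image pipeline; the sampling rate, band edges, and synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(x, fs, phase_band=(0.5, 4.0), amp_band=(8.0, 12.0)):
    """Mean-vector-length estimate of phase-amplitude coupling:
    phase of the slow (delta) band vs. envelope of the fast (alpha) band."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return float(np.abs(np.mean(amp * np.exp(1j * phase))))

# Synthetic EEG-like trace: 10 Hz alpha bursts locked to a 2 Hz delta phase.
fs = 250
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
delta = np.cos(2 * np.pi * 2.0 * t)
alpha = (1 + delta) * np.sin(2 * np.pi * 10.0 * t)   # delta-modulated alpha
coupled = delta + 0.5 * alpha + 0.1 * rng.standard_normal(t.size)
uncoupled = delta + 0.5 * np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.standard_normal(t.size)
```

Evaluated on the coupled trace, the MVL comes out clearly positive, while the unmodulated trace yields a near-zero value; a PAC image is essentially this statistic computed over a grid of phase/amplitude band pairs or channels.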

Artificial Life and Robotics, 30(4): 613–621.
Citations: 0
MonoDGAE: depth-guided attention and bilateral filtering for robust monocular 3D object detection
IF 0.8 Q4 ROBOTICS Pub Date : 2025-07-20 DOI: 10.1007/s10015-025-01048-8
George Albert Bitwire, Samuel Kakuba, Dae Woong Cha, Dong Seog Han

Robust monocular 3D object detection remains a pivotal challenge for intelligent robotic systems due to the absence of explicit depth information in single RGB images. In this paper, we propose a novel depth-guided attention enhancement (DGAE) module, integrated into the MonoDTR framework, to address its limitations in handling noisy depth supervision and refining spatial inconsistencies. DGAE leverages coarse depth maps as attention priors to guide visual feature refinement through temperature-scaled softmax and Gaussian smoothing, enabling enhanced spatial reasoning and robustness in cluttered scenes. To support this attention mechanism, we generate high-quality depth maps by projecting LiDAR points into the image plane and interpolating missing regions using a nearest-neighbor approach, followed by bilateral filtering and block downsampling to preserve edge details while reducing noise. This depth estimation pipeline improves the quality and coherence of the fused features used by DGAE. Extensive experiments on the KITTI 3D object detection benchmark show that our approach achieves state-of-the-art performance in moderate and hard detection scenarios for cars, pedestrians, and cyclists, while maintaining real-time inference speeds. These results underscore the effectiveness and practicality of our DGAE module for real-world 3D perception in autonomous driving applications.
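The depth-as-attention-prior idea can be illustrated in a few lines: smooth a coarse depth map, pass it through a temperature-scaled spatial softmax, and use the result to reweight image features. This is only a sketch of the mechanism, not the DGAE module itself; the "nearer pixels are more salient" prior, the temperature `tau`, and the smoothing `sigma` are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_guided_attention(feat, depth, tau=0.5, sigma=1.0):
    """Reweight a (C, H, W) feature map with an attention prior derived
    from a coarse (H, W) depth map. Smaller tau sharpens the softmax."""
    logits = gaussian_filter(-depth, sigma=sigma) / tau   # assume nearer = more salient
    w = np.exp(logits - logits.max())
    w /= w.sum()                                          # spatial softmax over H*W
    return feat * w[None, :, :] * w.size                  # rescale so mean weight is 1

feat = np.ones((4, 8, 8))                                 # toy feature map
depth = np.linspace(1.0, 10.0, 64).reshape(8, 8)          # toy coarse depth map
out = depth_guided_attention(feat, depth)
```

With a uniform feature map, the output directly visualizes the attention: near-depth locations are amplified and far ones suppressed, while the overall feature mass is preserved.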

Artificial Life and Robotics, 30(4): 555–566.
Citations: 0
Development of artificial muscles formed from shape memory alloys and elastomers based on dynamics analysis of walking and application to musculoskeletal humanoid legs
IF 0.8 Q4 ROBOTICS Pub Date : 2025-07-19 DOI: 10.1007/s10015-025-01044-y
Yugo Kokubun, Kentaro Yamazaki, Ginjiro Takashi, Tatsumi Goto, Ontatsu Haku, Fumio Uchikoba, Minami Kaneko

Most humanoid robots use high-performance CPUs to process huge amounts of numerical calculations at high speed and control joint angles with servomotors. Humans, on the other hand, use neural networks to generate signals to contract and relax multiple muscles for efficient joint movement. We have created a musculoskeletal humanoid model that mimics the human musculoskeletal structure and have conducted dynamics analysis of walking. In the analysis, we obtained the time-specific generated force and contraction displacement of 12 different muscles during one walking cycle. In this study, based on dynamics analysis, artificial muscles were fabricated using elastomers and shape memory alloys for a total of 12 muscles: gluteus maximus, iliopsoas, rectus femoris, long head of biceps femoris, short head, vastus medialis and lateralis, gastrocnemius medialis and lateralis, tibialis anterior, and soleus muscles, and attached to the legs of a musculoskeletal humanoid. Using electrical signals from an external power source, the artificial muscles were driven based on the timing of muscle contraction during one cycle of walking, and the motion of the hip, knee, and ankle joints was reproduced in a stationary system. To validate the parameters of the artificial muscles obtained from the dynamics analysis of walking, the leg movements of the musculoskeletal humanoid were compared with the simulation results from the forward dynamics analysis.

Artificial Life and Robotics, 30(4): 622–635.
Citations: 0
Task-oriented adaptive learning of robot manipulation skills
IF 0.8 Q4 ROBOTICS Pub Date : 2025-07-18 DOI: 10.1007/s10015-025-01036-y
Kexin Jin, Guohui Tian, Bin Huang, Yongcheng Cui, Xiaoyu Zheng

In industrial environments, robots are confronted with constantly changing working conditions and manipulation tasks. Traditional approaches that require robots to learn skills from scratch for new tasks are often slow and heavily dependent on human intervention. To address inefficiency and enhance robots’ adaptability, we propose a general Intelligent Transfer System (ITS) that enables autonomous and rapid new skill learning in dynamic environments. ITS integrates Large Language Models (LLMs) with Transfer Reinforcement Learning (TRL), harnessing both the advanced comprehension and generative capabilities of LLMs and the pre-acquired skill knowledge of robotic systems. First, to enable robots to comprehend unseen task commands and learn skills autonomously, we propose a reward function generation method based on task-specific reward components. This approach improves time efficiency and accuracy while eliminating the need for manual design. Second, to accelerate the learning speed of new robotic skills, we propose an Intelligent Transfer Network (ITN) within the ITS. Unlike traditional methods that merely reuse or adapt existing skills, ITN intelligently integrates related skill features, enhancing learning efficiency through knowledge fusion. We evaluate our method in simulation, demonstrating that it enables the system to learn skills autonomously without pre-programmed behaviors, achieving 72.22% and 65.17% faster learning speeds for two major tasks compared to learning from scratch. Supplementary materials are accessible via our project page: https://jkx-yy.github.io/
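The "task-specific reward components" idea can be sketched as composing named reward primitives under per-task weights — the kind of specification an LLM could emit for an unseen command. The component set (`reach`, `smooth`) and the weights below are hypothetical illustrations, not the paper's actual components.

```python
import numpy as np

# Hypothetical task-specific reward components (the paper's actual set is not given).
def reach_reward(state, action, goal):
    return -float(np.linalg.norm(state - goal))      # closer to the goal is better

def smooth_reward(state, action, goal):
    return -float(np.sum(action ** 2))               # penalize large/jerky actions

COMPONENTS = {"reach": reach_reward, "smooth": smooth_reward}

def compose_reward(weights):
    """Build one reward callable from weighted components, mimicking a
    generated reward spec such as {'reach': 1.0, 'smooth': 0.1}."""
    def reward(state, action, goal):
        return sum(w * COMPONENTS[name](state, action, goal)
                   for name, w in weights.items())
    return reward

r = compose_reward({"reach": 1.0, "smooth": 0.1})
far = r(np.zeros(3), np.zeros(2), np.array([1.0, 0.0, 0.0]))                 # -1.0
near = r(np.array([0.9, 0.0, 0.0]), np.zeros(2), np.array([1.0, 0.0, 0.0]))  # ~-0.1
```

The point of the decomposition is that a new task only requires a new weight dictionary (and possibly new primitives), not a hand-written reward function.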

Artificial Life and Robotics, 30(4): 567–576.
Citations: 0
Vehicle motion planning for ride comfort without passenger feedback using linear MPC
IF 0.8 Q4 ROBOTICS Pub Date : 2025-07-08 DOI: 10.1007/s10015-025-01040-2
Takumi Todaka, Kaito Sato, Kenji Sawada, Katsuhiko Sando

Ride comfort is an emerging focus in autonomous driving, yet integrating passenger behavior into vehicle motion planning remains challenging due to the impracticality of real-time passenger state feedback and high computational demands. This paper proposes a solution that approximates Ideal Motion Planning (IMP)—which traditionally requires nonlinear model predictive control (NMPC) and passenger state feedback—using linear model predictive control (LMPC) with weight tuning via Bayesian optimization. Our method eliminates the need for passenger state feedback and reduces the computation time, allowing real-time implementation. Although we simplify the cost function to prioritize tracking and stability, we achieve vehicle motion that maintains ride comfort equivalent to IMP, as demonstrated in simulations. This approach offers a practical pathway to improve passenger comfort in autonomous vehicles without additional sensory input.
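As a minimal illustration of the LMPC side (without the paper's comfort costs or Bayesian weight tuning), the sketch below solves an unconstrained finite-horizon tracking problem for a double-integrator vehicle model by batch least squares and applies the first input in receding-horizon fashion. The model, horizon, and weights `q`, `r` are assumptions.

```python
import numpy as np

def lmpc_step(A, B, x0, x_ref, N=20, q=1.0, r=0.01):
    """One unconstrained linear-MPC step: stack N-step predictions,
    solve the least-squares tracking problem, return the first input."""
    n, m = B.shape
    # Prediction model: x_{k+1} = A^{k+1} x0 + sum_{j<=k} A^{k-j} B u_j
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for k in range(N):
        for j in range(k + 1):
            G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    # min_u  q * ||F x0 + G u - ref||^2  +  r * ||u||^2
    H = np.vstack([np.sqrt(q) * G, np.sqrt(r) * np.eye(N * m)])
    y = np.concatenate([np.sqrt(q) * (np.tile(x_ref, N) - F @ x0), np.zeros(N * m)])
    u = np.linalg.lstsq(H, y, rcond=None)[0]
    return u[:m]                      # receding horizon: apply only the first input

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])      # position/velocity double integrator
B = np.array([[0.5 * dt**2], [dt]])
x = np.array([0.0, 0.0])
for _ in range(100):                        # track position 1.0, velocity 0.0
    x = A @ x + B @ lmpc_step(A, B, x, np.array([1.0, 0.0]))
```

Because the problem stays linear-quadratic, each step is a single least-squares solve — which is what makes LMPC fast enough for real-time use compared with iterative NMPC; ride-comfort terms would enter as extra weighted rows in `H` and `y`.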

Artificial Life and Robotics, 30(4): 584–593. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10015-025-01040-2.pdf
Citations: 0
Development of a MaskNet-based posture estimation method for robot vision systems
IF 0.8 Q4 ROBOTICS Pub Date : 2025-07-06 DOI: 10.1007/s10015-025-01041-1
Yu Iwai, Soma Fumoto, Masato Kitamura, Takeshi Nishida

Real-time posture estimation based on incomplete three-dimensional (3D) measurements is crucial for vision systems used in industrial robots. Conventional systems rely on manual pre-registering of multiple partial point cloud models for each workpiece and often fail when the 3D sensor is repositioned or its viewpoint changes. To overcome this bottleneck, we extend a fast MaskNet + singular-value-decomposition framework. However, its time-series estimates still fluctuate owing to sensor and inference noise. To improve accuracy under realistic conditions, MaskNet was retrained on a large ray-casting-augmented CAD dataset that simulates random sensor viewpoints, and a Kalman filter was introduced to suppress temporal noise. The new training enhances mask-vector accuracy, and the Kalman filter suppresses temporal fluctuations. Experiments confirm that the proposed method operates in real time on standard hardware, requires no pre-registration after sensor movement, and can be seamlessly incorporated into a robot vision system for reliable target-picking tasks.
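The temporal-noise suppression step can be illustrated with a scalar random-walk Kalman filter applied to one pose component (say, a noisy yaw estimate from the network). The noise variances `q` and `r` below are illustrative assumptions, not tuned values from the paper.

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=1e-2):
    """Causal scalar Kalman filter with a random-walk state model:
    q = process variance (how fast the true pose may drift),
    r = measurement variance (sensor + inference noise)."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    out[0] = x
    for k in range(1, len(z)):
        p += q                       # predict: uncertainty grows a little
        g = p / (p + r)              # Kalman gain
        x += g * (z[k] - x)          # update with measurement z[k]
        p *= 1 - g
        out[k] = x
    return out

rng = np.random.default_rng(1)
true = np.full(200, 0.3)                          # constant yaw angle (rad)
noisy = true + 0.05 * rng.standard_normal(200)    # fluctuating per-frame estimates
smooth = kalman_smooth(noisy)
```

Because the filter is causal (it never looks ahead), it can run frame-by-frame inside a real-time vision loop; the full 6-DoF case uses the same predict/update structure on a vector state.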

Artificial Life and Robotics, 30(4): 594–602.
Citations: 0
Detection of anomalous individuals in swarm using explainable AI
IF 0.8 Q4 ROBOTICS Pub Date : 2025-07-03 DOI: 10.1007/s10015-025-01038-w
Hiroshi Sato, Yohei Fukuyama, Masao Kubo, Haruta Isobe, Tomoyashu Deguchi

Effective swarm management requires detecting individuals that negatively impact overall performance. This paper proposes a method using Explainable AI (XAI) to both detect swarm-level anomalies and identify the causative agents. We train a convolutional neural network (CNN) to classify swarms as normal or abnormal, then use Grad-CAM (an XAI technique) to pinpoint the responsible individuals. We simulate swarms using the Boid model, introducing “anomaly agents” with slightly altered parameters for Alignment, Cohesion, and Separation. A novel model, adding a Lambda Layer to VGG16, is proposed and compared with standard CNNs (VGG16, ResNet50, DenseNet121, EfficientNetB0). The Lambda Layer model achieved the highest accuracy in both anomaly detection and agent identification. Experimental results show high accuracy in identifying agents with altered Alignment and Separation parameters. However, identifying agents with altered Cohesion is more challenging due to their proximity to normal agents, leading to increased misidentifications. The results demonstrate the effectiveness of combining CNNs and XAI for anomaly detection and root cause analysis in swarms.
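The Grad-CAM side needs a trained CNN, but the simulation side is compact: below is a toy Boid update with per-agent Alignment/Cohesion/Separation weights, so an "anomaly agent" can be created by perturbing one row of a weight vector. The weight values, radii, and update scheme are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def boid_step(pos, vel, w_align, w_cohere, w_separate, radius=2.0):
    """One synchronous update of a toy Boid model with per-agent weights,
    so individual 'anomaly agents' can be given altered parameters."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nb = (d > 0) & (d < radius)                 # neighbours of agent i
        if not nb.any():
            continue
        new_vel[i] += w_align[i] * (vel[nb].mean(axis=0) - vel[i])    # Alignment
        new_vel[i] += w_cohere[i] * (pos[nb].mean(axis=0) - pos[i])   # Cohesion
        close = nb & (d < 0.3)
        if close.any():                                               # Separation
            new_vel[i] += w_separate[i] * (pos[i] - pos[close].mean(axis=0))
    return pos + new_vel, new_vel

rng = np.random.default_rng(2)
n = 20
pos = rng.uniform(0.0, 2.0, (n, 2))
vel = rng.normal(0.0, 0.1, (n, 2))
w_align = np.full(n, 0.05)     # e.g. set w_align[0] = 0.0 for an anomaly agent
w_cohere = np.zeros(n)
w_separate = np.zeros(n)
for _ in range(100):
    pos, vel = boid_step(pos, vel, w_align, w_cohere, w_separate, radius=1e9)
# With alignment only and a global neighbourhood, headings converge.
```

Trajectory snapshots from such runs, rendered as images, are the kind of input a swarm-level classifier could consume, with Grad-CAM then highlighting the spatial region around the perturbed agent.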

Artificial Life and Robotics, 30(4): 577–583.
Citations: 0
Synergistic development model of population growth and infrastructure networks based on the slime mold network
IF 0.8 Q4 ROBOTICS Pub Date : 2025-06-20 DOI: 10.1007/s10015-025-01035-z
Megumi Uza, Airi Kinjo, Itsuki Kunita

Developing efficient transportation infrastructure networks capable of accommodating increases in population and demand is essential in urban planning. The conventional approaches to urban planning involve simulations using mathematical models that incorporate temporal changes. The current models are often based on static factors like existing land and road networks. However, land use and road networks need to be adapted to environmental and systemic changes to better capture urban dynamics. In this study, we aimed to address this by proposing a novel synergistic development model of population growth and infrastructure networks inspired by the adaptive network formation of slime mold Physarum polycephalum. The proposed model builds on the Physarum solver by incorporating two dynamic processes: adding new source points and deleting sink points with low flow. Adding source points simulates population growth and increases infrastructure demand, whereas deleting sink points enhances network efficiency by removing redundant paths. The numerical simulations were conducted under various conditions to evaluate the effect of these processes on network formation. The results indicate that deleting sink points accelerates the convergence of the network by eliminating unnecessary paths. However, an increased flow can result in higher energy loss if the number of paths is insufficient. These findings indicate that adaptive feedback mechanisms, inspired by biological systems, play a crucial role in optimizing infrastructure networks in response to population growth, offering insights for flexible urban development strategies.
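The base Physarum solver the model builds on fits in a few lines: solve Kirchhoff's equations for node pressures on a tube network, then let each tube's conductivity relax toward the flow it carries (here with the simplest reinforcement, dD/dt = |Q| − D). The toy graph and step sizes are assumptions, and the paper's source-addition and sink-deletion rules are not included.

```python
import numpy as np

def physarum_solver(edges, length, n_nodes, src, snk, steps=200, dt=0.1):
    """Tero-style Physarum solver: unit flow from src to snk; conductivities
    follow dD/dt = |Q| - D, reinforcing used tubes and decaying unused ones."""
    D = np.ones(len(edges))
    for _ in range(steps):
        L = np.zeros((n_nodes, n_nodes))           # weighted graph Laplacian
        for k, (i, j) in enumerate(edges):
            w = D[k] / length[k]
            L[i, i] += w; L[j, j] += w
            L[i, j] -= w; L[j, i] -= w
        b = np.zeros(n_nodes)
        b[src] = 1.0                               # unit inflow at the source
        L[snk, :] = 0.0; L[snk, snk] = 1.0         # ground the sink: p[snk] = 0
        p = np.linalg.solve(L, b)
        Q = np.array([D[k] / length[k] * (p[i] - p[j])
                      for k, (i, j) in enumerate(edges)])
        D += dt * (np.abs(Q) - D)                  # adaptive tube dynamics
    return D

# Two routes from node 0 to node 2: a direct edge vs. a two-edge detour via node 1.
edges = [(0, 2), (0, 1), (1, 2)]
length = [1.0, 1.0, 1.0]
D = physarum_solver(edges, length, n_nodes=3, src=0, snk=2)
# The direct route is reinforced; the detour's conductivity decays toward zero.
```

The paper's extensions plug into this loop: adding source points means injecting flow at new rows of `b` (population growth), while deleting low-flow sinks prunes rows whose edges carry negligible `Q`.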

Megumi Uza, Airi Kinjo, Itsuki Kunita, "Synergistic development model of population growth and infrastructure networks based on the slime mold network," Artificial Life and Robotics, 30(3), 523–533 (2025). DOI: 10.1007/s10015-025-01035-z
Citations: 0
Target specific multi-image 3D scrambling algorithm for security cameras
IF 0.8 Q4 ROBOTICS Pub Date: 2025-06-16 DOI: 10.1007/s10015-025-01033-1
Abhijeet Ravankar, Arpit Rawankar, Ankit A. Ravankar

With the proliferation of security cameras, protecting image content is a major challenge. Image scrambling is increasingly used for content protection because it does not degrade image quality. However, security cameras pose the challenges of real-time implementation and target-specific content protection. To this end, this paper presents a target-specific, linear transform-based multi-image scrambling algorithm. The algorithm can scramble images in both 2D and 3D; scrambling in 3D enables inter-image pixel scrambling, which deters brute-force attacks. The algorithm can be implemented with MMA (matrix–matrix multiply add) operations for parallel computing, and a faster variant is proposed for serial computation. Both square and rectangular images can be scrambled, and targeted areas of an image can be scrambled in real time alongside the complete image. Scrambling quality is evaluated using the PSNR (peak signal-to-noise ratio) metric. Experimental results with actual security cameras featuring motion detection show that the proposed algorithm can be used in real time with high pixel irregularity for content protection.
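The two ingredients in the abstract — a linear transform over pixel coordinates and PSNR as the scrambling-quality measure — can be sketched together. This is a hedged illustration only: the paper's actual transform, its 3D inter-image variant, and its MMA formulation are not detailed in the abstract, so an Arnold-cat-style modular linear map stands in here:

```python
import numpy as np

def scramble_2d(img, a=1, b=1, iters=1):
    """Scramble pixel positions with a modular linear transform:
    (x, y) -> (x + a*y, b*x + (a*b + 1)*y) mod n.
    The matrix [[1, a], [b, a*b + 1]] has determinant 1, so the map is
    invertible mod n and the scrambling is lossless (a pure permutation)."""
    n = img.shape[0]                          # assumes a square image
    out = img
    for _ in range(iters):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nx = (x + a * y) % n
                ny = (b * x + (a * b + 1) * y) % n
                nxt[nx, ny] = out[x, y]
        out = nxt
    return out

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; lower means stronger scrambling."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
scr = scramble_2d(img, iters=3)
quality = psnr(img, scr)                      # low PSNR = well scrambled
```

A 3D extension in the paper's spirit would let the transform also mix the image index across a stack of frames, so that pixels migrate between images rather than only within one.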

Abhijeet Ravankar, Arpit Rawankar, Ankit A. Ravankar, "Target specific multi-image 3D scrambling algorithm for security cameras," Artificial Life and Robotics, 30(3), 372–382 (2025). DOI: 10.1007/s10015-025-01033-1
Citations: 0
Robots reading recipes: large language models as translators between humans and machines
IF 0.8 Q4 ROBOTICS Pub Date: 2025-06-13 DOI: 10.1007/s10015-025-01031-3
Oliver Wang, Grant Cheng, Luc Caspar, Akira Yokota, Mahdi Khosravy, Olaf Witkowski

Large Language Models (LLMs) are machine learning models trained on vast amounts of natural language that have demonstrated novel capabilities in tasks such as text prediction and generation. These capabilities make LLMs remarkably well suited to understanding the semantics of natural language, which in turn enables applications such as planning real-world tasks, writing code for computers, and translating between human languages. Even though LLMs can interpret user requests flexibly and have been shown to possess some commonsense knowledge, their capability to translate natural-language instructions into code that controls robot actions is only beginning to be explored. More specifically, in this paper we are interested in controlling robots tasked with preparing cocktails. Within this context, the LLM is assumed to have access to a repository of well-formatted recipes, each written according to the following layout: a list of ingredients, followed by a description of how to prepare and mix the various items. Moreover, a set of low-level modules responsible for robot manipulation and vision-related tasks is provided to the LLM in the form of an application programming interface (API). Consequently, the main task of the LLM is to generate a sequence of API calls, with the right parameters, to produce the cocktail requested by users in natural language. Here, we show that it is feasible for LLMs to perform this type of translation with a small number of custom modules, and that certain techniques measurably improve the accuracy and consistency of this task without fine-tuning. In particular, we found that an ensemble-voting strategy, in which multiple trials are repeated and the most common answer is selected, increases accuracy to a certain extent. In addition, there is moderate support for using natural language parsing to adjust the LLM's prompt prior to translation. Lastly, building on previous knowledge, we provide a set of guidelines for designing prompts that improve the accuracy of the resulting sequence of actions. In general, these results suggest that while LLMs can be used as translators of robot instructions, they are best applied in conjunction with these other strategies. These findings could influence future robotics development, as they provide directions for implementing LLMs more effectively and for broadening access to robotic control for users without an extensive software background.
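The ensemble-voting strategy described above can be sketched in a few lines: sample the model several times and keep the most common answer. The `generate` callable and the `pour(...)` API call below are hypothetical stand-ins — the paper's actual LLM interface and robot API are not specified in the abstract:

```python
from collections import Counter

def ensemble_vote(generate, prompt, trials=5):
    """Majority vote over repeated LLM calls.

    `generate` stands in for whatever LLM API is used; repeating the
    sampling and keeping the most common answer smooths out occasional
    faulty generations without any fine-tuning."""
    answers = [generate(prompt) for _ in range(trials)]
    best, _count = Counter(answers).most_common(1)[0]
    return best

# Toy stand-in model: returns the right API call 3 times out of 5.
canned = iter([
    'pour("gin", 50)', 'pour("gin", 50)', 'pour("rum", 50)',
    'pour("gin", 50)', 'pour("vodka", 50)',
])
result = ensemble_vote(lambda prompt: next(canned), "Make a martini", trials=5)
print(result)  # prints: pour("gin", 50)
```

Voting over whole generated call sequences assumes answers can be compared for exact equality; in practice one would normalize whitespace or parse the calls before counting.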

Oliver Wang, Grant Cheng, Luc Caspar, Akira Yokota, Mahdi Khosravy, Olaf Witkowski, "Robots reading recipes: large language models as translators between humans and machines," Artificial Life and Robotics, 30(3), 407–416 (2025). DOI: 10.1007/s10015-025-01031-3 (open access)
Citations: 0