
IEEE Transactions on Cognitive and Developmental Systems: Latest Publications

Attention Mechanism and Out-of-Distribution Data on Cross Language Image Matching for Weakly Supervised Semantic Segmentation
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-02 | DOI: 10.1109/TCDS.2024.3382914
Chi-Chia Sun;Jing-Ming Guo;Chen-Hung Chung;Bo-Yu Chen
Fully supervised semantic segmentation requires detailed annotation of every pixel, which is time-consuming and laborious. To address this problem, this article performs the semantic segmentation task using only image-level categorical annotation. Existing methods that use image-level annotation usually rely on class activation maps (CAMs) to locate the target object as a first step: by training a classifier, the presence of objects in the image can be detected effectively. However, CAMs suffer from two problems: 1) they focus excessively on specific regions of an object, capturing only the most prominent and critical areas; and 2) frequently occurring background regions are easily misinterpreted, so foreground and background become confused. This article introduces cross language image matching based on out-of-distribution data and a convolutional block attention module (CLODA). It adopts the double-branch concept of the cross language image matching framework and adds a convolutional block attention module to the attention branch to counter the excessive focus on objects in the CAMs. Importing out-of-distribution data into the out-of-distribution branch helps the classification network reduce its misinterpretation of attended regions, and cross pseudosupervision between the two branches optimizes the regions of interest learned by the attention branch. Experimental results show that the pseudomasks generated by the proposed network achieve 75.3% mean Intersection over Union (mIoU) on the pattern analysis, statistical modeling and computational learning visual object classes (PASCAL VOC) 2012 training set, and the segmentation network trained with these pseudomasks reaches up to 72.3% and 72.1% mIoU on the PASCAL VOC 2012 validation and test sets, respectively.
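The convolutional block attention the abstract adds to its attention branch can be pictured with a short sketch. The PyTorch snippet below is not the authors' CLODA implementation; it is a minimal CBAM-style block (channel attention followed by spatial attention) of the kind that could refine features feeding a CAM head, with all shapes and the reduction ratio chosen purely for illustration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Pool spatial dims, weight channels via a shared MLP (CBAM-style)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """Weight spatial locations from channel-pooled maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied multiplicatively."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)

if __name__ == "__main__":
    feats = torch.randn(2, 256, 28, 28)   # stand-in backbone features
    refined = CBAM(256)(feats)
    print(refined.shape)                  # torch.Size([2, 256, 28, 28])
```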
Citations: 0
DatUS: Data-Driven Unsupervised Semantic Segmentation With Pretrained Self-Supervised Vision Transformer
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-02 | DOI: 10.1109/TCDS.2024.3383952
Sonal Kumar;Arijit Sur;Rashmi Dutta Baruah
Successive self-supervised training schemes (STSs) continue to emerge, each taking a step closer to a universal foundation model. In this process, unsupervised downstream tasks are recognized as one of the evaluation methods to validate the quality of visual features learned with self-supervised training. However, unsupervised dense semantic segmentation has yet to be explored as a downstream task, even though it can utilize and evaluate the quality of the semantic information introduced into patch-level feature representations during self-supervised training of vision transformers. Therefore, we propose a novel data-driven framework, DatUS, to perform unsupervised dense semantic segmentation (DSS) as a downstream task. DatUS generates semantically consistent pseudosegmentation masks for an unlabeled image dataset without using visual priors or synchronized data. Experiments show that the proposed framework achieves the highest MIoU (24.90) and average F1 score (36.3) when choosing DINOv2 as the STS, and the highest pixel accuracy (62.18) when choosing DINO, on the training set of the SUIM dataset. It also outperforms state-of-the-art methods for the unsupervised DSS task with 15.02% MIoU, 21.47% pixel accuracy, and 16.06% average F1 score on the validation set of the SUIM dataset, and it achieves a competitive level of accuracy on the large-scale COCO dataset.
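As a rough illustration of the unsupervised pseudo-mask idea (not the DatUS pipeline itself), the sketch below clusters per-patch ViT embeddings with k-means and paints each patch with its cluster label. The random features, grid size, cluster count, and patch stride are placeholders standing in for real self-supervised patch tokens such as DINO/DINOv2 outputs.

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_mask_from_patch_features(patch_feats, grid_hw, n_segments=8, upscale=16):
    """Cluster per-patch embeddings and paint each patch with its cluster id.

    patch_feats : (n_patches, dim) array, e.g., ViT patch tokens.
    grid_hw     : (h, w) patch grid, with h * w == n_patches.
    upscale     : patch stride, used to blow the mask up to pixel resolution.
    """
    labels = KMeans(n_clusters=n_segments, n_init=10, random_state=0).fit_predict(patch_feats)
    mask = labels.reshape(grid_hw)                                       # one label per patch
    return np.kron(mask, np.ones((upscale, upscale), dtype=mask.dtype))  # nearest-neighbour upsample

if __name__ == "__main__":
    h, w, dim = 14, 14, 384                  # 224x224 image, 16x16 patches, ViT-S width (assumed)
    feats = np.random.randn(h * w, dim)      # stand-in for real self-supervised patch tokens
    mask = pseudo_mask_from_patch_features(feats, (h, w))
    print(mask.shape)                        # (224, 224) pseudo-segmentation mask
```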
Citations: 0
Deep-Reinforcement-Learning-Based Driving Policy at Intersections Utilizing Lane Graph Networks
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-02 | DOI: 10.1109/TCDS.2024.3384269
Yuqi Liu;Qichao Zhang;Yinfeng Gao;Dongbin Zhao
Learning an efficient and safe driving strategy in a traffic-heavy intersection scenario and generalizing it to different intersections remains a challenging task for autonomous driving. This is because the road structure differs from one intersection to another, so autonomous vehicles need to generalize the strategies they have learned in the training environments. This requires the autonomous vehicle to effectively capture not only the interactions between agents but also the relationships between agents and the map. To address this challenge, we present a technique that integrates the information of high-definition (HD) maps and traffic participants into vector representations, called lane graph vectorization (LGV). To construct a driving policy for intersection navigation, we incorporate LGV into the twin-delayed deep deterministic policy gradient (TD3) algorithm with prioritized experience replay (PER). To train and validate the proposed algorithm, we construct a gym environment for intersection navigation within the high-fidelity CARLA simulator, integrating dense interactive traffic flow and various generalization-test intersection scenarios. Experimental results demonstrate the effectiveness of LGV for intersection navigation tasks, and the proposed method outperforms the state of the art in our proposed scenarios.
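The idea of vectorizing HD-map geometry can be pictured with a toy encoding. The sketch below is not the paper's LGV; it is a simplified, VectorNet-style conversion of lane centreline polylines into per-segment feature vectors that a downstream graph or attention encoder could consume, with the feature layout and toy coordinates chosen only for illustration.

```python
import numpy as np

def vectorize_polyline(points, lane_id):
    """Turn an ordered lane polyline into per-segment feature vectors.

    Each segment is encoded as [x_start, y_start, x_end, y_end, lane_id],
    a simplified vector encoding of map geometry.
    """
    starts, ends = points[:-1], points[1:]
    ids = np.full((len(starts), 1), lane_id, dtype=float)
    return np.hstack([starts, ends, ids])

if __name__ == "__main__":
    # Two toy lane centrelines around an intersection (metres, map frame).
    lanes = {
        0: np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]]),
        1: np.array([[10.0, -5.0], [10.0, 0.0], [10.0, 5.0]]),
    }
    vectors = np.vstack([vectorize_polyline(pts, lid) for lid, pts in lanes.items()])
    print(vectors.shape)   # (4, 5): one row per lane segment
```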
Citations: 0
BitSNNs: Revisiting Energy-Efficient Spiking Neural Networks
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-01 | DOI: 10.1109/TCDS.2024.3383428
Yangfan Hu;Qian Zheng;Gang Pan
To address the energy bottleneck in deep neural networks (DNNs), the research community has developed binary neural networks (BNNs) and spiking neural networks (SNNs) from different perspectives. To combine the advantages of both BNNs and SNNs for better energy efficiency, this article proposes BitSNNs, which leverage binary weights, single-step inference, and activation sparsity. During the development of BitSNNs, we observed performance degradation in deep ResNets due to the gradient approximation error. To mitigate this issue, we delve into the learning process and propose the utilization of a hardtanh function before activation binarization. Additionally, this article investigates the critical role of activation sparsity in BitSNNs for energy efficiency, a topic often overlooked in the existing literature. Our study reveals strategies to strike a balance between accuracy and energy consumption during the training/testing stage, potentially benefiting applications in edge computing. Notably, our proposed method achieves state-of-the-art performance while significantly reducing energy consumption.
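Using a hardtanh function before activation binarization amounts to a straight-through estimator whose surrogate gradient is the derivative of hardtanh. The sketch below shows that mechanism in isolation under the assumption of a {-1, +1} sign activation; it illustrates the general technique, not the BitSNNs code.

```python
import torch

class BinaryActHardtanh(torch.autograd.Function):
    """Sign activation whose backward pass is the gradient of hardtanh.

    Forward:  y = sign(x) in {-1, +1}.
    Backward: dy/dx is approximated by 1 inside [-1, 1] and 0 outside,
    which is exactly the derivative of hardtanh(x) (straight-through estimator).
    """
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

if __name__ == "__main__":
    x = torch.randn(4, requires_grad=True)
    y = BinaryActHardtanh.apply(x)
    y.sum().backward()
    print(y)        # binary outputs in {-1, +1}
    print(x.grad)   # gradients masked to zero outside [-1, 1]
```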
Citations: 0
MAT: Morphological Adaptive Transformer for Universal Morphology Policy Learning
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-01 | DOI: 10.1109/TCDS.2024.3383158
Boyu Li;Haoran Li;Yuanheng Zhu;Dongbin Zhao
Agent-agnostic reinforcement learning aims to learn a universal control policy that can simultaneously control a set of robots with different morphologies. Recent studies have suggested that transformer models can handle the variations in state and action spaces caused by different morphologies, and that morphology information is necessary to improve policy performance. However, existing methods are limited in how they exploit morphological information and cannot guarantee that observations are integrated in a principled way. We propose the morphological adaptive transformer (MAT), a transformer-based universal control algorithm that can adapt to various morphologies without any modifications. MAT includes two essential components: functional position encoding (FPE) and a morphological attention mechanism (MAM). The FPE provides robust and consistent positional prior information for limb observations, avoiding limb confusion and implicitly obtaining functional descriptions of limbs. The MAM enhances the attribute prior information of limbs, improves the correlation between observations, and makes the policy attend to more limbs. We combine observations with prior information to help the policy adapt to robot morphology, thereby optimizing its performance on unknown morphologies. Experiments on agent-agnostic tasks in the Gym MuJoCo environment demonstrate that our algorithm assigns more reasonable morphological prior information to each limb, and its performance is comparable to that of the prior state-of-the-art algorithm while generalizing better.
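The core idea of a morphology-agnostic policy, one observation token per limb fed through a shared transformer with a padding mask so a single network handles robots with different limb counts, can be sketched as follows. The FPE and MAM components are not reproduced here, and the dimensions, layer counts, and tanh action head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LimbTransformerPolicy(nn.Module):
    """One shared policy over a variable number of limb observation tokens."""
    def __init__(self, obs_dim=16, d_model=64, act_dim=1, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, act_dim)   # one action (e.g., joint torque) per limb

    def forward(self, limb_obs, pad_mask):
        # limb_obs: (batch, max_limbs, obs_dim); pad_mask: True where the slot is padding.
        h = self.encoder(self.embed(limb_obs), src_key_padding_mask=pad_mask)
        return torch.tanh(self.head(h)).squeeze(-1)

if __name__ == "__main__":
    policy = LimbTransformerPolicy()
    obs = torch.randn(2, 6, 16)                    # two robots padded to 6 limb slots
    pad = torch.tensor([[False] * 4 + [True] * 2,  # robot A has 4 limbs
                        [False] * 6])              # robot B has 6 limbs
    print(policy(obs, pad).shape)                  # torch.Size([2, 6]) per-limb actions
```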
Citations: 0
Measuring Human Comfort in Human–Robot Collaboration via Wearable Sensing
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-29 | DOI: 10.1109/TCDS.2024.3383296
Yuchen Yan;Haotian Su;Yunyi Jia
The development of collaborative robots has enabled a safer and more efficient human–robot collaboration (HRC) manufacturing environment. Tremendous research effort has been devoted to improving user safety and robot working efficiency since the debut of collaborative robots. However, human comfort in HRC scenarios has not been thoroughly discussed, even though it is critically important to user acceptance of collaborative robots. Previous studies mostly use subjective ratings to evaluate how human comfort varies as a single robot factor changes, yet such methods are limited for evaluating comfort online. Other studies leverage wearable sensors to collect physiological signals for detecting human emotions, but few apply this to a human comfort model in HRC scenarios. In this study, we designed an online comfort model for HRC using wearable sensing data. The model uses physiological signals acquired from wearable sensing and calculates in-situ human comfort levels with our developed algorithms. We conducted experiments in realistic HRC tasks, and the prediction results demonstrated the effectiveness of the proposed approach in identifying human comfort levels in HRC.
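A minimal version of a wearable-sensing-to-comfort pipeline, windowed statistics from a physiological stream fed to a classifier, might look like the sketch below. The heart-rate stand-in, window length, simple features, three comfort levels, and the SVM choice are assumptions for illustration, not the authors' model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(signal, fs=32, win_s=10):
    """Split a 1-D physiological signal into fixed windows and compute simple statistics."""
    n = fs * win_s
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    return np.column_stack([windows.mean(1),                     # mean level
                            windows.std(1),                      # variability
                            windows.max(1) - windows.min(1)])    # range

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = rng.normal(75, 5, size=32 * 600)            # stand-in heart-rate stream (10 min at 32 Hz)
    X = window_features(hr)
    y = rng.integers(0, 3, size=len(X))              # stand-in comfort labels: 0=low, 1=mid, 2=high
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
    print(clf.predict(X[:5]))                        # per-window comfort predictions
```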
Citations: 0
Deep Neural Networks for Automatic Sleep Stage Classification and Consciousness Assessment in Patients With Disorder of Consciousness
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-26 | DOI: 10.1109/TCDS.2024.3382109
Jiahui Pan;Yangzuyi Yu;Jianhui Wu;Xinjie Zhou;Yanbin He;Yuanqing Li
Disorders of consciousness (DOC) are often associated with serious changes in sleep structure. This article presents a sleep evaluation algorithm that scores the sleep structure of DOC patients to assist in assessing their consciousness level. The sleep evaluation algorithm is divided into two parts: 1) an automatic sleep staging model: convolutional neural networks (CNNs) are employed to extract signal features from the electroencephalogram (EEG) and electrooculogram (EOG), and a bidirectional long short-term memory (Bi-LSTM) network with an attention mechanism is applied to learn sequential information; and 2) consciousness assessment: the automated sleep staging results are used to extract consciousness-related sleep features, which a support vector machine (SVM) classifier uses to assess consciousness. In this study, the CNN-BiLSTM model with an attention sleep network (CBASleepNet) was evaluated on the sleep-EDF and MASS datasets. The experimental results demonstrated the effectiveness of the proposed model, which outperformed similar models. Moreover, CBASleepNet was applied to sleep staging in DOC patients through transfer learning and fine-tuning. Consciousness assessments were conducted on seven minimally conscious state (MCS) patients and four vegetative state (VS)/unresponsive wakefulness syndrome (UWS) patients, achieving an overall accuracy of 81.8%. The sleep evaluation algorithm can thus be used to evaluate patients' consciousness level effectively.
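The staging backbone described above (per-epoch CNN feature extraction, a Bi-LSTM over the epoch sequence, and an attention mechanism) can be sketched compactly. The layer sizes, channel count, attention form, and 30-s/100-Hz epoch format below are assumptions, not CBASleepNet's actual configuration.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAttn(nn.Module):
    """Tiny CNN + BiLSTM + attention over a sequence of 30-s EEG/EOG epochs."""
    def __init__(self, in_ch=2, n_stages=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                     # per-epoch feature extractor
            nn.Conv1d(in_ch, 32, kernel_size=50, stride=6), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)          # attention weights over the epoch sequence
        self.out = nn.Linear(2 * hidden, n_stages)

    def forward(self, x):                             # x: (batch, seq, channels, samples)
        b, s, c, t = x.shape
        feats = self.cnn(x.view(b * s, c, t)).squeeze(-1).view(b, s, -1)
        h, _ = self.lstm(feats)                       # (b, s, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)        # attention over epochs
        ctx = (w * h).sum(dim=1, keepdim=True)        # sequence-level context vector
        return self.out(h + ctx)                      # per-epoch stage logits

if __name__ == "__main__":
    model = CNNBiLSTMAttn()
    eeg_eog = torch.randn(2, 20, 2, 3000)             # 20 epochs of 30 s at 100 Hz, 2 channels
    logits = model(eeg_eog)
    print(logits.shape)                               # torch.Size([2, 20, 5])
```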
Citations: 0
EventAugment: Learning Augmentation Policies From Asynchronous Event-Based Data
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-22 | DOI: 10.1109/TCDS.2024.3380907
Fuqiang Gu;Jiarui Dou;Mingyan Li;Xianlei Long;Songtao Guo;Chao Chen;Kai Liu;Xianlong Jiao;Ruiyuan Li
Data augmentation is an effective way to overcome the overfitting problem of deep learning models. However, most existing studies on data augmentation work on framelike data (e.g., images), and few tackle event-based data. Event-based data differ from framelike data, rendering the augmentation techniques designed for framelike data unsuitable. This work deals with data augmentation for event-based object classification and semantic segmentation, which is important for self-driving and robot manipulation. Specifically, we introduce EventAugment, a new method to augment asynchronous event-based data by automatically learning augmentation policies. We first identify 13 types of operations for augmenting event-based data. Next, we formulate the problem of finding optimal augmentation policies as a hyperparameter optimization problem. To tackle this problem, we propose a random-search-based framework. Finally, we evaluate the proposed method on six public datasets: N-Caltech101, N-Cars, ST-MNIST, N-MNIST, DVSGesture, and DDD17. Experimental results demonstrate that EventAugment yields substantial performance improvements for both deep neural network-based and spiking neural network-based models, with gains of up to approximately 4%. Notably, EventAugment outperforms state-of-the-art methods in terms of overall performance.
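The policy-search part, sampling candidate augmentation policies and keeping the one with the best validation score, reduces to a simple random-search loop. The operation names and the toy evaluator below are placeholders, not the paper's 13 event-data operations or its training setup.

```python
import random

# A toy operation set standing in for the event-data operations the paper defines.
OPS = ["time_jitter", "polarity_flip", "spatial_shift", "event_drop", "time_scale"]

def sample_policy(n_subpolicies=3):
    """A policy is a few (operation, probability, magnitude) triples."""
    return [(random.choice(OPS),
             round(random.uniform(0.1, 0.9), 2),
             round(random.uniform(0.1, 1.0), 2)) for _ in range(n_subpolicies)]

def random_search(eval_fn, n_trials=20, seed=0):
    """Keep the policy with the best score returned by eval_fn."""
    random.seed(seed)
    best_policy, best_score = None, float("-inf")
    for _ in range(n_trials):
        policy = sample_policy()
        score = eval_fn(policy)   # e.g., validation accuracy after training with this policy
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy, best_score

if __name__ == "__main__":
    # Stand-in evaluator that favours moderate magnitudes; a real one trains and evaluates a model.
    toy_eval = lambda policy: -sum((m - 0.5) ** 2 for _, _, m in policy)
    print(random_search(toy_eval, n_trials=50))
```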
Citations: 0
Emergence of Human Oculomotor Behavior in a Cable-Driven Biomimetic Robotic Eye Using Optimal Control
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-18 | DOI: 10.1109/TCDS.2024.3376072
Reza Javanmard Alitappeh;Akhil John;Bernardo Dias;A. John van Opstal;Alexandre Bernardino
This article explores the application of model-based optimal control principles to understanding stereotyped human oculomotor behaviors. Using a realistic model of the human eye with a six-muscle cable-driven actuation system, we tackle the novel challenges of controlling a system with six degrees of freedom. We apply nonlinear optimal control techniques to optimize the accuracy, energy, and duration of eye-movement trajectories. Employing a recurrent neural network to emulate the system dynamics, we focus on generating rapid, unconstrained saccadic eye movements. Remarkably, our model replicates the realistic 3-D rotational kinematics and dynamics observed in human saccades, with the six cables organizing themselves into appropriate antagonistic muscle pairs, resembling the primate oculomotor system.
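The optimal-control formulation, minimizing a cost that trades off terminal accuracy against control effort through a learned dynamics model, can be sketched with plain gradient descent on an open-loop control sequence. The MLP dynamics stand-in (the paper uses a recurrent network), the state/control dimensions, and the cost weights below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Stand-in learned dynamics: delta_state = f(state, control); the paper emulates dynamics with an RNN.
dynamics = nn.Sequential(nn.Linear(6 + 6, 64), nn.Tanh(), nn.Linear(64, 6))
for p in dynamics.parameters():
    p.requires_grad_(False)   # dynamics are frozen; only the controls are optimized

def optimize_controls(x0, target, horizon=20, iters=200, lr=0.05, w_err=1.0, w_energy=1e-3):
    """Minimise terminal error plus control energy over an open-loop control sequence."""
    u = torch.zeros(horizon, 6, requires_grad=True)       # six cable tensions per step (assumed)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(iters):
        x, cost = x0, 0.0
        for t in range(horizon):
            x = x + dynamics(torch.cat([x, u[t]]))        # roll the learned model forward
            cost = cost + w_energy * (u[t] ** 2).sum()    # energy term
        cost = cost + w_err * ((x - target) ** 2).sum()   # terminal accuracy term
        opt.zero_grad()
        cost.backward()
        opt.step()
    return u.detach()

if __name__ == "__main__":
    x0 = torch.zeros(6)                                   # toy eye state (orientation + velocity)
    target = torch.tensor([0.2, -0.1, 0.05, 0.0, 0.0, 0.0])
    controls = optimize_controls(x0, target)
    print(controls.shape)                                 # torch.Size([20, 6])
```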
Citations: 0
The Inadequacy of Reinforcement Learning From Human Feedback—Radicalizing Large Language Models via Semantic Vulnerabilities
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-18 | DOI: 10.1109/TCDS.2024.3377445
Timothy R. McIntosh;Teo Susnjak;Tong Liu;Paul Watters;Malka N. Halgamuge
This study is an empirical investigation into the semantic vulnerabilities of four popular pretrained commercial large language models (LLMs) to ideological manipulation. Using tactics reminiscent of human semantic conditioning in psychology, we induced and assessed ideological misalignments, and their retention, in four commercial pretrained LLMs in response to 30 controversial questions spanning a broad ideological and social spectrum, encompassing both extreme left- and right-wing viewpoints. Such semantic vulnerabilities arise from fundamental limitations in LLMs' capability to comprehend detailed linguistic variations, making them susceptible to ideological manipulation through targeted semantic exploits. We observed the effect of reinforcement learning from human feedback (RLHF) on the LLMs' initial answers, but highlight the limitations of RLHF in two respects: 1) it cannot fully mitigate the impact of ideological conditioning prompts, only partially alleviating LLM semantic vulnerabilities; and 2) it is inadequate at representing a diverse set of "human values," often reflecting the predefined values of certain groups controlling the LLMs. Our findings provide empirical evidence of semantic vulnerabilities inherent in current LLMs, challenge both the robustness and the adequacy of RLHF as a mainstream method for aligning LLMs with human values, and underscore the need for a multidisciplinary approach in developing ethical and resilient artificial intelligence (AI).
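The measurement protocol, scoring answers to a fixed question set before and after an ideological conditioning prefix, can be expressed as a small evaluation loop. In the sketch below, query_llm and stance_score are hypothetical placeholders for a model wrapper and a stance rater (neither is the paper's instrumentation), and the toy stand-ins exist only so the sketch runs without model access.

```python
def ideological_drift(query_llm, stance_score, questions, conditioning_prefix):
    """Mean shift in stance scores after prepending a conditioning prefix.

    query_llm(prompt) -> response text       (hypothetical model wrapper)
    stance_score(text) -> float in [-1, 1]   (hypothetical rater: -1 far left, +1 far right)
    """
    baseline = [stance_score(query_llm(q)) for q in questions]
    conditioned = [stance_score(query_llm(conditioning_prefix + q)) for q in questions]
    return sum(c - b for b, c in zip(baseline, conditioned)) / len(questions)

if __name__ == "__main__":
    # Toy stand-ins: the "model" agrees whenever the conditioning prefix is present.
    query = lambda prompt: "agree" if prompt.startswith("[COND]") else "neutral"
    score = lambda text: {"neutral": 0.0, "agree": 0.5}[text]
    questions = ["Question %d?" % i for i in range(1, 31)]
    print(ideological_drift(query, score, questions, "[COND] "))   # mean drift of 0.5
```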
Citations: 0