
IEEE Transactions on Cognitive and Developmental Systems: Latest Publications

IEEE Transactions on Cognitive and Developmental Systems Information for Authors
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 Computer Science | Pub Date: 2024-02-02 | DOI: 10.1109/TCDS.2024.3352775
{"title":"IEEE Transactions on Cognitive and Developmental Systems Information for Authors","authors":"","doi":"10.1109/TCDS.2024.3352775","DOIUrl":"https://doi.org/10.1109/TCDS.2024.3352775","url":null,"abstract":"","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419135","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Electroencephalography-Based Brain–Computer Interface for Emotion Regulation With Virtual Reality Neurofeedback
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-01-29 | DOI: 10.1109/TCDS.2024.3357547
Kendi Li;Weichen Huang;Wei Gao;Zijing Guan;Qiyun Huang;Jin-Gang Yu;Zhu Liang Yu;Yuanqing Li
An increasing number of people fail to properly regulate their emotions for various reasons. Although brain–computer interfaces (BCIs) have shown potential in neural regulation, few effective BCI systems have been developed to assist users in emotion regulation. In this article, we propose an electroencephalography (EEG)-based BCI for emotion regulation with virtual reality (VR) neurofeedback. Specifically, music clips with positive, neutral, and negative emotions were first presented, based on which the participants were asked to regulate their emotions. The BCI system simultaneously collected the participants’ EEG signals and then assessed their emotions. Furthermore, based on the emotion recognition results, the neurofeedback was provided to participants in the form of a facial expression of a virtual pop star on a three-dimensional (3-D) virtual stage. Eighteen healthy participants achieved satisfactory performance with an average accuracy of 81.1% with neurofeedback. Additionally, the average accuracy increased significantly from 65.4% at the start to 87.6% at the end of a regulation trial (a trial corresponded to a music clip). In comparison, these participants could not significantly improve the accuracy within a regulation trial without neurofeedback. The results demonstrated the effectiveness of our system and showed that VR neurofeedback played a key role during emotion regulation.
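To make the closed-loop idea concrete, the sketch below shows one plausible neurofeedback step: extract band-power features from an EEG window, classify the emotional state, and map the prediction to an avatar expression. The feature choice, sampling rate, classifier, and function names are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch of a closed-loop emotion-neurofeedback step, assuming
# band-power EEG features and a pretrained classifier. Names are illustrative.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(eeg_window, fs=250):
    """eeg_window: (n_channels, n_samples) -> flat log band-power feature vector."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(psd[:, mask].mean(axis=-1) + 1e-12))
    return np.concatenate(feats)

def neurofeedback_step(eeg_window, clf, fs=250):
    """Classify the current window and map the result to an avatar expression."""
    x = band_power_features(eeg_window, fs)[None, :]
    label = clf.predict(x)[0]  # e.g., 0 / 1 / 2 from a pretrained classifier
    return {0: "negative", 1: "neutral", 2: "positive"}[label]
```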
Citations: 0
Depression Detection Using an Automatic Sleep Staging Method With an Interpretable Channel-Temporal Attention Mechanism
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-01-26 | DOI: 10.1109/TCDS.2024.3358022
Jiahui Pan;Jie Liu;Jianhao Zhang;Xueli Li;Dongming Quan;Yuanqing Li
Despite previous efforts in depression detection studies, there is a scarcity of research on automatic depression detection using sleep structure, and several challenges remain: 1) how to apply sleep staging to detect depression and distinguish easily misjudged classes; and 2) how to adaptively capture attentive channel-dimensional information to enhance the interpretability of sleep staging methods. To address these challenges, an automatic sleep staging method based on a channel-temporal attention mechanism and a depression detection method based on sleep structure features are proposed. In sleep staging, a temporal attention mechanism is adopted to update the feature matrix, confidence scores are estimated for each sleep stage, the weight of each channel is adjusted based on these scores, and the final results are obtained through a temporal convolutional network. In depression detection, seven sleep structure features based on the results of sleep staging are extracted for depression detection between unipolar depressive disorder (UDD) patients, bipolar disorder (BD) patients, and healthy subjects. Experiments demonstrate the effectiveness of the proposed approaches, and the visualization of the channel attention mechanism illustrates the interpretability of our method. Additionally, this is the first attempt to employ sleep structure features to automatically detect UDD and BD in patients.
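As a rough illustration of the channel-weighting idea described above, the sketch below derives a confidence score for each EEG channel from its own stage prediction and uses those scores to weight the channels before fusion. It is a minimal PyTorch sketch under assumed tensor shapes, not the paper's actual channel-temporal attention module or temporal convolutional network.

```python
# Illustrative confidence-driven channel weighting for sleep staging,
# assuming per-channel features and per-channel stage logits already exist.
import torch
import torch.nn.functional as F

def channel_attention_fusion(channel_feats, channel_logits):
    """
    channel_feats:  (batch, n_channels, feat_dim)  per-channel features
    channel_logits: (batch, n_channels, n_stages)  per-channel stage scores
    Returns a fused (batch, feat_dim) representation weighted by how confident
    each channel is about its own stage prediction.
    """
    conf = F.softmax(channel_logits, dim=-1).max(dim=-1).values  # (batch, n_channels)
    weights = F.softmax(conf, dim=-1).unsqueeze(-1)              # (batch, n_channels, 1)
    return (weights * channel_feats).sum(dim=1)                  # (batch, feat_dim)
```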
Citations: 0
Husformer: A Multimodal Transformer for Multimodal Human State Recognition
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-01-23 | DOI: 10.1109/TCDS.2024.3357618
Ruiqi Wang;Wonse Jo;Dezhong Zhao;Weizheng Wang;Arjun Gupte;Baijian Yang;Guohua Chen;Byung-Cheol Min
Human state recognition is a critical topic with pervasive and important applications in human–machine systems. Multimodal fusion, which entails integrating metrics from various data sources, has proven to be a potent method for boosting recognition performance. Although recent multimodal-based models have shown promising results, they often fall short in fully leveraging sophisticated fusion strategies essential for modeling adequate cross-modal dependencies in the fusion representation. Instead, they rely on costly and inconsistent feature crafting and alignment. To address this limitation, we propose an end-to-end multimodal transformer framework for multimodal human state recognition called Husformer. Specifically, we propose using cross-modal transformers, which inspire one modality to reinforce itself through directly attending to latent relevance revealed in other modalities, to fuse different modalities while ensuring sufficient awareness of the cross-modal interactions introduced. Subsequently, we utilize a self-attention transformer to further prioritize contextual information in the fusion representation. Extensive experiments on two human emotion corpora (DEAP and WESAD) and two cognitive load datasets [multimodal dataset for objective cognitive workload assessment on simultaneous tasks (MOCAS) and CogLoad] demonstrate that in the recognition of the human state, our Husformer outperforms both state-of-the-art multimodal baselines and the use of a single modality by a large margin, especially when dealing with raw multimodal features. We also conducted an ablation study to show the benefits of each component in Husformer. Experimental details and source code are available at https://github.com/SMARTlab-Purdue/Husformer.
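The core fusion step can be pictured as one modality attending to another. The PyTorch sketch below shows a generic cross-modal attention block in that spirit; the dimensions, class name, and residual/normalization choices are assumptions, and the released code at the linked repository is the authoritative Husformer implementation.

```python
# Generic cross-modal attention block: modality A queries modality B so that
# A is reinforced by relevant content revealed in B. Sketch only.
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, src_a, src_b):
        # src_a, src_b: (batch, seq_len, dim) token sequences of two modalities.
        attended, _ = self.attn(query=src_a, key=src_b, value=src_b)
        return self.norm(src_a + attended)  # residual: A reinforced by B

# Usage sketch: fused = CrossModalBlock()(eeg_tokens, gsr_tokens)
```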
Citations: 0
PLOT: Human-Like Push-Grasping Synergy Learning in Clutter With One-Shot Target Recognition
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-01-22 | DOI: 10.1109/TCDS.2024.3357084
Xiaoge Cao;Tao Lu;Liming Zheng;Yinghao Cai;Shuo Wang
In unstructured environments, robotic grasping tasks frequently require interactively searching for and retrieving specific objects from a cluttered workspace when only partial information about the target is available, such as images, text descriptions, or 3-D models. It is a great challenge to correctly recognize the targets with such limited information and to learn synergies between different action primitives so that targets can be grasped efficiently from densely occluding objects. In this article, we propose a novel human-like push-grasping method that can grasp unknown objects in clutter using only one RGB with depth (RGB-D) image of the target, called push-grasping synergy learning in clutter with one-shot target recognition (PLOT). First, we propose a target recognition (TR) method that automatically segments objects from both the query image and the workspace image and extracts robust features for each segmented object. Through the designed feature matching criterion, the targets can be quickly located in the workspace. Second, we introduce a self-supervised target-oriented grasping system based on synergies between push and grasp actions. In this system, we propose a salient Q (SQ)-learning framework that focuses Q-value learning on the area containing the targets, and a coordination mechanism (CM) that selects the proper actions to search for and isolate the targets from surrounding objects, even when the targets are not visible. Our method is inspired by the working-memory mechanism of the human brain, can grasp any target object specified by an image, and generalizes well in application. Experimental results in simulation and the real world show that our method achieved the best performance compared with the baselines in finding unknown target objects in cluttered environments from only one demonstrated target RGB-D image, and that it grasped efficiently through the synergy of push and grasp actions.
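A minimal way to picture the one-shot matching step is cosine similarity between the demonstrated target's feature vector and the features of each segmented workspace object, as sketched below. The feature extractor, threshold, and function names are illustrative assumptions, not the paper's exact criterion.

```python
# Sketch of one-shot target matching by cosine similarity between the query
# object's feature and features of segmented workspace objects.
import numpy as np

def match_targets(query_feat, workspace_feats, threshold=0.8):
    """
    query_feat:      (d,)   feature of the single demonstrated target
    workspace_feats: (n, d) features of segmented objects in the workspace
    Returns indices of workspace objects judged to be the target, plus scores.
    """
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    w = workspace_feats / (np.linalg.norm(workspace_feats, axis=1, keepdims=True) + 1e-12)
    sims = w @ q                          # cosine similarities, shape (n,)
    return np.where(sims >= threshold)[0], sims
```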
Citations: 0
Kernel-Ridge-Regression-Based Randomized Network for Brain Age Classification and Estimation
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-01-18 | DOI: 10.1109/TCDS.2024.3349593
Raveendra Pilli;Tripti Goel;R. Murugan;M. Tanveer;P. N. Suganthan
Accelerated brain aging and abnormalities are associated with variations in brain patterns. Effective and reliable assessment methods are required to accurately classify and estimate brain age. In this study, a brain age classification and estimation framework is proposed using structural magnetic resonance imaging (sMRI) scans, a 3-D convolutional neural network (3-D-CNN), and a kernel ridge regression-based random vector functional link (KRR-RVFL) network. We used 480 brain MRI images from the publicly available IXI database and segmented them into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images to show age-related associations by region. Features from MRI images are extracted using 3-D-CNN and fed into the wavelet KRR-RVFL network for brain age classification and prediction. The proposed algorithm achieved high classification accuracy: 97.22%, 99.31%, and 95.83% for GM, WM, and CSF regions, respectively. Moreover, the proposed algorithm demonstrated excellent prediction accuracy with a mean absolute error (MAE) of 3.89 years, 3.64 years, and 4.49 years for GM, WM, and CSF regions, confirming that changes in WM volume are significantly associated with normal brain aging. Additionally, voxel-based morphometry (VBM) examines age-related anatomical alterations in different brain regions in GM, WM, and CSF tissue volumes.
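The readout described above can be sketched as a random vector functional link layer followed by kernel ridge regression on the enhanced features. The snippet below is a simplified illustration using a tanh activation and scikit-learn's KernelRidge; the wavelet activation, hyperparameters, and class name are assumptions, not the paper's exact KRR-RVFL formulation.

```python
# Simplified KRR-RVFL sketch: fixed random hidden layer (enhancement nodes),
# direct input links concatenated, RBF kernel ridge regression readout.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

class KRRRVFL:
    def __init__(self, n_hidden=256, seed=0, alpha=1.0, gamma=1e-3):
        self.rng = np.random.default_rng(seed)
        self.W = None          # random input-to-hidden weights, fixed after first use
        self.b = None
        self.n_hidden = n_hidden
        self.krr = KernelRidge(kernel="rbf", alpha=alpha, gamma=gamma)

    def _enhance(self, X):
        if self.W is None:
            self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
            self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # enhancement nodes
        return np.hstack([X, H])              # direct links + enhancement nodes

    def fit(self, X, y):
        self.krr.fit(self._enhance(X), y)
        return self

    def predict(self, X):
        return self.krr.predict(self._enhance(X))

# Usage sketch: model = KRRRVFL().fit(train_cnn_feats, train_ages)
#               predicted_ages = model.predict(test_cnn_feats)
```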
Citations: 0
The Effect of Expressive Robot Behavior on Users’ Mental Effort: A Pupillometry Study
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 Computer Science | Pub Date: 2024-01-15 | DOI: 10.1109/TCDS.2024.3352893
Marieke van Otterdijk;Bruno Laeng;Diana Saplacan Lindblom;Jim Torresen
Robots are becoming part of our social landscape. Their social interaction with humans must be efficient and intuitive to understand, and nonverbal cues help make interactions between humans and robots more efficient. This study measures mental effort to investigate which factors influence the intuitive understanding of expressive nonverbal robot motions. Fifty participants watched eighteen short video clips of three different robot types performing expressive behaviors while their pupil response and gaze were measured with an eye tracker. Our findings indicate that the appearance of the robot, the viewing angle, and the expression shown by the robot all influence cognitive load and therefore may influence the intuitive understanding of expressive robot behavior. Furthermore, we found differences in fixation time for different features of the different robots. With these insights, we identified possible directions for making interactions between humans and robots more efficient and intuitive.
Citations: 0
TR-TransGAN: Temporal Recurrent Transformer Generative Adversarial Network for Longitudinal MRI Dataset Expansion
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-01-08 | DOI: 10.1109/TCDS.2023.3345922
Chen-Chen Fan;Hongjun Yang;Liang Peng;Xiao-Hu Zhou;Shiqi Liu;Sheng Chen;Zeng-Guang Hou
Longitudinal magnetic resonance imaging (MRI) datasets are important for the study of degenerative diseases because they contain data from multiple points in time to track disease progression. However, longitudinal datasets are often incomplete because patients drop out unexpectedly. In previous work, we proposed an augmentation method, the temporal recurrent generative adversarial network (TR-GAN), that can complete missing session data in MRI datasets. TR-GAN uses a simple U-Net as a generator, which limits its performance. Transformers have had great success in computer vision research, and this article attempts to introduce them into longitudinal dataset completion tasks. The multihead attention mechanism in transformers has huge memory requirements, making it difficult to train on 3-D MRI data with graphics processing units (GPUs) that have limited memory. To build a memory-friendly transformer-based generator, we introduce a Hilbert transform module (HTM) that converts 3-D data to 2-D data while preserving locality fairly well. To compensate for the difficulty that convolutional neural network (CNN)-based models have in establishing long-range dependencies, we propose a Swin-transformer-based up/down sampling module (STU/STD) that combines a Swin transformer module and a CNN module to capture global and local information simultaneously. Extensive experiments show that our model reduces the mean squared error (MSE) by at least 7.16% compared to the previous state-of-the-art method.
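The locality-preserving flattening idea can be illustrated with a space-filling curve. The toy sketch below uses a Morton (Z-order) curve, a simpler stand-in for the Hilbert mapping the article describes, to serialize a 3-D volume so that nearby voxels tend to stay nearby in the sequence; the shapes and the curve choice are assumptions for illustration only.

```python
# Toy locality-preserving flattening of a 3-D volume via a Morton (Z-order)
# curve, used here as a simple stand-in for a Hilbert-curve mapping.
import numpy as np

def morton_index(x, y, z, bits=5):
    """Interleave the bits of (x, y, z) into a single locality-preserving index."""
    idx = 0
    for i in range(bits):
        idx |= ((x >> i) & 1) << (3 * i)
        idx |= ((y >> i) & 1) << (3 * i + 1)
        idx |= ((z >> i) & 1) << (3 * i + 2)
    return idx

def flatten_volume(vol):
    """vol: (D, H, W) array -> 1-D sequence ordered along the Z-order curve."""
    D, H, W = vol.shape
    bits = max(D, H, W).bit_length()
    coords = [(x, y, z) for z in range(D) for y in range(H) for x in range(W)]
    order = sorted(coords, key=lambda c: morton_index(*c, bits=bits))
    return np.array([vol[z, y, x] for (x, y, z) in order])

# The resulting sequence can be reshaped into a 2-D map for a 2-D generator
# and mapped back to the volume with the inverse ordering.
```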
Citations: 0
Multiple Instance Learning for Cheating Detection and Localization in Online Examinations
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-01-05 | DOI: 10.1109/TCDS.2024.3349705
Yemeng Liu;Jing Ren;Jianshuo Xu;Xiaomei Bai;Roopdeep Kaur;Feng Xia
The spread of the coronavirus disease 2019 epidemic has caused many courses and exams to be conducted online. Cheating-detection models in examination invigilation systems play a pivotal role in guaranteeing the fairness of remote examinations. However, cheating behavior is rare, and most research does not comprehensively take into account features such as head posture, gaze angle, body posture, and background information in the cheating detection task. In this article, we develop and present CHEESE, a CHEating detection framework via multiple instance learning. The framework consists of a label generator that implements weak supervision and a feature encoder that learns discriminative features. In addition, the framework combines body posture and background features extracted by 3-D convolution with eye gaze, head posture, and facial features captured by OpenFace 2.0. These features are concatenated and fed into a spatiotemporal graph module that analyzes spatiotemporal changes in video clips to detect cheating behavior. Our experiments on three datasets, University of Central Florida (UCF)-Crime, ShanghaiTech, and online exam proctoring (OEP), demonstrate the effectiveness of our method compared with state-of-the-art approaches, achieving a frame-level area under the curve (AUC) score of 87.58% on the OEP dataset.
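Weakly supervised detection of this kind is often trained with a multiple-instance ranking objective: the highest-scoring clip in a video labeled as cheating should outscore the highest-scoring clip in a normal video. The sketch below shows that generic objective with the usual smoothness and sparsity terms; it is a common MIL formulation, not necessarily the exact loss used in CHEESE.

```python
# Generic multiple-instance ranking loss for weakly labeled videos:
# the top clip of a cheating-labeled bag should outrank the top clip of a
# normal bag, with temporal smoothness and sparsity regularization.
import torch

def mil_ranking_loss(scores_pos_bag, scores_neg_bag, margin=1.0, lam=8e-5):
    """
    scores_pos_bag: (n_clips,) anomaly scores for clips of a cheating-labeled video
    scores_neg_bag: (n_clips,) anomaly scores for clips of a normal video
    """
    hinge = torch.clamp(margin - scores_pos_bag.max() + scores_neg_bag.max(), min=0.0)
    smooth = ((scores_pos_bag[1:] - scores_pos_bag[:-1]) ** 2).sum()  # temporal smoothness
    sparse = scores_pos_bag.sum()                                     # few clips should score high
    return hinge + lam * smooth + lam * sparse
```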
Citations: 0
MIMo: A Multimodal Infant Model for Studying Cognitive Development
IF 5.0 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2024-01-05 | DOI: 10.1109/TCDS.2024.3350448
Dominik Mattern;Pierre Schumacher;Francisco M. López;Marcel C. Raabe;Markus R. Ernst;Arthur Aubret;Jochen Triesch
Human intelligence and human consciousness emerge gradually during the process of cognitive development. Understanding this development is an essential aspect of understanding the human mind and may facilitate the construction of artificial minds with similar properties. Importantly, human cognitive development relies on embodied interactions with the physical and social environment, which is perceived via complementary sensory modalities. These interactions allow the developing mind to probe the causal structure of the world. This is in stark contrast to common machine learning approaches, e.g., for large language models, which are merely passively “digesting” large amounts of training data, but are not in control of their sensory inputs. However, computational modeling of the kind of self-determined embodied interactions that lead to human intelligence and consciousness is a formidable challenge. Here, we present MIMo, an open-source multimodal infant model for studying early cognitive development through computer simulations. MIMo's body is modeled after an 18-month-old child with detailed five-fingered hands. MIMo perceives its surroundings via binocular vision, a vestibular system, proprioception, and touch perception through a full-body virtual skin, while two different actuation models allow control of its body. We describe the design and interfaces of MIMo and provide examples illustrating its use.
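Because MIMo is described as an open-source simulation platform, interaction with it presumably follows a reinforcement-learning-style loop. The sketch below is a hypothetical Gymnasium-style loop: the environment id, and the assumption that MIMo registers such environments, are placeholders rather than MIMo's documented API, so consult the project repository for the real interfaces.

```python
# Hypothetical Gymnasium-style interaction loop; "MIMoExample-v0" is a
# placeholder id, not MIMo's actual environment name.
import gymnasium as gym

env = gym.make("MIMoExample-v0")           # placeholder environment id
obs, info = env.reset(seed=0)
for _ in range(100):
    action = env.action_space.sample()     # random motor babbling
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```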
Citations: 0