
IEEE Transactions on Cognitive and Developmental Systems: Latest Publications

Automotive Object Detection via Learning Sparse Events by Spiking Neurons
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-06 | DOI: 10.1109/TCDS.2024.3410371
Hu Zhang;Yanchen Li;Luziwei Leng;Kaiwei Che;Qian Liu;Qinghai Guo;Jianxing Liao;Ran Cheng
Event-based sensors, distinguished by their high temporal resolution of 1 μs and a dynamic range of 120 dB, stand out as ideal tools for deployment in fast-paced settings such as vehicles and drones. Traditional object detection techniques that utilize artificial neural networks (ANNs) face challenges due to the sparse and asynchronous nature of the events these sensors capture. In contrast, spiking neural networks (SNNs) offer a promising alternative, providing a temporal representation that is inherently aligned with event-based data. This article explores the unique membrane potential dynamics of SNNs and their ability to modulate sparse events. We introduce an innovative spike-triggered adaptive threshold mechanism designed for stable training. Building on these insights, we present a specialized spiking feature pyramid network (SpikeFPN) optimized for automotive event-based object detection. Comprehensive evaluations demonstrate that SpikeFPN surpasses both traditional SNNs and advanced ANNs enhanced with attention mechanisms. Evidently, SpikeFPN achieves a mean average precision (mAP) of 0.477 on the GEN1 automotive detection (GAD) benchmark dataset, marking significant increases over the selected SNN baselines. Moreover, the efficient design of SpikeFPN ensures robust performance while optimizing computational resources, attributed to its innate sparse computation capabilities.
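The key mechanism named in this abstract, a spike-triggered adaptive threshold, can be illustrated with a leaky integrate-and-fire neuron whose firing threshold jumps after every emitted spike and then decays back toward its resting value. The following NumPy sketch is a generic illustration of that idea, not the authors' SpikeFPN implementation; the leak factors, threshold increment, and synthetic event input are illustrative assumptions.

```python
import numpy as np

def adaptive_lif(inputs, tau_mem=0.9, tau_thr=0.95, v_th0=1.0, delta_thr=0.5):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptive threshold.

    inputs    : 1-D array of input currents, one value per time step
    tau_mem   : membrane leak factor per step
    tau_thr   : decay factor pulling the threshold back toward v_th0
    v_th0     : resting threshold
    delta_thr : amount the threshold jumps after each spike
    """
    v, v_th = 0.0, v_th0
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v = tau_mem * v + x                      # leaky integration of the input
        if v >= v_th:                            # spike when the potential crosses the threshold
            spikes[t] = 1.0
            v = 0.0                              # hard reset of the membrane potential
            v_th += delta_thr                    # spike-triggered threshold increase
        v_th = v_th0 + tau_thr * (v_th - v_th0)  # threshold relaxes back to its resting value
    return spikes

# Example: sparse, event-like input drives occasional spikes; the rising threshold
# keeps the firing rate bounded even when bursts of events arrive.
rng = np.random.default_rng(0)
events = (rng.random(200) < 0.1) * rng.uniform(0.5, 2.0, 200)
print(int(adaptive_lif(events).sum()), "spikes out of", len(events), "steps")
```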
Citations: 0
Estimation of the Cyclopean Eye From Binocular Smooth Pursuit Tests
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-05 | DOI: 10.1109/TCDS.2024.3410110
Elisa Luque-Buzo;Mehdi Bejani;Julián D. Arias-Londoño;Jorge A. Gómez-García;Francisco Grandas-Pérez;Juan I. Godino-Llorente
In binocular vision, the visual system combines images in the retina to generate a single perception, which triggers a sensorimotor process that forces the eyes to point to the same target. Thus, following a moving target, both eyes are expected to move synchronously following identical motor triggers but, in practice, significant differences between eyes are found due to the presence of certain artifacts and effects. Therefore, a better indirect characterization of the underlying neurological behavior during eye motion would require new automatic preprocessing methods applied to the eye-tracking sequences for rendering the common and most significant movements of both eyes. To address this need, the present study proposes an automatic method for extracting the common components of the left- and right-eye motions from a set of Smooth Pursuit tests by applying an independent component analysis. To do so, both sequences are decomposed into two independent latent components: the first presumably correlates with the common motor triggering at the brain, while the second collects artifacts introduced during the recording process and small effects due to convergence deficits and eye dominance biases. The evaluations were carried out using data corresponding to 12 different smooth pursuit eye movement tests, which were collected using an infrared high-speed video-based eye-tracking device from 41 parkinsonian patients and 47 controls. The results show that the automatic method can separate the aforementioned components in 99.50% of cases, extracting a latent component correlated with the common motor triggering at the brain, which we hypothesize characterizes the movements of the cyclopean eye. The estimated component could be used to simplify any other potential automatic analysis.
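The core operation described here, decomposing the paired left- and right-eye traces into two independent latent components, can be sketched with scikit-learn's FastICA. The synthetic pursuit target, noise levels, and the rule used to pick the "common" component below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)
target = np.sin(2 * np.pi * 0.4 * t)                  # assumed smooth-pursuit target trajectory
common = target + 0.02 * rng.standard_normal(t.size)  # shared motor drive for both eyes
left_eye = common + 0.05 * rng.standard_normal(t.size)        # left-eye trace with recording noise
right_eye = common + 0.05 * rng.standard_normal(t.size) + 0.1  # right-eye trace with a small bias

X = np.column_stack([left_eye, right_eye])            # (n_samples, 2): one column per eye
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)                        # two independent latent components

# Keep the component most correlated with the target as the common (cyclopean-like) drive.
corr = [abs(np.corrcoef(sources[:, k], target)[0, 1]) for k in range(2)]
common_idx = int(np.argmax(corr))
print(f"component {common_idx} tracks the target (|r| = {corr[common_idx]:.2f})")
```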
Citations: 0
Decoding Joint-Level Hand Movements With Intracortical Neural Signals in a Human Brain–Computer Interface
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-04 | DOI: 10.1109/TCDS.2024.3409555
Huaqin Sun;Yu Qi;Xiaodi Wu;Junming Zhu;Jianmin Zhang;Yueming Wang
Fine movements of hands play an important role in everyday life. While existing studies have successfully decoded hand gestures or finger movements from brain signals, direct decoding of single-joint kinematics remains challenging. This study aims to investigate the decoding of fine hand movements at the single-joint level. Neural activity was recorded from the motor cortex (MC) of a human participant while the participant imagined eleven different hand movements. We comprehensively evaluated the decoding efficiency of various brain signal features, neural decoding algorithms, and single-joint kinematic variables. Results showed that using the spiking band power (SBP) signals, we could faithfully decode the single-joint angles with an average correlation coefficient of 0.77, outperforming other brain signal features. Nonlinear approaches that incorporate temporal context information, particularly recurrent neural networks, significantly outperformed traditional methods. Decoding joint angles yielded superior results compared to joint angular velocities. Our approach facilitates the construction of high-performance brain–computer interfaces for dexterous hand control.
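The abstract reports that recurrent networks decoding joint angles from spiking band power performed best, scored by the correlation coefficient between predicted and true traces. Below is a minimal PyTorch sketch of such a setup; the feature count, window length, hidden size, and random tensors standing in for recorded data are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class JointAngleDecoder(nn.Module):
    """GRU regressor mapping a window of neural features to joint angles."""
    def __init__(self, n_features=96, n_joints=11, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_joints)

    def forward(self, x):                 # x: (batch, time, n_features), e.g. binned SBP
        out, _ = self.rnn(x)
        return self.head(out[:, -1])      # predict the joint angles at the last time bin

def pearson_r(pred, true):
    """Per-joint Pearson correlation between predicted and true angle values."""
    pred = pred - pred.mean(0, keepdim=True)
    true = true - true.mean(0, keepdim=True)
    return (pred * true).sum(0) / (pred.norm(dim=0) * true.norm(dim=0) + 1e-8)

# Example with random tensors standing in for recorded data (untrained model).
model = JointAngleDecoder()
x = torch.randn(32, 20, 96)              # 32 windows of 20 bins x 96 SBP channels
y = torch.randn(32, 11)                  # 11 joint angles per window
print(pearson_r(model(x), y).shape)      # torch.Size([11]): one correlation per joint
```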
Citations: 0
Optimal Strategies and Cooperative Teaming for 3-D Multiplayer Reach-Avoid Games
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-03 | DOI: 10.1109/TCDS.2024.3406889
Peng Gao;Xiuxian Li;Jinwen Hu
This article studies multiplayer reach-avoid games in 3-D space in which a plane is the goal. Because directly analyzing multipursuer-multievader scenarios suffers from the curse of dimensionality, the whole problem is decomposed into distinct subgames. In the subgames, a single pursuer or multiple pursuers with different speeds form a team to capture one evader cooperatively while the evader struggles to reach the plane. With the players' dominance regions defined on the basis of isochronous surfaces, the target points and value functions of the game of degree are obtained by using Apollonius spheres. Additionally, the corresponding closed-loop saddle-point strategies are shown to constitute a Nash equilibrium. The degeneration between scenarios of different scales is also discussed. To minimize the sum of the subgames' costs, the tasks of intercepting multiple evaders are assigned to individuals or teams in the form of bipartite graph matching. A hierarchical matching algorithm and a state-feedback rematching method are proposed, which can be updated in real time to improve the solution. Finally, diverse empirical experiments and comparisons with state-of-the-art methods demonstrate the optimality of the proposed strategies and algorithms.
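The dominance regions mentioned in the abstract are built from Apollonius spheres. For an evader at E with speed v_E and a faster pursuer at P with speed v_P, writing gamma = v_E / v_P < 1, the boundary |x - E| = gamma * |x - P| is a sphere with center E + gamma^2 (E - P) / (1 - gamma^2) and radius gamma |E - P| / (1 - gamma^2); its interior is the set of points the evader reaches strictly first. The helper below is an illustrative sketch of that standard construction, not the paper's full teaming algorithm.

```python
import numpy as np

def apollonius_sphere(evader, pursuer, gamma):
    """Center and radius of the Apollonius sphere for speed ratio gamma = v_evader / v_pursuer < 1.

    The sphere boundary is the locus of points both players reach at the same time;
    its interior is the evader's dominance region.
    """
    evader, pursuer = np.asarray(evader, float), np.asarray(pursuer, float)
    assert 0.0 < gamma < 1.0, "the pursuer must be strictly faster than the evader"
    center = evader + gamma**2 * (evader - pursuer) / (1.0 - gamma**2)
    radius = gamma * np.linalg.norm(evader - pursuer) / (1.0 - gamma**2)
    return center, radius

# One evader against one pursuer in 3-D space, evader moving at half the pursuer's speed.
c, r = apollonius_sphere(evader=[2.0, 0.0, 1.0], pursuer=[0.0, 0.0, 0.0], gamma=0.5)
print("center:", c, "radius:", round(r, 3))
```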
Citations: 0
HR-SNN: An End-to-End Spiking Neural Network for Four-Class Classification Motor Imagery Brain–Computer Interface
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-30 | DOI: 10.1109/TCDS.2024.3395443
Yulin Li;Liangwei Fan;Hui Shen;Dewen Hu
Spiking neural network (SNN) excels in processing temporal information and conserving energy, particularly when deployed on neuromorphic hardware. These strengths position SNN as an ideal choice for developing wearable brain–computer interface (BCI) devices. However, the application of SNN in complex BCI tasks, like four-class motor imagery classification, is limited. In light of this, this study introduces a powerful SNN architecture, the hybrid response SNN (HR-SNN). We employ parameterwise gradient descent methods to optimize spike encoding efficiency. The SNN's frequency perception is improved by integrating a hybrid response spiking module. In addition, a diff-potential spiking decoder is designed to optimize SNN output potential utilization. Validation experiments are performed on the PhysioNet and BCI competition IV 2a datasets. On PhysioNet, our model achieves accuracies of 67.24% and 74.95% using global training and subject-specific transfer learning, respectively. On BCI competition IV 2a, our approach attains an average accuracy of 77.58%, surpassing all the compared SNN models and demonstrating competitiveness against state-of-the-art (SOTA) convolutional neural network (CNN) approaches. We validate the robustness of HR-SNN under noise and channel loss scenarios. Additionally, energy analysis reveals HR-SNN's superior energy efficiency compared to existing CNN models. Notably, HR-SNN exhibits a 2–16 times energy consumption advantage over existing SNN methods.
Citations: 0
Adaptive Framework for Long-Term Sensory Home Training: A Feasibility Study
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-25 | DOI: 10.1109/TCDS.2024.3393635
Stefano Silvoni;Simon Desch;Florian Beier;Robin Bekrater-Bodmann;Annette Löffler;Dieter Kleinböhl;Stefano Tamascelli;Herta Flor
Training programs, based on principles of brain plasticity and skill learning, are useful in counteracting functional decline in pathological conditions. Training effects of such procedures are well described, but their adaptive features are usually not reported. A software framework designed for a long-term home training program is presented. It gradually trains users, provides a multidimensional range of stimulus differentiation, encompasses a strategy to increase the task demand, and includes motivational reinforcement components. The structured framework was tested in a feasibility study involving two perceptual discrimination tasks (visual and auditory) in four persons in middle-to-older adulthood who were trained for 30 days. Practicability of the training in a home setting was shown by high adherence to the procedure, adaptive increase in task demand over time, and positive learning effects on an individual level. Participants learned to distinguish progressively smaller target objects in the visual task (with diminished contrast) and sweeps varying progressively less in frequency in the auditory task (with overlapping noise). This adaptive procedure can provide the basis for the design of extended training programs engaging sensory function in individuals with impaired sensorimotor and cognitive functions. Further investigations are necessary to assess the generalization of learning effects and clinical validity.
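The framework's adaptive increase in task demand can be illustrated with a transformed up-down staircase, the standard psychophysics rule that makes a discrimination harder after consecutive correct responses and easier after an error. This is a generic sketch of such adaptation, not necessarily the authors' exact rule; the starting level, step size, and simulated participant are assumptions.

```python
class TwoDownOneUpStaircase:
    """Generic 2-down/1-up staircase: difficulty rises after two consecutive
    correct answers and falls after any error, converging near ~71% accuracy."""

    def __init__(self, level=10.0, step=1.0, min_level=1.0):
        self.level = level            # current stimulus difference (larger = easier)
        self.step = step
        self.min_level = min_level
        self._correct_streak = 0

    def update(self, correct: bool) -> float:
        if correct:
            self._correct_streak += 1
            if self._correct_streak == 2:          # two in a row: make the task harder
                self.level = max(self.min_level, self.level - self.step)
                self._correct_streak = 0
        else:                                      # any error: make the task easier
            self.level += self.step
            self._correct_streak = 0
        return self.level

# Example: simulate a participant who answers correctly only when the difference is large enough.
stair = TwoDownOneUpStaircase()
for trial in range(20):
    stair.update(correct=stair.level > 4.0)
print("final difficulty level:", stair.level)
```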
Citations: 0
Leveraging Spatiotemporal Estimation for Online Adaptive Steady-State Visual Evoked Potential Recognition
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-23 | DOI: 10.1109/TCDS.2024.3392745
Jing Jin;Xinjie He;Brendan Z. Allison;Ke Qin;Xingyu Wang;Andrzej Cichocki
Online adaptive canonical correlation analysis (OACCA) has been applied successfully in recently popular steady-state visual evoked potential (SSVEP) target recognition methods. However, due to the significant amount of spatiotemporally relevant background noise in the online historical sample label data of OACCA, there is a redundant noise component in the learned common spatial filter that can reduce online classification accuracy. To address this defect in OACCA, we designed an online spatial–temporal equalization (STE) filter to suppress the background noise component in the electroencephalography (EEG). Meanwhile, an adaptive decoding method for SSVEP based on online spatial–temporal estimation (STE-OACCA) is proposed by combining the online STE filter and the OACCA algorithm. A pseudoonline test on the Tsinghua University FBCCA-DW dataset shows that the proposed STE-OACCA method significantly outperforms the CCA, MSI, and OACCA approaches as well as STE-CCA. More importantly, the proposed method can be directly used in online SSVEP recognition without calibration. The proposed algorithm is robust, which is promising for the development of practical brain–computer interfaces (BCIs).
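The CCA baseline against which STE-OACCA is compared works by correlating the multichannel EEG segment with sine-cosine reference signals at each candidate stimulation frequency and choosing the frequency with the largest canonical correlation. The scikit-learn sketch below illustrates that standard baseline only; the sampling rate, candidate frequencies, harmonic count, and synthetic trial are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_references(freq, n_samples, fs, n_harmonics=2):
    """Sine/cosine reference matrix (n_samples, 2 * n_harmonics) for one stimulus frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def cca_classify(eeg, candidate_freqs, fs):
    """Return the candidate frequency whose references correlate best with one EEG trial.

    eeg: (n_samples, n_channels) single trial.
    """
    scores = []
    for f in candidate_freqs:
        Y = ssvep_references(f, eeg.shape[0], fs)
        u, v = CCA(n_components=1).fit_transform(eeg, Y)
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))  # first canonical correlation
    return candidate_freqs[int(np.argmax(scores))]

# Synthetic 8-channel trial flickering at 10 Hz, 2 s at 250 Hz.
fs, n = 250, 500
t = np.arange(n) / fs
eeg = np.sin(2 * np.pi * 10 * t)[:, None] + 0.5 * np.random.randn(n, 8)
print(cca_classify(eeg, [8.0, 10.0, 12.0, 15.0], fs))   # expected: 10.0
```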
Citations: 0
Minimizing EEG Human Interference: A Study of an Adaptive EEG Spatial Feature Extraction With Deep Convolutional Neural Networks
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-18 | DOI: 10.1109/TCDS.2024.3391131
Haojin Deng;Shiqi Wang;Yimin Yang;W. G. Will Zhao;Hui Zhang;Ruizhong Wei;Q. M. Jonathan Wu;Bao-Liang Lu
Emotion is one of the main psychological factors that affect human behavior. Neural network models trained with electroencephalography (EEG)-based frequency features have been widely used to recognize human emotions accurately. However, utilizing EEG-based spatial information with the popular 2-D kernels of convolutional neural networks (CNNs) has rarely been explored in the extant literature. This article addresses these challenges by proposing an EEG-based spatial-frequency framework for recognizing human emotion, resulting in fewer human interference parameters and better generalization performance. Specifically, we propose a two-stream hierarchical network framework that learns features from two networks, one trained in the frequency domain and the other trained in the spatial domain. Our approach is extensively validated on the SEED, SEED-V, and DREAMER datasets. Our proposed method achieved an accuracy of 94.84% on the SEED dataset and 68.61% on the SEED-V dataset with EEG data only. The average accuracy on the DREAMER dataset is 93.01%, 92.04%, and 91.74% in the valence, arousal, and dominance dimensions, respectively. The experiments directly support that our motivation of utilizing two-stream domain features significantly improves the final recognition performance. The experimental results show that the proposed framework improves over state-of-the-art methods on these three varied-scale datasets. Furthermore, it also indicates the potential of the proposed framework, in conjunction with current ImageNet pretrained models, for improving performance on 1-D physiological signals.
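The two-stream idea described above, one branch over frequency-domain band powers and one 2-D CNN branch over electrode-topography maps, fused before classification, can be sketched compactly in PyTorch. The layer sizes, the 9x9 topography grid, the five frequency bands, and the three emotion classes below are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TwoStreamEEGNet(nn.Module):
    """Fuses a frequency branch (band-power vector per channel) with a spatial
    branch (2-D electrode-topography maps) for emotion classification."""

    def __init__(self, n_channels=62, n_bands=5, n_classes=3):
        super().__init__()
        self.freq_branch = nn.Sequential(               # MLP over flattened band powers
            nn.Linear(n_channels * n_bands, 256), nn.ReLU(), nn.Linear(256, 64), nn.ReLU())
        self.spatial_branch = nn.Sequential(             # CNN over band x 9 x 9 topography maps
            nn.Conv2d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(64 + 32, n_classes)  # fuse the two feature streams

    def forward(self, band_power, topo_maps):
        f = self.freq_branch(band_power)                 # (batch, 64)
        s = self.spatial_branch(topo_maps)               # (batch, 32)
        return self.classifier(torch.cat([f, s], dim=1))

model = TwoStreamEEGNet()
logits = model(torch.randn(4, 62 * 5), torch.randn(4, 5, 9, 9))
print(logits.shape)   # torch.Size([4, 3])
```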
Citations: 0
MAVIDSQL: A Model-Agnostic Visualization for Interpretation and Diagnosis of Text-to-SQL Tasks
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-18 | DOI: 10.1109/TCDS.2024.3391278
Jingwei Tang;Guodao Sun;Jiahui Chen;Gefei Zhang;Baofeng Chang;Haixia Wang;Ronghua Liang
Significant advancements in semantic parsing for text-to-SQL (T2S) tasks have been achieved through the employment of neural network models, such as LSTM, BERT, and T5. The exceptional performance of large language models, such as ChatGPT, has been demonstrated in recent research, even in zero-shot scenarios. However, the lack of transparency of T2S models presents them as black boxes, concealing their inner workings from both developers and users, which complicates the diagnosis of potential error patterns. Despite the fact that numerous visual analysis studies have been conducted in natural language processing communities, scant attention has been paid to addressing the challenges of semantic parsing, specifically in T2S tasks. This limitation hinders the development of effective tools for model optimization and evaluation. This article presents an interactive visual analysis tool, MAVIDSQL, to assist model developers and users in understanding and diagnosing T2S tasks. The system comprises three modules: the model manager, the feature extractor, and the visualization interface, which adopt a model-agnostic approach to diagnose potential errors and infer model decisions by analyzing input–output data, facilitating interactive visual analysis to identify error patterns and assess model performance. Two case studies and interviews with domain experts demonstrate the effectiveness of MAVIDSQL in facilitating the understanding of T2S tasks and identifying potential errors.
Citations: 0
Toward Two-Stream Foveation-Based Active Vision Learning
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-17 | DOI: 10.1109/TCDS.2024.3390597
Timur Ibrayev;Amitangshu Mukherjee;Sai Aparna Aketi;Kaushik Roy
Deep neural network (DNN) based machine perception frameworks process the entire input in a one-shot manner to provide answers to both “what object is being observed” and “where it is located.” In contrast, the “two-stream hypothesis” from neuroscience explains the neural processing in the human visual cortex as an active vision system that utilizes two separate regions of the brain to answer the what and the where questions. In this work, we propose a machine learning framework inspired by the “two-stream hypothesis” and explore the potential benefits that it offers. Specifically, the proposed framework models the following mechanisms: 1) ventral (what) stream focusing on the input regions perceived by the fovea part of an eye (foveation); 2) dorsal (where) stream providing visual guidance; and 3) iterative processing of the two streams to calibrate visual focus and process the sequence of focused image patches. The training of the proposed framework is accomplished by label-based DNN training for the ventral stream model and reinforcement learning (RL) for the dorsal stream model. We show that the two-stream foveation-based learning is applicable to the challenging task of weakly-supervised object localization (WSOL), where the training data is limited to the object class or its attributes. The framework is capable of both predicting the properties of an object and successfully localizing it by predicting its bounding box. We also show that, due to the independent nature of the two streams, the dorsal model can be applied on its own to unseen images to localize objects from different datasets.
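The iterative processing described in the abstract alternates between a dorsal ("where") stream that proposes the next fixation and a ventral ("what") stream that processes only a foveated patch around it. The NumPy sketch below illustrates just the glimpse-extraction loop; the glimpse size, zero-padding policy, and placeholder fixation policy are assumptions, and the paper's RL-trained dorsal model is not implemented here.

```python
import numpy as np

def foveated_glimpse(image, fixation, size=64):
    """Crop a size x size patch centered on the fixation point (row, col),
    zero-padding when the patch extends beyond the image border."""
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)))
    r, c = int(fixation[0]) + half, int(fixation[1]) + half
    return padded[r - half:r + half, c - half:c + half]

# Iterative what/where loop: a stub dorsal policy proposes the next fixation,
# and a ventral model would classify the resulting sequence of glimpses.
image = np.random.rand(224, 224, 3)
fixation = (112, 112)                              # start at the image center
for step in range(3):
    patch = foveated_glimpse(image, fixation)
    # ventral_model(patch) -> "what"; dorsal_model(patch, state) -> next fixation ("where")
    fixation = (fixation[0] + 10, fixation[1] - 5)  # placeholder for a learned policy
print(patch.shape)                                 # (64, 64, 3)
```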
Citations: 0