Title: Event-Based Depth Prediction With Deep Spiking Neural Network
Authors: Xiaoshan Wu; Weihua He; Man Yao; Ziyang Zhang; Yaoyuan Wang; Bo Xu; Guoqi Li
Pub Date: 2024-07-10 | DOI: 10.1109/TCDS.2024.3406168
IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 6, pp. 2008-2018

Abstract: Event cameras have gained popularity in depth estimation thanks to their high temporal resolution, low latency, and low power consumption. Spiking neural networks (SNNs) are a promising approach for processing event camera inputs because of their spike-based, event-driven nature. However, SNNs suffer performance degradation as networks grow deeper, which limits their performance on depth estimation tasks. To address this issue, we propose a deep spiking U-Net model. Our spiking U-Net architecture leverages refined shortcuts and residual blocks to avoid performance degradation and boost task performance. We also propose a new event representation method designed for multistep SNNs to effectively exploit depth information in the temporal dimension. Experiments on the MVSEC dataset show that the proposed method improves accuracy by 18.50% and 25.18% over the current state-of-the-art (SOTA) ANN and SNN models, respectively. Moreover, the proposed SNN improves energy efficiency by up to 58 times compared with an ANN of the same network structure.
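The abstract does not spell out how the refined shortcuts work. As a rough illustration of the general idea only, here is a minimal NumPy sketch of a spiking residual block in which a spike-level shortcut merges input spikes into the block output so that deep stacks do not silence activity; the LIF dynamics, weights, and OR-style merge are all assumptions, not the paper's design:

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0):
    """One timestep of a leaky integrate-and-fire neuron.
    v: membrane potential, x: input current. Returns (spikes, new potential)."""
    v = v + (x - v) / tau          # leaky integration toward the input
    spikes = (v >= v_th).astype(np.float32)
    v = v * (1.0 - spikes)         # hard reset after a spike
    return spikes, v

def spiking_residual_block(inputs, w1, w2):
    """Sketch of a spiking residual block: two LIF layers plus a shortcut
    that merges input spikes with the block output (element-wise OR),
    in the spirit of spike-element-wise residual designs.
    `inputs` is a (T, N) binary spike tensor."""
    T = inputs.shape[0]
    v1 = np.zeros(w1.shape[1])
    v2 = np.zeros(w2.shape[1])
    out = np.zeros_like(inputs)
    for t in range(T):
        s1, v1 = lif_step(v1, inputs[t] @ w1)
        s2, v2 = lif_step(v2, s1 @ w2)
        out[t] = np.maximum(s2, inputs[t])   # shortcut: OR of spike trains
    return out

rng = np.random.default_rng(0)
x = (rng.random((4, 8)) > 0.5).astype(np.float32)   # T=4 steps, 8 neurons
w1 = rng.normal(0, 0.5, (8, 8))
w2 = rng.normal(0, 0.5, (8, 8))
y = spiking_residual_block(x, w1, w2)
print(y.shape)   # (4, 8); output spikes always include the shortcut's input spikes
```

Because the shortcut carries spikes rather than analog values, the output stays binary and every input spike survives the block, which is one way such designs counter degradation with depth.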
Title: Cross-Subject Emotion Recognition From Multichannel EEG Signals Using Multivariate Decomposition and Ensemble Learning
Authors: Raveendrababu Vempati; Lakhan Dev Sharma; Rajesh Kumar Tripathy
Pub Date: 2024-07-08 | DOI: 10.1109/TCDS.2024.3417534
IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 1, pp. 77-88

Abstract: Emotions are mental states that shape a person's behavior in society. Automated identification of a person's emotion is vital in applications such as brain–computer interfaces (BCIs), recommender systems (RSs), and cognitive neuroscience. This article proposes an automated approach based on multivariate fast iterative filtering (MvFIF) and an ensemble machine learning model to recognize cross-subject emotions from electroencephalogram (EEG) signals. The multichannel EEG signals are first decomposed into multichannel intrinsic mode functions (MIMFs) using MvFIF. Features such as differential entropy (DE), dispersion entropy (DispEn), permutation entropy (PE), spectral entropy (SE), and distribution entropy (DistEn) are extracted from the MIMFs. The binary atom search optimization (BASO) technique reduces the dimension of the feature space. Light gradient boosting machine (LGBM), extreme learning machine (ELM), and ensemble bagged tree (EBT) classifiers then recognize the different human emotions from these features. The results demonstrate that the LGBM classifier achieves the highest average accuracies of 99.50% and 98.79% on multichannel EEG signals from the GAMEEMO and DREAMER databases, respectively, for cross-subject emotion recognition (ER). Compared with other multivariate signal decomposition algorithms, the MvFIF-based method demonstrates higher accuracy in recognizing emotions from multichannel EEG signals. The proposed (MvFIF+DE+BASO+LGBM) technique outperforms existing state-of-the-art methods for ER using EEG signals.
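Two of the listed features are simple enough to sketch directly. Below are generic formulations of permutation entropy and the Gaussian differential-entropy estimator commonly used for EEG; these are textbook versions, not necessarily the exact variants or parameters used in the paper:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy: Shannon entropy of the distribution
    of ordinal patterns of length `order`, divided by log(order!).
    1.0 means fully irregular, 0.0 means a single repeating pattern."""
    counts = {}
    for i in range(len(x) - (order - 1) * delay):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(order)))

def differential_entropy_gaussian(x):
    """Differential entropy under a Gaussian assumption,
    0.5 * log(2*pi*e*var), a standard estimator for EEG band activity."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(1)
noise = rng.normal(size=2000)           # irregular signal
ramp = np.arange(2000, dtype=float)     # perfectly ordered signal
pe_noise = permutation_entropy(noise)
pe_ramp = permutation_entropy(ramp)
de_noise = differential_entropy_gaussian(noise)
print(pe_noise)   # close to 1.0: all ordinal patterns roughly equiprobable
print(pe_ramp)    # 0.0: only one ordinal pattern ever occurs
```

The contrast between the noise and ramp signals shows why such entropies discriminate signal regimes: they respond to the diversity of local temporal structure rather than to amplitude.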
Title: Automatic Prediction of Disturbance Caused by Interfloor Sound Events
Authors: Stavros Ntalampiras; Alessandro Scalambrino
Pub Date: 2024-07-08 | DOI: 10.1109/TCDS.2024.3424457
IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 1, pp. 147-154

Abstract: Noise has a direct impact on human health, with negative consequences ranging from sleep disruption and stress to hearing loss and reduced productivity. Despite its undeniable relevance, the process governing the relationship between unpleasant sound events and the annoyance they cause has not yet been systematically studied. In this context, this work focuses on the disturbance caused by interfloor sound events, i.e., the audio signals transmitted through the floors of a building. Activities such as walking, running, or using household appliances generate sounds that can be heard on adjacent floors. To this end, we assembled a dataset of diverse interfloor sound events annotated according to the perceived disturbance. We then propose a framework that quantifies similarities between interfloor sound events starting from standardized time-frequency representations, which are processed by a Siamese neural network composed of a series of convolutional layers. These similarities feed a k-medoids regression scheme that predicts disturbance from interfloor sound events with neighboring latent representations. Thorough experiments demonstrate the effectiveness of the framework and its superiority over popular regression algorithms. Last but not least, the proposed solution offers interpretable predictions that can be meaningfully utilized by human experts.
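The regression step can be illustrated generically: given distances from a query sound event to annotated events (as a Siamese network's latent space might produce), predict the disturbance rating from the nearest annotated neighbors. The data, ratings, and k below are invented, and the paper's k-medoids scheme is more elaborate; this sketch only shows why such predictions are interpretable:

```python
import numpy as np

def neighbor_disturbance_predict(distances, ratings, k=3):
    """Predict a disturbance rating for a query event as the mean rating of
    its k nearest annotated events in latent space. The interpretability
    comes for free: the chosen neighbors ARE the explanation."""
    idx = np.argsort(distances)[:k]
    return float(np.mean(ratings[idx])), idx

# Hypothetical annotated events: latent distance to the query and a
# perceived-disturbance rating on a 1-5 scale.
d = np.array([0.9, 0.1, 0.4, 0.2, 0.8])
r = np.array([5.0, 2.0, 3.0, 2.0, 4.0])
pred, neighbors = neighbor_disturbance_predict(d, r, k=3)
print(pred)        # mean rating of the 3 closest events
print(neighbors)   # their indices, presentable to a human expert as evidence
```

A human expert can audit any prediction by listening to exactly those neighboring events, which is the kind of interpretability the abstract claims for the latent-neighbor approach.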
Title: SpikingViT: A Multiscale Spiking Vision Transformer Model for Event-Based Object Detection
Authors: Lixing Yu; Hanqi Chen; Ziming Wang; Shaojie Zhan; Jiankun Shao; Qingjie Liu; Shu Xu
Pub Date: 2024-07-04 | DOI: 10.1109/TCDS.2024.3422873
IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 1, pp. 130-146

Abstract: Event cameras have unique advantages for object detection, capturing asynchronous events rather than continuous frames. They excel in dynamic range, low latency, and high-speed motion scenarios, with lower power consumption. However, aggregating event data into image frames causes information loss and reduces detection performance, and applying traditional neural networks to event camera outputs is challenging because of the distinct characteristics of event data. In this study, we present the spiking vision transformer (SpikingViT), a novel object detection model based on spiking neural networks (SNNs), to address these issues. First, we design a dedicated event data converting module that captures the unique characteristics of event data, mitigating the risk of information loss while preserving spatiotemporal features. Second, SpikingViT leverages SNNs capable of extracting spatiotemporal information from event data, combining the advantages of SNNs and transformer models and incorporating mechanisms such as attention and residual voltage memory to further enhance detection performance. Extensive experiments substantiate the remarkable proficiency of SpikingViT in event-based object detection, positioning it as a formidable contender. The proposed approach retains the spatiotemporal information inherent in event data, leading to a substantial enhancement in detection performance.
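The paper's converting module is not specified in this abstract. As an illustration of the underlying problem it addresses, here is a minimal sketch of aggregating asynchronous events (timestamp, x, y, polarity) into a spatiotemporal grid that keeps the temporal dimension instead of collapsing everything into one frame; the binning scheme and event values are assumptions for illustration only:

```python
import numpy as np

def events_to_voxel_grid(events, T, H, W):
    """Bin asynchronous events into a (T, H, W) spatiotemporal grid.
    Each event is (timestamp, x, y, polarity); positive polarity adds +1,
    negative adds -1. Collapsing T to 1 would recreate the lossy
    frame aggregation the abstract warns about."""
    grid = np.zeros((T, H, W), dtype=np.float32)
    t = events[:, 0]
    # normalize timestamps into T temporal bins
    span = max(float(np.ptp(t)), 1e-9)
    bins = np.clip(((t - t.min()) / span * T).astype(int), 0, T - 1)
    for b, (_, x, y, p) in zip(bins, events):
        grid[b, int(y), int(x)] += 1.0 if p > 0 else -1.0
    return grid

# four hypothetical events: (timestamp in seconds, x, y, polarity)
ev = np.array([[0.00, 1, 1, +1],
               [0.30, 2, 1, -1],
               [0.60, 1, 1, +1],
               [0.99, 3, 2, +1]])
g = events_to_voxel_grid(ev, T=4, H=4, W=4)
print(g.shape)   # (4, 4, 4): time is preserved as a first-class axis
```

A downstream spiking or transformer backbone can then treat the T axis as simulation timesteps, which is what makes SNNs a natural fit for this representation.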
Title: Regulating Temporal Neural Coding via Fast and Slow Synaptic Dynamics
Authors: Yuanhong Tang; Lingling An; Xingyu Zhang; Huiling Huang; Zhaofei Yu
Pub Date: 2024-07-01 | DOI: 10.1109/TCDS.2024.3417477
IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 1, pp. 102-114

Abstract: The NMDA receptor (NMDAR), a ubiquitous synapse type in the brain's neural systems, exhibits slow dynamics that modulate neural spiking activity. In the cerebellum, NMDARs have been suggested to contribute to complex spikes in Purkinje cells (PCs) as a mechanism for cognitive activity, learning, and memory. Recent experimental studies debate the role of NMDARs in PC dendritic input, yet it remains unclear how the distribution of NMDARs in PC dendrites affects their spike coding properties. In this work, a detailed multiple-compartment PC model was used to study how slow NMDA receptors, together with fast AMPA receptors, regulate neural coding. We find that NMDARs act as a band-pass filter, increasing the excitability of PC firing under low-frequency input while reducing it under high-frequency input. This effect is positively related to NMDAR strength. For a response sequence containing many regular and irregular spiking patterns, NMDARs reduce overall regularity under high-frequency input while increasing local regularity under low-frequency input. Moreover, the inhibitory effect of NMDARs during high-frequency stimulation is associated with reduced conductance of large-conductance calcium-activated potassium (BK) channels. Taken together, our results suggest that NMDARs play an important role in regulating neural coding strategies by exploiting the PC's complex dendritic structure.
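The fast/slow contrast at the heart of the abstract can be shown with a minimal simulation of exponentially decaying synaptic conductance traces, where only the slow (NMDA-like) synapse summates between spikes at low input rates. The time constants and unit amplitudes below are illustrative textbook values, not the parameters of the paper's multicompartment model:

```python
import numpy as np

def synaptic_trace(spike_times_ms, tau_ms, t_end=500.0, dt=0.1):
    """Conductance of an exponentially decaying synapse driven by a spike
    train: each presynaptic spike adds a unit of conductance, which then
    decays with time constant tau. AMPA-like synapses use a fast tau
    (~2 ms), NMDA-like synapses a slow tau (~100 ms)."""
    n = int(t_end / dt)
    g = np.zeros(n)
    spike_idx = {int(round(s / dt)) for s in spike_times_ms}
    decay = np.exp(-dt / tau_ms)
    for i in range(n):
        if i > 0:
            g[i] = g[i - 1] * decay
        if i in spike_idx:
            g[i] += 1.0
    return g

# 10 Hz input (one spike every 100 ms): the slow trace summates across
# spikes while the fast one returns to baseline, which is the mechanism
# behind frequency-dependent synaptic gain.
spikes_10hz = np.arange(0.0, 500.0, 100.0)
ampa = synaptic_trace(spikes_10hz, tau_ms=2.0)
nmda = synaptic_trace(spikes_10hz, tau_ms=100.0)
print(ampa[999], nmda[999])   # just before the 2nd spike: AMPA ~0, NMDA still ~0.37
```

At high input rates the slow trace saturates rather than growing without bound once other mechanisms (such as the BK channels mentioned in the abstract) engage, which is where the band-pass behavior comes from.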
Title: Prepulse Inhibition and Prestimulus Nonlinear Brain Dynamics in Childhood: A Lyapunov Exponent Approach
Authors: Anastasios E. Giannopoulos; Ioanna Zioga; Vaios Ziogas; Panos Papageorgiou; Georgios N. Papageorgiou; Charalabos Papageorgiou
Pub Date: 2024-06-26 | DOI: 10.1109/TCDS.2024.3418841
IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 1, pp. 115-129

Abstract: The acoustic startle reflex (ASR) relies on the sensorimotor system and is affected by aging, sex, and psychopathology. The ASR can be modulated by the prepulse inhibition (PPI) paradigm, in which a weak prepulse stimulus inhibits the reaction to a subsequent startling stimulus (pulse). Neurophysiological studies have also found that brain activity is characterized by irregular, high-complexity patterns, and that this complexity declines with age. Our study investigated the relationship between prestartle nonlinear dynamics and PPI in healthy children versus adults. Fifty-six individuals took part in the experiment: 31 children and adolescents and 25 adults. Participants heard 51 pairs of tones (prepulse and startle) separated by 30 to 500 ms. We assessed neural complexity by computing the largest Lyapunov exponent (LLE) during the prestartle period and assessed PPI by analyzing poststartle event-related potentials (ERPs). Results showed higher neural complexity in children than in adults, in line with previous research reporting reduced complexity in physiological signals with aging. As expected, PPI (as reflected in the P50 and P200 components) was enhanced in adults compared with children, potentially due to maturation of the ASR. Interestingly, prestartle complexity correlated with the P50 component in children only, not in adults, potentially reflecting the different stages of sensorimotor maturation between the age groups. Overall, our study offers novel contributions to the investigation of brain dynamics by linking nonlinear with linear measures. Our findings are consistent with the loss of neural complexity in aging and suggest differentiated links between nonlinear and linear metrics in children and adults.
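The LLE quantifies sensitivity to initial conditions: positive values indicate chaotic, high-complexity dynamics, negative values regular dynamics. A minimal worked example on the logistic map, where the exponent can be computed directly from the map's derivative; this is a standard textbook system, not the paper's EEG pipeline, which would need an embedding-based estimator such as Rosenstein's method:

```python
import numpy as np

def logistic_lle(r, x0=0.4, n=10000, burn=100):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| with f'(x) = r*(1-2x).
    Positive means chaotic, negative means the orbit settles down."""
    x = x0
    for _ in range(burn):               # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += np.log(abs(r * (1 - 2 * x)) + 1e-12)  # guard against log(0)
        x = r * x * (1 - x)
    return acc / n

lle_chaotic = logistic_lle(4.0)   # ~ ln(2) = 0.693: chaotic regime
lle_regular = logistic_lle(2.5)   # ~ -ln(2): orbit converges to a fixed point
print(lle_chaotic, lle_regular)
```

For r = 2.5 the orbit converges to the fixed point x* = 0.6, where |f'(x*)| = 0.5, so the exponent tends to ln(0.5); for r = 4 the known value is ln(2). The same sign convention carries over to the EEG analysis: higher LLE in children reflects more irregular prestartle dynamics.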
Title: The Distinction Between Object Recognition and Object Identification in Brain Connectivity for Brain–Computer Interface Applications
Authors: Daniel Leong; Thomas Do; Chin-Teng Lin
Pub Date: 2024-06-24 | DOI: 10.1109/TCDS.2024.3417299
IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 1, pp. 89-101

Abstract: Object recognition and object identification are complex cognitive processes in which information is integrated and processed by an extensive network of brain areas. Although the two processes are similar, they are considered separate functions in the brain, and the difference between them has not yet been characterized in a way that brain–computer interface (BCI) applications can detect or use. Hence, in this study, we investigated neural features during object recognition and identification tasks through functional brain connectivity. We conducted an experiment with 25 participants, who completed two tasks: an object recognition task, in which they determined whether a target object belonged to a specified category, and an object identification task, in which they identified the target object among four displayed images. Our aim was to discover reliable features that distinguish object recognition from identification. The results demonstrate a significant difference between the two in the participation coefficient (PC) and clustering coefficient (CC) of delta activity in the visual and temporal regions of the brain. Category-level analysis shows that these coefficients differ across object categories. Using the discovered features for binary classification, accuracy for the animal category reached 80.28%. Accuracy for the flower and vehicle categories also improved when combining the PC and CC, although no improvement was observed for the food category. Overall, we have found a feature that may differentiate object recognition from identification within a BCI object recognition system, and it may help such systems determine a user's intentions when selecting an object.
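The participation coefficient has a closed form: PC_i = 1 - sum_s (k_is / k_i)^2, where k_i is node i's degree and k_is its degree into module s, so PC is near 0 when a node's edges stay inside one module and grows as they spread across modules. A small NumPy sketch on a toy graph (the graph and module labels are invented; EEG connectivity graphs would come from the functional connectivity estimates described above):

```python
import numpy as np

def participation_coefficient(A, communities):
    """Participation coefficient of each node in an undirected graph with
    adjacency matrix A and a community label per node:
    PC_i = 1 - sum over modules s of (k_is / k_i)^2."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                       # total degree of each node
    pc = np.ones_like(k)
    for s in set(communities):
        mask = np.array([c == s for c in communities])
        k_s = A[:, mask].sum(axis=1)        # degree into module s
        pc -= (k_s / np.where(k > 0, k, 1)) ** 2
    pc[k == 0] = 0.0                        # isolated nodes participate nowhere
    return pc

# Toy graph: nodes 0-1 in module "a", nodes 2-3 in module "b"
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [1, 0, 1, 0]])
pc = participation_coefficient(A, ["a", "a", "b", "b"])
print(pc)   # node 1 connects only inside "a" (PC=0); node 0 bridges both modules
```

Node 0, with one edge inside its own module and two into the other, gets PC = 1 - (1/3)^2 - (2/3)^2 = 4/9, while the purely intramodular node 1 gets 0, which is exactly the hub-versus-local distinction the connectivity analysis exploits.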
Title: IEEE Transactions on Cognitive and Developmental Systems Information for Authors
Pub Date: 2024-06-10 | DOI: 10.1109/TCDS.2024.3398475
IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 3, p. C4
Title: IEEE Transactions on Cognitive and Developmental Systems Publication Information
Pub Date: 2024-06-10 | DOI: 10.1109/TCDS.2024.3398471
IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 3, p. C2