Pub Date: 2024-08-28 | DOI: 10.1109/TCDS.2024.3451232
Meiling Wang;Wei Shao;Shuo Huang;Daoqiang Zhang
Brain imaging genetics, a topic of broad interest, has achieved great success in the diagnosis of complex brain disorders. In clinical applications, the imaging phenotypes affected by genetic factors change over time. A clinical score-relevant exclusive relationship-induced multimodality learning (CS-ERMM) framework is proposed for integrating longitudinal neuroimaging, genetic, and clinical score data. Specifically, an exclusive lasso term is first used to construct the exclusive multimodality learning method, which conveys the information unique to each time point. A relationship-induced term is then introduced to automatically learn the relatedness among multiple time points from the data, exploring the association between genotypes and longitudinal imaging phenotypes to facilitate understanding of the degenerative process. Finally, clinical score outcomes are integrated into the association model, which discovers longitudinal phenotypic markers that are associated with Alzheimer's disease risk single nucleotide polymorphisms and relevant to clinical score outcomes. We also design a proximal alternating optimization strategy to solve the constructed CS-ERMM model. Extensive experiments on brain imaging genetic data from the Alzheimer's Disease Neuroimaging Initiative dataset validate that our method outperforms several competing approaches, achieving strong associations and identifying important, consistent markers across longitudinal phenotypes related to genetic risk biomarkers for disease interpretation.
{"title":"Identifying Longitudinal Intermediate Phenotypes Between Genotypes and Clinical Score via Exclusive Relationship-Induced Association Analysis in Alzheimer's Disease","authors":"Meiling Wang;Wei Shao;Shuo Huang;Daoqiang Zhang","doi":"10.1109/TCDS.2024.3451232","DOIUrl":"10.1109/TCDS.2024.3451232","url":null,"abstract":"As a widely focused topic, brain imaging genetics has achieved great successes in the diagnosis of complex brain disorders. In clinical application, the imaging phenotypes affected via genetic factors will change over time. A clinical score-relevant exclusive relationship-induced multimodality learning (CS-ERMM) framework is proposed for integrating longitudinal neuroimage, genetics, and clinical score data. Specifically, first, the exclusive lasso term is used to construct the exclusive multimodality learning method, which can convey the unique information at a specific time point. The relationship-induced term is then introduced to automatically learn the relatedness among the multiple time-points from data, which explores the association between genotypes and longitudinal imaging phenotypes to facilitate the understanding of the degenerative process. Finally, the clinical score outcomes are integrated into such association model, which discovers longitudinal phenotypic markers associated with the Alzheimer's disease risk single nucleotide polymorphism that are relevant to clinical score outcomes. We also design a proximal alternating optimization strategy to solve the constructed CS-ERMM model. Extensive experimental results on brain imaging genetic data from the Alzheimer's disease neuroimaging initiative dataset have validated that our method outperforms several competing approaches, which achieve strong associations and identify important consistent markers across longitudinal phenotypes related to genetic risk biomarkers for disease interpretation.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 2","pages":"340-351"},"PeriodicalIF":5.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142224052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-19 | DOI: 10.1109/TCDS.2024.3442957
Yang Lyu;Shuyue Wang;Tianmi Hu;Quan Pan
This article addresses the coverage path planning problem that arises when an unmanned aerial vehicle (UAV) surveys an unknown site composed of multiple isolated areas. The problem is typically non-deterministic polynomial-time hard (NP-hard) and cannot be solved easily, especially when the scale of each area is considered. By decomposing the problem into two cascaded subproblems—1) covering a specific polygon area; and 2) determining the optimal visiting order of the different areas—an approximate solution can be found more efficiently. First, the target areas are approximated as convex polygons, and the coverage pattern is designed based on four control points. Then, the optimal visiting order is determined based on a state defined by area indices and control points. We propose two optimization methods to solve this problem. The first is a direct extension of the genetic algorithm, using a customized coding method. The second is a reinforcement learning-based (RL-based) approach that solves the problem as a variant of the traveling salesman problem (TSP) through end-to-end policy training. Simulation results indicate that the proposed methods solve the multiple-area coverage problem with competitive optimality and efficiency.
{"title":"UAV Coverage Path Planning of Multiple Disconnected Regions Based on Cooperative Optimization Algorithms","authors":"Yang Lyu;Shuyue Wang;Tianmi Hu;Quan Pan","doi":"10.1109/TCDS.2024.3442957","DOIUrl":"10.1109/TCDS.2024.3442957","url":null,"abstract":"This article addresses the coverage path planning problem when an unmanned aerial vehicle (UAV) surveys an unknown site composed of multiple isolated areas. The problem is typically non-deterministic polynomial-time hard(NP-hard) and cannot be easily solved, especially when considering the scale of each area. By decomposing the problem into two cascaded subproblems—1) covering a specific polygon area; and 2) determining the optimal visiting order of different areas—an approximate solution can be found more efficiently. First, the target areas are approximated as convex polygons, and the coverage pattern is designed based on four control points. Then, the optimal visiting order is determined based on a state defined by area indices and control points. We propose two different optimization methods to solve this problem. The first method is a direct extension of the genetic algorithm, using a customized coding method. The second method is a reinforcement learning-based (RL-based) approach that solves the problem as a variant of the traveling salesman problem (TSP) through end-to-end policy training. The simulation results indicate that the proposed methods can provide solutions to the multiple-area coverage problem with competitive optimality and efficiency.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 2","pages":"259-270"},"PeriodicalIF":5.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142224053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-13 | DOI: 10.1109/TCDS.2024.3442862
Yueguang Ge;Yinghao Cai;Shuo Wang;Shaolin Zhang;Tao Lu;Haitao Wang;Junhang Wei
Reasoning about the locations of target objects in robot-operated environments is a challenging task. Objects that robots need to interact with are often located at a distance or contained within containers, making them inaccessible for direct observation by the robot. The uncertainty of the target objects' storage locations and the lack of reasoning ability present considerable challenges. In this article, we propose a method for semantic localization of robot-operated objects based on human common sense and robot experiences. Instead of reasoning about object storage locations solely from the category of the target object, a probabilistic ontology model is introduced to represent uncertain knowledge in the object localization task, combining the expressive power of classical first-order logic with the inference capability of Bayesian inference. The target location is then estimated using the probabilistic ontologies with dynamic integration of human common sense and robot experiences. Experimental results in both simulation and real-world environments demonstrate the effectiveness of the proposed integration of human common sense and robot experiences in the semantic localization of robot-operated objects.
"Location Reasoning of Target Objects Based on Human Common Sense and Robot Experiences," IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 2, pp. 287-302.
Pub Date: 2024-08-13 | DOI: 10.1109/TCDS.2024.3442203
Xiaofeng Liu;Qincheng Lv;Jie Li;Siyang Song;Angelo Cangelosi
The ability of humanoid robots to exhibit empathetic facial expressions and provide corresponding responses is essential for natural human–robot interaction. To enhance this ability, we integrate the GPT-3.5 model with a facial expression recognition model to create a multimodal emotion recognition system. Additionally, we address the challenge of realistically mimicking human facial expressions by designing the physical structure of a humanoid robot. Initially, we develop a humanoid robot capable of adjusting the positions of its facial organs and neck through servo displacement to achieve more natural facial expressions. Subsequently, to overcome the current limitation that emotional interaction robots struggle to accurately recognize user emotions, we introduce a coupled generative pretrained transformer (GPT)-based multimodal emotion recognition method that utilizes both text and images, thereby improving the robot's emotion recognition accuracy. Finally, we integrate the GPT-3.5 model to generate empathetic responses based on recognized user emotional states and language text, which are then mapped onto the robot, enabling empathetic expressions and a more comfortable human–machine interaction experience. Experimental results on benchmark databases demonstrate that the coupled GPT-based multimodal emotion recognition method using text and images outperforms other approaches and possesses unique empathetic response capabilities relative to alternative methods.
"Multimodal Emotion Fusion Mechanism and Empathetic Responses in Companion Robots," IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 2, pp. 271-286.
Pub Date: 2024-08-13 | DOI: 10.1109/TCDS.2024.3442172
Jing Luo;Chaoyi Zhang;Chao Zeng;Yiming Jiang;Chenguang Yang
In physical human–robot interaction (pHRI), interaction profiles such as impedance and interaction force are strongly influenced by the operator's muscle activities. Some parameters of these interaction profiles, such as position, velocity, acceleration, and muscle activity, are easy to measure, but impedance cannot be measured directly. In some settings it is also difficult to capture force information, especially where a force sensor is hard to attach to the robot. It is therefore worth developing a feasible and simple solution that recognizes the impedance parameters by exploring the potential relationships among the above-mentioned interaction profiles. To this end, an impedance recognition framework based on different time-based weight membership functions with a broad learning system (TWMF-BLS) is developed for stable/unstable pHRI. Specifically, a linear weight membership function and a nonlinear weight membership function are proposed for stable and unstable pHRI, respectively, using hybrid features to estimate the interaction force. The human arm impedance can then be estimated without a biological model or a robot model. Experimental results demonstrate the feasibility and effectiveness of the proposed approach.
{"title":"An Impedance Recognition Framework Based on Electromyogram for Physical Human–Robot Interaction","authors":"Jing Luo;Chaoyi Zhang;Chao Zeng;Yiming Jiang;Chenguang Yang","doi":"10.1109/TCDS.2024.3442172","DOIUrl":"10.1109/TCDS.2024.3442172","url":null,"abstract":"In physical human–robot interaction (pHRI), the interaction profiles, such as impedance and interaction force are greatly influenced by the operator's muscle activities, impedance and interaction force between the robot and the operator. Actually, parameters of interaction profiles are easy to be measured, such as position, velocity, acceleration, and muscle activities. However, the impedance cannot be directly measured. In some areas, it is difficult to capture the force information, especially where the force sensor is hard to be attached on the robots. In this sense, it is worth developing a feasible and simple solution to recognize the impedance parameters by exploring the potential relationship among the above mentioned interaction profiles. To this end, a framework of impedance recognition based on different time-based weight membership functions with broad learning system (TWMF-BLS) is developed for stable/unstable pHRI. Specifically, a linear weight membership function and a nonlinear weight membership function are proposed for stable and unstable pHRI by using the hybrid features for estimating the interaction force. And then the human arm impedance can be estimated without a biological model or a robot's model. Experimental results have demonstrated the feasibility and effectiveness of the proposed approach.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 1","pages":"205-218"},"PeriodicalIF":5.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142227654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-12 | DOI: 10.1109/TCDS.2024.3436255
"IEEE Transactions on Cognitive and Developmental Systems Information for Authors," IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 4, pp. C4-C4. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10633870
Pub Date: 2024-08-12 | DOI: 10.1109/TCDS.2024.3436251
"IEEE Transactions on Cognitive and Developmental Systems Publication Information," IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 4, pp. C2-C2. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10633810
Pub Date: 2024-08-12 | DOI: 10.1109/TCDS.2024.3436253
"IEEE Computational Intelligence Society Information," IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 4, pp. C3-C3. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10633812
Pub Date: 2024-08-12 | DOI: 10.1109/TCDS.2024.3441865
Lu Dong;Pinle Ding;Xin Yuan;Andi Xu;Jie Gui
This article investigates the service path problem of multiple unmanned aerial vehicles (multi-UAV) with limited endurance providing communication services to multiple users in urban environments. Our goal is to learn an optimal multi-UAV centralized control policy that enables the UAVs to find illuminated areas in urban environments through curiosity-driven exploration and to harvest energy so that they can continue providing communication services to users. First, we propose a reinforcement learning (RL)-based multi-UAV centralized control strategy to maximize the accumulated communication service score. In the proposed framework, curiosity acts as an internal incentive signal, allowing UAVs to explore the environment without any prior knowledge. Second, a two-phase exploring protocol is proposed for practical implementation. Compared to the baseline method, our proposed method achieves a significantly higher accumulated communication service score in the exploitation-intensive phase. The results demonstrate that the proposed method obtains more accurate service paths than the baseline method and handles the exploration-exploitation tradeoff well.
{"title":"Reinforcement-Learning-Based Multi-Unmanned Aerial Vehicle Optimal Control for Communication Services With Limited Endurance","authors":"Lu Dong;Pinle Ding;Xin Yuan;Andi Xu;Jie Gui","doi":"10.1109/TCDS.2024.3441865","DOIUrl":"10.1109/TCDS.2024.3441865","url":null,"abstract":"This article investigates the service path problem of multi-unmanned aerial vehicle (multi-UAV) providing communication services to multiuser in urban environments with limited endurance. Our goal is to learn an optimal multi-UAV centralized control policy that will enable UAVs to find the illumination areas in urban environments through curiosity-driven exploration and harvest energy to continue providing communication services to users. First, we propose a reinforcement learning (RL)-based multi-UAV centralized control strategy to maximize the accumulated communication service score. In the proposed framework, curiosity can act as an internal incentive signal, allowing UAVs to explore the environment without any prior knowledge. Second, a two-phase exploring protocol is proposed for practical implementation. Compared to the baseline method, our proposed method can achieve a significantly higher accumulated communication service score in the exploitation-intensive phase. The results demonstrate that the proposed method can obtain accurate service paths over the baseline method and handle the exploration-exploitation tradeoff well.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 1","pages":"219-231"},"PeriodicalIF":5.0,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142227646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-25 | DOI: 10.1109/TCDS.2024.3433551
Guangming Wang;Xiyuan Lei;Wen Li;Won Hee Lee;Lianchi Huang;Jialin Zhu;Shanshan Jia;Dong Wang;Yang Zheng;Hua Zhang;Badong Chen;Gang Wang
Since sudden and recurrent epileptic seizures seriously affect people's lives, computer-aided automatic seizure detection is crucial for precise diagnosis and prompt treatment. A novel seizure detection algorithm, the channel-selection-based temporal convolutional network (CS-TCN), is proposed in this article. First, electroencephalogram (EEG) recordings are segmented into 2-s intervals and features are extracted from both the time and frequency domains. Then, an expanded Fisher score channel selection method is employed to select the channels that contribute most to seizure detection. Finally, the features from the selected EEG channels are fed into the TCN to capture the inherent temporal dependencies of EEG signals and detect seizure events. The Children's Hospital Boston and Massachusetts Institute of Technology (CHB-MIT) and Siena datasets were used to verify the detection performance of the CS-TCN algorithm, which achieved sensitivities of 98.56% and 98.88% and specificities of 99.80% and 99.88% in samplewise analysis, respectively. In eventwise analysis, the algorithm achieved sensitivities of 97.57% and 95.00%, with delays of 6.91 and 18.62 s and false detection rates of 0.11/h and 0.39/h, respectively. These results surpass state-of-the-art few-channel algorithms on both datasets. The CS-TCN algorithm offers excellent performance while reducing model complexity and computational requirements, showcasing its potential for seizure detection in home environments.
"Channel-Selection-Based Temporal Convolutional Network for Patient-Specific Epileptic Seizure Detection," IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 1, pp. 179-188.