Traitement Du Signal — Latest Publications

Enhanced Classification of Alzheimer’s Disease Stages via Weighted Optimized Deep Neural Networks and MRI Image Analysis
CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-10-30 | DOI: 10.18280/ts.400538
Mudiyala Aparna, Battula Srinivasa Rao
Alzheimer's disease, a debilitating neurological disorder, precipitates irreversible cognitive decline and memory loss, predominantly affecting individuals aged 65 years and above. The need for an automated system capable of accurately diagnosing and stratifying Alzheimer's disease into distinct stages is paramount for early intervention and management. However, existing deep learning methodologies are often hampered by protracted training times. In this study, a time-efficient approach incorporating a two-phase transfer learning technique is proposed to surmount this challenge. This method is particularly efficacious in the analysis of Magnetic Resonance Imaging (MRI) data for the identification of Alzheimer's disease. The proposed detection system employs two-phase transfer learning, augmented with fine-tuning for multi-class classification of brain MRI scans. This allows for the categorization of images into four distinct classes: Mild Dementia (MD), Moderate Dementia (MOD), Non-Dementia (ND), and Very Mild Dementia (VMD). The classification of Alzheimer's disease was conducted using various pre-trained deep learning models, including ResNet50V2, InceptionResNetV2, Xception, DenseNet121, VGG16, and MobileNetV2. Among the models tested, ResNet50V2 demonstrated superior performance, achieving a training classification accuracy of 99.35% and a testing accuracy of 99.25%. The results underscore the potential of the proposed method in delivering more accurate classifications than those obtained from extant models, thereby contributing to the early detection and stratification of Alzheimer's disease.
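The two-phase schedule described above — first training a fresh classification head on frozen pre-trained features, then unfreezing and fine-tuning the whole network at a lower learning rate — can be sketched with a toy NumPy model. All data, dimensions, and learning rates below are illustrative stand-ins, not the paper's ResNet50V2 configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for MRI-derived feature vectors and the four
# dementia classes (MD, MOD, ND, VMD); sizes are illustrative only.
X = rng.normal(size=(200, 32))
true_W = rng.normal(size=(32, 4))
y = (X @ true_W).argmax(axis=1)       # learnable synthetic labels
Y = np.eye(4)[y]                      # one-hot targets

W_feat = rng.normal(scale=0.1, size=(32, 16))   # "pre-trained" backbone
W_head = np.zeros((16, 4))                      # fresh classification head

def forward(X, W_feat, W_head):
    H = np.tanh(X @ W_feat)                     # backbone activations
    Z = H @ W_head
    Z = Z - Z.max(axis=1, keepdims=True)        # numeric stability
    P = np.exp(Z)
    P /= P.sum(axis=1, keepdims=True)
    return H, P

def loss(P, Y):
    return -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))

# Phase 1: backbone frozen, train only the new head.
for _ in range(300):
    H, P = forward(X, W_feat, W_head)
    W_head -= 0.1 * H.T @ (P - Y) / len(X)

# Phase 2: unfreeze everything and fine-tune at a smaller learning rate.
for _ in range(100):
    H, P = forward(X, W_feat, W_head)
    G = (P - Y) @ W_head.T * (1 - H**2)         # backprop through tanh
    W_head -= 0.02 * H.T @ (P - Y) / len(X)
    W_feat -= 0.02 * X.T @ G / len(X)

_, P = forward(X, W_feat, W_head)
print(loss(P, Y) < np.log(4))   # loss drops below the uniform-guess baseline
```

The same schedule applies unchanged to a real backbone: phase 1 updates only the head, and phase 2 propagates a smaller-step gradient through the previously frozen layers.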
Citations: 0
Enhancing the Quality of Compressed Breast Ultrasound Imagery through Application of Wavelet Convolutional Neural Networks
CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-10-30 | DOI: 10.18280/ts.400531
Kenan Gencol, Murat Alparslan Gungor
(Abstract not available.)
Citations: 0
Predicting Flow-Induced Noise Based on an Improved Four-Dimensional Acoustic Analogy Model and Multi-Domain Feature Analysis
CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-10-30 | DOI: 10.18280/ts.400506
Wensi Zheng, Qiuhong Liu, Jinsheng Cai, Fang Wang
Flow-induced noise issues are widely present in practical engineering. Accurate prediction of noise signals is fundamental to studying the mechanisms of noise generation and to finding effective noise-suppression methods. Complete acoustic field information includes both the acoustic pressure and the velocity vector, yet classical acoustic analogy theory captures only the feature distribution of acoustic pressure. This study starts from the dimensionless Navier-Stokes equations governing fluid motion and, using an electromagnetic analogy, introduces a vector form of the fluctuation equation that includes density perturbations and the velocities in three directions. Choosing the permeable integration surface surrounding the object as the sound-source surface, the study analyzes the composition of the volume source term, extracts the complete load source term, and proposes the time-domain integral formula T4DC and the frequency-domain integral formula F4DC. Numerical predictions for stationary dipoles and rotating monopoles are carried out in the time, frequency, and spatial domains. The results show that the time-domain and frequency-domain noise obtained by this method agrees with the analytical solution, whereas Dunn's method differs significantly from it, especially for the dipole noise distribution. Compared with the exact solution, the acoustic velocity amplitude error of Dunn's method exceeds 35% at the m=1 frequency, demonstrating that the proposed method can accurately predict far-field acoustic pressure and velocity vectors.
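As a point of reference for the far-field behaviour such formulas must reproduce, the textbook free-field monopole solution (not the paper's T4DC/F4DC formulas) has pressure decaying as 1/r with retarded time t − r/c:

```python
import numpy as np

c = 340.0          # speed of sound, m/s (standard air value)
f = 100.0          # source frequency, Hz (arbitrary choice)
A = 1.0            # source strength, arbitrary units

def monopole_pressure(r, t):
    """Free-field monopole: 1/r amplitude decay, retarded time t - r/c."""
    return A / (4 * np.pi * r) * np.sin(2 * np.pi * f * (t - r / c))

radii = np.array([1.0, 2.0, 4.0])
t = np.linspace(0, 1 / f, 1000)    # sample one full period
# Peak pressure amplitude observed at each radius:
amp = np.array([np.abs(monopole_pressure(r, t)).max() for r in radii])
print(np.round(amp[0] / amp[1], 2))   # ~2.0: doubling r halves the amplitude
```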
Citations: 0
Hyper Spectral Imaging and Optimized Neural Networks for Early Detection of Grapevine Viral Disease
CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-10-30 | DOI: 10.18280/ts.400528
Rajalakshmi Somasundaram, Alagumani Selvaraj, Ananthi Rajakumar, Surendran Rajendran
(Abstract not available.)
Citations: 0
Enhanced Emotion Recognition from Spoken Assamese Dialect: A Machine Learning Approach with Language-Independent Features
CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-10-30 | DOI: 10.18280/ts.400532
Nupur Choudhury, Uzzal Sharma
(Abstract not available.)
Citations: 0
A Novel Swarm Unmanned Aerial Vehicle System: Incorporating Autonomous Flight, Real-Time Object Detection, and Coordinated Intelligence for Enhanced Performance
CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-10-30 | DOI: 10.18280/ts.400524
Murat Bakirci
Presently, swarm Unmanned Aerial Vehicle (UAV) systems confront an array of obstacles and constraints that detrimentally affect their efficiency and mission performance. These include restrictions on communication range, which impede operations across extensive terrains or remote locations; inadequate processing capabilities for intricate tasks such as real-time object detection or advanced data analytics; network congestion due to a large number of UAVs, resulting in delayed data exchange and potential communication failures; and power management inefficiencies reducing flight duration and overall mission endurance. Addressing these issues is paramount for the successful implementation and operation of swarm UAV systems across various real-world applications. This paper proposes a novel system designed to surmount these challenges through salient features such as fortified communication, collaborative hardware integration, task distribution, optimized network topology, and efficient routing protocols. Cost-effectiveness was prioritized in selecting the most accessible equipment satisfying minimum requirements, identified through comprehensive literature and market review. By focusing on energy efficiency and high performance, successful cooperation was facilitated through harmonized equipment and effective task division. The proposed system utilizes Raspberry Pi and Jetson Nano for task division, endowing the UAVs with superior intelligence for navigating intricate environments, real-time object detection, and the execution of coordinated actions. The incorporation of the Ad Hoc UAV Network's decentralized approach enables system adaptability and expansion in response to evolving environments and mission demands. An efficient routing protocol was selected for the system, minimizing unnecessary broadcasting and reducing network congestion, thereby ensuring extended flight durations and enhanced mission capabilities for UAVs with limited battery capacity. 
Through the careful selection and testing of hardware and software components, the proposed swarm UAV system improves communication range, processing power, autonomy, scalability, and energy efficiency. This makes it highly adaptable and effective for a broad spectrum of real-world applications. The proposed system sets a new standard in the field, demonstrating how the integration of intelligent hardware, optimized task division, and efficient networking can overcome the limitations of current swarm UAV systems.
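The routing goal the abstract mentions — minimizing unnecessary broadcasting in the Ad Hoc UAV Network — can be illustrated with duplicate-suppressed flooding, where each UAV relays a packet at most once. The six-node mesh below is a hypothetical topology, not one from the paper:

```python
from collections import deque

# Hypothetical 6-UAV ad-hoc mesh; adjacency lists model radio range.
links = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4],
    3: [1, 5], 4: [2, 5], 5: [3, 4],
}

def flood(links, source):
    """Duplicate-suppressed flooding: each UAV rebroadcasts a packet at
    most once, so transmissions grow with nodes rather than with links."""
    seen = {source}
    queue = deque([source])
    transmissions = 0
    while queue:
        node = queue.popleft()
        transmissions += 1          # this UAV broadcasts once
        for nbr in links[node]:
            if nbr not in seen:     # duplicates are dropped, not relayed
                seen.add(nbr)
                queue.append(nbr)
    return seen, transmissions

reached, tx = flood(links, 0)
print(len(reached), tx)   # 6 6: all UAVs reached, one broadcast each
```

With suppression, transmissions scale with the number of nodes; naive re-flooding would scale with the number of links and quickly congest a dense swarm.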
Citations: 0
Automatic Depth Estimation and Background Blurring of Animated Scenes Based on Deep Learning
CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-10-30 | DOI: 10.18280/ts.400539
Chao He, Yi Jia
Accurate depth estimation and background blurring of animated scenes can enhance visual realism and add depth to a shot, which has made them a focus of current animation research and production. Although deep learning has made significant progress in many fields, its application to depth estimation and background blurring of animated scenes still faces challenges: most available techniques target real-world images rather than animation, and therefore struggle to capture the distinctive styles and details of animated content. This study proposes two technical schemes designed specifically for animated scenes — a depth estimation model based on DenseNet, and a deblurring algorithm based on Very Deep Super-Resolution (VDSR) — to address these issues and provide more efficient and accurate tools for the animation industry.
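VDSR-style deblurring learns the residual between the degraded input and the target frame rather than the frame itself. The toy sketch below shows that formulation with a simple horizontal blur, where a perfect residual prediction stands in for a trained CNN:

```python
import numpy as np

rng = np.random.default_rng(2)
sharp = rng.random((8, 8))            # stand-in for a sharp animation frame
kernel = np.ones(3) / 3               # simple 3-tap box blur

# Degradation: horizontal blur applied row by row.
blurred = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, sharp)

# Residual learning: the network is trained to regress (sharp - blurred),
# and its output is added back onto its own input.
residual_target = sharp - blurred     # the regression target
restored = blurred + residual_target  # perfect prediction recovers the frame

print(np.allclose(restored, sharp))   # True by construction
```

Learning the (mostly small, high-frequency) residual rather than the full frame is what lets very deep super-resolution networks converge quickly.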
Citations: 0
Integration of Face and Gait Recognition via Transfer Learning: A Multiscale Biometric Identification Approach
CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-10-30 | DOI: 10.18280/ts.400535
Dindar M. Ahmed, Basil Sh. Mahmood
The ubiquity of biometric identification systems and their applications is evident in today's world. Among the various biometric traits, face and gait are readily obtainable and thus hold significant value. Advances in computer vision and deep learning have paved the way for integrating these biometric features at multiple scales. This study introduces a biometric recognition system that combines face and gait recognition through transfer learning. Feature extraction was accomplished using the Inception_v3 and DenseNet201 networks, while classification employed machine learning algorithms such as K-Nearest Neighbours (KNN) and Support Vector Classification (SVC). A unique dataset consisting of face and gait information extracted from video clips was constructed for this research. The findings underscore the efficacy of integrating face and gait recognition, primarily through feature and score fusion, resulting in enhanced recognition accuracy. Specifically, Inception_v3 excelled at feature extraction, and SVC was superior for classification. The system achieved 98% accuracy with feature-level fusion, and 97% accuracy was observed with score fusion using Decision Trees. The results highlight the potential of transfer learning in advancing multiscale biometric recognition systems.
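The two fusion strategies compared above can be written in a few lines of NumPy. The embedding sizes and example class posteriors are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical embeddings for one probe subject (dimensions illustrative).
face_feat = rng.normal(size=128)    # e.g. an Inception_v3 face embedding
gait_feat = rng.normal(size=64)     # e.g. a DenseNet201 gait embedding

# Feature-level fusion: concatenate embeddings before a single classifier.
fused = np.concatenate([face_feat, gait_feat])
assert fused.shape == (192,)

# Score-level fusion: average per-class posteriors of two classifiers.
p_face = np.array([0.7, 0.2, 0.1])  # posterior from the face branch
p_gait = np.array([0.5, 0.4, 0.1])  # posterior from the gait branch
p_fused = (p_face + p_gait) / 2     # [0.6, 0.3, 0.1]

print(p_fused.argmax())   # 0: both branches agree on the identity
```

Feature-level fusion lets one classifier exploit cross-modal correlations; score-level fusion keeps the branches independent and combines them only at decision time.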
Citations: 0
Advancements in Image Feature-Based Classification of Motor Imagery EEG Data: A Comprehensive Review
CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-10-30 | DOI: 10.18280/ts.400507
Cagatay Murat Yilmaz, Bahar Hatipoglu Yilmaz
Non-invasive acquisition and analysis of human brain signals play a crucial role in the development of brain-computer interfaces, enabling their widespread applicability in daily life. Motor imagery has emerged as a prominent technique for the advancement of such interfaces. While initial machine and deep learning studies have shown promising results in the context of motor imagery, several challenges remain to be addressed prior to their extensive adoption. Deep learning, renowned for its automated feature extraction and classification capabilities, has been successfully employed in various domains. Notably, recent research efforts have focused on processing and classifying motor imagery EEG signals using two-dimensional data formats, yielding noteworthy advancements. Although existing literature encompasses reviews primarily centered on machine learning or deep learning techniques, this paper uniquely emphasizes the review of methods for constructing two-dimensional image features, marking the first comprehensive exploration of this subject. In this study, we present an overview of datasets, survey a range of signal-to-image conversion methods, and discuss classification approaches. Furthermore, we comprehensively examine the current challenges and outline future directions for this research domain.
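One of the most common signal-to-image conversions such reviews cover is the short-time Fourier transform, which turns a 1-D EEG trace into a 2-D time-frequency image that a CNN can ingest. The window length, hop size, and 250 Hz sampling rate below are assumptions for the sketch:

```python
import numpy as np

fs = 250                      # assumed EEG sampling rate, Hz
t = np.arange(fs * 2) / fs    # two seconds of signal
# Synthetic "motor imagery" trace: a 10 Hz mu rhythm plus noise.
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.3 * np.random.default_rng(0).normal(size=t.size))

def spectrogram(x, win=64, hop=32):
    """Short-time FFT magnitudes: 1-D signal -> 2-D time-frequency image."""
    frames = [x[i:i + win] * np.hanning(win)          # windowed segments
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T      # (freq bins, frames)

img = spectrogram(eeg)
print(img.shape)   # (33, 14): 33 frequency bins over 14 time steps
```

The resulting 2-D array can be fed to any image classifier; other conversions surveyed in such reviews (e.g. wavelet scalograms) produce images of the same general shape.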
Citations: 0
Utilizing Deep Learning-Based Fusion of Laser Point Cloud Data and Imagery for Digital Measurement in Steel Truss Member Applications
CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-10-30 | DOI: 10.18280/ts.400516
Wenxian Li, Zhimin Liu
(Abstract not available.)
Citations: 0