Alzheimer's disease, a debilitating neurological disorder, precipitates irreversible cognitive decline and memory loss, predominantly affecting individuals aged 65 years and above. The need for an automated system capable of accurately diagnosing and stratifying Alzheimer's disease into distinct stages is paramount for early intervention and management. However, existing deep learning methodologies are often hampered by protracted training times. In this study, a time-efficient approach incorporating a two-phase transfer learning technique is proposed to surmount this challenge. This method is particularly efficacious in the analysis of Magnetic Resonance Imaging (MRI) data for the identification of Alzheimer's disease. The proposed detection system employs two-phase transfer learning, augmented with fine-tuning for multi-class classification of brain MRI scans. This allows for the categorization of images into four distinct classes: Mild Dementia (MD), Moderate Dementia (MOD), Non-Dementia (ND), and Very Mild Dementia (VMD). The classification of Alzheimer's disease was conducted using various pre-trained deep learning models, including ResNet50V2, InceptionResNetV2, Xception, DenseNet121, VGG16, and MobileNetV2. Among the models tested, ResNet50V2 demonstrated superior performance, achieving a training classification accuracy of 99.35% and a testing accuracy of 99.25%. The results underscore the potential of the proposed method in delivering more accurate classifications than those obtained from extant models, thereby contributing to the early detection and stratification of Alzheimer's disease.
{"title":"Enhanced Classification of Alzheimer’s Disease Stages via Weighted Optimized Deep Neural Networks and MRI Image Analysis","authors":"Mudiyala Aparna, Battula Srinivasa Rao","doi":"10.18280/ts.400538","DOIUrl":"https://doi.org/10.18280/ts.400538","url":null,"abstract":"Alzheimer's disease, a debilitating neurological disorder, precipitates irreversible cognitive decline and memory loss, predominantly affecting individuals aged 65 years and above. The need for an automated system capable of accurately diagnosing and stratifying Alzheimer's disease into distinct stages is paramount for early intervention and management. However, existing deep learning methodologies are often hampered by protracted training times. In this study, a time-efficient approach incorporating a two-phase transfer learning technique is proposed to surmount this challenge. This method is particularly efficacious in the analysis of Magnetic Resonance Imaging (MRI) data for the identification of Alzheimer's disease. The proposed detection system employs two-phase transfer learning, augmented with fine-tuning for multi-class classification of brain MRI scans. This allows for the categorization of images into four distinct classes: Mild Dementia (MD), Moderate Dementia (MOD), Non-Dementia (ND), and Very Mild Dementia (VMD). The classification of Alzheimer's disease was conducted using various pre-trained deep learning models, including ResNet50V2, InceptionResNetV2, Xception, DenseNet121, VGG16, and MobileNetV2. Among the models tested, ResNet50V2 demonstrated superior performance, achieving a training classification accuracy of 99.35% and a testing accuracy of 99.25%. 
The results underscore the potential of the proposed method in delivering more accurate classifications than those obtained from extant models, thereby contributing to the early detection and stratification of Alzheimer's disease.","PeriodicalId":49430,"journal":{"name":"Traitement Du Signal","volume":"105 -108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136023533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
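The two-phase scheme described above (train a new classification head on a frozen pre-trained backbone, then unfreeze and fine-tune everything at a smaller learning rate) can be illustrated with a minimal NumPy toy. The random-projection "backbone", synthetic data, and learning rates below are illustrative stand-ins, not the authors' ResNet50V2 pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained backbone: a fixed projection to 8 features.
W_backbone = rng.normal(size=(16, 8))

# Synthetic 4-class data (a stand-in for MRI-derived feature vectors).
X = rng.normal(size=(200, 16))
y = np.argmax(X[:, :4], axis=1)       # learnable synthetic labels
Y = np.eye(4)[y]                      # one-hot targets

W_head = rng.normal(size=(8, 4)) * 0.01

def forward(X, Wb, Wh):
    H = np.tanh(X @ Wb)               # "backbone" features
    Z = H @ Wh                        # class logits
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    return H, P / P.sum(axis=1, keepdims=True)

def loss(P, Y):
    return -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))

_, P = forward(X, W_backbone, W_head)
init_loss = loss(P, Y)

# Phase 1: backbone frozen, train only the new classification head.
for _ in range(200):
    H, P = forward(X, W_backbone, W_head)
    W_head -= 0.5 * H.T @ (P - Y) / len(X)

_, P = forward(X, W_backbone, W_head)
phase1_loss = loss(P, Y)

# Phase 2: unfreeze the backbone and fine-tune all weights at a smaller rate.
for _ in range(200):
    H, P = forward(X, W_backbone, W_head)
    dZ = (P - Y) / len(X)
    dH = (dZ @ W_head.T) * (1 - H ** 2)   # backprop through tanh
    W_head -= 0.05 * H.T @ dZ
    W_backbone -= 0.05 * X.T @ dH

_, P = forward(X, W_backbone, W_head)
phase2_loss = loss(P, Y)
print(init_loss, phase1_loss, phase2_loss)
```

The time saving claimed for the approach comes from phase 1: updating only the small head is far cheaper than backpropagating through the full network, which is reserved for the shorter phase 2.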
{"title":"Enhancing the Quality of Compressed Breast Ultrasound Imagery through Application of Wavelet Convolutional Neural Networks","authors":"Kenan Gencol, Murat Alparslan Gungor","doi":"10.18280/ts.400531","DOIUrl":"https://doi.org/10.18280/ts.400531","url":null,"abstract":"ABSTRACT","PeriodicalId":49430,"journal":{"name":"Traitement Du Signal","volume":"135 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136067802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flow-induced noise problems are widespread in practical engineering. Accurate prediction of noise signals is fundamental to studying the mechanism of noise generation and to seeking effective noise suppression methods. Complete acoustic field information comprises both the acoustic pressure and the velocity vector, yet classical acoustic analogy theory can only describe the distribution of acoustic pressure. Starting from the dimensionless Navier-Stokes equations governing fluid motion and drawing on an analogy with electromagnetism, this study introduces a vector form of the fluctuation equation that includes the density perturbation and the velocities in three directions. By choosing a permeable integral surface surrounding the object as the sound source surface, the study further analyzes the composition of the volume source term and extracts the complete load source term, proposing the time-domain integral formula T4DC and the frequency-domain integral formula F4DC. Numerical predictions for stationary dipoles and rotating monopoles are carried out in the time, frequency, and spatial domains. The results show that the time-domain and frequency-domain noise obtained by this method agrees with the analytical solution, whereas Dunn's method differs significantly from it, especially for the dipole noise distribution. Compared with the exact solution, the acoustic velocity amplitude error of Dunn's method exceeds 35% at the m=1 frequency, demonstrating that the proposed method accurately predicts the far-field acoustic pressure and velocity vector.
{"title":"Predicting Flow-Induced Noise Based on an Improved Four-Dimensional Acoustic Analogy Model and Multi-Domain Feature Analysis","authors":"Wensi Zheng, Qiuhong Liu, Jinsheng Cai, Fang Wang","doi":"10.18280/ts.400506","DOIUrl":"https://doi.org/10.18280/ts.400506","url":null,"abstract":"Flow-induced noise issues are widely present in practical engineering fields. Accurate prediction of noise signals is fundamental to studying the mechanism of noise generation and seeking effective noise suppression methods. Complete acoustic field information often includes both acoustic pressure and velocity vectors. However, the classic acoustic analogy theory can only consider the feature distribution of acoustic pressure. This study starts from the dimensionless Navier-Stokes equations followed by fluid motion and, with the concept of electromagnetic analogy, introduces a vector form of the fluctuation equation that includes density perturbations and velocities in three directions. By choosing the permeable integral surface surrounding the object as the sound source surface, this study further analyzes the composition of the volume source term and extract the complete load source term, proposing the time-domain integral analytical formula T4DC and the frequency-domain integral formula F4DC. Numerical predictions for stationary dipoles and rotating monopoles are carried out in the time domain, frequency domain, and spatial domain. The numerical results show that the time-domain and frequency-domain noise obtained by this method can be consistent with the analytical solution, while the method of Dunn has a significant difference from the analytical solution, especially for dipole noise distribution. 
Compared with the accurate solution, the acoustic velocity amplitude error obtained by Dunn's method reached more than 35% at m=1 frequency, fully demonstrating that our method can accurately predict far-field acoustic pressure and velocity vectors.","PeriodicalId":49430,"journal":{"name":"Traitement Du Signal","volume":"141 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136067954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hyper Spectral Imaging and Optimized Neural Networks for Early Detection of Grapevine Viral Disease","authors":"Rajalakshmi Somasundaram, Alagumani Selvaraj, Ananthi Rajakumar, Surendran Rajendran","doi":"10.18280/ts.400528","DOIUrl":"https://doi.org/10.18280/ts.400528","url":null,"abstract":".","PeriodicalId":49430,"journal":{"name":"Traitement Du Signal","volume":"181 S76","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136068616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Presently, swarm Unmanned Aerial Vehicle (UAV) systems confront an array of obstacles and constraints that detrimentally affect their efficiency and mission performance. These include restrictions on communication range, which impede operations across extensive terrains or remote locations; inadequate processing capabilities for intricate tasks such as real-time object detection or advanced data analytics; network congestion due to a large number of UAVs, resulting in delayed data exchange and potential communication failures; and power management inefficiencies reducing flight duration and overall mission endurance. Addressing these issues is paramount for the successful implementation and operation of swarm UAV systems across various real-world applications. This paper proposes a novel system designed to surmount these challenges through salient features such as fortified communication, collaborative hardware integration, task distribution, optimized network topology, and efficient routing protocols. Cost-effectiveness was prioritized in selecting the most accessible equipment satisfying minimum requirements, identified through comprehensive literature and market review. By focusing on energy efficiency and high performance, successful cooperation was facilitated through harmonized equipment and effective task division. The proposed system utilizes Raspberry Pi and Jetson Nano for task division, endowing the UAVs with superior intelligence for navigating intricate environments, real-time object detection, and the execution of coordinated actions. The incorporation of the Ad Hoc UAV Network's decentralized approach enables system adaptability and expansion in response to evolving environments and mission demands. An efficient routing protocol was selected for the system, minimizing unnecessary broadcasting and reducing network congestion, thereby ensuring extended flight durations and enhanced mission capabilities for UAVs with limited battery capacity. 
Through the careful selection and testing of hardware and software components, the proposed swarm UAV system improves communication range, processing power, autonomy, scalability, and energy efficiency. This makes it highly adaptable and effective for a broad spectrum of real-world applications. The proposed system sets a new standard in the field, demonstrating how the integration of intelligent hardware, optimized task division, and efficient networking can overcome the limitations of current swarm UAV systems.
{"title":"A Novel Swarm Unmanned Aerial Vehicle System: Incorporating Autonomous Flight, Real-Time Object Detection, and Coordinated Intelligence for Enhanced Performance","authors":"Murat Bakirci","doi":"10.18280/ts.400524","DOIUrl":"https://doi.org/10.18280/ts.400524","url":null,"abstract":"Presently, swarm Unmanned Aerial Vehicle (UAV) systems confront an array of obstacles and constraints that detrimentally affect their efficiency and mission performance. These include restrictions on communication range, which impede operations across extensive terrains or remote locations; inadequate processing capabilities for intricate tasks such as real-time object detection or advanced data analytics; network congestion due to a large number of UAVs, resulting in delayed data exchange and potential communication failures; and power management inefficiencies reducing flight duration and overall mission endurance. Addressing these issues is paramount for the successful implementation and operation of swarm UAV systems across various real-world applications. This paper proposes a novel system designed to surmount these challenges through salient features such as fortified communication, collaborative hardware integration, task distribution, optimized network topology, and efficient routing protocols. Cost-effectiveness was prioritized in selecting the most accessible equipment satisfying minimum requirements, identified through comprehensive literature and market review. By focusing on energy efficiency and high performance, successful cooperation was facilitated through harmonized equipment and effective task division. The proposed system utilizes Raspberry Pi and Jetson Nano for task division, endowing the UAVs with superior intelligence for navigating intricate environments, real-time object detection, and the execution of coordinated actions. 
The incorporation of the Ad Hoc UAV Network's decentralized approach enables system adaptability and expansion in response to evolving environments and mission demands. An efficient routing protocol was selected for the system, minimizing unnecessary broadcasting and reducing network congestion, thereby ensuring extended flight durations and enhanced mission capabilities for UAVs with limited battery capacity. Through the careful selection and testing of hardware and software components, the proposed swarm UAV system improves communication range, processing power, autonomy, scalability, and energy efficiency. This makes it highly adaptable and effective for a broad spectrum of real-world applications. The proposed system sets a new standard in the field, demonstrating how the integration of intelligent hardware, optimized task division, and efficient networking can overcome the limitations of current swarm UAV systems.","PeriodicalId":49430,"journal":{"name":"Traitement Du Signal","volume":"17 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136103998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
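The abstract does not name the routing protocol it selects. As a generic illustration of why suppressing redundant rebroadcasts reduces network congestion, the sketch below compares blind flooding with probabilistic gossip on a random UAV topology; every parameter (node count, radio range, forwarding probability) is invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random geometric graph: 40 UAVs, links only within a limited radio range.
n, radio_range = 40, 0.35
pos = rng.random((n, 2))
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
adj = (dist < radio_range) & ~np.eye(n, dtype=bool)

def broadcast(adj, src, forward_prob):
    """Flood a message; each newly reached node re-forwards with forward_prob."""
    coin = np.random.default_rng(3)
    reached = {src}
    frontier, transmissions = [src], 0
    while frontier:
        nxt = []
        for u in frontier:
            transmissions += 1          # node u broadcasts once
            for v in np.flatnonzero(adj[u]):
                if v not in reached:
                    reached.add(v)
                    if coin.random() < forward_prob:
                        nxt.append(v)   # v will re-forward next round
        frontier = nxt
    return len(reached), transmissions

full_cov, full_tx = broadcast(adj, 0, 1.0)   # blind flooding
goss_cov, goss_tx = broadcast(adj, 0, 0.7)   # probabilistic gossip
print(full_cov, full_tx, goss_cov, goss_tx)
```

With a forwarding probability below 1, transmissions (and hence channel contention) are saved at the risk of slightly reduced coverage, which is the trade-off any broadcast-suppressing routing protocol for battery-limited UAVs must balance.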
Accurate depth estimation and background blurring of animated scenes enhance the visual sense of reality and add depth, making them a focus of current animation research and production. However, although deep learning has made significant progress in many research fields, its application to depth estimation and background blurring of animated scenes still faces challenges: most available techniques are designed for real-world images rather than animation, so they struggle to capture the unique styles and details of animated content. This study proposes two technical schemes designed specifically for animated scenes, a depth estimation model based on DenseNet and a deblurring algorithm based on Very Deep Super-Resolution (VDSR), with the aim of addressing these issues and providing more efficient and accurate tools for the animation industry.
{"title":"Automatic Depth Estimation and Background Blurring of Animated Scenes Based on Deep Learning","authors":"Chao He, Yi Jia","doi":"10.18280/ts.400539","DOIUrl":"https://doi.org/10.18280/ts.400539","url":null,"abstract":"Animation technology enables more accurate depth estimation and background blurring of animated scenes as it can enhance the sense of reality of the vision and increase its depth, thus it has become a hot spot in relevant research and production these days. However, although deep learning has made significant progresses in many research fields, its application in depth estimation and background blurring of animated scenes is still facing a few challenges. Most available technologies are for real world images, not animations, so there are certain difficulties capturing the unique styles of animations and their details. This study proposes two technical schemes specifically designed for animated scenes: a depth estimation model based on DenseNet , and a deblurring algorithm based on Very Deep Super Resolution ( VDSR ), in the hopes of providing solutions for the above mentioned matters, as well as forging more efficient and accurate tools for the animation industry.","PeriodicalId":49430,"journal":{"name":"Traitement Du Signal","volume":"218 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136022717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ubiquity of biometric identification systems and their applications is evident in today's world. Among various biometric features, face and gait are readily obtainable and thus hold significant value. Advances in computational vision and deep learning have paved the way for the integration of these biometric features at multiple scales. This study introduces a system for biometric recognition that synergises face and gait recognition through the lens of transfer learning. Feature extraction was accomplished using Inception_v3 and DenseNet201 algorithms, while classification was performed employing machine learning algorithms such as K-Nearest Neighbours (KNN) and Support Vector Classification (SVC). A unique dataset was constructed for this research, consisting of face and gait information extracted from video clips. The findings underscore the efficacy of integrating face and gait recognition, primarily through feature and score fusion, resulting in enhanced recognition accuracy. Specifically, the Inception_v3 algorithm was found to excel in feature extraction, and SVC was superior for classification purposes. The system achieved an accuracy of 98% when feature-level fusion was performed, and 97% accuracy was observed with score fusion using Decision Trees. The results highlight the potential of transfer learning in advancing multiscale biometric recognition systems.
{"title":"Integration of Face and Gait Recognition via Transfer Learning: A Multiscale Biometric Identification Approach","authors":"Dindar M. Ahmed, Basil Sh. Mahmood","doi":"10.18280/ts.400535","DOIUrl":"https://doi.org/10.18280/ts.400535","url":null,"abstract":"The ubiquity of biometric identification systems and their applications is evident in today's world. Among various biometric features, face and gait are readily obtainable and thus hold significant value. Advances in computational vision and deep learning have paved the way for the integration of these biometric features at multiple scales. This study introduces a system for biometric recognition that synergises face and gait recognition through the lens of transfer learning. Feature extraction was accomplished using Inception_v3 and DenseNet201 algorithms, while classification was performed employing machine learning algorithms such as K-Nearest Neighbours (KNN) and Support Vector Classification (SVC). A unique dataset was constructed for this research, consisting of face and gait information extracted from video clips. The findings underscore the efficacy of integrating face and gait recognition, primarily through feature and score fusion, resulting in enhanced recognition accuracy. Specifically, the Inception_v3 algorithm was found to excel in feature extraction, and SVC was superior for classification purposes. The system achieved an accuracy of 98% when feature-level fusion was performed, and 97% accuracy was observed with score fusion using Decision Trees. 
The results highlight the potential of transfer learning in advancing multiscale biometric recognition systems.","PeriodicalId":49430,"journal":{"name":"Traitement Du Signal","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136023259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
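A minimal NumPy sketch of the two fusion strategies compared above: feature-level fusion (concatenate the embeddings, then classify once) versus score-level fusion (score each modality separately, then combine the class scores). The synthetic "face" and "gait" embeddings and the 1-nearest-neighbour scorer stand in for the paper's Inception_v3/DenseNet201 features and SVC classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

n_classes, per_class = 5, 20
labels = np.repeat(np.arange(n_classes), per_class)

# Synthetic embeddings: each identity has its own mean in both modalities.
face = rng.normal(size=(n_classes, 8))[labels] + 0.8 * rng.normal(size=(labels.size, 8))
gait = rng.normal(size=(n_classes, 6))[labels] + 0.8 * rng.normal(size=(labels.size, 6))

# Split into a gallery (train) set and a probe (test) set.
idx = rng.permutation(labels.size)
tr, te = idx[:70], idx[70:]

def knn_scores(train, test, y_train):
    # Class score = negative distance to that class's nearest gallery sample.
    d = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=2)
    return np.stack([-d[:, y_train == c].min(axis=1) for c in range(n_classes)], axis=1)

# Feature-level fusion: concatenate the modalities, then classify.
fused = np.concatenate([face, gait], axis=1)
feat_pred = knn_scores(fused[tr], fused[te], labels[tr]).argmax(axis=1)

# Score-level fusion: score each modality separately, then sum the scores.
score = knn_scores(face[tr], face[te], labels[tr]) + knn_scores(gait[tr], gait[te], labels[tr])
score_pred = score.argmax(axis=1)

feat_acc = (feat_pred == labels[te]).mean()
score_acc = (score_pred == labels[te]).mean()
print(feat_acc, score_acc)
```

Feature-level fusion lets the classifier exploit cross-modal correlations, while score-level fusion keeps the two pipelines independent and is easier to deploy when the modalities are captured by separate systems.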
Non-invasive acquisition and analysis of human brain signals play a crucial role in the development of brain-computer interfaces, enabling their widespread applicability in daily life. Motor imagery has emerged as a prominent technique for the advancement of such interfaces. While initial machine and deep learning studies have shown promising results in the context of motor imagery, several challenges remain to be addressed prior to their extensive adoption. Deep learning, renowned for its automated feature extraction and classification capabilities, has been successfully employed in various domains. Notably, recent research efforts have focused on processing and classifying motor imagery EEG signals using two-dimensional data formats, yielding noteworthy advancements. Although existing literature encompasses reviews primarily centered on machine learning or deep learning techniques, this paper uniquely emphasizes the review of methods for constructing two-dimensional image features, marking the first comprehensive exploration of this subject. In this study, we present an overview of datasets, survey a range of signal-to-image conversion methods, and discuss classification approaches. Furthermore, we comprehensively examine the current challenges and outline future directions for this research domain.
{"title":"Advancements in Image Feature-Based Classification of Motor Imagery EEG Data: A Comprehensive Review","authors":"Cagatay Murat Yilmaz, Bahar Hatipoglu Yilmaz","doi":"10.18280/ts.400507","DOIUrl":"https://doi.org/10.18280/ts.400507","url":null,"abstract":"Non-invasive acquisition and analysis of human brain signals play a crucial role in the development of brain-computer interfaces, enabling their widespread applicability in daily life. Motor imagery has emerged as a prominent technique for the advancement of such interfaces. While initial machine and deep learning studies have shown promising results in the context of motor imagery, several challenges remain to be addressed prior to their extensive adoption. Deep learning, renowned for its automated feature extraction and classification capabilities, has been successfully employed in various domains. Notably, recent research efforts have focused on processing and classifying motor imagery EEG signals using two-dimensional data formats, yielding noteworthy advancements. Although existing literature encompasses reviews primarily centered on machine learning or deep learning techniques, this paper uniquely emphasizes the review of methods for constructing two-dimensional image features, marking the first comprehensive exploration of this subject. In this study, we present an overview of datasets, survey a range of signal-to-image conversion methods, and discuss classification approaches. 
Furthermore, we comprehensively examine the current challenges and outline future directions for this research domain.","PeriodicalId":49430,"journal":{"name":"Traitement Du Signal","volume":"31 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136068344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
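One of the most common signal-to-image conversions in this literature is the short-time Fourier transform spectrogram, which turns a one-dimensional EEG channel into a two-dimensional time-frequency image that a CNN can consume. A minimal NumPy sketch on a synthetic "motor imagery" signal follows; the sampling rate, window, and hop sizes are illustrative.

```python
import numpy as np

fs = 250                         # sampling rate (Hz), a typical EEG value
t = np.arange(0, 4, 1 / fs)      # 4 s of signal
# Synthetic channel: mu (10 Hz) and beta (22 Hz) rhythms plus noise.
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 22 * t) \
    + 0.3 * np.random.default_rng(0).normal(size=t.size)

def spectrogram(x, win=128, hop=32):
    """Short-time Fourier transform magnitude: a 2-D time-frequency image."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    S = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return S.T                   # rows = frequency bins, cols = time frames

img = spectrogram(x)
# Normalise to [0, 1] so the array can be fed to a 2-D CNN like an image.
img = (img - img.min()) / (img.max() - img.min())

freqs = np.fft.rfftfreq(128, 1 / fs)
peak_hz = freqs[img.mean(axis=1).argmax()]
print(img.shape, peak_hz)
```

The dominant row of the resulting image sits near the 10 Hz mu rhythm, which is exactly the band whose event-related desynchronisation motor imagery classifiers look for.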
{"title":"Utilizing Deep Learning-Based Fusion of Laser Point Cloud Data and Imagery for Digital Measurement in Steel Truss Member Applications","authors":"Wenxian Li, Zhimin Liu","doi":"10.18280/ts.400516","DOIUrl":"https://doi.org/10.18280/ts.400516","url":null,"abstract":"","PeriodicalId":49430,"journal":{"name":"Traitement Du Signal","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136103624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}