Speech serves as a crucial mode of expression for individuals to articulate their thoughts and can offer valuable insight into their emotional state. A variety of studies have sought to identify metrics for determining the emotional sentiment hidden in an audio signal. This paper presents an exploratory analysis of several audio features, including chroma features, MFCCs, spectral features, and flattened spectrogram features (obtained using the VGG-19 convolutional neural network), for sentiment analysis of audio signals. The study evaluates the effectiveness of combining these features in determining the emotional states expressed in speech, using the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). Baseline techniques such as Random Forest, Multi-Layer Perceptron (MLP), Logistic Regression, XGBoost, and Support Vector Machine (SVM) are used to compare the performance of the features. The results provide insight into the potential of these audio features for determining the emotional states expressed in speech.
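For context, the kind of hand-crafted spectral descriptors fused in this study can be sketched in a few lines of numpy. This is an illustrative sketch only; the paper presumably used a feature library such as librosa, and the specific descriptors below (centroid, 85% rolloff, flatness) are assumptions, not the paper's exact feature set.

```python
import numpy as np

def spectral_features(signal, sr):
    """Compute simple spectral descriptors of a mono audio frame."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    power = spectrum ** 2
    total = power.sum() + 1e-12
    centroid = (freqs * power).sum() / total                 # spectral centroid (Hz)
    cumulative = np.cumsum(power)
    rolloff = freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])]  # 85% rolloff (Hz)
    # flatness: geometric / arithmetic mean of the magnitude spectrum
    flatness = np.exp(np.mean(np.log(spectrum + 1e-12))) / (spectrum.mean() + 1e-12)
    return np.array([centroid, rolloff, flatness])

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)       # a pure 440 Hz tone
feats = spectral_features(tone, sr)      # centroid and rolloff land near 440 Hz
```

For a pure tone, both centroid and rolloff sit at the tone frequency and flatness is near zero; for noisy, emotionally expressive speech these values spread out, which is what makes them usable as classifier inputs.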
"Statistical and Deep Convolutional Feature Fusion for Emotion Detection from Audio Signal," Durgesh Ameta, Vinay Gupta, Rohit Pilakkottil Sathian, Laxmidhar Behera, Tushar Sandhan. 2023 International Conference on Bio Signals, Images, and Instrumentation (ICBSII), 2023-03-16. DOI: 10.1109/ICBSII58188.2023.10181060
Pub Date: 2023-03-16 | DOI: 10.1109/ICBSII58188.2023.10181073
Rohit Jacob George, S. Charaan, R. Swathi, S. Rani
This paper focuses on applying an algorithm for real-time blur detection in fundus images via hardware acceleration. Blur in fundus images is caused by many factors, but in most cases it can, with reasonable accuracy, be classified as motion blur. Motion blur can be modelled as an image convolved with a blur transfer function. Blur metrics are identified via techniques such as the Haar DWT, which gives reasonable accuracy for most types of linear blur. First, a hardware architecture that computes the edge maps of images is created in Verilog HDL. This architecture is based on a novel algorithm built around a series of Haar DWT units. The simplicity and flexibility of the proposed architecture allow it to be integrated into almost any software or hardware platform with little to no modification. Subsequently, an IP core for the proposed architecture is developed; it can be extended into an SoC, programmed onto a suitable FPGA system, and supplied with images that are then classified as blurred or clear. The on-chip processing system of the FPGA-SoC reads the image data and sends it to the Blur Detector IP via the DMA IP in the SoC. The whole process uses a double-buffered design in order to reduce IP stall time and increase efficiency.
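The Haar-DWT edge metric that the hardware accelerates can be prototyped in software first. Below is a minimal sketch under stated assumptions: a single-level transform, a synthetic step-edge image, and a simple horizontal box blur standing in for motion blur; the paper's actual multi-level architecture and thresholds are not reproduced.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2D Haar DWT: approximation plus 3 detail sub-bands."""
    a = (img[::2, :] + img[1::2, :]) / 2.0   # row pairs: average
    d = (img[::2, :] - img[1::2, :]) / 2.0   # row pairs: difference
    ll = (a[:, ::2] + a[:, 1::2]) / 2.0
    lh = (a[:, ::2] - a[:, 1::2]) / 2.0
    hl = (d[:, ::2] + d[:, 1::2]) / 2.0
    hh = (d[:, ::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def edge_strength(img):
    """Per-pixel edge map from the detail sub-bands."""
    _, lh, hl, hh = haar_dwt2(img.astype(float))
    return np.sqrt(lh**2 + hl**2 + hh**2)

sharp = np.zeros((64, 64))
sharp[:, 33:] = 1.0                          # a crisp vertical step edge
kernel = np.ones(5) / 5.0                    # 5-tap box blur ~ linear motion blur
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, sharp)
# the sharp image yields a much stronger peak edge response than the blurred one
```

Comparing the peak (or histogram) of `edge_strength` between images is the essence of a DWT-based blur decision; in hardware, each `haar_dwt2` stage maps naturally onto a pipeline of add/subtract units.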
"Design of an IP core for motion blur detection in fundus images using an FPGA-based accelerator," Rohit Jacob George, S. Charaan, R. Swathi, S. Rani. 2023 International Conference on Bio Signals, Images, and Instrumentation (ICBSII), 2023-03-16. DOI: 10.1109/ICBSII58188.2023.10181073
Pub Date: 2023-03-16 | DOI: 10.1109/ICBSII58188.2023.10180905
Utkarsh Pancholi, Vijay Dave
Transcranial direct current stimulation (tDCS) is a transcranial electrical stimulation method widely used to treat patients with neurological and psychological abnormalities, with additional applications in cognitive improvement. With its simple design and operating procedure, tDCS is considered a safe and effective therapy choice. With predefined treatment protocols, it is possible to achieve the electric field required within the inner structures of the brain to excite or inhibit neuronal activity. The generated electric field varies among individuals due to anatomical and functional differences in brain tissue. In-situ modelling of the therapeutic procedure can help predict the probable outcome of tDCS. In this study, we obtained results for electric field strength variability in a cognitively normal subject. We simulated the subject in SimNIBS (v3.2.6) while varying the stimulating electrode size and shape for both electrode-gel and electrode-sponge assemblies, and measured electric field strength and focality. The simulated results show little dependence of the E-field and focality on gel or sponge thickness and a much stronger dependence on electrode size and shape. Increasing the electrode size reduces electric field strength and focality and yields an asymmetrical E-field distribution, whereas decreasing it generates a more symmetrical and focused E-field with higher strength.
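Focality can be quantified in several ways, and SimNIBS reports its own measures; as an illustrative stand-in only, one common definition is the tissue volume in which the field magnitude exceeds a fraction of a robust peak value. The synthetic Gaussian "hot spots" below are assumptions for demonstration, not simulated head-model fields.

```python
import numpy as np

def focality(e_mag, voxel_vol_mm3=1.0, peak_pct=99.9, cutoff=0.5):
    """Volume where |E| >= cutoff * robust peak.

    A percentile is used as the peak to avoid single-voxel outliers.
    Smaller volume at a given strength means a more focal stimulation.
    """
    peak = np.percentile(e_mag, peak_pct)
    return np.count_nonzero(e_mag >= cutoff * peak) * voxel_vol_mm3

# synthetic field magnitudes: a broad and a tight Gaussian hot spot
x = np.linspace(-1, 1, 50)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
r2 = X**2 + Y**2 + Z**2
broad = np.exp(-r2 / 0.5)    # stands in for a large electrode's diffuse field
tight = np.exp(-r2 / 0.05)   # stands in for a small electrode's focused field
# focality(tight) << focality(broad): the tight field is far more focal
```

Under this metric, the paper's qualitative finding (smaller electrodes give a more focused E-field) corresponds to a smaller supra-threshold volume.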
"Variability of E-field in Dorsolateral Prefrontal Cortex Upon a Change in Electrode Parameters in tDCS," Utkarsh Pancholi, Vijay Dave. 2023 International Conference on Bio Signals, Images, and Instrumentation (ICBSII), 2023-03-16. DOI: 10.1109/ICBSII58188.2023.10180905
Pub Date: 2023-03-16 | DOI: 10.1109/ICBSII58188.2023.10181083
A. Sarath Vignesh, H. Denicke Solomon, P. Dheepan, G. Kavitha
Magnetic resonance imaging is the accepted standard for analyzing any deformation in the brain. Many biomarkers can be considered for analyzing the effect of Alzheimer's disease on the brain; one such biomarker is the ventricle, which expands as Alzheimer's disease progresses. Ventricle segmentation therefore plays a vital role in diagnosis, and automated approaches are preferred since manual segmentation takes much longer. In this work, the magnetic resonance images are skull-stripped using a combination of Fuzzy C-means clustering and the Chan-Vese contouring technique. Segmentation of the ventricle is performed by two deep learning architectures, U-Net and SegUnet, on 1164 transverse MR images acquired from the ADNI (Alzheimer's Disease Neuroimaging Initiative) database, an open-source database for research on dementia. Features are extracted from the segmented images using ResNet-101 and classified with a classifier-merger approach consisting of three classifiers. The final class label is obtained by majority voting on the individual classifier predictions. The results were compared and analyzed.
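The majority-voting step of the classifier merger can be sketched directly. A minimal numpy sketch, assuming integer-encoded class labels; the three classifiers themselves and the tie-breaking policy (lowest label wins on a tie, as `argmax` does) are illustrative assumptions.

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_classifiers, n_samples) array of integer class labels.

    Returns the per-sample label that received the most votes.
    On a tie, argmax picks the lowest label.
    """
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # per-sample histogram of votes: shape (n_classes, n_samples)
    votes = np.apply_along_axis(np.bincount, 0, predictions, minlength=n_classes)
    return votes.argmax(axis=0)

preds = [[0, 1, 2, 1],    # classifier 1
         [0, 2, 2, 1],    # classifier 2
         [1, 1, 2, 0]]    # classifier 3
final = majority_vote(preds)   # [0, 1, 2, 1]
```

With three classifiers and more than two severity classes, three-way ties are possible; a confidence-weighted vote is a common refinement when the individual classifiers expose probabilities.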
"Segmentation and Severity Classification of Dementia in Magnetic Resonance Imaging using Deep Learning Networks," A. Sarath Vignesh, H. Denicke Solomon, P. Dheepan, G. Kavitha. 2023 International Conference on Bio Signals, Images, and Instrumentation (ICBSII), 2023-03-16. DOI: 10.1109/ICBSII58188.2023.10181083
Pub Date: 2023-03-16 | DOI: 10.1109/ICBSII58188.2023.10181092
J. Persiya., A. Sasithradevi, S. Roomi
Infrared thermography is a non-intrusive, contactless temperature measurement technique that provides a real-time surface temperature distribution. Ocular Surface Temperature (OST) can be measured with thermography without harming the subject. The technique is used in a wide variety of applications, especially medical ones, and infrared thermal images of the eye have recently been used to diagnose and detect many diseases and features of the human eye. This paper examines the methods currently in use for diagnosing dry eye, with a focus on thermal images. Thermography has proved to be a highly sensitive and accurate method for detecting eye disorders. Various machine learning and deep learning algorithms are discussed. Finally, it is concluded that deep learning combined with thermography is likely to see increasing use in detecting dry eye disease.
"Infrared Thermograms for Diagnosis of Dry Eye: A Review," J. Persiya, A. Sasithradevi, S. Roomi. 2023 International Conference on Bio Signals, Images, and Instrumentation (ICBSII), 2023-03-16. DOI: 10.1109/ICBSII58188.2023.10181092
Pub Date: 2023-03-16 | DOI: 10.1109/ICBSII58188.2023.10181093
K. Dharanipriya, S. Sathyageetha, K. Sowmia, J. Srinidhi
Animal attacks on crops are one of the main risks to crop production. Crop raiding has become one of the most acrimonious conflicts as farmed land encroaches on formerly uninhabited areas. Pests, natural disasters, and animal damage pose severe risks to Indian farmers and lower productivity. Farmers' traditional tactics are ineffective, and it is not practical to hire guards to watch over crops and keep animals away. Since animal and human safety are equally important, it is crucial to safeguard the crops from harm caused by animals, without injuring them, while also diverting them away. To overcome these issues, we employ deep learning, using deep neural networks from computer vision, to recognize the animals that visit the farm. In this project, the entire farm is monitored by a camera that continuously records its surroundings. A deep learning model recognizes when animals are entering, and appropriate sounds stored on an SD card are played through a speaker to scare them away. The convolutional neural network libraries and principles used to build the model are described in this research.
"Smart Crop Protection System From Animals Using AI," K. Dharanipriya, S. Sathyageetha, K. Sowmia, J. Srinidhi. 2023 International Conference on Bio Signals, Images, and Instrumentation (ICBSII), 2023-03-16. DOI: 10.1109/ICBSII58188.2023.10181093
Pub Date: 2023-03-16 | DOI: 10.1109/icbsii58188.2023.10181081
"Keynote Speakers' Profile." 2023 International Conference on Bio Signals, Images, and Instrumentation (ICBSII), 2023-03-16. DOI: 10.1109/icbsii58188.2023.10181081
Pub Date: 2023-03-16 | DOI: 10.1109/ICBSII58188.2023.10181067
A. Prasanna., S. Saran, N. Manoj, S. Alagu
Acute lymphoblastic leukemia is a form of blood cancer in which the bone marrow overproduces immature white blood cells. A novel semantic segmentation of the nucleus for detecting acute lymphoblastic leukemia is proposed here. The input images are obtained from the public database "ALLIDB2". Resizing, SMOTE, and augmentation are carried out as preprocessing. After preprocessing, segmentation of the nucleus is performed by SegNet and ResUNet, and the performance of the two networks is compared. The segmented images are then given as input to the classification models: using Xception, Inception-v3, and ResNet50, they are classified as healthy or blast cells. Inception-v3 is found to perform better than Xception and ResNet50, with an accuracy of 93.74%. This will be helpful for detecting acute lymphoblastic leukemia at an early stage.
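SMOTE balances a minority class by synthesizing new samples on the line segments between existing minority samples and their nearest minority neighbours. A simplified sketch of that idea follows; it is not the exact imblearn implementation, and the toy 2-D feature vectors are assumptions for illustration.

```python
import numpy as np

def smote_like(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples, SMOTE-style.

    Each synthetic point lies between a random minority sample and one
    of its k nearest minority neighbours (simplified from Chawla et al.).
    """
    rng = np.random.default_rng(rng)
    # pairwise distances among minority samples; exclude self-matches
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per sample
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))           # pick a minority sample
        j = nn[i, rng.integers(k)]             # and one of its neighbours
        lam = rng.random()                     # interpolation coefficient
        new.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(new)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote_like(X_min, n_new=8, rng=0)  # 8 points inside the unit square
```

Because the synthetic points are convex combinations of real minority samples, they stay inside the minority region of feature space rather than duplicating existing images outright.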
"A Deep Learning Framework for Semantic Segmentation of Nucleus for Acute Lymphoblastic Leukemia Detection," A. Prasanna, S. Saran, N. Manoj, S. Alagu. 2023 International Conference on Bio Signals, Images, and Instrumentation (ICBSII), 2023-03-16. DOI: 10.1109/ICBSII58188.2023.10181067
Pub Date: 2023-03-16 | DOI: 10.1109/ICBSII58188.2023.10181079
Mohammed Ashik, Ramesh Patnaik Manapuram, P. Choppala
The particle filter is a powerful tool for estimating time-varying latent states governed by nonlinear dynamics and sensor measurements. The particle filter's traditional resampling step is essential because it avoids degeneracy: it stochastically eliminates the small-weight particles that do not contribute to the posterior probability density function and replaces them with copies of the large-weight ones. Nevertheless, resampling is computationally costly, since it requires extensive sequential communication among the particles. This work proposes a novel particle filtering method that eliminates the need for resampling and prevents degeneracy by substituting low-weight particles using a simple cutoff decision strategy based on the cumulative sum of weights. The proposed scheme limits replacement to only a few important particles and hence substantially accelerates the filtering process. We show the merits of the proposed method in simulations on a nonlinear example and also apply it to tracking harmonics of real biomedical signals.
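A rough sketch of the idea as described, not the authors' exact algorithm: after each weight update, particles outside the set carrying (say) 99% of the cumulative weight are overwritten with copies of the heavy ones, avoiding a full multinomial resampling pass. The benchmark nonlinear growth model and all thresholds below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, t):
    """Benchmark nonlinear growth model common in the PF literature."""
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t)

# simulate a ground-truth trajectory and noisy observations y = x^2/20 + v
T, N = 50, 500
x_true, xs, ys = 0.1, [], []
for t in range(T):
    x_true = f(x_true, t) + rng.normal(0.0, 1.0)
    xs.append(x_true)
    ys.append(x_true**2 / 20 + rng.normal(0.0, 1.0))

particles = rng.normal(0.0, 2.0, N)
weights = np.full(N, 1.0 / N)
est = []
for t, y in enumerate(ys):
    particles = f(particles, t) + rng.normal(0.0, 1.0, N)   # propagate
    weights *= np.exp(-0.5 * (y - particles**2 / 20)**2)    # Gaussian likelihood
    weights = weights / (weights.sum() + 1e-300)
    est.append(np.sum(weights * particles))                 # posterior-mean estimate
    # cutoff replacement instead of full multinomial resampling:
    order = np.argsort(weights)[::-1]                       # heaviest first
    cut = np.searchsorted(np.cumsum(weights[order]), 0.99) + 1
    keep, drop = order[:cut], order[cut:]
    if drop.size:
        particles[drop] = particles[rng.choice(keep, size=drop.size)]
        weights[:] = 1.0 / N                                # reset after replacement

rmse = float(np.sqrt(np.mean((np.array(est) - np.array(xs))**2)))
```

The key saving is that only the `drop` indices are touched; a full resampler must draw and scatter all N particles, which is the sequential bottleneck the paper targets.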
"Resampling-free fast particle filtering with application to tracking rhythmic biomedical signals," Mohammed Ashik, Ramesh Patnaik Manapuram, P. Choppala. 2023 International Conference on Bio Signals, Images, and Instrumentation (ICBSII), 2023-03-16. DOI: 10.1109/ICBSII58188.2023.10181079
Pub Date: 2023-03-16 | DOI: 10.1109/ICBSII58188.2023.10181041
Mj Alben Richards, E. Kaaviya Varshini, N. Diviya, P. Prakash, Kasthuri P
Facial expression is a form of non-verbal communication using the eyes, lips, nose and facial muscles; smiling and rolling one's eyes are examples. Facial expression recognition is the process of extracting facial features from a person and classifying the expression as anger, happiness, disgust, sadness, neutrality, fear or surprise. Using machine learning, an expression recognition model is built with a Convolutional Neural Network (CNN) and trained on the Facial Expression Recognition (FER) dataset; the CNN gives good and accurate results. A Haar cascade classifier, implemented with the OpenCV library, separates face from non-face regions in the input image, which helps the convolutional network classify the expressions.
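The core operation such a CNN applies to the detected face crop is a small convolution followed by a nonlinearity. A minimal numpy sketch follows, using a Sobel kernel as a stand-in for a learned filter; real CNN layers learn many such kernels from data rather than hard-coding them.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' cross-correlation plus ReLU: the core of a CNN layer."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0)        # ReLU activation

# a vertical-edge kernel responds along the boundary of a bright region,
# the kind of low-level cue early CNN layers pick up from facial features
img = np.zeros((6, 6))
img[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
fmap = conv2d(img, sobel_x)          # feature map peaks along the edge
```

Stacking such layers with pooling and a final dense softmax over the seven expression classes yields the full recognition model the abstract describes.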
"Facial Expression Recognition using Convolutional Neural Network," Mj Alben Richards, E. Kaaviya Varshini, N. Diviya, P. Prakash, Kasthuri P. 2023 International Conference on Bio Signals, Images, and Instrumentation (ICBSII), 2023-03-16. DOI: 10.1109/ICBSII58188.2023.10181041