Gait events detection from heel and toe trajectories: comparison of methods using multiple datasets
Pub Date: 2021-06-23 | DOI: 10.1109/MeMeA52024.2021.9478606
V. Guimarães, I. Sousa, M. Correia
Reliable detection of gait events is important to ensure accurate assessment of gait. While it is usually performed using force platforms, methods based solely on kinematic analysis have also been proposed. These methods place no restrictions on the number of steps that can be analysed, simplifying the setup and reducing the complexity of assessments. They also remove the need to annotate events manually when force platforms are not available. Although a few methods have been proposed in the literature, validation studies remain relatively scarce. In this study we present multiple methods for the detection of heel strike (HS) and toe off (TO) in normal walking, and validate the detection against annotated events using three different datasets. The best-performing candidates are based on the evaluation of heel vertical velocity (for HS) and toe vertical acceleration (for TO), resulting in relative errors of -12.4 ± 32.9 ms for HS and -15.5 ± 24.9 ms for TO. The method is compatible with barefoot and shod walking, constituting a convenient, fast and reliable approach to automatic gait event detection using kinematic data.
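As a hedged illustration of this kind of event detection (not the authors' exact implementation), the sketch below takes heel and toe vertical marker trajectories, derives heel vertical velocity and toe vertical acceleration by numerical differentiation, and picks HS candidates at pronounced negative-velocity peaks and TO candidates at pronounced acceleration peaks; the peak-picking rules and the minimum event spacing are assumptions.

```python
# Hedged sketch (not the paper's exact method): gait event candidates from
# heel/toe vertical marker trajectories sampled at fs Hz.
import numpy as np
from scipy.signal import find_peaks

def detect_gait_events(heel_z, toe_z, fs):
    """heel_z, toe_z: vertical marker positions [m]; fs: sampling rate [Hz]."""
    t = np.arange(len(heel_z)) / fs
    heel_vz = np.gradient(heel_z, 1.0 / fs)                        # heel vertical velocity
    toe_az = np.gradient(np.gradient(toe_z, 1.0 / fs), 1.0 / fs)   # toe vertical acceleration

    # Heel strike: pronounced negative heel-velocity peaks (end of downward heel motion).
    hs_idx, _ = find_peaks(-heel_vz, distance=int(0.5 * fs))
    # Toe off: pronounced upward toe-acceleration peaks as the foot leaves the ground.
    to_idx, _ = find_peaks(toe_az, distance=int(0.5 * fs))
    return t[hs_idx], t[to_idx]
```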
{"title":"Gait events detection from heel and toe trajectories: comparison of methods using multiple datasets","authors":"V. Guimarães, I. Sousa, M. Correia","doi":"10.1109/MeMeA52024.2021.9478606","DOIUrl":"https://doi.org/10.1109/MeMeA52024.2021.9478606","url":null,"abstract":"Reliable detection of gait events is important to ensure accurate assessment of gait. While it is usually performed resorting to force platforms, methods based uniquely on kinematic analysis have also been proposed. These methods place no restrictions on the number of steps that can be analysed, simplifying setup and complexity of assessments. They also replace the need of annotating events manually when force platforms are not available. Although few methods have been proposed in literature, validation studies are relatively scarce. In this study we present multiple methods for the detection of heel strike (HS) and toe off (TO) in normal walking, and validate the detection against annotated events using three different datasets. The best performing candidates are based on the evaluation of heel vertical velocity (for HS) and toe vertical acceleration (for TO), resulting in relative errors of -12.4 ± 32.9 ms for HS and of -15.5 ± 24.9 ms for TO. The method is compatible with barefoot and shod walking, constituting a convenient, fast and reliable alternative to automatic gait event detection using kinematic data.","PeriodicalId":429222,"journal":{"name":"2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA)","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122862009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of Blood Microfluidic Co-Flow Devices for Dual Measurement
Pub Date: 2021-06-23 | DOI: 10.1109/MeMeA52024.2021.9478718
Amit Nayak, C. Armstrong, C. Mavriplis, M. Fenech
Microfluidics is a prominent field used to analyze small amounts of biological fluids. Co-flow microfluidic devices can be used to study red blood cell aggregation in blood samples under a controlled shear rate. The purpose of this paper is to optimize the parameters of a co-flow device in order to produce a linear velocity profile in the blood sample, which would provide a constant shear rate. This is desirable because the eventual goal is to use an ultrasonic measurement sensor with the co-flow microfluidic device to analyze red blood cell aggregates. Computational fluid dynamics simulations were performed to model the microfluidic device, and the simulation results were verified by micro-particle image velocimetry (µPIV) measurements on the experimental device. Modifications were made to the geometry and flow rate ratio of the microfluidic device to produce a more linear velocity profile. By using a 50:1 flow rate ratio of shearing fluid to sheared fluid, we were able to achieve a velocity profile in the blood layer that is approximately linear.
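As a small illustrative aid (not from the paper), the snippet below estimates the shear rate in the blood layer as the slope of a straight-line fit to a sampled velocity profile and uses the coefficient of determination as a simple linearity check; the variable names and input format are assumptions.

```python
# Hedged sketch: shear rate and linearity check from a velocity profile sampled
# across the blood layer (e.g. from CFD output or µPIV).
import numpy as np

def shear_rate_and_linearity(y_m, u_m_per_s):
    """y_m: positions across the layer [m]; u_m_per_s: streamwise velocity [m/s]."""
    slope, intercept = np.polyfit(y_m, u_m_per_s, 1)     # du/dy ~ shear rate [1/s]
    u_fit = slope * y_m + intercept
    ss_res = np.sum((u_m_per_s - u_fit) ** 2)
    ss_tot = np.sum((u_m_per_s - np.mean(u_m_per_s)) ** 2)
    r2 = 1.0 - ss_res / ss_tot                           # R^2 close to 1 -> nearly linear
    return slope, r2
```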
{"title":"Optimization of Blood Microfluidic Co-Flow Devices for Dual Measurement","authors":"Amit Nayak, C. Armstrong, C. Mavriplis, M. Fenech","doi":"10.1109/MeMeA52024.2021.9478718","DOIUrl":"https://doi.org/10.1109/MeMeA52024.2021.9478718","url":null,"abstract":"Microfluidics is a prominent field used to analyze small amounts of biological fluids. Co-Flow microfluidic devices can be used to study red blood cell aggregation in blood samples under a controlled shear rate. The purpose of this paper is to optimize the parameters of a co-flow device in order to produce a linear velocity profile in blood samples which would provide a constant shear rate. This is desired as the eventual goal is to use an ultrasonic measurement sensor with the co-flow microfluidic device to analyze red blood cell aggregates. Computational fluid dynamic simulations were performed to model a microfluidic device. The simulation results were verified by µPIV of the experimental microfluidic device. Modifications were made to the geometry and flow rate ratio of the microfluidic device to produce a more linear velocity profile. By using a flow rate ratio of 50:1 of shearing fluid to sheared fluid, we were able to achieve a velocity profile in the blood layer that is approximately linear.","PeriodicalId":429222,"journal":{"name":"2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121264266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of Deep Brain Stimulation Frequency on Gait Symmetry, Smoothness and Variability using IMU
Pub Date: 2021-06-23 | DOI: 10.1109/MeMeA52024.2021.9478602
E. Panero, E. Digo, U. Dimanico, C. Artusi, M. Zibetti, L. Gastaldi
A deep brain stimulation (DBS) implant represents an appropriate treatment for the motor symptoms typical of Parkinson’s Disease (PD). However, little attention has been given to the effects of different DBS stimulation frequencies on gait outcomes. Accordingly, the aim of this pilot study was to evaluate the effects of two different DBS stimulation frequencies (60 and 130 Hz) on gait spatio-temporal parameters, symmetry, smoothness, and variability in PD patients. The analysis focused on acceleration signals acquired by a magnetic inertial measurement unit placed on the trunk of participants. Gait sessions were recorded for three PD patients, three young healthy subjects, and three elderly healthy subjects. Gait outcomes revealed a connection with both age and pathology. Values of the Harmonic Ratio (HR) estimated from the three-axis acceleration signals showed subject-specific effects of the DBS stimulation frequencies. Consequently, HR proved suitable not only for describing gait characteristics, but also as a monitoring parameter for the subject-specific adaptation of the DBS stimulation frequency. Concerning the Poincaré analysis of the vertical acceleration signal, PD patients showed a greater dispersion of data compared to healthy subjects, but with negligible differences between the two stimulation frequencies. Overall, the presented analysis represents a starting point for the objective evaluation of gait performance and characteristics in PD patients with a DBS implant.
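For context, the Harmonic Ratio is commonly computed from the harmonics of the stride frequency in the trunk acceleration spectrum; the sketch below follows that common definition (even-to-odd harmonic amplitude ratio for the vertical and antero-posterior axes) and is only an assumption about how the paper computed it.

```python
# Hedged sketch of the Harmonic Ratio as commonly defined in the literature
# (the paper's exact implementation may differ).
import numpy as np

def harmonic_ratio(acc_stride, n_harmonics=20):
    """acc_stride: trunk acceleration samples spanning exactly one stride."""
    spectrum = np.abs(np.fft.rfft(acc_stride))
    harm = spectrum[1:n_harmonics + 1]   # harmonics 1..n of the stride frequency
    even = harm[1::2].sum()              # 2nd, 4th, ... harmonics (in-phase with the stride)
    odd = harm[0::2].sum()               # 1st, 3rd, ... harmonics (out-of-phase)
    return even / odd                    # higher HR -> smoother, more symmetric gait
```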
{"title":"Effect of Deep Brain Stimulation Frequency on Gait Symmetry, Smoothness and Variability using IMU","authors":"E. Panero, E. Digo, U. Dimanico, C. Artusi, M. Zibetti, L. Gastaldi","doi":"10.1109/MeMeA52024.2021.9478602","DOIUrl":"https://doi.org/10.1109/MeMeA52024.2021.9478602","url":null,"abstract":"Deep brain stimulation (DBS) implant represents an appropriate treatment for motor symptoms typical of Parkinson’s Disease (PD). However, little attention has been given to the effects of different DBS stimulation frequencies on gait outcomes. Accordingly, the aim of this pilot study was to evaluate the effects of two different DBS stimulation frequencies (60 and 130 Hz) on gait spatio-temporal parameters, symmetry, smoothness, and variability in PD patients. The analysis concentrated on acceleration signals acquired by a magnetic inertial measurement unit placed on the trunk of participants. Sessions of gait were registered for three PD patients, three young and three elderly healthy subjects. Gait outcomes revealed a connection with both age and pathology. Values of the Harmonic Ratio (HR) estimated for the three-axis acceleration signals showed subjective effects provoked by DBS stimulation frequencies. Consequently, HR turned out to be suitable for depicting gait characteristics, but also as a monitoring parameter for the subjective adaptation of DBS stimulation frequency. Concerning the Poincaré analysis of vertical acceleration signal, PD patients showed a greater dispersion of data compared to healthy subjects, but with negligible differences between the two stimulation frequencies. Overall, the presented analysis represented a starting point for the objective evaluation of gait performance and characteristics in PD patients with a DBS implant.","PeriodicalId":429222,"journal":{"name":"2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132187344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metrological characterization and signal processing of a wearable sensor for the measurement of heart rate variability
Pub Date: 2021-06-23 | DOI: 10.1109/MeMeA52024.2021.9478713
N. Morresi, S. Casaccia, G. M. Revel
This paper presents a methodology for processing the photoplethysmography (PPG) signal measured by a smartwatch during motion tests. For statistical validation, signals from 15 healthy subjects were collected while the subjects walked on a treadmill. Motion artifacts (MAs) were removed from the PPG signals, showing that 37% of the signals were affected by MAs. Then, for the experimental performance assessment, heart rate variability (HRV) was extracted from the PPG signal by measuring the RR intervals, which were compared with the RR intervals extracted from ECG signals measured using a multi-parametric chest belt taken as the reference sensor. Compared to the reference method, the uncertainty of the PPG sensor in the measurement of the RR intervals is ±169 ms (with a coverage factor k = 2), corresponding to about 30% in relative terms.
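A minimal sketch of how such an expanded uncertainty can be obtained, assuming the PPG-derived and reference RR interval series have already been beat-matched; the k times standard deviation formulation follows the coverage-factor statement in the abstract, but the paper's exact uncertainty budget is not reproduced here.

```python
# Hedged sketch: expanded uncertainty of PPG-derived RR intervals against a
# reference ECG series, expressed as k * std of the paired differences.
import numpy as np

def expanded_uncertainty(rr_ppg_ms, rr_ecg_ms, k=2):
    """rr_ppg_ms, rr_ecg_ms: beat-matched RR interval series in milliseconds."""
    diff = np.asarray(rr_ppg_ms) - np.asarray(rr_ecg_ms)
    u = k * np.std(diff, ddof=1)               # expanded uncertainty [ms], coverage factor k
    rel = 100.0 * u / np.mean(rr_ecg_ms)       # percentage w.r.t. the mean reference RR interval
    return u, rel
```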
{"title":"Metrological characterization and signal processing of a wearable sensor for the measurement of heart rate variability","authors":"N. Morresi, S. Casaccia, G. M. Revel","doi":"10.1109/MeMeA52024.2021.9478713","DOIUrl":"https://doi.org/10.1109/MeMeA52024.2021.9478713","url":null,"abstract":"This paper presents a methodology for the processing of the Photoplethysmography (PPG) signal measured using a smartwatch during motion tests. For statistical validation, signals from 15 healthy subjects have been collected while the subjects are walking on a treadmill. The motion artifacts (MAs) of the PPG signal have been removed demonstrating that the 37% of the signals are affected by MAs. Then, the experimental performance assessment of the PPG signal, from which the heart rate variability (HRV) has been extracted, by measuring the RR intervals, is compared to the RR intervals extracted from ECG signals measured using a multi-parametric chest belt that is considered as a reference sensor. The uncertainty of the PPG sensor in the measurement of the RR intervals is ± 169 ms, (with a coverage factor k = 2) if compared to the reference method, which in percentage is 30%.","PeriodicalId":429222,"journal":{"name":"2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA)","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115793609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contactless Continuous Monitoring Of Respiration
Pub Date: 2021-06-23 | DOI: 10.1109/MeMeA52024.2021.9478693
L. Scalise, M. Ali, L. Antognoli
Breathing is a vital function, and monitoring of the breathing signal plays an important role in clinical practice in determining the progression of illness. In this study, a contactless modality for detecting the breathing signal is assessed, using a Laser Doppler Vibrometer (LDV). The test was performed on ten healthy volunteers and one simulator. An automatic algorithm was designed to assess the performance of the contactless modality. The volunteers were asked to simulate the conditions of apnea, tachypnea and bradypnea, while the simulator was programmed with different respiratory rates in order to assess the functionality of the algorithm. The acquired signals were initially analyzed using manual setting of parameters and then using a standardised algorithm for every individual, and the results were compared to evaluate the functionality. A user-friendly application was designed that allows the user to set the ranges of high and low respiration rate along with the percentile value. The application displays the pre-acquired breathing signal in a real-time scenario, along with the breathing tachogram and the mean breathing rate. The difference between instantaneous respiration rates was found to be ±12.5% (mean value) for the signals acquired from humans, while for the signals acquired from the phantom simulator it was ±1.6%.
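A hedged sketch of the kind of processing described (not the authors' algorithm): breaths are located by peak detection on the LDV-derived respiratory signal, the breath-to-breath intervals form the tachogram, and the instantaneous and mean respiration rates follow from them; the minimum breath spacing is an assumed parameter.

```python
# Hedged sketch: respiration tachogram and instantaneous rate from a respiratory
# displacement/velocity signal sampled at fs Hz.
import numpy as np
from scipy.signal import find_peaks

def respiration_rate(signal, fs, min_breath_s=1.5):
    """Returns instantaneous respiration rates [breaths/min] and their mean."""
    peaks, _ = find_peaks(signal, distance=int(min_breath_s * fs))  # one peak per breath
    breath_intervals_s = np.diff(peaks) / fs                         # breathing tachogram
    inst_rate_bpm = 60.0 / breath_intervals_s                        # instantaneous rate
    return inst_rate_bpm, inst_rate_bpm.mean()
```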
{"title":"Contactless Continuous Monitoring Of Respiration","authors":"L. Scalise, M. Ali, L. Antognoli","doi":"10.1109/MeMeA52024.2021.9478693","DOIUrl":"https://doi.org/10.1109/MeMeA52024.2021.9478693","url":null,"abstract":"Breathing is an important aspect of life. Monitoring of breathing signal plays an important role in clinical practice in order to determine the progression of illness. In this study the contactless modality to detect the breathing signal is assessed. For this purpose, the Laser Doppler Vibrometer (LDV) is used to detect the breathing signal. The test was performed on ten healthy volunteers and one simulator. An automatic algorithm is designed that can determine the efficiency of the contactless modality. The individuals were asked to simulate the conditions of apnea, tachypnea and bradypnea. The simulator was programmed with different respiratory rates in order to assess the functionality of the algorithm. The acquired signals were initially analyzed using manual setting of parameters and then using a standardised algorithm for every individual. The results were compared to determine the functionality. A user-friendly application was designed that allows user to set the ranges of high and low respiration rate along with the percentile value. The applications displays the pre-acquired breathing signal in real time scenarios along with the breathing tachograph and mean breathing rate. The difference between instantaneous respiration rates was found to be ±12.5% (mean value) in the case of signals acquired from human while in case of signal acquired from phantom simulator the same quantity was found to be ±1.6%.","PeriodicalId":429222,"journal":{"name":"2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA)","volume":"T156 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125657147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of different similarity measures in hierarchical clustering
Pub Date: 2021-06-23 | DOI: 10.1109/MeMeA52024.2021.9478746
M. Vagni, N. Giordano, G. Balestra, S. Rosati
The management of datasets containing heterogeneous types of data is a crucial point in the context of precision medicine, where genetic, environmental, and lifestyle information of each individual has to be analyzed simultaneously. Clustering is a powerful method, used in data mining, for extracting new useful knowledge from unlabeled datasets. Clustering methods are essentially distance-based, since they measure the similarity (or the distance) between two elements or between one element and a cluster centroid. However, the selection of the distance metric is not a trivial task: it can influence the clustering results and, thus, the extracted information. In this study we analyze the impact of four similarity measures (Manhattan or L1 distance, Euclidean or L2 distance, Chebyshev or L∞ distance, and Gower distance) on the clustering results obtained for datasets containing different types of variables. We applied hierarchical clustering combined with an automatic cut-point selection method to six datasets publicly available in the UCI Repository. Four different clusterings were obtained for every dataset (one for each distance) and were analyzed in terms of number of clusters, number of elements in each cluster, and cluster centroids. Our results showed that changing the distance metric produces substantial modifications in the obtained clusters. This behavior is particularly evident for datasets containing heterogeneous variables. Thus, the choice of the distance measure should not be made a priori but evaluated according to the set of data to be analyzed and the task to be accomplished.
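To make the comparison concrete, the sketch below clusters the same data with three of the four metrics using SciPy (Gower distance is not built into SciPy, and the paper's automatic cut-point selection is replaced here by a fixed number of clusters); the synthetic data and the average-linkage choice are assumptions for illustration only.

```python
# Hedged sketch: hierarchical clustering of the same data under different distance metrics.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                      # stand-in for a UCI dataset

for name, metric in [("Manhattan (L1)", "cityblock"),
                     ("Euclidean (L2)", "euclidean"),
                     ("Chebyshev (Linf)", "chebyshev")]:
    D = pdist(X, metric=metric)                    # condensed pairwise distance matrix
    Z = linkage(D, method="average")               # agglomerative hierarchical clustering
    labels = fcluster(Z, t=3, criterion="maxclust")  # fixed cut at 3 clusters
    sizes = np.bincount(labels)[1:]
    print(f"{name}: cluster sizes = {sizes}")
```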
{"title":"Comparison of different similarity measures in hierarchical clustering","authors":"M. Vagni, N. Giordano, G. Balestra, S. Rosati","doi":"10.1109/MeMeA52024.2021.9478746","DOIUrl":"https://doi.org/10.1109/MeMeA52024.2021.9478746","url":null,"abstract":"The management of datasets containing heterogeneous types of data is a crucial point in the context of precision medicine, where genetic, environmental, and life-style information of each individual has to be analyzed simultaneously. Clustering represents a powerful method, used in data mining, for extracting new useful knowledge from unlabeled datasets. Clustering methods are essentially distance-based, since they measure the similarity (or the distance) between two elements or one element and the cluster centroid. However, the selection of the distance metric is not a trivial task: it could influence the clustering results and, thus, the extracted information. In this study we analyze the impact of four similarity measures (Manhattan or L1 distance, Euclidean or L2 distance, Chebyshev or L∞ distance and Gower distance) on the clustering results obtained for datasets containing different types of variables. We applied hierarchical clustering combined with an automatic cut point selection method to six datasets publicly available on the UCI Repository. Four different clusterizations were obtained for every dataset (one for each distance) and were analyzed in terms of number of clusters, number of elements in each cluster, and cluster centroids. Our results showed that changing the distance metric produces substantial modifications in the obtained clusters. This behavior is particularly evident for datasets containing heterogeneous variables. Thus, the choice of the distance measure should not be done a-priori but evaluated according to the set of data to be analyzed and the task to be accomplished.","PeriodicalId":429222,"journal":{"name":"2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115386241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Compressive Sensing of Biomedical Signals Using A Permuted Kronecker-based Sparse Measurement Matrix
Pub Date: 2021-06-23 | DOI: 10.1109/MeMeA52024.2021.9478680
P. Firoozi, S. Rajan, I. Lambadaris
Compressive sensing (CS) is an innovative approach to simultaneously measure and compress signals, such as biomedical signals, that are sparse or compressible. A major effort in CS is to design a measurement matrix that can be used to encode and compress such signals. The structure of the measurement matrix has a direct impact on the computational and storage costs as well as on the quality of the recovered signal. Sparse measurement matrices (i.e., with few non-zero elements) may drastically reduce these costs. We propose a permuted Kronecker-based sparse measurement matrix for sensing and data recovery in CS applications. In our study, we use three classes of sub-matrices (normalized Gaussian, Bernoulli, and BCH-based matrices) to create the proposed measurement matrix. Using ECG signals from the MIT-BIH Arrhythmia database, we show that the reconstructed signal quality is comparable to that achieved using well-known CS methods. Our methodology results in an overall reduction in storage and computations, both during the sensing and the recovery process. This approach can be generalized to other classes of eligible measurement matrices in CS.
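The following is a hedged illustration of the general idea of a permuted Kronecker-structured measurement matrix (building a large Φ from small sub-matrices and permuting its columns), not the authors' exact construction or sub-matrix choices; the block sizes, sparsity level, and permutation scheme are assumptions.

```python
# Hedged sketch: a Kronecker product of small sub-matrices with a random column
# permutation, used to take compressed measurements y = Phi @ x.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8)) / np.sqrt(4)                  # small normalized Gaussian block
mask = rng.random((8, 16)) < 0.25                             # keep ~25% of entries non-zero
B = np.sign(rng.standard_normal((8, 16))) * mask              # small sparse Bernoulli (+1/-1) block

Phi = np.kron(A, B)                                           # (32 x 128) Kronecker-structured matrix
perm = rng.permutation(Phi.shape[1])
Phi = Phi[:, perm]                                            # permute columns

x = np.zeros(128)                                             # sparse test signal (5 non-zeros)
x[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
y = Phi @ x                                                   # 32 compressed measurements
print(y.shape)
```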
{"title":"Efficient Compressive Sensing of Biomedical Signals Using A Permuted Kronecker-based Sparse Measurement Matrix","authors":"P. Firoozi, S. Rajan, I. Lambadaris","doi":"10.1109/MeMeA52024.2021.9478680","DOIUrl":"https://doi.org/10.1109/MeMeA52024.2021.9478680","url":null,"abstract":"Compressive sensing (CS) is an innovative approach to simultaneously measure and compress signals such as biomedical signals that are sparse or compressible. A major effort in CS is to design a measurement matrix that can be used to encode and compress such signals. The measurement matrix structure has a direct impact on the computational and storage costs as well as the recovered signal quality. Sparse measurement matrices (i.e. with few non-zero elements) may drastically reduce these costs. We propose a permuted Kronecker-based sparse measurement matrix for sensing and data recovery in CS applications. In our study, we use three classes of sub-matrices (normalized Gaussian, Bernoulli, and BCH-based matrices) to create the proposed measurement matrix. Using ECG signals from the MIT-BIH Arrhythmia database, we show that the reconstructed signal quality is comparable to the ones achieved using well known CS methods. Our methodology results in an overall reduction in storage and computations, both during the sensing and recovery process. This approach can be generalized to other classes of eligible measurement matrices in CS.","PeriodicalId":429222,"journal":{"name":"2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116697809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unobtrusively Detecting Apnea and Hypopnea Events via a Hydraulic Bed Sensor
Pub Date: 2021-06-23 | DOI: 10.1109/MeMeA52024.2021.9478677
D. Heise, Ruhan Yi, Laurel A. Despins
Disordered breathing during sleep impacts sleep quality and the perceived amount of rest obtained, while also serving as a potential indicator of other health conditions or risks. Apneas and hypopneas are leading indicators of disordered breathing, often quantified by an apnea-hypopnea index (AHI). Polysomnography is the gold standard for detecting apnea and hypopnea events (and thus calculating a subject’s AHI), but beyond the inconvenience of sleeping in an unfamiliar place with numerous instruments attached, polysomnography delivers only a snapshot in time and is not practical for long-term monitoring. In this work, we describe a method for detecting apnea and hypopnea events during sleep using a hydraulic bed sensor, which has proven valuable for other dimensions of long-term monitoring and early detection of illness. We compare our results to those produced by a polysomnography lab, including the calculation of respiratory disturbance indices. We successfully detect 73.6% of apneas with 77.2% precision, and our calculations of the apnea index (AI) and respiratory disturbance index (RDI) are precise enough to indicate the appropriate severity of sleep apnea-hypopnea syndrome (SAHS) for each of our subjects.
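A minimal sketch of how detections can be scored against annotated events and turned into an index, assuming events are represented by their onset times in seconds and matched within a fixed tolerance; the matching rule, tolerance, and the use of a single index are simplifications, not the paper's exact procedure.

```python
# Hedged sketch: recall, precision, and an apnea index (events per hour of sleep)
# from detected vs. annotated event onset times.
import numpy as np

def score_events(detected_s, annotated_s, sleep_hours, tol_s=15.0):
    """detected_s, annotated_s: event onset times [s]; sleep_hours: total sleep time [h]."""
    detected = np.asarray(detected_s)
    annotated = np.asarray(annotated_s)
    matched = sum(np.any(np.abs(detected - a) <= tol_s) for a in annotated)
    recall = matched / len(annotated)                         # fraction of true events detected
    false_pos = sum(np.all(np.abs(annotated - d) > tol_s) for d in detected)
    precision = (len(detected) - false_pos) / len(detected)   # fraction of detections that are true
    ai = len(detected) / sleep_hours                          # apnea index [events/hour]
    return recall, precision, ai
```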
{"title":"Unobtrusively Detecting Apnea and Hypopnea Events via a Hydraulic Bed Sensor","authors":"D. Heise, Ruhan Yi, Laurel A. Despins","doi":"10.1109/MeMeA52024.2021.9478677","DOIUrl":"https://doi.org/10.1109/MeMeA52024.2021.9478677","url":null,"abstract":"Disordered breathing during sleep impacts sleep quality and the perceived amount of rest obtained while also serving as a potential indicator of other health conditions or risks. Apneas and hypopneas are leading indicators of disordered breathing, often quantified by an apnea-hypopnea index (AHI). Polysomnography is the gold standard for detecting apnea and hypopnea events (and thus calculating a subject’s AHI), but despite the inconvenience of sleeping in a strange place with numerous instruments attached, polysomnography delivers only a snapshot in time and is not practical for long-term monitoring. In this work, we describe a method of detecting apnea and hypopnea events during sleep using a hydraulic bed sensor, which has proven valuable for other dimensions of long-term monitoring and early detection of illness. We compare our results to those produced by a polysomnography lab, including calculation of respiratory disturbance indices. We successfully detect 73.6% of apneas with 77.2% precision, and our calculations for apnea index (AI) and respiratory disturbance index (RDI) are precise enough to indicate the appropriate severity of sleep apnea-hypopnea syndrome (SAHS) for each of our subjects.","PeriodicalId":429222,"journal":{"name":"2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121610288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classification-based screening of Parkinson’s disease patients through voice signal
Pub Date: 2021-06-23 | DOI: 10.1109/MeMeA52024.2021.9478683
Fulvio Cordella, A. Paffi, A. Pallotti
In this paper a classification algorithm for Parkinson’s Disease screening is proposed. The code processes specific voice signals recorded from healthy and ill subjects. In view of a future implementation and validation in a home telemonitoring system, the algorithm has been built to serve as a screening tool for the early referral of subjects at high risk of neurological disease to instrumental examinations. In fact, in several neurological disorders, such as Parkinson’s disease, motor impairments of the vocal apparatus arise earlier than postural and ambulatory symptoms. In a home telemonitoring system, whose hardware would consist of a voice recorder (possibly a simple smartphone) and a server hosting the web platform, data would be acquired and immediately stored on the platform for processing through machine learning algorithms and for review by specialists. For this purpose, a fully automatic process is needed. Therefore, in this work, audio pre-processing and feature computation are performed completely automatically, using Matlab. The final models have been trained in the Matlab environment using Weka’s libraries. The family of developed models is trained with different types of phonations, from simple vowels to complex sounds, for a wider and more efficient analysis of vocal apparatus motor impairments. Moreover, the dataset comprised 612 observations, significantly above the mean size of similar works that use simple phonations only. For a deeper analysis, different groups of parameters were tested; cepstral features were found to be optimal for classification and make up the largest part of the final algorithm. The developed models belong to the K-Nearest Neighbor family and are thus suitable for implementation in the web platform. Finally, the obtained models have shown high accuracy on the whole dataset, reaching values comparable with the literature but with greater stability (standard deviation below 1%). These results were confirmed in the final validation session, in which the models were exported and validated on 25% of the data, reaching a best performance with a true positive rate of 98% and a true negative rate of 87%.
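As a hedged Python sketch of a cepstral-features-plus-KNN pipeline (the paper uses Matlab with Weka libraries, and its exact feature set differs), the example below summarizes MFCCs per recording and evaluates a K-Nearest-Neighbor classifier on a 25% hold-out split; the file list, labels, and parameter values are hypothetical.

```python
# Hedged sketch: MFCC summary features per recording, KNN classifier, 25% hold-out validation.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)                      # load recording at native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # cepstral features over time
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # per-recording summary

def train_and_validate(wav_paths, labels, k=5):
    """wav_paths: list of audio files; labels: 0 = healthy, 1 = PD (hypothetical)."""
    X = np.vstack([mfcc_features(p) for p in wav_paths])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.25, stratify=labels, random_state=0)
    model = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    return model.score(X_te, y_te)                           # hold-out accuracy
```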
{"title":"Classification-based screening of Parkinson’s disease patients through voice signal","authors":"Fulvio Cordella, A. Paffi, A. Pallotti","doi":"10.1109/MeMeA52024.2021.9478683","DOIUrl":"https://doi.org/10.1109/MeMeA52024.2021.9478683","url":null,"abstract":"In this paper a classification algorithm for Parkinson’s Disease screening is proposed. Code executes the processing of specific voice signals recorded by healthy and ill subjects. In the direction of a future implementation and validation in a home telemonitoring system, the algorithm has been built with the objective to serve as a screening tool for the precocious directing of subjects with high risk of neurological diseases to instrumental exams. In fact, in several neurological disorders, such as Parkinson’s disease, motor impairments of vocal apparatus arise earlier than postural and ambulatory symptoms. In a home telemonitoring system, in which hardware would consist in a voice recorder (that could be a simple smartphone) and a server for the web platform, data would be acquired and instantly stored on a platform for their processing through machine learning algorithms and to be viewed by specialists. For this purpose, a fully automatic process is needed. Therefore, in this work, audio-preprocessing and features computation are completely performed automatically, using Matlab. Final models have been trained in Matlab environments from Weka’s libraries. The family of developed models are trained with different type of phonations, from simple vowels to complex sounds, for a wider and more efficient analysis of vocal apparatus motor impairments. Moreover, dataset was 612 observation large, that is significantly above the mean size of similar works using simple phonations only. For a deeper analysis, different groups of parameters have been tested and cepstral features have been found to be optimal for classification and made up the big part of final algorithm. Developed models are part of the K-Nearest Neighbor family, thus, available for implementation in web platform. Finally, obtained models have shown high accuracies on the whole dataset, reaching values comparable with the literature but with more stability (standard deviation less than 1%). These results have been confirmed in the last validation session in which models have been exported and validated with 25% of data, reaching a best performance with a true positive rate of 98% and a true negative rate of 87%.","PeriodicalId":429222,"journal":{"name":"2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121659456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Near-lossless Compression Algorithm for Multichannel EEG signals
Pub Date: 2021-06-23 | DOI: 10.1109/MeMeA52024.2021.9478756
G. Campobello, Angelica Quercia, G. Gugliandolo, Antonino Segreto, E. Tatti, M. Ghilardi, G. Crupi, A. Quartarone, N. Donato
In many biomedical measurement procedures, it is important to record a huge amount of data to monitor the state of health of a subject. In such a context, electroencephalographic (EEG) data are among the most demanding in terms of size and signal behavior. In this paper, we propose a near-lossless compression algorithm for EEG signals able to achieve a compression ratio on the order of 10 with a root-mean-square distortion of less than 0.01%. The proposed algorithm exploits the fact that Principal Component Analysis is usually performed on EEG signals for denoising and removing unwanted artifacts. In this particular context, the algorithm can be considered a good tool to preserve the information content of the signal while providing an efficient compression ratio, reducing the amount of memory necessary to record the data.
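A hedged sketch of the PCA idea behind such a scheme (not the proposed algorithm itself): the multichannel EEG is projected onto its leading principal components and reconstructed, and a nominal compression ratio and percent root-mean-square distortion are reported; the quantization and entropy-coding stages that a real near-lossless codec needs are omitted.

```python
# Hedged sketch: PCA-based dimensionality reduction of multichannel EEG with a
# nominal compression ratio and percent RMS distortion (PRD) of the reconstruction.
import numpy as np

def pca_compress(eeg, n_keep):
    """eeg: (n_channels, n_samples) array; n_keep: number of principal components retained."""
    mean = eeg.mean(axis=1, keepdims=True)
    X = eeg - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)   # spatial principal components in U
    scores = U[:, :n_keep].T @ X                        # reduced representation to be stored
    recon = U[:, :n_keep] @ scores + mean               # reconstructed channels
    cr = eeg.shape[0] / n_keep                          # nominal compression ratio (channel-wise)
    prd = 100 * np.linalg.norm(eeg - recon) / np.linalg.norm(eeg - mean)  # distortion [%]
    return recon, cr, prd
```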
{"title":"An Efficient Near-lossless Compression Algorithm for Multichannel EEG signals","authors":"G. Campobello, Angelica Quercia, G. Gugliandolo, Antonino Segreto, E. Tatti, M. Ghilardi, G. Crupi, A. Quartarone, N. Donato","doi":"10.1109/MeMeA52024.2021.9478756","DOIUrl":"https://doi.org/10.1109/MeMeA52024.2021.9478756","url":null,"abstract":"In many biomedical measurement procedures, it is important to record a huge amount of data, to monitor the state of health of a subject. In such a context, electroencephalograph (EEG) data are one of the most demanding in terms of size and signal behavior. In this paper, we propose a near-lossless compression algorithm for EEG signals able to achieve a compression ratio in the order of 10 with a root-mean-square distortion less than 0.01%. The proposed algorithm exploits the fact that Principal Component Analysis is usually performed on EEG signals for denoising and removing unwanted artifacts. In this particular context, we can consider this algorithm as a good tool to ensure the best information of the signal beside an efficient compression ratio, reducing the amount of memory necessary to record data.","PeriodicalId":429222,"journal":{"name":"2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125188510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}