Pub Date: 2016-08-01 | DOI: 10.1109/STSIVA.2016.7743312
Simulation and analysis of compressed sensing technique as sampling and data compression and reconstruction of signals using convex programming
A. N. Cadavid, Mario Ramos
Information management has been handled primarily under the Nyquist sampling theory, but it is important to introduce new theories that address the deficiencies of what we know as the classical sampling theory. These deficiencies create difficulties in data acquisition, which becomes a problem when large volumes of information are handled, in addition to higher storage and processing costs. This article presents the results obtained from simulating the compressed sensing technique applied to two types of signals. The aim of this paper was to simulate a communication system involving data recovery with the compressed sensing technique, analyzing the reduction of sampling rates and measuring the efficiency of the process and the behavior of the technique. The signals are recovered in the time domain using convex programming, specifically l1-norm minimization. We used the L1-Magic toolbox, a set of Matlab® functions for solving optimization problems, in this case through the l1eq_pd function. In summary, the results confirm the efficiency of the compressed sensing technique, establish minimum average sampling rates for the constructed signals, and show that the technique recovers smooth signals better than non-differentiable ones. Additionally, the recovery results for an audio signal, obtained by varying the sampling rate and checking the audibility of the reconstruction, are presented. This allowed testing the technique in a real scenario, revealing a good opportunity for transmitting audio signals more efficiently.
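As a concrete illustration (not the paper's code, which relies on L1-Magic's l1eq_pd in Matlab), the equality-constrained recovery problem min ||x||_1 subject to Ax = y can be recast as a linear program; the sketch below solves it with SciPy on a synthetic sparse signal, with all sizes and the random sensing matrix chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true                                # compressed measurements

# Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v);
# min sum(u + v) s.t. [A, -A][u; v] = y is a standard linear program.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_rec = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_rec - x_true))
```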
{"title":"Simulation and analysis of compressed sensing technique as sampling and data compression and reconstruction of signals using convex programming","authors":"A. N. Cadavid, Mario Ramos","doi":"10.1109/STSIVA.2016.7743312","DOIUrl":"https://doi.org/10.1109/STSIVA.2016.7743312","url":null,"abstract":"The information management has been treated primarily under the Nyquist sampling theory, but it is important to introduce new theories that replace deficiencies of what we know as the classical theory of sampling. These deficiencies create difficulties in data acquisition; this is a problem when large volumes of information are handled, in addition to the higher costs in storage and processing. This article presents the results obtained from the compressed sensing simulation technique applied to two types of signals. The aim of this paper was to simulate a communication system involving the data recovery applying the compressed sensing technique, analyzing sampling rates reduction, measuring the efficiency of the process and the behavior of the technique. The recovery of the signal is made using convex programming and using l1 norm minimization for recover the signals in the time domain. We used the L1Magic toolbox, which is a set of Matlab® functions used to solve optimization problems in this case with the l1eqpd function. As a summary of the obtained results, we checked the efficiency of the compressed sensing technique, minimum average rates for sampling the constructed signals, and the best performance of the technique to recover soft signals compared to non-differentiable signals. Additionally, the recovery results of an audio signal with the compressed sensing technique, by varying the sampling rate and checking the audibility of the signal, are presented. This allowed the testing of this technique in a real scenario, finding a good opportunity for the transmission of audio signals in a more efficient way.","PeriodicalId":373420,"journal":{"name":"2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125859342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-08-01 | DOI: 10.1109/STSIVA.2016.7743362
Planar approximation of three-dimensional data for refinement of marker-based tracking algorithm
Walter Serna, G. Daza, Natalia Izquierdo
Reference markers are still required to achieve the highest accuracy in tracking applications. Geometrical patterns allow objects to be recognized precisely with image processing techniques. However, researchers keep looking for new techniques to minimize the error. A post-processing stage was implemented to refine the 3D coordinates computed for flat markers. In our application the flat markers are recognized with digital cameras through corner detection. The points are then paired and the corners reconstructed, forming a set of connected 3D points. Inevitably, the reconstruction algorithm introduces a spatial error that dislocates the points from the original plane of the flat marker. The overall objective of this paper is to generate the best-fitting plane for the 3D points, which was confirmed to produce a better approximation of the original flat marker. At this stage the measured points can be projected onto the best-fitting plane and treated as fixed points. PCA was used to find the best-fitting plane. Finally, the influence of the method was evaluated on measurements from an image-guided surgery system under development, obtaining an error reduction of 17%.
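A minimal sketch of the PCA plane fit and projection step described above, assuming the reconstructed corners arrive as an (n, 3) NumPy array; the normal of the best-fitting plane is the right singular vector associated with the smallest singular value of the centered point cloud.

```python
import numpy as np

def fit_plane_pca(points):
    """Best-fitting plane via PCA: center the cloud, take the SVD, and
    read the plane normal off the least-significant singular vector."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def project_to_plane(points, centroid, normal):
    """Snap each point onto the fitted plane along the plane normal."""
    d = (points - centroid) @ normal          # signed distance to plane
    return points - np.outer(d, normal)

# Illustrative usage: a roughly planar cloud with reconstruction noise.
rng = np.random.default_rng(1)
pts = rng.normal(size=(20, 3))
pts[:, 2] = 0.2 * pts[:, 0] + 0.1
pts += 0.01 * rng.normal(size=pts.shape)
c, n = fit_plane_pca(pts)
flat = project_to_plane(pts, c, n)
```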
{"title":"Planar approximation of three-dimensional data for refinement of marker-based tracking algorithm","authors":"Walter Serna, G. Daza, Natalia Izquierdo","doi":"10.1109/STSIVA.2016.7743362","DOIUrl":"https://doi.org/10.1109/STSIVA.2016.7743362","url":null,"abstract":"Reference markers still are required to achieve the highest accuracy in tracking applications. Geometrical patterns allow precisely recognizing objects in image processing techniques. However, researchers are looking for new techniques for the minimization of the error. Post-processing stage was implemented for the refinement of the 3D coordinates computed for flat markers. In our application the flat markers are recognized with digital cameras through corner detection. Later, the points are paired and the corners are reconstructed forming a set of connected 3D points. Inevitably, the reconstruction algorithm introduces a spatial error dislocating the points from the original plane of the flat mark. The overall objective in this paper is to generate the best fitting plane for the 3D points which it was confirmed it produces a better approximation to the original flat marker. At this stage the measured points can be projected to the best fitting plane to be treated like fixed points. PCA was used for finding the best fitting plane. Finally, the influence of the method was evaluated in measurements of an underdevelopment image guided surgery system obtaining an error reduction of 17%.","PeriodicalId":373420,"journal":{"name":"2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132189013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-08-01 | DOI: 10.1109/STSIVA.2016.7743302
Compressive computed tomography image reconstruction by using the analysis of the internal structure of an object
A. Jerez, M. Márquez, H. Arguello
Computed Tomography (CT) is a non-invasive and non-intrusive technique that allows classification and detection of the internal structure of an object. However, the high doses of radiation generated by CT scanners are excessive and may represent a risk to the patient's health or even damage the object of study. Reducing this damage requires decreasing the radiation dose, i.e., lowering the number of view angles at which projections are taken. However, the reduction of measurements leads to an ill-posed inverse problem. Coded aperture X-ray tomography is an approach that overcomes these limitations. It is based on Compressive Sensing (CS) theory, which emerged as a new sampling technique requiring fewer projections than the Nyquist criterion specifies. However, the CS method in CT does not exploit the internal structure of the object. In this paper, we propose a CS strategy for CT using adaptive coded apertures to obtain better reconstructions of CT images. The coded apertures are adapted using an initial reconstruction of the object of interest obtained from a previous shot. The results indicate that by using just 18% of the samples, it is possible to obtain up to 2 dB of improvement in terms of PSNR (peak signal-to-noise ratio) in the reconstructed images compared to the traditional method.
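For reference, the PSNR figure quoted above is computed in the standard way; a small sketch follows, with the peak value left as a parameter since the image dynamic range is not stated in the abstract.

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    its reconstruction; `peak` is the maximum possible pixel value."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```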
{"title":"Compressive computed tomography image reconstruction by using the analysis of the internal structure of an object","authors":"A. Jerez, M. Márquez, H. Arguello","doi":"10.1109/STSIVA.2016.7743302","DOIUrl":"https://doi.org/10.1109/STSIVA.2016.7743302","url":null,"abstract":"Computed Tomography (CT) is a non-invasive and non-intrusive technique that allows classification and detection of the internal structure of an object. However, the high doses of radiation generated by CT scanners are excessive, and it may represent a risk to the patient's health or even damage to the object of study. To reduce this damage is necessary to decrease the doses of radiation, i.e., lowering the number of view angles at which projections are taken. However, the reduction of measurements leads to an inverse ill-posed inverse problem. Coded aperture X-ray tomography is an approach that allows to overcome these limitations. This approach is based on the Compressive Sensing (CS) theory, which emerged as a new sampling technique requiring fewer projections than those specified by the Nyquist criterion. However, CS method in CT does not exploit the internal structure of the object. In this paper, we propose a strategy of CS in CT using adaptive coded aperture to obtain better reconstruction of CT images. Coded apertures are adapted using an initial reconstruction of the object of interest that is obtained from a previous shot. The results indicate that by using just 18% of the samples, it is possible to obtain up to 2 dB improvement in terms of PSNR (Peak-signal-to-noise-ratio) in reconstructed images compared to the traditional method.","PeriodicalId":373420,"journal":{"name":"2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133157409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-08-01 | DOI: 10.1109/STSIVA.2016.7743309
Comparison analysis between rigrsure, sqtwolog, heursure and minimaxi techniques using hard and soft thresholding methods
Daniel F. Valencia, David Orejuela, Jeferson Salazar, Jose Valencia
Nowadays, the wavelet transform (WT) is widely used in the realm of signal denoising and has proven highly effective in terms of time and quality among denoising methods. Although wavelet thresholding methods have achieved considerable denoising success, they do not disclose an optimal configuration. In this paper, we propose a comparative performance analysis of several thresholding methods using the WT; biological signals are denoised to obtain performance metrics. The efficiency of particular threshold selection rules (rigrsure, sqtwolog, heursure, and minimaxi) using hard and soft thresholding is compared in the presence of low Gaussian noise, and the effect of the number of wavelet decomposition levels is also analyzed. The Haar wavelet is used for the wavelet decomposition. Experimental results show that increasing the decomposition level improves denoising in terms of root mean square error (RMSE) and correlation coefficient; however, from the fifth decomposition level onward, the RMSE and correlation coefficient slowly tend to worsen. The rigrsure rule with soft thresholding improved the RMSE from 1.77 to 1.03 and the correlation coefficient from 99.32% to 99.71%, while the other techniques, with both soft and hard thresholding, did not improve beyond an RMSE of 1.1 and a correlation coefficient of 99.67%.
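A minimal sketch of this kind of experiment, assuming PyWavelets is available: a Haar decomposition, only the universal ('sqtwolog') threshold with a median-based noise estimate (the SURE-based rules are not reimplemented here), and soft vs. hard thresholding scored by RMSE across decomposition levels. The signal and noise level are synthetic stand-ins for the paper's biological signals.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.1 * rng.standard_normal(t.size)     # low Gaussian noise

def denoise(signal, level, mode):
    coeffs = pywt.wavedec(signal, "haar", level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise std estimate
    thr = sigma * np.sqrt(2 * np.log(signal.size))    # sqtwolog threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
    return pywt.waverec(coeffs, "haar")

for level in range(1, 7):
    for mode in ("soft", "hard"):
        rec = denoise(noisy, level, mode)[: t.size]
        rmse = np.sqrt(np.mean((rec - clean) ** 2))
        print(level, mode, round(rmse, 4))
```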
{"title":"Comparison analysis between rigrsure, sqtwolog, heursure and minimaxi techniques using hard and soft thresholding methods","authors":"Daniel F. Valencia, David Orejuela, Jeferson Salazar, Jose Valencia","doi":"10.1109/STSIVA.2016.7743309","DOIUrl":"https://doi.org/10.1109/STSIVA.2016.7743309","url":null,"abstract":"Nowadays, wavelet transform (WT) is widely used in the realm of signal denoising, has proven a high effectiveness in terms of time and quality concerning denoising methods. Despite there are several achievements denoising through wavelet thresholding methods, these do not disclose an optimal configuration. In this paper, we proposed a comparative performance analysis of several thresholding methods using WT; biological signals are denoised to obtain performance metrics. The efficiency of particular thresholding methods: rigrsure, sqtwolog, heursure and minimaxi using hard and soft thresholding are compared in the presence of low Gaussian noise also the effect of wavelet decomposition levels is analyzed. For wavelet decomposition, Haar wavelet is used. Experimental results show that by increasing decomposition levels likewise there was a denoising improvement in terms of root mean square error (RMSE) and correlation coefficient, however, from the fifth decomposition level RMSE and correlation coefficient slowly tends to get worse, also the threshold method rigrsure on soft thresholding improved RMSE of 1.77 to 1.03 and correlation coefficient of 99.32% to 99.71% while others techniques on both, soft and hard thresholding did not improve more than 1.1 in RMSE and 99.67% in correlation coefficient.","PeriodicalId":373420,"journal":{"name":"2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133038426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-08-01 | DOI: 10.1109/STSIVA.2016.7743305
Approach to the numerical solution of Lorenz system on SoC FPGA
C. A. Montoya, Ruben D. Sanchez, Luis F. Castaño
This paper presents the implementation of an FPGA-based heterogeneous system for approximating the numerical solution of the Lorenz system by Euler's method. Unlike similar works, high-level design tools such as System Generator or DSP Builder are not used. The system is implemented on a ZedBoard Zynq Evaluation and Development Kit using the Vivado Design Suite. It takes advantage of the SoC FPGA architecture, where a custom IP described in VHDL for the programmable logic section interacts with the Zynq-7000 processing system through an AXI4-Lite interface. Operations are performed in a 32-bit floating-point format with rounding to the nearest value. Performance and numerical results for the sequential execution of the algorithm on the Zynq-7000 ARM Cortex-A9 core and for the concurrent execution on the heterogeneous system are presented and compared. The developed system is validated by comparing the numerical solution obtained with the SoC FPGA against a MATLAB simulation for different initial conditions and system parameters.
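The integration scheme itself is compact; a sketch of the forward-Euler iteration of the Lorenz equations follows, in double precision rather than the paper's 32-bit floating point, with the classic parameter values and step size chosen for illustration.

```python
import numpy as np

def lorenz_euler(x0, y0, z0, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                 dt=0.001, steps=50000):
    """Forward-Euler integration of the Lorenz system:
    dx/dt = sigma (y - x), dy/dt = x (rho - z) - y, dz/dt = x y - beta z."""
    traj = np.empty((steps, 3))
    x, y, z = x0, y0, z0
    for i in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj[i] = (x, y, z)
    return traj

traj = lorenz_euler(1.0, 1.0, 1.0)   # one trajectory from a test initial condition
```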
{"title":"Approach to the numerical solution of Lorenz system on SoC FPGA","authors":"C. A. Montoya, Ruben D. Sanchez, Luis F. Castaño","doi":"10.1109/STSIVA.2016.7743305","DOIUrl":"https://doi.org/10.1109/STSIVA.2016.7743305","url":null,"abstract":"In this paper is presented the implementation of a FPGA based heterogeneous system for the approach to the numerical solution of the Lorenz system by Euler's method. Unlike similar works, high level design tools as System Generator or DSP Builder are not used. The system is implemented on a ZedBoard Zynq Evaluation and Development Kit over Vivado Design Suite. It takes advantage of the SoC FPGA architecture, where a custom IP described in VHDL for the programmable logic section interacts with the ZYNQ-7processing system through AXI4-Lite interface. Operations are performed using a 32-bit floating-point format with rounding to the nearest value. Performance and numerical results analysis for the sequential algorithm execution over the Zynq-7000 ARM Cortex A9 core and the concurrent execution using the heterogeneous system are presented and compared. The validation of the developed system is made through the comparison between the approach to the numerical solution obtained with the SoC FPGA and MAT-LAB simulation is performed for different initial conditions and system parameters.","PeriodicalId":373420,"journal":{"name":"2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122463079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-08-01 | DOI: 10.1109/STSIVA.2016.7743297
Adaptive thresholding by region of interest applied to quality control of gas electron multiplier foils
C. A. Rodríguez, R. Gutiérrez, A. Jaramillo
This paper presents a variation of the adaptive thresholding technique as a robust method for segmenting holes in the quality assurance of Gas Electron Multiplier (GEM) foils by analyzing high-resolution images. The proposed thresholding is applied to the region of interest around the holes and offers an effective solution to lighting variations in the GEM-foil image. Together with some mathematical morphology operations, this technique allows the contours of the holes to be extracted for accurate quantification. The inner radius and roundness of the holes are proposed and used as measures to characterize defects in GEM foils. The techniques and methods described in this work were developed in the Java programming language and implemented in the Software for Foils Analyzer (SOFA).
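A hypothetical sketch of the per-ROI pipeline, written with OpenCV in Python (SOFA itself is in Java; the ROI boxes, block size, and morphology kernel below are illustrative assumptions): each region of interest around a hole is thresholded separately so local lighting variations do not bias the segmentation, cleaned with morphological opening, and each contour is reduced to a radius and roundness measure.

```python
import cv2
import numpy as np

def segment_holes(gray, rois):
    """Per-ROI adaptive thresholding on an 8-bit grayscale foil image;
    `rois` is a list of (x, y, w, h) boxes around candidate holes."""
    mask = np.zeros_like(gray)
    for x, y, w, h in rois:
        patch = gray[y:y + h, x:x + w]
        binary = cv2.adaptiveThreshold(patch, 255,
                                       cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY_INV, 31, 5)
        # Morphological opening removes speckle before contour extraction.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        mask[y:y + h, x:x + w] = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for c in contours:
        area = cv2.contourArea(c)
        (cx, cy), r = cv2.minEnclosingCircle(c)
        roundness = area / (np.pi * r * r) if r > 0 else 0.0
        results.append((cx, cy, r, roundness))   # radius and roundness per hole
    return results
```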
{"title":"Adaptive thresholding by region of interest applied to quality control of gas electron multiplier foils","authors":"C. A. Rodríguez, R. Gutiérrez, A. Jaramillo","doi":"10.1109/STSIVA.2016.7743297","DOIUrl":"https://doi.org/10.1109/STSIVA.2016.7743297","url":null,"abstract":"This paper presents a variation of the technique of adaptive thresholding, as a robust method for segmentation of holes in quality assurance of the Gas Electron Multiplier Foils by analyzing high-resolution images. The proposed thresholding is applied to the region of interest around the holes and offers an effective solution to lighting variations on the GEM-foil image. This technique together with some operations of mathematical morphology allows extracting contours of the holes for accurate quantification. The inner radius and roundness of the holes are used and proposed as a measure to characterize defects in GEM-foils. Techniques and methods described in this work were developed in JAVA programming language and implemented in the Software for Foils Analyzer (SOFA).","PeriodicalId":373420,"journal":{"name":"2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128688921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-08-01 | DOI: 10.1109/STSIVA.2016.7743341
Design and control of an exoskeleton in rehabilitation tasks for lower limb
Cristian C. Velandia, Hugo Celedon, D. Tibaduiza, C. Torres-Pinzón, Jaime Vitola
According to statistical data provided by the National Administrative Department of Statistics (DANE), during 2005, 29.32% of Colombia's disabled population had problems with their legs related to moving or walking. In order to help these people, medical science and engineering have been working together to provide solutions that can improve the quality of life of injured people. In this sense, it is possible to design and implement electromechanical devices that assist and facilitate movements in rehabilitation processes. Such devices can keep detailed records of the movements performed, patient history, speed, force, and muscle interaction, among others; they also allow physiotherapists to take complete control of rehabilitation processes, improving them and offering many more possible treatments through analysis of the collected data. Rehabilitation-focused exoskeletons work as a guide and support in physical therapy: they make sure that the patient performs all exercises correctly, and when the patient cannot perform a movement alone, the device helps to finish the exercise, improving the effectiveness of the therapy and reducing the time needed to recover lost faculties. As a contribution to rehabilitation processes, this paper proposes the design of an exoskeleton based on biomechanical models that capture as many characteristics of the human body as possible, reducing the differences between the mathematical model and the real behavior of the body segments. The proposed model is used to constrain and design a controller in a master-slave configuration that assists and ensures soft movements, improving rehabilitation processes involving flexion and extension movements in the sagittal plane of the lower limbs.
{"title":"Design and control of an exoskeleton in rehabilitation tasks for lower limb","authors":"Cristian C. Velandia, Hugo Celedon, D. Tibaduiza, C. Torres-Pinzón, Jaime Vitola","doi":"10.1109/STSIVA.2016.7743341","DOIUrl":"https://doi.org/10.1109/STSIVA.2016.7743341","url":null,"abstract":"According to statistical data provided by the National Administrative Department of Statistics (DANE), during 2005, 29.32% of Colombias disabled population had problems with their legs related to moving or walking. In order to contribute and help these people, medical science and engineering have been working together to provide solutions that can improve the quality of life of injured people. In this sense, it is possible to design and implement electromechanical devices to assist and facilitate movements in rehabilitation processes. Such devices can keep detailed records of movements performed, patient history, speed, force, muscle interaction, among others; also allows physiotherapists to have complete control of rehabilitation processes, improving it and offering a lot more possible treatments by analyzing the summarized collected data. Rehabilitation focused exoskeletons work as guide and support in physical therapy, those make sure that the treated patient performs correctly all its exercises, in case that the patient cannot perform the movement by himself such devices helps the patient to finish the exercise, improving the effectiveness of the therapy and reducing the time it takes to recover lost faculties. As a contribution to rehabilitation processes, this paper proposes the design of an exoskeleton by considering biome-chanical models involving the most possible characteristics of the human body to reduce the differences between the mathematical model and the real behavior of the body segments; That proposed model is used to constraint and design a controller in master-slave configuration to assist and ensure soft movements improving rehabilitation processes involving flection and extension movements in the sagittal plane of the lower limbs.","PeriodicalId":373420,"journal":{"name":"2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121561843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-08-01 | DOI: 10.1109/STSIVA.2016.7743298
Protein fold families prediction based on graph representations and machine learning methods
H. Areiza-Laverde, L. R. Mercado-Diaz, A. E. Castro-Ospina, J. A. Jaramillo-Garzón
Prediction of protein fold families remains a challenge in molecular biology and bioinformatics, mainly because proteins form a broad range of complex three-dimensional configurations and because the number of proteins registered in datasets has increased dramatically in recent years. Computational alternatives must therefore be designed to substitute for experimental methods. However, implementations of computational methods struggle to extract features that capture both the physical-chemical attributes and the spatial structure of a protein, which is needed to improve prediction accuracy. In this paper, we propose using graph theory to represent proteins: the positions of the amino acids become graph nodes, and graph edges connect amino acids that are closer to each other than a given threshold. In this way we obtain highly descriptive features, related to the spatial and physical-chemical properties of the proteins, that describe their three-dimensional structure and allow protein fold families to be predicted with good accuracy.
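A minimal sketch of the graph construction, assuming residue coordinates are available as an (n, 3) array and using an illustrative 8 Å contact threshold (NetworkX supplies the graph statistics that could then feed a classifier):

```python
import numpy as np
import networkx as nx

def contact_graph(coords, threshold=8.0):
    """Nodes are residues; edges connect residues whose coordinates lie
    closer than `threshold` (an illustrative contact distance)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(coords)))
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    for i, j in zip(*np.where(np.triu(d < threshold, k=1))):
        g.add_edge(int(i), int(j))
    return g

# Illustrative usage with random stand-in coordinates.
coords = np.random.default_rng(0).normal(scale=10, size=(50, 3))
g = contact_graph(coords)
features = [g.number_of_edges(),
            np.mean([deg for _, deg in g.degree()]),
            nx.average_clustering(g)]           # simple graph descriptors
```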
{"title":"Protein fold families prediction based on graph representations and machine learning methods","authors":"H. Areiza-Laverde, L. R. Mercado-Diaz, A. E. Castro-Ospina, J. A. Jaramillo-Garzón","doi":"10.1109/STSIVA.2016.7743298","DOIUrl":"https://doi.org/10.1109/STSIVA.2016.7743298","url":null,"abstract":"Prediction of protein fold families remains an existing challenge in molecular biology and bioinformatics, mainly because proteins form a broad range of complex three-dimensional configurations and because the number of proteins registered in datasets has dramatically increased in the recent years. Computational alternatives must then be designed for substituting experimental methods. However, implementations of computational methods have found a problem to extract features that involve the physical-chemical attributes and spatial features of the protein to improve the accuracy in predictions. In this paper, we propose the use of graph theory for representing position of amino acids of the protein as graph nodes, and graph edges connect amino acids that are close to each other under a given threshold. In this way we can get very descriptive features related to spatial and physical-chemical properties of the proteins to describe their three-dimensional structure and so predict the protein fold families with a good accuracy.","PeriodicalId":373420,"journal":{"name":"2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132797538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-08-01 | DOI: 10.1109/STSIVA.2016.7743333
Fractality in precipitation of the municipality of Floridablanca, Santander
D. Prada, David Serrano
Rainfall records are studied through a deterministic geometric representation known as the Fractal-Multifractal Model. The fractal dimension of simulated rain data and the fractal interpolation function are analyzed. For this purpose, methods including box counting and the Hurst coefficient are applied, which allow us to identify the type of noise present in the data. Given a time series recorded by pluviographs in Floridablanca, Colombia, the rainfall variability and the persistence of these data over time were analyzed.
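As an illustration of the box-counting step, a sketch that covers the graph of a (synthetic) rainfall-like series with boxes of decreasing size and estimates the fractal dimension from the slope of log N(eps) versus log(1/eps); the grid scales and the series are illustrative assumptions, not the paper's data.

```python
import numpy as np

def box_counting_dimension(series, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of the graph of a series by
    counting occupied boxes at several grid resolutions."""
    t = np.linspace(0.0, 1.0, series.size)
    span = series.max() - series.min()
    y = (series - series.min()) / (span if span > 0 else 1.0)
    counts = []
    for s in scales:
        eps = 1.0 / s
        boxes = set(zip((t / eps).astype(int), (y / eps).astype(int)))
        counts.append(len(boxes))
    # N(eps) ~ eps^(-D), so D is the slope of log N vs. log(1/eps).
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

rain = np.abs(np.random.default_rng(3).standard_normal(4096))
print(box_counting_dimension(rain))
```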
{"title":"Fractality in precipitation of the municipality of Floridablanca, Santander","authors":"D. Prada, David Serrano","doi":"10.1109/STSIVA.2016.7743333","DOIUrl":"https://doi.org/10.1109/STSIVA.2016.7743333","url":null,"abstract":"Through a deterministic geometric representation, known as the Fractal Multifractal Model, rainfall records are studied. the fractal dimension of simulated rain data and the fractal interpolation function is analyzed. For this, some methods including Box Counting and Hurst coefficient, which allow us to observe the type of noise present in the data used. Given a time series with data taken by pluviographs in Floridablanca, in Colombia rainfall variability and persistence of these data over time was analyzed.","PeriodicalId":373420,"journal":{"name":"2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121792203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-08-01 | DOI: 10.1109/STSIVA.2016.7743349
Word accuracy and dynamic time warping to assess intelligibility deficits in patients with Parkinson's disease
J. C. Vásquez-Correa, J. Orozco-Arroyave, E. Noth
Parkinson's disease patients develop several impairments related to the speech production process. These deficits include reduced phonation, articulation, prosody, and intelligibility capabilities. Related studies have analyzed the phonation, articulation, and prosody of patients with Parkinson's disease, while the intelligibility impairments have not been sufficiently evaluated. In this study we propose two novel features, based on word accuracy and the dynamic time warping algorithm, to assess the intelligibility deficits of the patients using an automatic speech recognition system. We evaluate the suitability of the features through the automatic classification of utterances from patients vs. healthy controls, and by automatically predicting the neurological state of the patients. According to the results, an accuracy of up to 92% is obtained, indicating that the proposed features are highly accurate for detecting Parkinson's disease from speech. Regarding the automatic monitoring of the neurological state, the proposed approach could be used as a complement to other features derived from speech or other bio-signals to monitor the neurological state of the patients.
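A minimal sketch of the two kinds of features named above, under the assumption that an ASR transcript and per-frame feature sequences (e.g., MFCCs) are available; the word-accuracy scorer here is deliberately naive (a full system would use Levenshtein alignment), and the feature sequences are random stand-ins.

```python
import numpy as np

def word_accuracy(reference, hypothesis):
    """Naive word accuracy: fraction of reference words recovered
    in the ASR hypothesis."""
    ref, hyp = reference.split(), set(hypothesis.split())
    return sum(w in hyp for w in ref) / len(ref)

def dtw_cost(a, b):
    """Classic O(nm) dynamic time warping with Euclidean local cost,
    normalized by the combined sequence length."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

rng = np.random.default_rng(0)
ref_feats = rng.standard_normal((40, 13))             # stand-in MFCC sequence
hyp_feats = ref_feats + 0.3 * rng.standard_normal((40, 13))
print(word_accuracy("the quick brown fox", "the brown fox"),
      dtw_cost(ref_feats, hyp_feats))
```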
{"title":"Word accuracy and dynamic time warping to assess intelligibility deficits in patients with Parkinsons disease","authors":"J. C. Vásquez-Correa, J. Orozco-Arroyave, E. Noth","doi":"10.1109/STSIVA.2016.7743349","DOIUrl":"https://doi.org/10.1109/STSIVA.2016.7743349","url":null,"abstract":"Parkinson's disease patients develop several impairments related to the speech production process. The deficits of the speech of the patients include reduction in the phonation, articulation, prosody and intelligibility capabilities. Related studies have analyzed the phonation, articulation and prosody of the patients with Parkinson's, while the intelligibility impairments have not been enough evaluated. In this study we propose two novel features based on the word accuracy and the dynamic time warping algorithm with the aim of assess the intelligibility deficits of the patients using an automatic speech recognition system. We evaluate the suitability of the features by the automatic classification of utterances of patients vs. healthy controls, and by predicting automatically the neurological state of the patients. According to results, an accuracy of up to 92% is obtained, indicating that the proposed features are highly accurate to detect Parkinson's disease from speech. Regarding the automatic monitoring of the neurological state, the proposed approach could be used as complement of other features derived from speech or other bio-signals to monitor the neurological state of the patients.","PeriodicalId":373420,"journal":{"name":"2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116604029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}