A novel thresholding method for automatically detecting stars in astronomical images
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775700
A. Cristo, A. Plaza, D. Valencia
Tracking the position of stars or bright bodies in images from space represents a valuable source of information in different application domains. One of the simplest approaches used for this purpose in the literature is image thresholding, where all pixels above a certain intensity level are considered stars and all other pixels are considered background. Two main problems have been identified in the literature for thresholding-based star identification methods. Most notably, the intensity of the background is not always constant; i.e., a sloping background could give proper detection of stars in one part of the image, while in another part every pixel may have an intensity over the threshold value and will thus be detected as a star. Also, there is always some degree of noise present in astronomical images, and this noise can create spurious intensity peaks that are detected as stars even though they are not. In this work, we develop a novel image thresholding-based methodology which addresses the issues above. Specifically, the proposed method relies on an enhanced histogram-based thresholding method complemented by a collection of auxiliary techniques aimed at searching inside diffuse objects such as galaxies, nebulae, and comets, thus enhancing their detection by eliminating noise artifacts. Its black-box design and our experimental results indicate that this novel method can be included as a star identification module in existing techniques and systems that require accurate tracking and recognition of stars in astronomical images.
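To illustrate why a locally estimated threshold copes with a sloping background where a single global cut-off fails, here is a minimal sketch of histogram-based local thresholding. It is not the authors' algorithm: the tile size, the mode-plus-k-sigma rule, and the value of k are illustrative assumptions.

```python
import numpy as np

def detect_stars(image, tile=64, k=4.0):
    """Flag pixels brighter than a per-tile background estimate.

    The threshold is re-derived inside every tile (modal histogram bin
    plus k local standard deviations), so a background that slopes
    across the frame cannot push whole regions above a global cut-off.
    """
    mask = np.zeros(image.shape, dtype=bool)
    for r in range(0, image.shape[0], tile):
        for c in range(0, image.shape[1], tile):
            block = image[r:r + tile, c:c + tile]
            hist, edges = np.histogram(block, bins=256)
            mode = edges[np.argmax(hist)]   # local background level
            mask[r:r + tile, c:c + tile] = block > mode + k * block.std()
    return mask

# A tilted background that would defeat any single global threshold,
# plus two synthetic "stars".
img = np.linspace(0.0, 50.0, 256 * 256).reshape(256, 256)
img[40, 40] += 500.0
img[200, 180] += 500.0
print(np.argwhere(detect_stars(img)))   # [[ 40  40] [200 180]]
```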
{"title":"A novel thresholding method for automatically detecting stars in astronomical images","authors":"A. Cristo, A. Plaza, D. Valencia","doi":"10.1109/ISSPIT.2008.4775700","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775700","url":null,"abstract":"Tracking the position of stars or bright bodies in images from space represents a valuable source of information in different application domains. One of the simplest approaches used for this purpose in the literature is image thresholding, where all pixels above a certain intensity level are considered stars, and all other pixels are considered background. Two main problems have been identified in the literature for image thresholding-based star identification methods. Most notably, the intensity of the background is not always constant; i.e., a sloping background could give proper detection of stars in one part of the image, while in another part every pixel can have an intensity over the threshold value and will thus be detected as a star. Also, there is always some degree of noise present in astronomical images, and this noise can create spurious peaks in the intensity that can be detected as stars, even though they are not. In this work, we develop a novel image thresholding-based methodology which addresses the issues above. Specifically, the method proposed in this work relies on an enhanced histogram-based thresholding method complemented by a collection of auxiliary techniques aimed at searching inside diffuse objects such as galaxies, nebulas and comets, and thus enhance their detection by eliminating noise artifacts. Its black-box design and our experimental results indicate that this novel method offers potential for being included as a star identification module in already existent techniques and systems that require accurate tracking and recognition of stars in astronomical images.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132970047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Epileptic Seizure Detection Using Empirical Mode Decomposition
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775717
A. Tafreshi, A. Nasrabadi, Amir H. Omidvarnia
In this paper, we analyze the performance of Empirical Mode Decomposition (EMD) for discriminating epileptic seizure data from normal data. EMD is a general signal-processing method for analyzing nonlinear and nonstationary time series. Its main idea is to decompose a time series into a finite and often small number of intrinsic mode functions (IMFs). EMD is an adaptive decomposition, since the extracted information is obtained directly from the original signal. Using this method to obtain features of normal and epileptic seizure signals, we compare them with traditional features such as wavelet coefficients through two classifiers. Our results confirm that the proposed features can potentially distinguish normal from seizure data with a success rate of up to 95.42%.
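For readers unfamiliar with EMD, the sketch below shows the core sifting loop that extracts IMFs. It is a simplification under stated assumptions: a fixed number of sifting passes replaces the usual standard-deviation stopping criterion, and scipy's CubicSpline stands in for the envelope interpolation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One sifting pass: subtract the mean of the two extrema envelopes."""
    t = np.arange(len(x))
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 4 or len(minima) < 4:
        return None                         # too few extrema: x is a residue
    upper = CubicSpline(maxima, x[maxima])(t)
    lower = CubicSpline(minima, x[minima])(t)
    return x - (upper + lower) / 2.0

def emd(x, max_imfs=5, passes=10):
    """Decompose x into intrinsic mode functions by repeated sifting."""
    imfs, residue = [], x.astype(float)
    for _ in range(max_imfs):
        h = residue.copy()
        for _ in range(passes):
            h_new = sift_once(h)
            if h_new is None:
                return imfs, residue
            h = h_new
        imfs.append(h)
        residue = residue - h
    return imfs, residue

# Two superposed tones: the fast one comes out as the first IMF.
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 5 * t)
imfs, residue = emd(x)
print(len(imfs))
```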
{"title":"Epileptic Seizure Detection Using Empirical Mode Decomposition","authors":"A. Tafreshi, A. Nasrabadi, Amir H. Omidvarnia","doi":"10.1109/ISSPIT.2008.4775717","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775717","url":null,"abstract":"In this paper, we attempt to analyze the performance of the Empirical Mode Decomposition (EMD) for discriminating epileptic seizure data from the normal data. The Empirical Mode Decomposition (EMD) is a general signal processing method for analyzing nonlinear and nonstationary time series. The main idea of EMD is to decompose a time series into a finite and often small number of intrinsic mode functions (IMFs). EMD is an adaptive decomposition since the extracted information is obtained directly from the original signal. By utilizing this method to obtain the features of normal and epileptic seizure signals, we compare them with traditional features such as wavelet coefficients through two classifiers. Our results confirmed that our proposed features could potentially be used to distinguish normal from seizure data with success rate up to 95.42%.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115953379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Word-Dependent Automatic Arabic Speaker Identification System
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775669
S. S. Al-Dahri, Y.H. Al-Jassar, Y. Alotaibi, M. Alsulaiman, K. Abdullah-Al-Mamun
Automatic speaker recognition is one of the difficult tasks in the field of computer speech and speaker recognition. Speaker recognition is a biometric process of automatically recognizing who is speaking on the basis of speaker-dependent features of the speech signal. Speaker recognition systems are increasingly needed for authenticating a person, like other biometrics such as fingerprints and retinal scans, and speech-based recognition permits both on-site and remote access by the user. In this research, a speaker identification system is investigated from the speaker recognition point of view; it is an important component of a speech-based user interface. The aim of this research is to develop a system that is capable of identifying an individual from a sample of his or her speech. Arabic is a Semitic language that differs from European languages such as English, and our system is based on Arabic speech. We have chosen to work on a word-dependent system using the Arabic isolated word /naʕam/ ("yes") as the single keyword for the test utterance. This choice was made because this word is very commonly used by Arabic speakers. Speech features are extracted using MFCC, and HTK is used to implement the speaker identification module with phoneme-based HMMs. The designed automatic Arabic speaker identification system covers 100 speakers and achieved 96.25% accuracy in recognizing the correct speaker.
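The paper's front end is HTK's MFCC implementation; purely to illustrate the pipeline it relies on (framing, mel filterbank, log compression, DCT), here is a compact numpy sketch. The parameter values are common defaults and an assumption, not the paper's settings.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    """Minimal MFCC: frame -> power spectrum -> mel filterbank -> log -> DCT."""
    frames = [signal[s:s + n_fft] * np.hamming(n_fft)
              for s in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / n_fft

    # Triangular filters spaced evenly on the mel scale.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0.0), mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    logmel = np.log(power @ fbank.T + 1e-10)
    # Type-II DCT decorrelates the log energies; keep the first n_ceps.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return logmel @ dct.T

t = np.arange(16000) / 16000.0                  # one second of audio
print(mfcc(np.sin(2 * np.pi * 200 * t)).shape)  # (frames, 13)
```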
{"title":"A Word-Dependent Automatic Arabic Speaker Identification System","authors":"S. S. Al-Dahri, Y.H. Al-Jassar, Y. Alotaibi, M. Alsulaiman, K. Abdullah-Al-Mamun","doi":"10.1109/ISSPIT.2008.4775669","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775669","url":null,"abstract":"Automatic speaker recognition is one of the difficult tasks in the field of computer speech and speaker recognition. Speaker recognition is a biometric process of automatically recognizing who is speaking on the basis of speaker dependent features of the speech signal. Currently, speaker recognition system is an important need for authenticating the personal like other biometrics such as finger prints and retinal scans. Speech based recognition permits both on site and remote access to the user. In this research, speaker identification system is investigated from the speaker recognition problem point of view. It is an important component of a speech-based user interface. The aim of this research is to develop a system that is capable of identifying an individual from a sample of his or her speech. Arabic language is a semitic language that differs from European languages such as English. Our system is based on Arabic speech. We have chosen to work on a word-dependent system using the Arabic isolated word /ns10 as10 cs10 as10 ms10//[unk]/ a single keyword for the test utterance. This choice has been made because the word /ns10 as10 cs10 as10 ms10//[unk]/ is mostly used by the Arabic speakers. Speech features are extracted using MFCC. The HTK is used to implement the speaker identification module with phoneme based HMM. The designed automatic Arabic speaker identification system contains 100 speakers and it achieved 96.25% accuracy for recognizing the correct speaker.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127312429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization and Implementation of Integer Lifting Scheme for Lossless Image Coding
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775720
V. Kitanovski, D. Taskovski, D. Gleich, P. Planinsic
This paper presents an adaptive lifting scheme, performing an integer-to-integer wavelet transform, for lossless image compression. We optimize the coefficients of the predict filter in the lifting scheme to minimize the predictor's error variance; the optimized coefficients depend on the autocorrelation structure of the image. The presented lifting scheme adapts not only to every component of the color image, but also to its horizontal and vertical directions. We implement the scheme on the fixed-point TMS320C6416 DSK evaluation board. We obtain experimental results using different types of images, as well as images captured by a camera in a real-time application. These results show that the presented method is competitive with several well-known methods for lossless image compression.
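A one-dimensional sketch of the central idea, under simplifying assumptions (no update step is shown, a symmetric edge extension is used, and the paper's 2-D directional adaptation is omitted): fitting the two predict weights by least squares minimises the residual variance through the signal's autocorrelation, and rounding the prediction keeps the transform integer-to-integer, hence exactly invertible.

```python
import numpy as np

def fit_predictor(x):
    """Least-squares predict weights: solving the normal equations is the
    autocorrelation-driven, minimum-error-variance fit."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    A = np.stack([even, np.append(even[1:], even[-1])], axis=1)
    w, *_ = np.linalg.lstsq(A, odd, rcond=None)
    return w

def forward(x, w):
    """Predict step: odd samples minus a rounded weighted sum of their
    even neighbours. Rounding preserves integer-to-integer invertibility."""
    even, odd = x[0::2], x[1::2]
    nxt = np.append(even[1:], even[-1])        # symmetric edge extension
    pred = np.floor(w[0] * even + w[1] * nxt + 0.5).astype(x.dtype)
    return even, odd - pred                    # approximation, detail

def inverse(even, detail, w):
    nxt = np.append(even[1:], even[-1])
    odd = detail + np.floor(w[0] * even + w[1] * nxt + 0.5).astype(even.dtype)
    out = np.empty(2 * len(even), dtype=even.dtype)
    out[0::2], out[1::2] = even, odd
    return out

rng = np.random.default_rng(0)
x = np.cumsum(rng.integers(-3, 4, size=256)).astype(np.int64)  # smooth integer signal
w = fit_predictor(x)
even, detail = forward(x, w)
assert np.array_equal(inverse(even, detail, w), x)             # lossless round trip
print("weights:", w.round(3), "detail variance:", detail.var())
```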
{"title":"Optimization and Implementation of Integer Lifting Scheme for Lossless Image Coding","authors":"V. Kitanovski, D. Taskovski, D. Gleich, P. Planinsic","doi":"10.1109/ISSPIT.2008.4775720","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775720","url":null,"abstract":"This paper presents an adaptive lifting scheme, which performs integer-to-integer wavelet transform, for lossless image compression. We optimize the coefficients of the predict filter in the lifting scheme to minimize the predictor's error variance. The optimized coefficients depend on the autocorrelation structure of the image. The presented lifting scheme adapts not only to every component of the color image, but also to its horizontal and vertical directions. We implement this lifting scheme on the fixed-point TMS320C6416 DSK evaluation board. We obtain experimental results using different types of images, as well as using images captured by camera in a real-time application. These results show that the presented method is competitive to few well-known methods for lossless image compression.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128166772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Iris Recognition System Using Combined Colour Statistics
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775694
H. Demirel, G. Anbarjafari
This paper proposes a high-performance iris recognition system based on the probability distribution functions (PDFs) of pixels in different colour channels. The PDFs of the segmented iris images are used as statistical feature vectors for recognition, by minimizing the Kullback-Leibler distance (KLD) between the PDF of a given iris and the PDFs of the irises in the training set. Feature vector fusion (FVF) and majority voting (MV) methods are employed to combine feature vectors obtained from different colour channels in the YCbCr and RGB colour spaces to improve recognition performance. The system has been tested on the segmented iris images from the UPOL iris database, where it achieves a 98.44% recognition rate.
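A small sketch of the PDF-plus-KLD matching idea on synthetic data; summing the per-channel KL distances stands in here for the paper's feature-vector fusion, which is an assumption made for brevity.

```python
import numpy as np

def channel_pdf(channel, bins=256):
    """Normalised intensity histogram of one colour channel."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    pdf = hist.astype(float) + 1e-9            # avoid log(0) in the KLD
    return pdf / pdf.sum()

def kld(p, q):
    """Kullback-Leibler distance D(p || q)."""
    return float(np.sum(p * np.log(p / q)))

def identify(probe_channels, gallery):
    """Pick the gallery identity whose per-channel PDFs are closest."""
    probe = [channel_pdf(c) for c in probe_channels]
    best, best_d = None, float("inf")
    for name, channels in gallery.items():
        d = sum(kld(p, channel_pdf(c)) for p, c in zip(probe, channels))
        if d < best_d:
            best, best_d = name, d
    return best, best_d

# Three synthetic "irises", three colour channels each.
rng = np.random.default_rng(1)
gallery = {f"iris{i}": [rng.normal(80 + 20 * i, 10, (64, 64)).clip(0, 255)
                        for _ in range(3)]
           for i in range(3)}
probe = [c + rng.normal(0, 2, c.shape) for c in gallery["iris1"]]
print(identify(probe, gallery))                # ('iris1', small distance)
```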
{"title":"Iris Recognition System Using Combined Colour Statistics","authors":"H. Demirel, G. Anbarjafari","doi":"10.1109/ISSPIT.2008.4775694","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775694","url":null,"abstract":"This paper proposes a high performance iris recognition system based on the probability distribution functions (PDF) of pixels in different colour channels. The PDFs of the segmented iris images are used as statistical feature vectors for the recognition of irises by minimizing the Kullback-Leibler distance (KLD) between the PDF of a given iris and the PDFs of irises in the training set. Feature vector fusion (FVF) and majority voting (MV) methods have been employed to combine feature vectors obtained from different colour channels in YCbCr and RGB colour spaces to improve the recognition performance. The system has been tested on the segmented iris images from the UPOL iris database. The proposed system gives a 98.44% recognition rate on that iris database.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114245316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weighting of Mel Sub-bands Based on SNR/Entropy for Robust ASR
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775710
H. Yeganeh, S. Ahadi, S. M. Mirrezaie, A. Ziaei
Mel-frequency cepstral coefficients (MFCC) are the most widely used features for speech recognition. However, MFCC-based speech recognition performance degrades in the presence of additive noise. In this paper, we propose a set of noise-robust features based on the conventional MFCC feature extraction method. Our proposed method consists of two steps. In the first step, mel sub-band Wiener filtering is carried out. The second step estimates the SNR in each sub-band and calculates the sub-band entropy, defining a weight parameter based on the sub-band SNR-to-entropy ratio. The weighting is carried out so that sub-bands less affected by noise play a more important role in forming the cepstral parameters. Experimental results indicate that this method leads to improved ASR performance in noisy environments. Furthermore, owing to the simplicity of its implementation, its computational overhead compared to MFCC is quite small.
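The paper defines its own weighting rule; the sketch below only illustrates the shape of the computation, a per-band SNR divided by a per-band temporal entropy, on stand-in mel filterbank energies. The specific formula and parameters here are assumptions for illustration, not the authors' definition.

```python
import numpy as np

def subband_weights(mel_energy, noise_energy, eps=1e-10):
    """Weight each mel sub-band by an SNR-to-entropy ratio.

    mel_energy:   (frames, bands) filterbank energies of the noisy speech
    noise_energy: (bands,) noise estimate, e.g. from leading non-speech frames
    """
    snr_db = 10 * np.log10(mel_energy.mean(axis=0) / (noise_energy + eps) + eps)
    p = mel_energy / (mel_energy.sum(axis=0, keepdims=True) + eps)
    entropy = -(p * np.log(p + eps)).sum(axis=0)     # temporal entropy per band
    w = np.clip(snr_db, 0.0, None) / (entropy + eps)
    return w / (w.max() + eps)                       # normalise to [0, 1]

rng = np.random.default_rng(0)
E = rng.gamma(2.0, 1.0, size=(200, 26))    # stand-in mel energies
w = subband_weights(E, np.full(26, 0.5))
weighted_logmel = np.log(E + 1e-10) * w    # emphasised bands feed the DCT
print(w.round(2))
```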
{"title":"Weighting of Mel Sub-bands Based on SNR/Entropy for Robust ASR","authors":"H. Yeganeh, S. Ahadi, S. M. Mirrezaie, A. Ziaei","doi":"10.1109/ISSPIT.2008.4775710","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775710","url":null,"abstract":"Mel-frequency cepstral coefficients (MFCC) are the most widely used features for speech recognition. However, MFCC-based speech recognition performance degrades in presence of additive noise. In this paper, we propose a set of noise-robust features based on conventional MFCC feature extraction method. Our proposed method consists of two steps. In the first step, mel sub-band Wiener filtering is carried out. The second step consists of estimating SNR in each sub-band and calculating the sub-band entropy by defining a weight parameter based on sub-band SNR to entropy ratio. The weighting has been carried out in a way that gives more important roles, in cepstrum parameter formation, to sub-bands that are less affected by noise. Experimental results indicate that this method leads to improved ASR performance in noisy environments. Furthermore, due to the simplicity of the implementation of our method, its computational overhead in comparison to MFCC is quite small.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"2012 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128006864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Triangular Mesh Geometry Coding with Multiresolution Decomposition Based on Structuring of Surrounding Vertices
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775699
S. Watanabe, A. Kawanaka
In this paper, we propose a new polygonal mesh geometry coding scheme based on structuring the mesh by acquiring surrounding vertices one layer at a time. The structuring process begins by selecting a start vertex and proceeds by acquiring the surrounding vertices of the polygonal mesh, yielding a 2-D structured vertex table. Structured geometry data are generated according to the structured vertices and encoded by a multiresolution decomposition and a space-frequency quantization coding method. In our proposed scheme, the multiresolution decomposition uses the connectivity of the polygonal mesh. In addition, the space-frequency quantization coding scheme reduces redundancies among decomposed coefficients at similar positions in different components and decomposition levels. Experimental results show that the proposed scheme gives better coding performance at lower bit-rates than conventional schemes.
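The layer-at-a-time acquisition amounts to grouping vertices by graph distance from the start vertex. A breadth-first sketch over mesh connectivity shows the idea; the within-ring ordering that the paper also fixes is ignored here.

```python
def vertex_layers(adjacency, start):
    """Rings of vertices at increasing graph distance from `start`."""
    seen = {start}
    layers, frontier = [[start]], [start]
    while frontier:
        ring = []
        for v in frontier:
            for u in adjacency[v]:
                if u not in seen:
                    seen.add(u)
                    ring.append(u)
        if ring:
            layers.append(ring)
        frontier = ring
    return layers

# Octahedron connectivity: poles 0 and 5, equator ring 1-2-3-4.
adj = {0: [1, 2, 3, 4], 5: [1, 2, 3, 4],
       1: [0, 2, 4, 5], 2: [0, 1, 3, 5],
       3: [0, 2, 4, 5], 4: [0, 1, 3, 5]}
print(vertex_layers(adj, 0))   # [[0], [1, 2, 3, 4], [5]]
```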
{"title":"Triangular Mesh Geometry Coding with Multiresolution Decomposition Based on Structuring of Surrounding Vertices","authors":"S. Watanabe, A. Kawanaka","doi":"10.1109/ISSPIT.2008.4775699","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775699","url":null,"abstract":"In this paper, we propose a new polygonal mesh geometry coding scheme based on a process of structuring by acquiring surrounding vertices of the polygonal mesh one layer at a time. The structuring process begins by selecting the start vertex and proceeding by acquiring surrounding vertices of the polygonal mesh. As a result, we obtain a 2-D structured vertex table. Structured geometry data are generated according to the structured vertices and encoded by a multiresolution decomposition and space frequency quantization coding method. In our proposed scheme, the multiresolution decomposition uses the connectivity of the polygonal mesh. In addition, with a space frequency quantization coding scheme, we can reduce redundancies of decomposed coefficients at similar positions in different components of decomposition level. Experimental results show that the proposed scheme gives better coding performance at lower bit-rates than the usual schemes.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132305576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast Adaptive Anisotropic Filtering for Medical Image Enhancement
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775677
J. George, S.P. Indu
In this paper, a local structure tensor (LST)-based adaptive anisotropic filtering (AAF) methodology is used for medical image enhancement across different modalities. This filtering framework enhances and preserves anisotropic image structures while suppressing high-frequency noise. The goal of this work is to reduce the overall computational cost, with minimal risk to accuracy, by introducing optimized filternets for local structure analysis and reconstruction filtering. The filtering technique facilitates user interaction and direct control over the high-frequency content of the signal. The efficacy of the framework is evaluated by testing the system with medical images of different modalities, and the results are compared using three different quality measures. Experimental results show that a good level of noise reduction, along with structure enhancement, can be achieved in the adaptively filtered images.
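For orientation, here is a minimal sketch of the local structure tensor that steers such a filter: smoothed outer products of the image gradient, whose eigenvalues separate oriented structure (one large, one small) from isotropic noise (two similar). Box smoothing stands in for the optimized filternets of the paper.

```python
import numpy as np

def local_structure_tensor(img, half_width=2):
    """Per-pixel 2x2 tensor T = smooth(grad * grad^T)."""
    gy, gx = np.gradient(img.astype(float))   # derivatives along rows, columns

    def smooth(a, k=2 * half_width + 1):      # separable box filter
        kern = np.ones(k) / k
        a = np.apply_along_axis(np.convolve, 0, a, kern, mode="same")
        return np.apply_along_axis(np.convolve, 1, a, kern, mode="same")

    T = np.stack([smooth(gx * gx), smooth(gx * gy),
                  smooth(gx * gy), smooth(gy * gy)], axis=-1)
    return T.reshape(img.shape + (2, 2))

img = np.zeros((64, 64))
img[:, 32:] = 1.0                              # a vertical edge
T = local_structure_tensor(img)
print(np.linalg.eigvalsh(T[32, 32]))           # ~[0, large]: strongly anisotropic
```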
{"title":"Fast Adaptive Anisotropic Filtering for Medical Image Enhancement","authors":"J. George, S.P. Indu","doi":"10.1109/ISSPIT.2008.4775677","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775677","url":null,"abstract":"In this paper, local structure tensor (LST) based adaptive anisotropic filtering (AAF) methodology is used for medical image enhancement over different modalities. This filtering framework enhances and preserves anisotropic image structures while suppressing high-frequency noise. The goal of this work is to reduce the overall computational cost with minimum risk on accuracy by introducing optimized filternets for local structure analysis and reconstruction filtering. This filtering technique facilitates user interaction and direct control over high frequency contents of the signal. The efficacy of the filtering framework is evaluated by testing the system with medical images of different modalities. The results are compared using three different quality measures. Experimental results show that a good level of noise reduction along with structure enhancement can be achieved in the adaptively filtered images.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132434552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Morphological feature extraction and spectral unmixing of hyperspectral images
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775683
A. Plaza, J. Plaza, A. Cristo
Hyperspectral image processing has been a very active area in remote sensing and other application domains in recent years. Despite the availability of a wide range of advanced processing techniques for hyperspectral data analysis, the great majority of available techniques consider spectral information separately from spatial information, so the two types of information are not treated simultaneously. In this paper, we describe several innovative spatial/spectral techniques for hyperspectral image processing, covering different aspects such as dimensionality reduction, feature extraction, and spectral unmixing. The techniques addressed in this paper are based on concepts inspired by mathematical morphology, a theory that provides a remarkable framework for achieving the desired integration of spatial and spectral information. The proposed techniques are experimentally validated using standard hyperspectral data sets with ground truth and compared to traditional approaches in the hyperspectral imaging literature, revealing that the joint integration of spatial and spectral information can significantly improve the analysis of hyperspectral scenes.
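As a flavour of the morphological operations involved, the sketch below builds a simple profile of openings and closings with growing structuring elements on a single band; the paper's operators, extended to full hyperspectral pixel vectors, are more elaborate, so this is only an illustrative stand-in.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morphological_profile(band, sizes=(3, 5, 7)):
    """Stack the band with its openings and closings at several scales,
    so spatial structure enters each pixel's feature vector."""
    feats = [band]
    for s in sizes:
        opened = grey_dilation(grey_erosion(band, size=(s, s)), size=(s, s))
        closed = grey_erosion(grey_dilation(band, size=(s, s)), size=(s, s))
        feats += [opened, closed]
    return np.stack(feats, axis=-1)            # (rows, cols, 1 + 2*len(sizes))

band = np.random.default_rng(2).random((32, 32))
print(morphological_profile(band).shape)       # (32, 32, 7)
```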
{"title":"Morphological feature extraction and spectral unmixing of hyperspectral images","authors":"A. Plaza, J. Plaza, A. Cristo","doi":"10.1109/ISSPIT.2008.4775683","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775683","url":null,"abstract":"Hyperspectral image processing has been a very active area in remote sensing and other application domains in recent years. Despite the availability of a wide range of advanced processing techniques for hyperspectral data analysis, a great majority of available techniques for this purpose are based on the consideration of spectral information separately from spatial information information, and thus the two types of information are not treated simultaneously. In this paper, we describe several innovative spatial/spectral techniques for hyperspectral image processing. The techniques described in this work cover different aspects of hyperspectral image processing such as dimensionality reduction, feature extraction, and spectral unmixing. The techniques addressed in this paper are based on concepts inspired by mathematical morphology, a theory that provides a remarkable framework to achieve the desired integration of spatial and spectral information. The proposed techniques are experimentally validated using standard hyperspectral data sets with ground-truth, and compared to traditional approaches in the hyperspectral imaging literature, revealing that the integration of spatial and spectral information can significantly improve the analysis of hyperspectral scenes when conducted in simultaneous fashion.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131398541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New Method for Lossless Compression of Medical Records
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775649
M. Milanova, R. Kountchev, V. Todorov, R. Kountcheva
This paper presents a new method for lossless compression of biomedical signals, aimed at telemedicine applications and efficient data storage with content protection. The method is based on a data compression algorithm developed by the authors. The high compression ratio obtained permits efficient data transfer over communication channels and enhances remote monitoring of patients. The presented approach is suitable for processing various biomedical signals, and its relatively low computational complexity permits real-time hardware and software implementations.
{"title":"New Method for Lossless Compression of Medical Records","authors":"M. Milanova, R. Kountchev, V. Todorov, R. Kountcheva","doi":"10.1109/ISSPIT.2008.4775649","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775649","url":null,"abstract":"In the paper is presented new method for lossless compression of biomedical signals, aimed at telemedicine applications and efficient data storage with content protection. The method is based on data compression algorithm developed by the authors. The high compression ratio obtained permits efficient data transfer via communication channels and enhances the distance monitoring of patients. The presented approach is suitable for the processing of various biomedical signals. The relatively low computational complexity permits real time hardware and software applications.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130221557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}