
Latest publications from the 2008 IEEE International Symposium on Signal Processing and Information Technology

A novel thresholding method for automatically detecting stars in astronomical images
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775700
A. Cristo, A. Plaza, D. Valencia
Tracking the position of stars or bright bodies in images from space represents a valuable source of information in different application domains. One of the simplest approaches used for this purpose in the literature is image thresholding, where all pixels above a certain intensity level are considered stars and all other pixels are considered background. Two main problems have been identified in the literature for thresholding-based star identification methods. Most notably, the intensity of the background is not always constant: with a sloping background, the threshold may detect stars properly in one part of the image while, in another part, every pixel exceeds the threshold and is thus detected as a star. Also, astronomical images always contain some degree of noise, which can create spurious intensity peaks that are detected as stars even though they are not. In this work, we develop a novel image thresholding-based methodology which addresses the issues above. Specifically, the proposed method relies on an enhanced histogram-based thresholding method, complemented by a collection of auxiliary techniques aimed at searching inside diffuse objects such as galaxies, nebulae and comets, thus enhancing their detection by eliminating noise artifacts. Its black-box design and our experimental results indicate that this novel method could be included as a star identification module in existing techniques and systems that require accurate tracking and recognition of stars in astronomical images.
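A common remedy for the sloping-background and noise problems the abstract describes is to compute the threshold locally rather than globally. The sketch below illustrates that general idea only; it is not the authors' algorithm, and the tile size, MAD-based noise estimate, and sigma multiplier are all illustrative assumptions.

```python
import numpy as np

def detect_stars(image, tile=64, k=4.0):
    """Boolean mask of candidate star pixels via tile-wise thresholding.

    Each tile gets its own threshold (median + k robust sigmas), so a
    sloping background in one part of the image does not swamp another,
    and the MAD-based sigma keeps isolated noise peaks below threshold.
    """
    mask = np.zeros(image.shape, dtype=bool)
    for r in range(0, image.shape[0], tile):
        for c in range(0, image.shape[1], tile):
            block = image[r:r + tile, c:c + tile]
            med = np.median(block)
            sigma = 1.4826 * np.median(np.abs(block - med))  # robust noise estimate
            mask[r:r + tile, c:c + tile] = block > med + k * sigma
    return mask
```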
Citations: 7
Epileptic Seizure Detection Using Empirical Mode Decomposition
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775717
A. Tafreshi, A. Nasrabadi, Amir H. Omidvarnia
In this paper, we analyze the performance of the Empirical Mode Decomposition (EMD) in discriminating epileptic seizure data from normal data. EMD is a general signal processing method for analyzing nonlinear and nonstationary time series. Its main idea is to decompose a time series into a finite, and often small, number of intrinsic mode functions (IMFs). EMD is an adaptive decomposition, since the extracted information is obtained directly from the original signal. We use this method to obtain features of normal and epileptic seizure signals and compare them with traditional features, such as wavelet coefficients, using two classifiers. Our results confirm that the proposed features can distinguish normal from seizure data with a success rate of up to 95.42%.
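As a concrete illustration of turning IMFs into classifier inputs, the sketch below computes per-IMF energies, one simple feature choice; it assumes the PyEMD package (installed as EMD-signal) and is not necessarily the exact feature set used in the paper.

```python
import numpy as np
from PyEMD import EMD  # assumption: the EMD-signal package is installed

def imf_energy_features(segment, max_imfs=5):
    """Decompose a 1-D EEG segment and return a fixed-length energy vector."""
    imfs = EMD().emd(np.asarray(segment, dtype=float))
    energies = [float(np.sum(imf ** 2)) for imf in imfs[:max_imfs]]
    energies += [0.0] * (max_imfs - len(energies))  # pad short decompositions
    return np.array(energies)
```

Feature vectors produced this way can then be fed to any standard classifier for the seizure/normal decision.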
Citations: 40
A Word-Dependent Automatic Arabic Speaker Identification System
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775669
S. S. Al-Dahri, Y.H. Al-Jassar, Y. Alotaibi, M. Alsulaiman, K. Abdullah-Al-Mamun
Automatic speaker recognition is one of the difficult tasks in the field of computer speech and speaker recognition. Speaker recognition is a biometric process of automatically recognizing who is speaking on the basis of speaker-dependent features of the speech signal. Speaker recognition systems are an important means of authenticating a person, much like other biometrics such as fingerprints and retinal scans, and speech-based recognition permits both on-site and remote access by the user. In this research, a speaker identification system is investigated from the speaker recognition point of view; it is an important component of a speech-based user interface. The aim of this research is to develop a system that is capable of identifying an individual from a sample of his or her speech. Arabic is a Semitic language that differs from European languages such as English, and our system is based on Arabic speech. We have chosen to work on a word-dependent system using the Arabic isolated word /[unk]/ as the single keyword for the test utterance; this choice was made because the word is commonly used by Arabic speakers. Speech features are extracted using MFCC, and HTK is used to implement the speaker identification module with phoneme-based HMMs. The designed automatic Arabic speaker identification system covers 100 speakers and achieves 96.25% accuracy in recognizing the correct speaker.
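The paper's pipeline (MFCC features, phoneme-based HMMs in HTK) can be approximated in Python for illustration. The sketch below is a stand-in, not the authors' HTK setup: it assumes librosa for MFCCs and hmmlearn for the HMMs, trains one Gaussian HMM per speaker on the keyword, and picks the best-scoring model at test time.

```python
import numpy as np
import librosa                         # assumption: librosa is available
from hmmlearn.hmm import GaussianHMM   # assumption: hmmlearn is available

def mfcc_frames(path, sr=16000, n_mfcc=13):
    """Load a keyword utterance and return its MFCC frames as (frames, coeffs)."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_speaker_models(keyword_wavs, n_states=5):
    """keyword_wavs maps speaker -> list of wav paths of the keyword utterance."""
    models = {}
    for speaker, paths in keyword_wavs.items():
        feats = [mfcc_frames(p) for p in paths]
        X = np.vstack(feats)                     # all frames of this speaker
        lengths = [f.shape[0] for f in feats]    # per-utterance frame counts
        models[speaker] = GaussianHMM(n_components=n_states).fit(X, lengths)
    return models

def identify_speaker(models, wav_path):
    """Score a test utterance against every speaker model; best score wins."""
    feats = mfcc_frames(wav_path)
    return max(models, key=lambda s: models[s].score(feats))
```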
Citations: 25
Optimization and Implementation of Integer Lifting Scheme for Lossless Image Coding
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775720
V. Kitanovski, D. Taskovski, D. Gleich, P. Planinsic
This paper presents an adaptive lifting scheme, which performs an integer-to-integer wavelet transform, for lossless image compression. We optimize the coefficients of the predict filter in the lifting scheme to minimize the predictor's error variance; the optimized coefficients depend on the autocorrelation structure of the image. The presented lifting scheme adapts not only to every component of the color image, but also to its horizontal and vertical directions. We implement this lifting scheme on the fixed-point TMS320C6416 DSK evaluation board. We obtain experimental results using different types of images, as well as images captured by a camera in a real-time application. These results show that the presented method is competitive with several well-known methods for lossless image compression.
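For reference, the sketch below shows one level of the classic, non-adaptive integer lifting step (the LeGall 5/3 filters) on a 1-D signal; the paper's contribution is to replace the fixed predict coefficients with ones optimized per image, which this baseline does not attempt. It assumes an even-length numpy input.

```python
import numpy as np

def lift53_forward(x):
    """One level of integer-to-integer 5/3 lifting; exactly invertible.

    x must have an even number of samples. Returns (low, high) sub-bands.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict step: estimate each odd sample from its two even neighbours.
    right = np.append(even[1:], even[-1])        # symmetric edge extension
    high = odd - ((even + right) >> 1)
    # Update step: correct the even samples so the low band keeps the signal mean.
    left = np.insert(high[:-1], 0, high[0])
    low = even + ((left + high + 2) >> 2)
    return low, high
```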
Citations: 3
Iris Recognition System Using Combined Colour Statistics
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775694
H. Demirel, G. Anbarjafari
This paper proposes a high performance iris recognition system based on the probability distribution functions (PDFs) of pixels in different colour channels. The PDFs of the segmented iris images are used as statistical feature vectors for the recognition of irises by minimizing the Kullback-Leibler distance (KLD) between the PDF of a given iris and the PDFs of irises in the training set. Feature vector fusion (FVF) and majority voting (MV) methods have been employed to combine feature vectors obtained from different colour channels in the YCbCr and RGB colour spaces to improve the recognition performance. The system has been tested on the segmented iris images from the UPOL iris database, on which it gives a 98.44% recognition rate.
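A minimal sketch of the matching step, assuming numpy: build a normalised histogram (PDF) per colour channel of the segmented iris and assign a test iris to the training identity with the smallest KLD. The bin count and smoothing epsilon are illustrative choices.

```python
import numpy as np

def channel_pdf(channel, bins=256):
    """Normalised intensity histogram of one colour channel."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    pdf = hist.astype(float) + 1e-12     # epsilon avoids log(0) in the KLD
    return pdf / pdf.sum()

def kld(p, q):
    """Kullback-Leibler distance between two discrete PDFs."""
    return float(np.sum(p * np.log(p / q)))

def identify_iris(test_pdf, training_pdfs):
    """training_pdfs maps identity -> PDF; return the minimum-KLD identity."""
    return min(training_pdfs, key=lambda name: kld(test_pdf, training_pdfs[name]))
```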
Citations: 9
Weighting of Mel Sub-bands Based on SNR/Entropy for Robust ASR
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775710
H. Yeganeh, S. Ahadi, S. M. Mirrezaie, A. Ziaei
Mel-frequency cepstral coefficients (MFCCs) are the most widely used features for speech recognition. However, MFCC-based speech recognition performance degrades in the presence of additive noise. In this paper, we propose a set of noise-robust features based on the conventional MFCC feature extraction method. Our proposed method consists of two steps. In the first step, mel sub-band Wiener filtering is carried out. In the second step, the SNR in each sub-band is estimated, the sub-band entropy is calculated, and a weight parameter is defined based on the sub-band SNR-to-entropy ratio. The weighting gives sub-bands that are less affected by noise a more important role in forming the cepstral parameters. Experimental results indicate that this method improves ASR performance in noisy environments. Furthermore, because our method is simple to implement, its computational overhead compared to MFCC is quite small.
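The weighting step might look like the sketch below: scale the log mel filterbank energies by a normalised SNR-to-entropy weight before the cepstral DCT, so noisy bands contribute less. This assumes per-band SNR and entropy estimates are already available, and the exact weight definition in the paper may differ.

```python
import numpy as np
from scipy.fftpack import dct

def weighted_cepstrum(log_mel_energies, band_snr, band_entropy, n_ceps=13):
    """log_mel_energies: (frames, bands) array of log filterbank outputs."""
    w = band_snr / (band_entropy + 1e-12)   # high SNR, low entropy -> large weight
    w = w / w.max()                         # normalise weights into [0, 1]
    weighted = log_mel_energies * w         # broadcast weights across frames
    return dct(weighted, type=2, axis=1, norm='ortho')[:, :n_ceps]
```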
Citations: 10
Triangular Mesh Geometry Coding with Multiresolution Decomposition Based on Structuring of Surrounding Vertices
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775699
S. Watanabe, A. Kawanaka
In this paper, we propose a new polygonal mesh geometry coding scheme based on a structuring process that acquires the surrounding vertices of the polygonal mesh one layer at a time. The structuring process begins by selecting a start vertex and proceeds by acquiring the surrounding vertices of the mesh, yielding a 2-D structured vertex table. Structured geometry data are generated from the structured vertices and encoded with a multiresolution decomposition and a space-frequency quantization coding method. In our proposed scheme, the multiresolution decomposition uses the connectivity of the polygonal mesh. In addition, the space-frequency quantization coding scheme reduces redundancies among decomposed coefficients at similar positions in different components of each decomposition level. Experimental results show that the proposed scheme gives better coding performance at lower bit-rates than the usual schemes.
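The layer-by-layer structuring is essentially a breadth-first traversal of the mesh connectivity graph. The sketch below groups vertices by ring distance from the start vertex; building the 2-D vertex table from these layers is the paper's next step and is omitted here.

```python
def vertex_layers(adjacency, start):
    """Group mesh vertices into rings around a start vertex.

    adjacency maps each vertex to an iterable of its neighbours;
    layers[k] holds the k-th ring of surrounding vertices.
    """
    layers, seen, frontier = [], {start}, [start]
    while frontier:
        layers.append(frontier)
        nxt = []
        for v in frontier:
            for u in adjacency[v]:
                if u not in seen:       # each vertex joins exactly one layer
                    seen.add(u)
                    nxt.append(u)
        frontier = nxt
    return layers
```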
Citations: 2
Fast Adaptive Anisotropic Filtering for Medical Image Enhancement
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775677
J. George, S.P. Indu
In this paper, a local structure tensor (LST) based adaptive anisotropic filtering (AAF) methodology is used for medical image enhancement across different modalities. This filtering framework enhances and preserves anisotropic image structures while suppressing high-frequency noise. The goal of this work is to reduce the overall computational cost, with minimal risk to accuracy, by introducing optimized filternets for local structure analysis and reconstruction filtering. The filtering technique facilitates user interaction and direct control over the high-frequency content of the signal. The efficacy of the framework is evaluated by testing the system with medical images of different modalities, and the results are compared using three different quality measures. Experimental results show that the adaptively filtered images achieve a good level of noise reduction along with structure enhancement.
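The local structure tensor at the core of such filters can be computed as smoothed products of image gradients; its eigenvalues and eigenvectors then encode local orientation and anisotropy, which steer the adaptive filter. A minimal sketch, assuming scipy, with an illustrative smoothing sigma:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(image, sigma=2.0):
    """Return the smoothed tensor components (Jxx, Jxy, Jyy) per pixel."""
    img = np.asarray(image, dtype=float)
    gx = sobel(img, axis=1)                 # horizontal gradient
    gy = sobel(img, axis=0)                 # vertical gradient
    Jxx = gaussian_filter(gx * gx, sigma)   # local averaging of the
    Jxy = gaussian_filter(gx * gy, sigma)   # gradient outer product
    Jyy = gaussian_filter(gy * gy, sigma)
    return Jxx, Jxy, Jyy
```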
Citations: 16
Morphological feature extraction and spectral unmixing of hyperspectral images
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775683
A. Plaza, J. Plaza, A. Cristo
Hyperspectral image processing has been a very active area in remote sensing and other application domains in recent years. Despite the availability of a wide range of advanced processing techniques for hyperspectral data analysis, the great majority of available techniques consider spectral information separately from spatial information, so the two types of information are not treated simultaneously. In this paper, we describe several innovative spatial/spectral techniques for hyperspectral image processing. The techniques described in this work cover different aspects of hyperspectral image processing such as dimensionality reduction, feature extraction, and spectral unmixing. They are based on concepts inspired by mathematical morphology, a theory that provides a remarkable framework for achieving the desired integration of spatial and spectral information. The proposed techniques are experimentally validated using standard hyperspectral data sets with ground truth and compared to traditional approaches in the hyperspectral imaging literature, revealing that integrating spatial and spectral information simultaneously can significantly improve the analysis of hyperspectral scenes.
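As one concrete example of a morphological spatial feature, the sketch below builds a simple morphological profile (openings and closings of increasing size) for a single band of a hyperspectral cube; it assumes scipy, and the structuring-element sizes are illustrative, not the paper's configuration.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_profile(band, sizes=(3, 5, 7)):
    """Stack openings and closings of one band as spatial features.

    Openings suppress bright structures smaller than the window, closings
    suppress dark ones, so the stack encodes structure size and contrast.
    """
    feats = [grey_opening(band, size=(s, s)) for s in sizes]
    feats += [grey_closing(band, size=(s, s)) for s in sizes]
    return np.stack(feats, axis=0)          # shape: (2 * len(sizes), rows, cols)
```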
Citations: 3
New Method for Lossless Compression of Medical Records
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775649
M. Milanova, R. Kountchev, V. Todorov, R. Kountcheva
This paper presents a new method for lossless compression of biomedical signals, aimed at telemedicine applications and efficient data storage with content protection. The method is based on a data compression algorithm developed by the authors. The high compression ratio obtained permits efficient data transfer over communication channels and enhances the remote monitoring of patients. The presented approach is suitable for processing various biomedical signals, and its relatively low computational complexity permits real-time hardware and software applications.
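The authors' algorithm itself is not described in the abstract, so the sketch below only illustrates the generic predictive idea behind many lossless biomedical codecs: store small integer residuals instead of raw samples, which a subsequent entropy coder can pack tightly, while decoding restores the record bit-exactly.

```python
import numpy as np

def delta_encode(samples):
    """Residuals r[0] = x[0], r[n] = x[n] - x[n-1]; exactly invertible."""
    x = np.asarray(samples, dtype=np.int64)
    return np.concatenate(([x[0]], np.diff(x)))

def delta_decode(residuals):
    """Cumulative sum undoes the differencing, restoring the raw samples."""
    return np.cumsum(residuals)

# Round trip on a toy ECG fragment: decoding is bit-exact.
ecg = np.array([512, 515, 521, 530, 528, 525])
assert np.array_equal(delta_decode(delta_encode(ecg)), ecg)
```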
Citations: 5