
Latest publications from the 2008 IEEE International Symposium on Signal Processing and Information Technology

A novel thresholding method for automatically detecting stars in astronomical images
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775700
A. Cristo, A. Plaza, D. Valencia
Tracking the position of stars or bright bodies in images from space represents a valuable source of information in different application domains. One of the simplest approaches used for this purpose in the literature is image thresholding, where all pixels above a certain intensity level are considered stars, and all other pixels are considered background. Two main problems have been identified in the literature for image thresholding-based star identification methods. Most notably, the intensity of the background is not always constant; i.e., a sloping background could give proper detection of stars in one part of the image, while in another part every pixel can have an intensity over the threshold value and will thus be detected as a star. Also, there is always some degree of noise present in astronomical images, and this noise can create spurious peaks in the intensity that can be detected as stars, even though they are not. In this work, we develop a novel image thresholding-based methodology which addresses the issues above. Specifically, the method proposed in this work relies on an enhanced histogram-based thresholding method complemented by a collection of auxiliary techniques aimed at searching inside diffuse objects such as galaxies, nebulas and comets, thus enhancing their detection by eliminating noise artifacts. Its black-box design and our experimental results indicate that this novel method offers potential for being included as a star identification module in existing techniques and systems that require accurate tracking and recognition of stars in astronomical images.
Citations: 7
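The sloping-background failure mode described in the abstract can be avoided by thresholding against a locally estimated background rather than a single global level. Below is a minimal Python sketch of that idea (not the authors' histogram-based method; the median-filter background, MAD noise estimate and 5-sigma cut are illustrative choices):

```python
import numpy as np
from scipy.ndimage import median_filter, label

def detect_stars(image, box=15, k=5.0):
    """Threshold star-like peaks against a locally estimated background.

    Sketch only: the background is a running median (so a sloping
    background no longer breaks a single global threshold), the noise
    level is a robust MAD estimate, and pixels more than k sigmas above
    the background count as star pixels.
    """
    background = median_filter(image, size=box)
    residual = image - background
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    mask = residual > k * max(sigma, 1e-12)
    _, n_stars = label(mask)            # connected bright regions
    return mask, n_stars

# Synthetic frame: sloping background, faint noise, two Gaussian "stars".
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
frame = 0.5 * xx / 128 + 0.02 * rng.standard_normal((128, 128))
for cx, cy in [(32, 32), (96, 90)]:
    frame = frame + 2.0 * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 4.0)

mask, n = detect_stars(frame)
```

With a global threshold, the bright right half of this frame would saturate the mask; the local background estimate recovers both stars.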
Epileptic Seizure Detection Using Empirical Mode Decomposition
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775717
A. Tafreshi, A. Nasrabadi, Amir H. Omidvarnia
In this paper, we attempt to analyze the performance of Empirical Mode Decomposition (EMD) for discriminating epileptic seizure data from normal data. EMD is a general signal processing method for analyzing nonlinear and nonstationary time series. The main idea of EMD is to decompose a time series into a finite and often small number of intrinsic mode functions (IMFs). EMD is an adaptive decomposition, since the extracted information is obtained directly from the original signal. By utilizing this method to obtain the features of normal and epileptic seizure signals, we compare them with traditional features such as wavelet coefficients through two classifiers. Our results confirmed that our proposed features could potentially be used to distinguish normal from seizure data with a success rate of up to 95.42%.
Citations: 40
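The sifting step that extracts one IMF can be sketched in a few lines. This is a minimal illustration of generic EMD sifting (cubic-spline envelopes through the extrema, subtracting their mean until it is small), not the paper's feature-extraction pipeline:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x, t, max_iter=50):
    """Extract one intrinsic mode function by EMD sifting (a sketch).

    Upper/lower envelopes are cubic splines through the local maxima
    and minima; their mean is repeatedly subtracted until negligible.
    """
    h = x.copy()
    for _ in range(max_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break                        # too few extrema to fit envelopes
        mean = 0.5 * (CubicSpline(t[maxima], h[maxima])(t)
                      + CubicSpline(t[minima], h[minima])(t))
        if np.max(np.abs(mean)) < 1e-3 * np.max(np.abs(h)):
            break                        # envelopes are symmetric enough
        h = h - mean
    return h

# Two well-separated tones: the first IMF should be the fast one.
t = np.linspace(0.0, 1.0, 1000)
fast = np.sin(2 * np.pi * 30 * t)
slow = np.sin(2 * np.pi * 3 * t)
imf1 = sift_imf(fast + slow, t)
residue = fast + slow - imf1
```

Subsequent IMFs are obtained by sifting the residue again; spline end effects make the edges unreliable, which is why real EMD implementations add boundary handling.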
Weighting of Mel Sub-bands Based on SNR/Entropy for Robust ASR
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775710
H. Yeganeh, S. Ahadi, S. M. Mirrezaie, A. Ziaei
Mel-frequency cepstral coefficients (MFCC) are the most widely used features for speech recognition. However, MFCC-based speech recognition performance degrades in the presence of additive noise. In this paper, we propose a set of noise-robust features based on the conventional MFCC feature extraction method. Our proposed method consists of two steps. In the first step, mel sub-band Wiener filtering is carried out. The second step consists of estimating the SNR in each sub-band, calculating the sub-band entropy, and defining a weight parameter based on the sub-band SNR-to-entropy ratio. The weighting gives sub-bands that are less affected by noise a more important role in cepstrum parameter formation. Experimental results indicate that this method leads to improved ASR performance in noisy environments. Furthermore, due to the simplicity of its implementation, its computational overhead in comparison to MFCC is quite small.
Citations: 10
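The SNR/entropy weighting idea can be illustrated directly on sub-band energies. The rule below (clipped SNR in dB divided by the per-band entropy, then normalized) is an assumed form for illustration only; the paper's exact weighting rule and its Wiener pre-filtering step are not reproduced:

```python
import numpy as np

def snr_entropy_weights(signal_bands, noise_bands, eps=1e-10):
    """Sub-band weights from the SNR-to-entropy ratio (a sketch).

    signal_bands / noise_bands: (frames x sub-bands) mel energies of
    estimated speech and noise. Bands with high SNR and a peaky (low
    entropy) energy profile across frames receive larger weights.
    """
    snr_db = 10 * np.log10((signal_bands.sum(axis=0) + eps)
                           / (noise_bands.sum(axis=0) + eps))
    p = signal_bands / (signal_bands.sum(axis=0, keepdims=True) + eps)
    entropy = -(p * np.log(p + eps)).sum(axis=0)   # per-band entropy over frames
    w = np.clip(snr_db, 0.0, None) / (entropy + eps)
    return w / (w.sum() + eps)

# Band 0: strong, peaky speech energy; band 1: weak, noise-like energy.
speech = np.column_stack([np.r_[100.0, 100.0, np.full(8, 0.1)], np.ones(10)])
noise = np.ones((10, 2))
w = snr_entropy_weights(speech, noise)
```

The resulting weights would then scale the log mel energies before the DCT, so the reliable sub-bands dominate the cepstrum.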
A Word-Dependent Automatic Arabic Speaker Identification System
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775669
S. S. Al-Dahri, Y.H. Al-Jassar, Y. Alotaibi, M. Alsulaiman, K. Abdullah-Al-Mamun
Automatic speaker recognition is one of the difficult tasks in the field of computer speech and speaker recognition. Speaker recognition is a biometric process of automatically recognizing who is speaking on the basis of speaker-dependent features of the speech signal. Currently, speaker recognition is an important means of authenticating a person, like other biometrics such as fingerprints and retinal scans. Speech-based recognition permits both on-site and remote access by the user. In this research, a speaker identification system is investigated from the speaker recognition point of view. It is an important component of a speech-based user interface. The aim of this research is to develop a system that is capable of identifying an individual from a sample of his or her speech. Arabic is a Semitic language that differs from European languages such as English, and our system is based on Arabic speech. We have chosen to work on a word-dependent system using the Arabic isolated word /ns10 as10 cs10 as10 ms10//[unk]/ as a single keyword for the test utterance. This choice was made because the word /ns10 as10 cs10 as10 ms10//[unk]/ is widely used by Arabic speakers. Speech features are extracted using MFCC. HTK is used to implement the speaker identification module with phoneme-based HMMs. The designed automatic Arabic speaker identification system contains 100 speakers, and it achieved 96.25% accuracy in recognizing the correct speaker.
Citations: 25
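The MFCC front end this system relies on follows the standard pipeline: power spectrum, triangular mel filterbank, log compression, DCT. A single-frame sketch with common default parameters (HTK's exact filterbank settings may differ):

```python
import numpy as np
from scipy.fft import dct

def mfcc_frame(frame, sr=16000, n_mels=20, n_ceps=12):
    """MFCCs of one windowed frame (a sketch of the standard pipeline:
    power spectrum -> triangular mel filterbank -> log -> DCT).
    Filter counts and ranges are common defaults, not HTK's settings.
    """
    spec = np.abs(np.fft.rfft(frame)) ** 2
    n_fft = len(frame)
    mel = lambda f: 2595 * np.log10(1 + f / 700)        # Hz -> mel
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)       # mel -> Hz
    pts = imel(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, len(spec)))
    for i in range(n_mels):                             # triangular filters
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fbank[i, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fbank[i, c:r] = (r - np.arange(c, r)) / (r - c)
    logmel = np.log(fbank @ spec + 1e-10)
    return dct(logmel, type=2, norm='ortho')[:n_ceps]

# One 25 ms Hamming-windowed frame of a 440 Hz tone at 16 kHz.
tone = np.sin(2 * np.pi * 440 * np.arange(400) / 16000) * np.hamming(400)
ceps = mfcc_frame(tone)
```

In the full system, sequences of such vectors (plus deltas) would feed the phoneme-based HMMs.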
Iris Recognition System Using Combined Colour Statistics
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775694
H. Demirel, G. Anbarjafari
This paper proposes a high performance iris recognition system based on the probability distribution functions (PDF) of pixels in different colour channels. The PDFs of the segmented iris images are used as statistical feature vectors for the recognition of irises by minimizing the Kullback-Leibler distance (KLD) between the PDF of a given iris and the PDFs of irises in the training set. Feature vector fusion (FVF) and majority voting (MV) methods have been employed to combine feature vectors obtained from different colour channels in YCbCr and RGB colour spaces to improve the recognition performance. The system has been tested on the segmented iris images from the UPOL iris database. The proposed system gives a 98.44% recognition rate on that iris database.
Citations: 9
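Matching by minimizing the KLD between channel PDFs reduces to comparing normalized histograms. A sketch that sums the per-channel KLD and picks the nearest training iris (the FVF and MV fusion across colour spaces is omitted, and the per-channel scoring rule is an assumption for illustration):

```python
import numpy as np

def channel_pdf(channel, bins=32):
    """Empirical PDF (normalized histogram) of one colour channel."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    hist = hist.astype(float) + 1e-6      # smooth so the KLD stays finite
    return hist / hist.sum()

def kld(p, q):
    """Kullback-Leibler distance D(p||q) between two PDFs."""
    return float(np.sum(p * np.log(p / q)))

def identify(test_img, train_imgs):
    """Index of the training iris whose per-channel PDFs are closest
    (summed KLD) to the test image's PDFs."""
    def feats(img):
        return [channel_pdf(img[..., c]) for c in range(img.shape[-1])]
    tf = feats(test_img)
    scores = [sum(kld(p, q) for p, q in zip(tf, feats(img)))
              for img in train_imgs]
    return int(np.argmin(scores))

# Two synthetic "irises" with distinct colour statistics.
rng = np.random.default_rng(1)
iris_a = rng.integers(0, 100, (32, 32, 3))
iris_b = rng.integers(150, 256, (32, 32, 3))
probe = rng.integers(0, 100, (32, 32, 3))   # fresh sample, A-like colours
match = identify(probe, [iris_a, iris_b])
```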
Data Center Resilience Evaluation Test-bed: Design and Implementation
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775667
Y. Khalil, Adel Said Elmaghraby
The operational continuity of data centers is challenged by experienced cyber attackers and occasional natural disasters. Assessing a data center's resilience under complex and realistic scenarios is very important for purposes such as system specification, design and enhancement. Yet data center resilience evaluation is a demanding process because of the complexity of its systems and the multidimensional aspects required of a resilient system. This paper illustrates a data center resilience evaluation test-bed and its monitoring system. The test-bed provides a realistic testing environment and is capable of implementing multiple operating and attacking scenarios.
Citations: 3
Clustering with K-Harmonic Means Applied to Colour Image Quantization
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775684
M. Frackiewicz, H. Palus
The main goal of colour quantization methods is colour reduction with minimum colour error. This paper investigates six colour quantization techniques: the classical median cut, the improved median cut, the clustering k-means technique in two colour versions (RGB, CIELAB), and two versions of a relatively novel technique named k-harmonic means. The comparison presented here is based on quantizing ten natural colour images into 16, 64 and 256 colours. Two evaluation criteria were used: the mean squared quantization error (MSE) and the average error in the CIELAB colour space (DeltaE). The tests confirmed the efficiency of k-harmonic means applied to colour quantization.
Citations: 11
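The k-harmonic means centre update can be sketched directly. This follows Zhang's KHM formulation with the commonly used p = 3.5 and a simple deterministic initialisation; the paper's own initialisation and parameter choices may differ:

```python
import numpy as np

def khm_quantize(pixels, k, p=3.5, iters=60):
    """Colour quantization with K-harmonic means (a sketch).

    Soft memberships m and point weights w follow the KHM update rule;
    far-from-all-centres points get large weights, which is what makes
    KHM less sensitive to initialisation than plain k-means.
    """
    pixels = pixels.astype(float)
    centers = pixels[:: max(1, len(pixels) // k)][:k].copy()  # strided init
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-8)
        m = d ** (-p - 2)
        m /= m.sum(axis=1, keepdims=True)                 # memberships
        w = (d ** (-p - 2)).sum(axis=1) / (d ** (-p)).sum(axis=1) ** 2
        q = m * w[:, None]
        centers = (q.T @ pixels) / q.sum(axis=0)[:, None]
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    return centers, d.argmin(axis=1)                      # palette, labels

# Two well-separated colour clusters; KHM should place one centre in each.
rng = np.random.default_rng(0)
pixels = np.vstack([rng.normal(20, 5, (50, 3)), rng.normal(200, 5, (50, 3))])
centers, labels = khm_quantize(pixels, 2)
```

Quantizing an image then amounts to replacing every pixel by its assigned palette centre.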
Fast Adaptive Anisotropic Filtering for Medical Image Enhancement
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775677
J. George, S.P. Indu
In this paper, local structure tensor (LST) based adaptive anisotropic filtering (AAF) methodology is used for medical image enhancement over different modalities. This filtering framework enhances and preserves anisotropic image structures while suppressing high-frequency noise. The goal of this work is to reduce the overall computational cost with minimal loss of accuracy by introducing optimized filternets for local structure analysis and reconstruction filtering. This filtering technique facilitates user interaction and direct control over the high-frequency content of the signal. The efficacy of the filtering framework is evaluated by testing the system with medical images of different modalities. The results are compared using three different quality measures. Experimental results show that a good level of noise reduction, along with structure enhancement, can be achieved in the adaptively filtered images.
Citations: 16
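The local structure tensor that steers such a filter is the Gaussian-smoothed outer product of the image gradient; its eigenvalues measure local anisotropy. A sketch of the tensor computation only (the paper's filternet implementation and the reconstruction filtering are not reproduced):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_structure_tensor(img, sigma=2.0):
    """Per-pixel structure tensor eigenvalues and anisotropy (a sketch).

    J = G_sigma * (grad I)(grad I)^T; lambda1 >> lambda2 means a strongly
    oriented neighbourhood, which an adaptive anisotropic filter uses to
    smooth along, but not across, the local structure.
    """
    gy, gx = np.gradient(img.astype(float))
    Jxx = gaussian_filter(gx * gx, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    # Closed-form eigenvalues of the 2x2 tensor [[Jxx, Jxy], [Jxy, Jyy]].
    tr = Jxx + Jyy
    det = Jxx * Jyy - Jxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    anisotropy = (l1 - l2) / (l1 + l2 + 1e-12)
    return l1, l2, anisotropy

# Vertical stripes are strongly oriented, so anisotropy should be near 1.
cols = np.arange(64)
stripes = np.tile(np.sin(2 * np.pi * cols / 8), (64, 1))
_, _, a = local_structure_tensor(stripes)
```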
Triangular Mesh Geometry Coding with Multiresolution Decomposition Based on Structuring of Surrounding Vertices
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775699
S. Watanabe, A. Kawanaka
In this paper, we propose a new polygonal mesh geometry coding scheme based on a process of structuring by acquiring surrounding vertices of the polygonal mesh one layer at a time. The structuring process begins by selecting the start vertex and proceeds by acquiring the surrounding vertices of the polygonal mesh. As a result, we obtain a 2-D structured vertex table. Structured geometry data are generated according to the structured vertices and encoded by a multiresolution decomposition and space frequency quantization coding method. In our proposed scheme, the multiresolution decomposition uses the connectivity of the polygonal mesh. In addition, with a space frequency quantization coding scheme, we can reduce redundancies of decomposed coefficients at similar positions in different components of the decomposition level. Experimental results show that the proposed scheme gives better coding performance at lower bit-rates than the usual schemes.
Citations: 2
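Acquiring the surrounding vertices one layer at a time from a start vertex is essentially a breadth-first traversal of the mesh connectivity. A sketch on an adjacency-list mesh (the 2-D vertex-table bookkeeping and the subsequent coding stages are omitted):

```python
def structure_vertices(adjacency, start):
    """Order mesh vertices into layers of surrounding rings (a sketch).

    Layer 0 is the start vertex; layer k holds the vertices one edge
    away from layer k-1, i.e. the "surrounding vertices" acquired one
    layer at a time. adjacency maps a vertex to its neighbours.
    """
    layers, seen, frontier = [], {start}, [start]
    while frontier:
        layers.append(sorted(frontier))
        ring = set()
        for v in frontier:
            for u in adjacency[v]:
                if u not in seen:
                    seen.add(u)
                    ring.add(u)
        frontier = sorted(ring)
    return layers

# Tetrahedron: every vertex neighbours every other vertex,
# so everything beyond the start vertex lands in a single ring.
tetra = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
layers = structure_vertices(tetra, 0)
```

Rows of the resulting layer list would populate the 2-D structured vertex table that the decomposition operates on.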
Optimization and Implementation of Integer Lifting Scheme for Lossless Image Coding
Pub Date : 2008-12-01 DOI: 10.1109/ISSPIT.2008.4775720
V. Kitanovski, D. Taskovski, D. Gleich, P. Planinsic
This paper presents an adaptive lifting scheme, which performs an integer-to-integer wavelet transform, for lossless image compression. We optimize the coefficients of the predict filter in the lifting scheme to minimize the predictor's error variance. The optimized coefficients depend on the autocorrelation structure of the image. The presented lifting scheme adapts not only to every component of the color image, but also to its horizontal and vertical directions. We implement this lifting scheme on the fixed-point TMS320C6416 DSK evaluation board. We obtain experimental results using different types of images, as well as images captured by camera in a real-time application. These results show that the presented method is competitive with several well-known methods for lossless image compression.
Citations: 3
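An integer-to-integer lifting step with rounded predict and update stages can be illustrated with the fixed LeGall 5/3 filter; the paper optimizes the predict coefficients adaptively, which this sketch does not. Because each step is individually invertible, reconstruction is exact:

```python
import numpy as np

def lift53_forward(x):
    """One level of the integer LeGall 5/3 lifting transform (1-D,
    periodic extension). Rounded predict/update steps make it an
    integer-to-integer map; the adaptive predict filter of the paper
    is replaced here by the fixed 5/3 coefficients.
    """
    s, d = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = d - ((s + np.roll(s, -1)) >> 1)        # predict odd from even
    s = s + ((d + np.roll(d, 1) + 2) >> 2)     # update even from detail
    return s, d

def lift53_inverse(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    s = s - ((d + np.roll(d, 1) + 2) >> 2)     # undo update
    d = d + ((s + np.roll(s, -1)) >> 1)        # undo predict
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

# Round-trip on random 8-bit samples must be lossless.
rng = np.random.default_rng(2)
x = rng.integers(0, 256, 64)
s, d = lift53_forward(x)
x_rec = lift53_inverse(s, d)
```

The same pair of passes applied along rows and then columns gives the 2-D transform used for image coding.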