
Latest publications: 2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)

System-level noise of an ultra-wideband tracking system
William C. Suski, Salil Banerjee, A. Hoover
Previous works in ultra-wideband (UWB) noise modeling have mostly focused on isolating the individual sources of error. However, it is important to recognize that some errors will always pass through to the system output. In this work, we methodically evaluated the system-level noise of a UWB position tracking system. We define system-level noise as the measurement error obtained when the system is installed in a real-world environment. Our results show that a multi-modal noise model will be essential for filtering system-level noise. To encourage further research, all of our data has been made publicly available.
{"title":"System-level noise of an ultra-wideband tracking system","authors":"William C. Suski, Salil Banerjee, A. Hoover","doi":"10.1109/ISSPA.2012.6310630","DOIUrl":"https://doi.org/10.1109/ISSPA.2012.6310630","url":null,"abstract":"Previous works in ultra-wideband (UWB) noise modeling have mostly focused on isolating the individual sources of error. However, it is important to recognize that some errors will always pass through to the system output. In this work, we methodically evaluated the system-level noise of a UWB position tracking system. We define system-level noise as the measurement error obtained when the system is installed in a real-world environment. Our results show that a multi-modal noise model will be essential for filtering system-level noise. To encourage further research, all of our data has been made publicly available.","PeriodicalId":248763,"journal":{"name":"2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121235039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 10
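The multi-modal noise model called for above can be illustrated with a mixture fit. A minimal sketch in Python, assuming scikit-learn and synthetic 1-D ranging errors rather than the authors' published dataset:

```python
# Illustrative only: fit a two-component Gaussian mixture to synthetic
# position-error samples, one simple way to capture multi-modal noise.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic errors: a narrow line-of-sight mode plus a biased multipath mode.
errors = np.concatenate([
    rng.normal(0.00, 0.05, 800),   # metres
    rng.normal(0.40, 0.15, 200),   # metres
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(errors)
for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={mu:.3f} m  std={np.sqrt(var):.3f} m")
```

The fitted component weights, means, and standard deviations could then seed a mixture-based measurement model in a tracking filter.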
Forming projection images from each layer of retina using diffusion map based OCT segmentation
Jalil Jalili, H. Rabbani, M. Akhlaghi, R. Kafieh, A. M. Dehnavi
Optical coherence tomography (OCT) is an effective and noninvasive modality for retinal imaging. 3-D data acquired from 3-D Spectral Domain OCT (SD-OCT) have shown their importance in the evaluation of retinal diseases. In addition, this set of data provides an opportunity to study the depth of the retina. In this paper, we focus on forming X-Y axis images from each layer of the retina. To this end, we first use diffusion-map-based segmentation to localize 12 different boundaries in the 3D retinal data. We then average the voxels located between each pair of detected boundaries, producing an X-Y axis image for each layer. Finally, using wavelet-based image fusion, we combine the layers carrying appropriate information to produce images with additional information on retinal depth.
{"title":"Forming projection images from each layer of retina using diffusion may based OCT segmentation","authors":"Jalil Jalili, H. Rabbani, M. Akhlaghi, R. Kafieh, A. M. Dehnavi","doi":"10.1109/ISSPA.2012.6310688","DOIUrl":"https://doi.org/10.1109/ISSPA.2012.6310688","url":null,"abstract":"Optical coherence tomography (OCT) is an effective and noninvasive modality for retinal imaging. 3-D data that acquired from 3-D Spectral Domain OCT (SD-OCT) have shown their importance in the evaluation of retinal diseases. In addition, this set of data provides an opportunity to study depth of retina. In this paper, we focus on forming X-Y axis images from each layer of retina. In this manner, we first choose diffusion map based segmentation for localization of 12 different boundaries in 3D retinal data. Then we take an average on layers which located between each pairs of detected boundaries. Therefore, we make the X-Y axis image from each layer. With wavelet based image fusion, we combine together the layers with appropriate information to make images with additional information in retinal depth.","PeriodicalId":248763,"journal":{"name":"2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125334191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
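The layer-averaging step described above can be written compactly. A minimal sketch, assuming a (Z, X, Y) OCT volume and 12 already-detected boundary surfaces (random placeholders here; the diffusion-map segmentation itself is not reproduced):

```python
# Minimal sketch (not the authors' code): given an OCT volume and detected
# boundary depths, average the voxels between consecutive boundaries to get
# one en-face (X-Y) projection image per retinal layer.
import numpy as np

def layer_projections(volume, boundaries):
    """volume: (Z, X, Y) intensities; boundaries: (n_boundaries, X, Y) depth indices."""
    z = np.arange(volume.shape[0])[:, None, None]
    images = []
    for top, bottom in zip(boundaries[:-1], boundaries[1:]):
        mask = (z >= top[None]) & (z < bottom[None])         # voxels inside this layer
        counts = np.maximum(mask.sum(axis=0), 1)             # avoid division by zero
        images.append((volume * mask).sum(axis=0) / counts)  # mean intensity per (x, y)
    return images

# Example with random data: 12 boundaries -> 11 layer projection images.
rng = np.random.default_rng(0)
vol = rng.random((100, 64, 64))
bnd = np.sort(rng.integers(0, 100, size=(12, 64, 64)), axis=0)
print(len(layer_projections(vol, bnd)), "projection images")
```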
ECG signal classification using support vector machine based on wavelet multiresolution analysis
Ayman Rabee, I. Barhumi
In this paper we propose a highly reliable ECG analysis and classification approach using discrete wavelet transform multiresolution analysis and a support vector machine (SVM). The approach is composed of three stages: ECG signal preprocessing, feature selection, and classification of ECG beats. The wavelet transform is used for signal preprocessing and denoising, and the transform coefficients of each ECG beat are extracted as features that serve as inputs to the classifier. The SVM is used to construct a classifier that categorizes each input ECG beat into one of 14 classes. In this work, 17260 ECG beats covering 14 different beat types were selected from the MIT/BIH arrhythmia database. The average classification accuracy over the 14 heartbeat types is 99.2%.
{"title":"ECG signal classification using support vector machine based on wavelet multiresolution analysis","authors":"Ayman Rabee, I. Barhumi","doi":"10.1109/ISSPA.2012.6310497","DOIUrl":"https://doi.org/10.1109/ISSPA.2012.6310497","url":null,"abstract":"In this paper we propose a highly reliable ECG analysis and classification approach using discrete wavelet transform multiresolution analysis and support vector machine (SVM). This approach is composed of three stages, including ECG signal preprocessing, feature selection, and classification of ECG beats. Wavelet transform is used for signal preprocessing, denoising, and for extracting the coefficients of the transform as features of each ECG beat which are employed as inputs to the classifier. SVM is used to construct a classifier to categorize the input ECG beat into one of 14 classes. In this work, 17260 ECG beats, including 14 different beat types, were selected from the MIT/BIH arrhythmia database. The average accuracy of classification for recognition of the 14 heart beat types is 99.2%.","PeriodicalId":248763,"journal":{"name":"2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121705350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 27
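A hedged sketch of the feature-extraction and classification idea, using PyWavelets and scikit-learn on synthetic two-class beats (the db4 wavelet, 4 decomposition levels, and RBF kernel are assumptions here; the paper works with 14 MIT/BIH beat classes):

```python
# Sketch only: DWT coefficients of each beat form the feature vector
# that is fed to an SVM classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def beat_features(beat, wavelet="db4", level=4):
    coeffs = pywt.wavedec(beat, wavelet, level=level)  # multiresolution analysis
    return np.concatenate(coeffs)                      # flatten coefficients into one vector

rng = np.random.default_rng(1)
# Two synthetic "beat types": noise vs. noise plus a low-frequency component.
X = np.array([beat_features(rng.standard_normal(256)
                            + cls * np.sin(np.linspace(0, 8 * np.pi, 256)))
              for cls in (0, 1) for _ in range(50)])
y = np.repeat([0, 1], 50)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```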
Object- versus pixel-based building detection for disaster response
D. Dubois, R. Lepage
Recent disasters have shown a growing interest in remotely sensed data to support decision makers and emergency teams in the field. Fast and accurate detection of buildings and of the damage they have sustained is of great importance. Current methods rely on numerous photo-interpreters to visually analyze the data. Multiple pixel-based methods exist to classify pixels as being part of a building or not, but results vary widely and precision is often poor with very high resolution images. This paper proposes an object-based solution to building detection and compares it to a traditional pixel-based approach. Object-based classification clearly provides adequate results in much less time and is therefore well suited to disaster response.
{"title":"Object- versus pixel-based building detection for disaster response","authors":"D. Dubois, R. Lepage","doi":"10.1109/ISSPA.2012.6310623","DOIUrl":"https://doi.org/10.1109/ISSPA.2012.6310623","url":null,"abstract":"Recent disasters have shown that there is a growing interest for remotely sensed data to support decision makers and emergency teams in the field. Fast and accurate detection of buildings and sustained damage is of great importance. Current methods rely on numerous photo-interpreters to visually analyze the data. Multiple pixel-based methods exist to classify pixels as being part of a building or not but results vary widely and precision is often poor with very high resolution images. This paper proposes an object-based solution to building detection and compares it to a traditional approach. Object-based classification clearly provides adequate results in much less time and thus is ideal for disaster response.","PeriodicalId":248763,"journal":{"name":"2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129383162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
Automatic conversion system for 3D video generation based on wavelets
V. Ponomaryov, Eduardo Ramos-Díaz, V. Golikov
2D-to-3D conversion is currently a hot topic for several applications because of the lack of 3D content for a new generation of 3D-capable hardware. The proposed 3D reconstruction algorithm is based on wavelets, in particular the wavelet atomic functions (WAF), which are used to compute disparity maps through multilevel decomposition, together with a 3D visualization technique based on color anaglyph synthesis. The novel approach performs better in depth and spatial perception than existing techniques, both in terms of the objective SSIM criterion and in terms of the more subjective measure of human vision, as confirmed by numerous simulation results obtained on synthetic images, synthetic video sequences, and real-life video sequences.
{"title":"Automatic conversion system for 3D video generation based on wavelets","authors":"V. Ponomaryov, Eduardo Ramos-Díaz, V. Golikov","doi":"10.1109/ISSPA.2012.6310584","DOIUrl":"https://doi.org/10.1109/ISSPA.2012.6310584","url":null,"abstract":"The 2D to 3D conversion is currently a hot topic for several applications because of the 3D content lack in a new era of different hardware. The proposed algorithm in 3D reconstruction is based on the wavelets, especially on the wavelet atomic functions (WAF), which are used in the computation of the disparity maps employing multilevel decomposition, and technique of 3D visualization via color anaglyphs synthesis. Novel approach performs better in depth and spatial perception than do existing techniques, both in terms of objective SSIM criterion and based on the more subjective measure of human vision that has been confirmed in numerous simulation results obtained in synthetic images, in synthetic video sequences and in real-life video sequences.","PeriodicalId":248763,"journal":{"name":"2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)","volume":"295 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129848250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
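Only the final anaglyph-synthesis step lends itself to a short sketch; the disparity estimation with wavelet atomic functions is not reproduced. Assumed here: two rectified RGB views as NumPy arrays, combined into a red/cyan anaglyph by channel mixing:

```python
# Sketch of red/cyan anaglyph synthesis: red channel from the left view,
# green and blue channels from the right view.
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """left_rgb, right_rgb: (H, W, 3) uint8 images of the same scene."""
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]   # red taken from the left eye's view
    return anaglyph                       # green and blue stay from the right view

rng = np.random.default_rng(0)
left = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)
right = np.roll(left, 8, axis=1)          # crude horizontal shift standing in for disparity
print(red_cyan_anaglyph(left, right).shape)
```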
A genetic algorithm based clustering approach for improving off-line handwritten digit classification
S. Impedovo, Francesco Maurizio Mangini, G. Pirlo
In this paper a new clustering technique for improving off-line handwritten digit recognition is introduced. Clustering design is approached as an optimization problem in which the objective function to be minimized is the cost function associated with the classification, which is performed here by a k-nearest neighbor (k-NN) classifier based on the Sokal-Michener dissimilarity measure. For this purpose, a genetic algorithm is used to determine the best cluster centers so as to reduce classification time without a great loss in accuracy. In addition, an effective strategy for generating the initial population of the genetic algorithm is also presented. Experimental tests carried out on the MNIST database show the effectiveness of this method.
{"title":"A genetic algorithm based clustering approach for improving off-line handwritten digit classification","authors":"S. Impedovo, Francesco Maurizio Mangini, G. Pirlo","doi":"10.1109/ISSPA.2012.6310471","DOIUrl":"https://doi.org/10.1109/ISSPA.2012.6310471","url":null,"abstract":"In this paper a new clustering technique for improving off-line handwritten digit recognition is introduced. Clustering design is approached as an optimization problem in which the objective function to be minimized is the cost function associated to the classification, that is here performed by the k-nearest neighbor (k-NN) classifier based on the Sokal and Michener dissimilarity measure. For this purpose, a genetic algorithm is used to determine the best cluster centers to reduce classification time, without suffering a great loss in accuracy. In addition, an effective strategy for generating the initial-population of the genetic algorithm is also presented. The experimental tests carried out using the MNIST database show the effectiveness of this method.","PeriodicalId":248763,"journal":{"name":"2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130639213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
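A toy sketch of the optimization idea on synthetic 2-D data, with Euclidean distance standing in for the Sokal-Michener dissimilarity and arbitrary GA settings: binary chromosomes mark which training points survive as prototypes for a 1-NN classifier, and the fitness trades accuracy against prototype count.

```python
# Toy GA for prototype selection (not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, (100, 2)) for c in ((0, 0), (3, 0), (0, 3))])
y = np.repeat([0, 1, 2], 100)

def nn_accuracy(proto_idx):
    if proto_idx.size == 0:
        return 0.0
    d = np.linalg.norm(X[:, None] - X[proto_idx][None], axis=2)  # all-pairs distances
    return np.mean(y[proto_idx][d.argmin(axis=1)] == y)

def fitness(mask):
    idx = np.flatnonzero(mask)
    return nn_accuracy(idx) - 0.001 * idx.size       # accuracy minus a small size penalty

pop = (rng.random((30, len(X))) < 0.05).astype(int)  # ~5% of points start as prototypes
for _ in range(40):                                  # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[[max(rng.choice(len(pop), 3), key=lambda i: scores[i])  # tournament
                   for _ in range(len(pop))]]
    cut = rng.integers(1, len(X), size=len(pop))     # single-point crossover
    children = np.array([np.r_[parents[i][:cut[i]], parents[(i + 1) % len(pop)][cut[i]:]]
                         for i in range(len(pop))])
    children ^= (rng.random(children.shape) < 0.01).astype(int)  # bit-flip mutation
    pop = children

best = max(pop, key=fitness)
print("prototypes kept:", best.sum(), " accuracy:", round(nn_accuracy(np.flatnonzero(best)), 3))
```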
Voice pathology detection in continuous speech using nonlinear dynamics
J. Orozco-Arroyave, J. Vargas-Bonilla, J. B. Alonso, M. A. Ferrer-Ballester, C. Travieso-González, P. H. Rodríguez
A novel methodology, based on the estimation of nonlinear dynamics features, is presented for the automatic detection of pathologies of the phonatory system from continuous (text-dependent) speech records. The proposed automatic segmentation and characterization of the voice registers does not require estimation of the pitch period and therefore does not depend on the gender or intonation of the patients. A robust methodology is also presented for finding the features that best discriminate between healthy and pathological voices and for analyzing the affinity among them. An average success rate of 95% ± 3.54% in the automatic detection of voice pathologies is achieved using only six features. The results indicate that nonlinear dynamics is a good alternative for the automatic detection of abnormal phonations in continuous speech.
{"title":"Voice pathology detection in continuous speech using nonlinear dynamics","authors":"J. Orozco-Arroyave, J. Vargas-Bonilla, J. B. Alonso, M. A. Ferrer-Ballester, C. Travieso-González, P. H. Rodríguez","doi":"10.1109/ISSPA.2012.6310440","DOIUrl":"https://doi.org/10.1109/ISSPA.2012.6310440","url":null,"abstract":"A novel methodology, based on the estimation of nonlinear dynamics features, is presented for automatic detection of pathologies in the phonatory system considering continuous speech records (text-dependent). The proposed automatic segmentation and characterization of the voice registers does not require the estimation of the pitch period, therefore it doesn't depend on the gender and intonation of the patients. A robust methodology for finding the features that better discriminate between healthy and pathological voices and also for analyzing the affinity among them is also presented. An average success rate of 95% ± 3.54% in the automatic detection of voice pathologies is achieved considering only six features. The results indicate that nonlinear dynamics is a good alternative for automatic detection of abnormal phonations in continuous speech.","PeriodicalId":248763,"journal":{"name":"2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124481175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 11
Graph theory for the discovery of non-parametric audio objects
C. Srinivasa, M. Bouchard, R. Pichevar, Hossein Najaf-Zadeh
A novel framework based on graph theory for structure discovery is applied to audio to find new types of audio objects which enable the compression of an input signal. It converts the sparse time-frequency representation of an audio signal into a graph by representing each data point as a vertex and the relationship between two vertices as an edge. Each edge is labelled based on a clustering algorithm which preserves a quality guarantee on the clusters. Frequent subgraphs are then extracted from this graph, via a mining algorithm, and recorded as objects. Tests performed using a corpus of audio excerpts show that the framework discovers new types of audio objects which yield an average compression gain of 23.53% while maintaining high audio quality.
{"title":"Graph theory for the discovery of non-parametric audio objects","authors":"C. Srinivasa, M. Bouchard, R. Pichevar, Hossein Najaf-Zadeh","doi":"10.1109/ISSPA.2012.6310498","DOIUrl":"https://doi.org/10.1109/ISSPA.2012.6310498","url":null,"abstract":"A novel framework based on graph theory for structure discovery is applied to audio to find new types of audio objects which enable the compression of an input signal. It converts the sparse time-frequency representation of an audio signal into a graph by representing each data point as a vertex and the relationship between two vertices as an edge. Each edge is labelled based on a clustering algorithm which preserves a quality guarantee on the clusters. Frequent subgraphs are then extracted from this graph, via a mining algorithm, and recorded as objects. Tests performed using a corpus of audio excerpts show that the framework discovers new types of audio objects which yield an average compression gain of 23.53% while maintaining high audio quality.","PeriodicalId":248763,"journal":{"name":"2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128849754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
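A minimal sketch of the graph-construction stage only, assuming a thresholded STFT as the sparse time-frequency representation and an arbitrary neighbourhood rule for edges (NetworkX and SciPy); the edge-labelling clustering and the frequent-subgraph mining stages are not reproduced.

```python
# Each retained time-frequency point becomes a vertex; nearby points are joined by edges.
import numpy as np
import networkx as nx
from scipy.signal import stft

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

f, tt, Z = stft(x, fs=fs, nperseg=256)
mag = np.abs(Z)
keep = np.argwhere(mag > 0.2 * mag.max())            # sparse set of strong T-F points

G = nx.Graph()
G.add_nodes_from(map(tuple, keep))
for i, (fi, ti) in enumerate(keep):
    for fj, tj in keep[i + 1:]:
        if abs(fi - fj) <= 2 and abs(ti - tj) <= 2:  # small T-F neighbourhood
            G.add_edge((fi, ti), (fj, tj))
print(G.number_of_nodes(), "vertices,", G.number_of_edges(), "edges")
```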
All-optical ultrafast Hilbert transformations based on all-fiber long period grating designs
R. Ashrafi, J. Azaña
A novel all-optical design for implementing THz-bandwidth real-time Hilbert transformers is proposed and numerically demonstrated. We show that an all-optical Hilbert transformer can be implemented using a uniform-period long-period fiber grating (LPG) with a properly designed amplitude-only grating apodization profile incorporating one or more π phase shifts along the grating length. The LPG designed to implement the Hilbert transformer operates in the cross-coupling mode and can be practically implemented using either a fiber-optic approach or integrated-waveguide technology. All-optical Hilbert transformers capable of processing arbitrary optical signals with bandwidths well into the THz range can be implemented using feasible LPG designs.
{"title":"All-optical ultrafast hilbert transformations based on all-fiber long period grating designs","authors":"R. Ashrafi, J. Azaña","doi":"10.1109/ISSPA.2012.6310515","DOIUrl":"https://doi.org/10.1109/ISSPA.2012.6310515","url":null,"abstract":"A novel all-optical design for implementing THz-bandwidth real-time Hilbert transformers is proposed and numerically demonstrated. We show that an all-optical Hilbert transformer can be implemented using a uniform-period long-period fiber grating (LPG) with a properly designed amplitude-only grating apodization profile incorporating a single/multiple π-phase-shift(s) along the grating length. The designed LPG for implementation of Hilbert transformer operates in the cross-coupling mode, which can be practically implemented based on either a fiber-optic approach or integrated-waveguide technology. All-optical Hilbert transformers capable of processing arbitrary optical signals with bandwidths well in the THz range can be implemented using feasible LPG designs.","PeriodicalId":248763,"journal":{"name":"2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125905556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Incorporating user specific normalization in multimodal biometric fusion system
Messaoud Bengherabi, F. Harizi, A. Guessoum, M. Cheriet
The aim of this paper is to investigate a user-specific two-level fusion strategy in the context of multimodal biometrics. In this strategy, a client-specific score normalization procedure is first applied to each of the system outputs to be fused. The resulting normalized outputs are then fed into a common classifier. Logistic regression, the non-confidence weighted sum, and the likelihood ratio based on a Gaussian mixture model are used as back-end classifiers. Three client-specific score normalization procedures are considered in this paper: Z-norm, F-norm and the Model-Specific Log-Likelihood Ratio (MSLLR-norm). Our first finding, based on 15 fusion experiments on the XM2VTS score database, is that when this two-level fusion strategy is applied, the resulting fusion classifier significantly outperforms the baseline classifiers and a relative reduction of more than 50% in the equal error rate can be achieved. The second finding is that although this two-level user-specific fusion strategy simplifies the design of the final classifier, the performance generalization of the baseline classifiers is not straightforward, so great attention must be given to the choice of the normalization / back-end classifier combination.
{"title":"Incorporating user specific normalization in multimodal biometric fusion system","authors":"Messaoud Bengherabi, F. Harizi, A. Guessoum, M. Cheriet","doi":"10.1109/ISSPA.2012.6310596","DOIUrl":"https://doi.org/10.1109/ISSPA.2012.6310596","url":null,"abstract":"The aim of this paper is to investigate the user-specific two-level fusion strategy in the context of multimodal biometrics. In this strategy, a client-specific score normalization procedure is applied firstly to each of the system outputs to be fused. Then, the resulting normalized outputs are fed into a common classifier. The logistic regression, non-confidence weighted sum and the likelihood ratio based on Gaussian mixture model are used as back-end classifiers. Three client-specific score normalization procedures are considered in this paper, i.e. Z-norm, F-norm and the Model-Specific Log-Likelihood Ratio MSLLR-norm. Our first findings based on 15 fusion experiments on the XM2VTS score database show that when the previous two-level fusion strategy is applied, the resulting fusion classifier outperforms the baseline classifiers significantly and a relative reduction of more than 50% in the equal error rate can be achieved. The second finding is that when using this two-level user-specific fusion strategy, the design of the final classifier is simplified and performance generalization of baseline classifiers is not straightforward. A great attention must be given to the choice of the combination normalization-back-end classifier.","PeriodicalId":248763,"journal":{"name":"2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126497544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
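A hedged sketch of the two-level idea with Z-norm only, on synthetic scores from two modalities (the client-specific impostor statistics, score distributions, and class balance are all placeholders, not values from the XM2VTS experiments):

```python
# Each system's score is normalized with client-specific impostor statistics,
# then the normalized scores are fused by a logistic-regression classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_clients = 20

# Client-specific impostor statistics, assumed estimated offline per modality.
imp_mean = rng.normal(0.0, 0.1, (n_clients, 2))
imp_std = rng.uniform(0.8, 1.2, (n_clients, 2))

def z_norm(scores, client_ids):
    """scores: (n_trials, 2) raw scores; client_ids: claimed identity per trial."""
    return (scores - imp_mean[client_ids]) / imp_std[client_ids]

clients = rng.integers(0, n_clients, 1000)
genuine = rng.random(1000) < 0.5
raw = rng.normal(0, 1, (1000, 2)) + genuine[:, None] * 2.0   # genuine trials score higher

fused = LogisticRegression().fit(z_norm(raw, clients), genuine)
print("training accuracy:", fused.score(z_norm(raw, clients), genuine))
```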