
The 2nd IEEE International Workshop on Haptic, Audio and Visual Environments and Their Applications, 2003 (HAVE 2003). Proceedings.

A new filtering method for RST invariant image watermarking
Yan Liu, Jiying Zhao
Based on log-polar mapping, this paper presents a new filtering method. We compare it with the classical matched filter, phase-only filter, binary phase-only filter, amplitude-only filter, and inverse filter, and find that it is the only one robust against rotation, scaling, and translation (RST) transformations. We use the filtering method in our new RST-invariant digital image watermarking scheme to rectify the watermark position. The watermarking scheme does not need the original image to extract the watermark and avoids exhaustive search. Three-dimensional plots of the cross-correlation functions of the different filters are presented and discussed.
DOI: 10.1109/HAVE.2003.1244733
Citations: 11
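The key property behind the paper's filter is log-polar mapping: resampled on a log-radius/angle grid, a rotation of the image becomes a circular shift along the angle axis and a scaling becomes a shift along the log-radius axis. A minimal pure-Python sketch of that resampling (nearest-neighbour sampling; grid sizes are illustrative, and the paper's actual filter design is not reproduced):

```python
import math

def log_polar(img, n_rho=32, n_theta=32):
    """Resample a square grayscale image (list of lists) onto a
    log-polar grid centered on the image midpoint.  In these
    coordinates, rotation of the input becomes a circular shift
    along the theta axis and scaling a shift along the rho axis."""
    h, w = len(img), len(img[0])
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    rho_max = math.log(math.hypot(cx, cy))
    out = []
    for i in range(n_rho):
        rho = rho_max * (i + 1) / n_rho          # log-radius sample
        r = math.exp(rho)
        row = []
        for j in range(n_theta):
            theta = 2.0 * math.pi * j / n_theta  # angle sample
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            # nearest-neighbour lookup, zero outside the image
            row.append(img[y][x] if 0 <= x < w and 0 <= y < h else 0.0)
        out.append(row)
    return out
```

A correlation filter applied in this domain can then tolerate RST distortions as ordinary shifts, which is the behaviour the abstract claims for the proposed method.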
A novel semi-fragile audio watermarking scheme
Ronghui Tu, Jiying Zhao
In this paper, we present a semi-fragile audio watermarking technique that embeds a watermark in the discrete wavelet domain of an audio signal by quantizing selected coefficients. The quantization parameter used in the algorithm is user-defined; different values of this parameter affect the robustness of the watermark, allowing the user to control its performance. Applications of our approach include copyright verification and content authentication.
DOI: 10.1109/HAVE.2003.1244731
Citations: 21
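Embedding a bit by quantizing a coefficient is a form of quantization index modulation. A minimal single-coefficient sketch (the wavelet transform and the paper's coefficient-selection rule are omitted; `delta` plays the role of the user-defined quantization parameter, so larger values trade fidelity for robustness):

```python
def embed_bit(coeff, bit, delta):
    """Quantize a (wavelet) coefficient so the parity of its
    quantization index carries one watermark bit (0 or 1)."""
    q = round(coeff / delta)
    if q % 2 != bit:
        # move to the nearer quantization level of the right parity
        q += 1 if coeff >= q * delta else -1
    return q * delta

def extract_bit(coeff, delta):
    """Recover the embedded bit from the quantization-index parity;
    no original signal is needed (blind extraction)."""
    return round(coeff / delta) % 2
```

Light distortion moves the coefficient less than half a quantization step, so the bit survives; heavy distortion flips bits, which is what makes the scheme semi-fragile.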
Optical character recognition for model-based object recognition applications
Qing Chen, E. Petriu
This paper discusses the performance of Fourier descriptors and Hu's seven moment invariants for an Optical Character Recognition (OCR) engine developed for 3D model-based object recognition applications.
DOI: 10.1109/HAVE.2003.1244729
Citations: 10
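Fourier descriptors, one of the two shape signatures the paper compares, can be made invariant to translation, rotation, and scale with simple normalizations. A small pure-Python sketch (the boundary representation and harmonic count here are illustrative, not the paper's exact setup):

```python
import cmath

def fourier_descriptors(boundary, k=4):
    """Shape signature from a closed boundary of (x, y) points:
    dropping the DC term removes translation, taking magnitudes
    removes rotation and start-point choice, and dividing by the
    first harmonic removes scale."""
    z = [complex(x, y) for x, y in boundary]
    n = len(z)
    mags = []
    for u in range(1, k + 1):
        c = sum(z[m] * cmath.exp(-2j * cmath.pi * u * m / n)
                for m in range(n))
        mags.append(abs(c) / n)
    # normalize by the first harmonic unless it is (numerically) zero
    return [m / mags[0] for m in mags] if mags[0] > 1e-12 else mags
```

A character class can then be matched by comparing these descriptor vectors regardless of where, how large, or at what angle the character appears.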
Mechanics modeling for virtual interactive environments
Jean-Christian Delannoy, E. Petriu, P. Wide
Many of the virtual environments on the market today account only for the 3D geometry and lighting of objects. These environments may appear realistic while static, but once objects are set into motion their movements often look unnatural. This paper presents algorithms that model the world around us more accurately by accounting for the mechanical behaviors and properties of objects, and by basing the virtual world on sensor information provided by objects in the real world.
DOI: 10.1109/HAVE.2003.1244725
Citations: 3
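The difference between scripted animation and mechanics-based motion comes down to integrating forces over time. As a minimal illustration (a generic semi-implicit Euler step, not the paper's own algorithm), a point mass updated under an applied force:

```python
def step(pos, vel, force, mass, dt):
    """One semi-implicit Euler step for a 2D point mass: update
    velocity from the force first, then position from the new
    velocity.  Objects then move because forces act on them,
    rather than following a pre-baked path."""
    ax, ay = force[0] / mass, force[1] / mass
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel
```

Feeding such a loop with measured forces or positions from real-world sensors, as the abstract suggests, grounds the virtual object's motion in physical data.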
Image quality measurement by using digital watermarking
D. Zheng, Jiying Zhao, W. J. Tam, F. Speranza
This paper presents an objective picture quality measurement method based on fragile digital image watermarking. Building on a DCT-based watermarking scheme, it describes a fragile digital image watermarking scheme that can serve as an automatic quality monitoring system. We embed the watermark in the DCT domain of the original image, and the DCT blocks used for embedding are carefully selected so that degradation of the watermark reflects degradation of the image. The evaluations demonstrate the effectiveness of the proposed scheme against JPEG compression.
DOI: 10.1109/HAVE.2003.1244727
Citations: 31
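The monitoring idea reduces to comparing the extracted watermark against the embedded one: the more the fragile watermark decays, the more the host image has been degraded. A minimal sketch of such a degradation score (the DCT embedding and block selection are not reproduced; the function name is illustrative):

```python
def watermark_agreement(embedded, extracted):
    """Fraction of watermark bits that survive the channel; the
    scheme's premise is that this fraction tracks the perceived
    quality of the host image after compression or transmission."""
    if len(embedded) != len(extracted):
        raise ValueError("watermark length mismatch")
    hits = sum(e == x for e, x in zip(embedded, extracted))
    return hits / len(embedded)
```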
A heterogeneous scalable architecture for collaborative haptics environments
Xiaojun Shen, Francis Bogsanyi, L. Ni, N. Georganas
The purpose of this research effort is to design a generic architecture for collaborative haptic, audio, visual environments (C-HAVE). We aim to develop a heterogeneous scalable architecture for large collaborative haptics environments where a number of potential users participate with different kinds of haptic devices. This paper begins with a brief overview of C-HAVE and then proceeds to describe a generic architecture that is implemented over HLA/RTI (High Level Architecture/Run Time Infrastructure), an IEEE standard for distributed simulations and modeling. A potential electronic commerce application over C-HAVE is discussed.
DOI: 10.1109/HAVE.2003.1244735
Citations: 27
Pitch-based feature extraction for audio classification
A.R. Abu-El-Quran, R. Goubran
This paper proposes a new algorithm to discriminate between speech and non-speech audio segments. It is intended for security applications as well as talker location identification in audio conferencing systems equipped with microphone arrays. The proposed method splits the audio segment into small frames and detects the presence of pitch in each one. The ratio of frames with detected pitch to the total number of frames is defined as the pitch ratio and is used as the main feature for classifying speech and non-speech segments. The performance of the proposed method is evaluated using a library of audio segments containing female and male speech, and non-speech segments such as computer fan noise, cocktail-party noise, footsteps, and traffic noise. The proposed algorithm achieves correct decisions for 97% of speech segments and 98% of non-speech segments 0.5 seconds long.
DOI: 10.1109/HAVE.2003.1244723
Citations: 18
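The pitch-ratio feature described above can be sketched with a crude autocorrelation voicing test per frame; the frame length, lag range, and threshold below are illustrative, not the paper's values:

```python
def frame_has_pitch(frame, threshold=0.5):
    """Crude voicing test: a strong normalized autocorrelation peak
    away from lag zero indicates a periodic (pitched) frame."""
    n = len(frame)
    energy = sum(s * s for s in frame)
    if energy == 0:
        return False
    best = 0.0
    for lag in range(20, n // 2):   # skip very short lags
        r = sum(frame[i] * frame[i + lag] for i in range(n - lag)) / energy
        best = max(best, r)
    return best > threshold

def pitch_ratio(signal, frame_len=160):
    """Fraction of frames in which pitch is detected -- the paper's
    main feature for separating speech from non-speech audio."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    voiced = sum(frame_has_pitch(f) for f in frames)
    return voiced / len(frames) if frames else 0.0
```

Speech mixes voiced and unvoiced frames but still yields a high pitch ratio, while fan or traffic noise yields a low one, which is what makes a simple threshold on this ratio an effective classifier.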
Lip feature extraction using motion, color, and edge information
R. Dansereau, C. Li, R. Goubran
In this paper, we present a Markov random field based technique for extracting lip features from video using color and edge information. Motion between frames is used as an indicator to locate the approximate lip region, while color and edge information allow boundaries of naturally covered lips to be identified and segmented from the rest of the face. Using the lip region, geometric lip features are then extracted from the segmented lip area. The experimental results show that 96% accuracy is obtained in extracting six key lip feature points in typical talking head video sequences when the tongue is not visible in the scene, and 90% accuracy when the tongue is visible.
DOI: 10.1109/HAVE.2003.1244716
Citations: 5
Neural network architecture for 3D object representation
A. Crétu, E. Petriu, G. Patry
The paper discusses a neural network architecture for 3D object modeling. A multi-layered feedforward structure taking the 3D coordinates of object points as inputs is employed to model the object space. Cascaded with a transformation neural network module, the proposed architecture can be used to generate and train 3D object models and to perform transformations, set operations, and object morphing. A possible application to object recognition is also presented.
DOI: 10.1109/HAVE.2003.1244721
Citations: 12
Musical noise reduction in speech using two-dimensional spectrogram enhancement
Zhong Lin, R. Goubran
This paper investigates the problem of "musical noise" and proposes a new algorithm to reduce it. Musical noise occurs in most spectral-estimation-based algorithms, such as spectral subtraction and the minimum mean-square error short-time spectral amplitude estimator (MMSE-STSA). To reduce this type of noise, a novel algorithm called two-dimensional spectrogram enhancement is proposed. A speech enhancement scheme is implemented by combining the proposed algorithm with the MMSE-STSA method. Spectrogram comparisons show that the proposed scheme effectively reduces musical noise relative to MMSE-STSA. SNR and PESQ evaluations show that the proposed method is superior to MMSE-STSA and to spectral subtraction with auditory masking.
DOI: 10.1109/HAVE.2003.1244726
Citations: 7
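Musical noise originates in plain spectral subtraction: each frequency bin has the noise estimate subtracted independently, bins that would go negative are clipped to a small floor, and the isolated bins that randomly survive ring as short tones. A minimal single-frame sketch of that baseline (the paper's two-dimensional spectrogram enhancement itself is not reproduced; the floor value is illustrative):

```python
def spectral_subtract(noisy_mag, noise_mag, floor=0.02):
    """Per-bin magnitude spectral subtraction with a spectral floor.
    The scattered bins that survive the subtraction by chance are
    the source of the 'musical noise' the paper targets; smoothing
    across both time and frequency suppresses such isolated peaks."""
    return [max(n - d, floor * n) for n, d in zip(noisy_mag, noise_mag)]
```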