
Latest publications: 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)

Direction-aware target speaker extraction with a dual-channel system based on conditional variational autoencoders under underdetermined conditions
Rui Wang, Li Li, T. Toda
In this paper, we deal with a dual-channel target speaker extraction (TSE) problem under underdetermined conditions. For the dual-channel system, the generalized sidelobe canceller (GSC) is a commonly used structure for estimating a blocking matrix (BM) to generate interference, and geometric source separation (GSS) can be used as an implementation of BM estimation utilizing directional information. However, the performance of conventional GSS methods is limited under underdetermined conditions because they lack a powerful source model. In this paper, we propose a dual-channel TSE method that combines target selection based on geometric constraints, more powerful source modeling, and nonlinear postprocessing. The target directional information is used as a geometric constraint, and two conditional variational autoencoders (CVAEs) are used to model a single speaker's speech and the interference mixture speech. For the postprocessing, an ideal ratio time-frequency (T-F) mask estimated from the separated interference mixture speech is used to extract the target speaker's speech. The experimental results demonstrate that the proposed method achieves 6.24 dB and 8.37 dB improvements over the baseline method in signal-to-distortion ratio (SDR) and source-to-interference ratio (SIR), respectively, under strong reverberation with a 470 ms reverberation time.
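The ideal ratio T-F mask postprocessing step can be illustrated with a minimal sketch. The standard ideal-ratio-mask form below, with hypothetical `target_mag` and `interference_mag` magnitude spectrograms, is an assumption and not necessarily the authors' exact formulation:

```python
import numpy as np

def ideal_ratio_mask(target_mag, interference_mag, eps=1e-8):
    """Standard ideal ratio mask computed from magnitude spectrograms."""
    return target_mag / (target_mag + interference_mag + eps)

def apply_mask(mixture_spec, mask):
    """Element-wise masking of a mixture spectrogram."""
    return mixture_spec * mask

# toy example: 2 frequency bins x 3 frames
target = np.array([[1.0, 2.0, 0.5],
                   [0.0, 1.0, 3.0]])
interf = np.array([[1.0, 0.0, 0.5],
                   [2.0, 1.0, 1.0]])
mask = ideal_ratio_mask(target, interf)
extracted = apply_mask(target + interf, mask)
```

The mask is bounded in [0, 1] by construction, so masking can only attenuate, never amplify, T-F bins of the mixture.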
DOI: 10.23919/APSIPAASC55919.2022.9979881 · Published 2022-11-07
Citations: 1
DCAN: Deep Consecutive Attention Network for Video Super Resolution
Talha Saleem, Sovann Chen, S. Aramvith
Slow motion is visually attractive in video applications and receives growing attention in video super-resolution (VSR). The goal is to generate the high-resolution (HR) center frame, together with its neighboring HR frames, from two low-resolution (LR) frames. Two sub-tasks are required: video super-resolution (VSR) and video frame interpolation (VFI). However, the interpolation approach does not successfully extract low-level features, so it cannot achieve acceptable space-time video super-resolution results. The restoration performance of existing systems is therefore constrained because they rarely consider the spatial-temporal correlation and the long-term temporal context concurrently. To this end, we propose a deep consecutive attention network-based method that generates attentive features to produce HR slow-motion frames. A channel attention module and an attentive temporal feature module are designed to improve the perceptual quality of predicted interpolation feature frames. The experimental results show the proposed method outperforms the state-of-the-art baseline method by 0.17 dB in average PSNR.
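The reported gain is measured in average PSNR. As a reference point, the standard PSNR definition can be sketched as follows (the 8-bit `peak=255` assumption is ours):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
est = np.full((4, 4), 2.0)   # uniform error of 2 -> MSE = 4
# psnr(ref, est) = 10 * log10(255**2 / 4) ≈ 42.11 dB
```

A 0.17 dB average-PSNR gap corresponds to a small but consistent reduction in mean squared error across the test set.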
DOI: 10.23919/APSIPAASC55919.2022.9979823 · Published 2022-11-07
Citations: 0
Encrypted JPEG Image Retrieval via Huffman-code Based Self-Attention Networks
Zhixun Lu, Qihua Feng, Peiya Li
Image retrieval has been widely used in daily life. In recent years, with increasing awareness of privacy protection, encrypted image retrieval has also been gradually developed. In this paper, we propose a new encrypted JPEG image retrieval scheme, named Huffman-code Based Self-Attention Networks (HBSAN), which conducts image retrieval while effectively protecting image privacy. Specifically, we first extract Huffman-code histograms directly from cipher-images, which are encrypted during JPEG compression by jointly using a new orthogonal transformation, a permutation cipher, and a stream cipher. We then employ self-attention neural networks to mine the deep relations and retrieve the cipher-images. In our retrieval model, we design a self-attention multi-layer perceptron module, called SAMLP, to effectively learn global dependencies within the representations of cipher-images. Extensive experiments show that our encryption algorithm is compression-friendly and leaks no information, and that HBSAN significantly outperforms other state-of-the-art models in retrieval performance.
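A Huffman-code histogram feature can be sketched generically as a histogram over code lengths; the actual histogram design in the paper may differ, and the helper name below is illustrative:

```python
from collections import Counter

def code_length_histogram(huffman_codes, max_len=16):
    """Normalized histogram of Huffman code lengths, a simple
    cipher-domain feature. `huffman_codes` is a list of bit-strings,
    e.g. ['101', '0', '1100']; bins cover lengths 1..max_len."""
    counts = Counter(len(c) for c in huffman_codes)
    total = sum(counts.values()) or 1
    return [counts.get(length, 0) / total for length in range(1, max_len + 1)]

hist = code_length_histogram(['101', '0', '1100', '11'])
# lengths 3, 1, 4, 2 each appear once -> first four bins are 0.25
```

Because such a histogram depends only on code lengths, not on the plaintext pixel values, it can be computed directly from the encrypted bitstream.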
DOI: 10.23919/APSIPAASC55919.2022.9979814 · Published 2022-11-07
Citations: 0
Multi-branch Learning for Noisy and Reverberant Monaural Speech Separation
Chao Ma, Dongmei Li
With the rapid development of deep learning approaches, much progress has been made on speech enhancement, speech dereverberation, and monaural multi-speaker speech separation to solve the cocktail party problem. Several effective methods have been proposed for monaural speech separation in noisy and reverberant environments. However, few studies exploit the correlations between anechoic speech and reverberant speech. In this work, the structure of a popular separation system is deconstructed, and a multi-branch learning method is proposed that forces the network to exploit the correlations between anechoic speech and the corresponding reverberant speech. The results show that multi-branch learning improves the separation performance of different networks by 0.7 dB with Conv-TasNet on the WHAMR! dataset.
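Separation gains such as the reported 0.7 dB are typically measured with an SDR-style metric. A common scale-invariant SDR (SI-SDR) sketch is shown below, though the exact metric used in the paper may differ:

```python
import numpy as np

def si_sdr(reference, estimate, eps=1e-8):
    """Scale-invariant signal-to-distortion ratio in dB."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # project the estimate onto the reference to get the target component
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10.0 * np.log10((np.sum(target ** 2) + eps) / (np.sum(noise ** 2) + eps))

x = np.sin(np.linspace(0, 100, 16000))
clean_score = si_sdr(x, 0.5 * x)        # rescaling barely hurts the score
noisy_score = si_sdr(x, x + 0.1 * np.cos(np.linspace(0, 100, 16000)))
```

Scale invariance matters because separation networks often output signals with arbitrary gain; SI-SDR scores the waveform shape rather than its level.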
DOI: 10.23919/APSIPAASC55919.2022.9980244 · Published 2022-11-07
Citations: 0
Multi-resolution GPR clutter suppression method based on low-rank and sparse decomposition
Yanjie Cao, Xiaopeng Yang, T. Lan
The clutter encountered in ground-penetrating radar (GPR) seriously affects the detection and identification of subsurface targets, and its suppression has been widely studied in recent years. A multi-resolution low-rank and sparse decomposition (LRSD) method is introduced in this paper. First, the raw GPR data is decomposed by the stationary wavelet transform (SWT) to obtain different sub-bands. Then, robust non-negative matrix factorization (RNMF) is applied to the approximation and horizontal wavelet sub-bands to extract the sparse target parts. Next, wavelet soft-threshold de-noising is applied to the vertical and diagonal wavelet sub-bands. Finally, the inverse wavelet transform of the processed sub-bands is performed to reconstruct the target signal. The proposed method is compared with the subspace method and LRSD methods on both simulated data and real collected data. Visual and quantitative results show that the proposed method has better clutter suppression performance.
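The wavelet soft-threshold de-noising step relies on the standard soft-threshold operator, which can be sketched directly:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold operator used in wavelet de-noising: shrinks each
    coefficient toward zero by t and zeroes those with magnitude below t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([-3.0, -0.5, 0.0, 0.4, 2.5])
# soft_threshold(c, 1.0) -> [-2.0, 0.0, 0.0, 0.0, 1.5]
```

Small coefficients, which mostly carry noise in the vertical and diagonal sub-bands, are suppressed while large target responses are only slightly attenuated.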
DOI: 10.23919/APSIPAASC55919.2022.9980215 · Published 2022-11-07
Citations: 1
Fine-Tuning BERT for Question and Answering Using PubMed Abstract Dataset
Saeyeon Cheon, Insung Ahn
The coronavirus, which first emerged in China in 2019, spread worldwide and eventually became a pandemic. Amid intense public interest, misinformation about the coronavirus has been pouring out on the Internet. We developed a Q&A processing technique by building a dataset based on PubMed paper abstracts so that people can easily get the right information. We fine-tuned BioBERT, one of the BERT models that reached SOTA performance in biomedical Q&A tasks. It answered questions about the coronavirus with high accuracy. In the future, we will develop our technology to handle Q&A not only in English but also in multiple languages. This work will help people who speak different languages easily obtain correct information amidst confusing data.
DOI: 10.23919/APSIPAASC55919.2022.9980097 · Published 2022-11-07
Citations: 1
Parameterization of Dominant Spectral Peak Trajectory for Whisper Speech Recognition
Chang Feng, Xiaolong Wu, Mingxing Xu, T. Zheng
Automatic speech recognition (ASR) systems trained on normal speech generally suffer from performance degradation on whisper speech. To solve this problem, this paper concentrates on utilizing factors shared between normal and whisper speech to construct a whisper speech recognizer from normal speech data. We propose to parameterize the dominant spectral peak trajectory (Ppeak) to capture the similarities and concatenate it to the traditional Mel-frequency cepstral coefficients (MFCC) and human factor cepstral coefficients (HFCC), respectively, to form new features. The proposed features benefit the accuracy of whisper speech recognition. Performance improves further when the similarity is enhanced by removing low-frequency information. Experimental results show that, for HFCC, the performance gap between the match and mismatch scenarios was relatively reduced by 90.31% in word error rate (WER) after similarity enhancement at a cut-off frequency of 500 Hz. Furthermore, we ultimately achieved a relative WER reduction of 69.60% in the mismatch scenario compared with conventional MFCC, even without whisper speech data for training.
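A minimal sketch of tracking the dominant spectral peak per frame from a magnitude spectrogram is shown below; the paper's Ppeak parameterization is more elaborate, and the sampling-rate and FFT settings here are assumptions:

```python
import numpy as np

def dominant_peak_trajectory(mag_spec, sr=16000, n_fft=512):
    """Per-frame frequency (Hz) of the largest-magnitude bin.

    mag_spec: (n_bins, n_frames) magnitude spectrogram.
    """
    peak_bins = np.argmax(mag_spec, axis=0)
    return peak_bins * sr / n_fft

spec = np.zeros((257, 3))
spec[10, 0] = 1.0   # frame 0 peaks at bin 10
spec[32, 1] = 2.0   # frame 1 peaks at bin 32
spec[5, 2] = 0.5    # frame 2 peaks at bin 5
traj = dominant_peak_trajectory(spec)
# traj -> [312.5, 1000.0, 156.25] Hz (bins 10, 32, 5 at 16 kHz, 512-point FFT)
```

The resulting trajectory is one scalar per frame, so it can simply be appended to each frame's MFCC or HFCC vector.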
DOI: 10.23919/APSIPAASC55919.2022.9980259 · Published 2022-11-07
Citations: 1
Correlation Loss for MOS Prediction of Synthetic Speech
Beibei Hu, Qiang Li
For the speech mean opinion score (MOS) prediction task, many deep-learning-based methods have been developed. Generally, system-level and utterance-level mean squared error (MSE), linear correlation coefficient (LCC), Spearman rank correlation coefficient (SRCC), and Kendall tau rank correlation (KTAU) are used as evaluation metrics. However, we find that the objective functions of many MOS prediction networks are MAE- or MSE-based, without an explicit correlation objective. This paper investigates different correlation losses for voice MOS prediction networks. Based on the datasets and the SSL-MOS baseline system provided by the VoiceMOS Challenge 2022, we employ different auxiliary correlation losses to train the MOS prediction network. The experimental results show that the suggested auxiliary correlation losses improve the performance of the SSL-MOS network on the six correlation metrics. Compared with the two best-performing systems in the VoiceMOS Challenge 2022, our approach achieves comparable performance on the system-level correlation metrics with a simpler system architecture.
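One plausible form of an auxiliary correlation loss is one minus Pearson's r; the exact losses used in the paper may differ:

```python
import numpy as np

def pearson_corr_loss(pred, target, eps=1e-8):
    """Auxiliary loss 1 - r: minimized when predictions correlate
    perfectly (and positively) with the target MOS labels."""
    p = pred - pred.mean()
    t = target - target.mean()
    r = np.sum(p * t) / (np.sqrt(np.sum(p ** 2) * np.sum(t ** 2)) + eps)
    return 1.0 - r

mos = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# a perfectly correlated (shifted, scaled) prediction gives near-zero loss
loss = pearson_corr_loss(2.0 * mos + 0.5, mos)
```

Unlike MSE, this loss ignores global offset and scale, directly rewarding the ranking behavior that LCC and SRCC measure at evaluation time.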
DOI: 10.23919/APSIPAASC55919.2022.9980182 · Published 2022-11-07
Citations: 1
Restoring Edge and Color using Weighted Near-Infrared Image and Color Transmission Maps for Robust Haze Removal
Onhi Kato, Akira Kubota
In recent years, various haze removal methods based on atmospheric scattering models have been proposed. Most methods target strong-haze images, in which light is scattered equally in all color channels. This paper proposes a haze removal method for weak-haze images that uses near-infrared (NIR) images. The proposed method first restores the edges of color images by fusing weighted NIR images. Second, it estimates transmission maps for all color channels based on a wavelength-dependent scattering model and restores the color of the edge-restored image using the estimated transmission maps. Finally, the edge-restored and color-restored images are blended. Qualitative and quantitative evaluations demonstrate that the proposed method restores edges and colors in weak-haze images more naturally than conventional methods.
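Atmospheric-scattering-model methods recover the scene radiance J from the hazy observation I = J·t + A·(1 − t), given a transmission map t and airlight A. A standard inversion sketch (not the authors' full pipeline, which estimates one transmission map per color channel) is:

```python
import numpy as np

def recover_scene_radiance(hazy, transmission, airlight, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1-t):
    J = (I - A) / max(t, t_min) + A, with t clamped to avoid
    amplifying noise where transmission is near zero."""
    t = np.clip(transmission, t_min, 1.0)[..., None]  # broadcast over channels
    return (hazy - airlight) / t + airlight

hazy = np.array([[[0.6, 0.6, 0.6]]])   # one hazy pixel, RGB in [0, 1]
t = np.array([[0.5]])                  # transmission map
A = np.array([0.8, 0.8, 0.8])          # global airlight
J = recover_scene_radiance(hazy, t, A)
# J = (0.6 - 0.8) / 0.5 + 0.8 = 0.4 per channel
```

Wavelength-dependent variants of this model use a different transmission map per channel, which is what allows color as well as contrast to be restored.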
DOI: 10.23919/APSIPAASC55919.2022.9979960 · Published 2022-11-07
Citations: 0
Camera-Based Log System for Human Physical Distance Tracking in Classroom
S. Deepaisarn, Angkoon Angkoonsawaengsuk, Charn Arunkit, Chayud Srisumarnk, Krongkan Nimmanwatthana, Nanmanas Linphrachaya, Nattapol Chiewnawintawat, Rinrada Tanthanathewin, Sivakorn Seinglek, Suphachok Buaruk, Virach Sornlertlamvanich
During the COVID-19 pandemic, the indoor physical distancing protocol has been one of the recommendations for people to avoid close contact with each other in order to prevent contagious clusters. This paper proposes an end-to-end camera-based human physical distancing recording system for an indoor environment, specifically a classroom. The recording system automatically traces the locations of persons and the directions of their movements in a classroom, as well as their on- and off-seat activities. No personal identity is kept in the recording log system; only the locations of individuals at each timestamp are obtained, so the spatial and temporal distributions can be studied further. In this paper, we illustrate the overall workflow of the human and seat detection as well as the log system that stores human physical distancing actions.
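Given per-timestamp floor coordinates from such a log, flagging person pairs closer than a distance threshold reduces to a pairwise-distance computation. The function below is an illustrative sketch, not part of the described system:

```python
import numpy as np

def close_pairs(positions, threshold=1.5):
    """Return index pairs (i, j), i < j, whose Euclidean floor distance
    is below `threshold` metres. positions: (n_people, 2) array."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)        # (n, n) distance matrix
    i, j = np.triu_indices(len(positions), k=1)  # upper triangle, no self-pairs
    return [(int(a), int(b)) for a, b in zip(i, j) if dist[a, b] < threshold]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
pairs = close_pairs(pts)
# only persons 0 and 1 are within 1.5 m of each other -> [(0, 1)]
```

Because the log stores only anonymous positions per timestamp, such pairwise statistics can be aggregated over time without retaining any identity information.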
DOI: 10.23919/APSIPAASC55919.2022.9980055 | Published: 2022-11-07
Citations: 0
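The core of a distancing log like the one described is a per-timestamp check of pairwise distances between detected persons on the floor plane. As a hedged sketch (the function name, the 1.5 m threshold, and the log-entry layout are illustrative assumptions, not the paper's design), a minimal version could be:

```python
import itertools
import math

def close_pairs(positions, threshold=1.5):
    """Return index pairs of persons whose floor-plane (x, y)
    positions are closer than `threshold` metres."""
    pairs = []
    for (i, p), (j, q) in itertools.combinations(enumerate(positions), 2):
        if math.dist(p, q) < threshold:
            pairs.append((i, j))
    return pairs

# one anonymized log entry: timestamp plus positions, no identities
log_entry = {"t": "2022-11-07T09:00:00",
             "positions": [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]}
violations = close_pairs(log_entry["positions"])  # → [(0, 1)]
```

Storing only `(timestamp, position)` tuples, as the abstract describes, keeps the log identity-free while still supporting later spatial and temporal analysis.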
Journal
2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)