Pub Date: 2014-12-01 | DOI: 10.1109/APSIPA.2014.7041620
S. Sakti, Y. Odagaki, Takafumi Sasakura, Graham Neubig, T. Toda, Satoshi Nakamura
Most automatic speech recognition (ASR) systems, which aim for perfect transcription of utterances, are trained and tuned by minimizing the word error rate (WER). In this framework, all errors (substitutions, deletions, insertions) on any word are treated uniformly, even though their impacts are not the same; the size of those impacts, and exactly what the differences are, remains unknown. Several studies have proposed alternatives to the WER metric, but no analysis has investigated how the human brain processes language and perceives the effect of mistaken ASR output. In this research we utilize event-related brain potential (ERP) studies and directly analyze brain activity in response to ASR errors. Our results reveal that the peak amplitudes of the positive shift following substitution and deletion violations are much larger than those following insertion violations. This finding indicates that humans perceive each error differently based on its impact on the whole sentence. Building on these results, we formulated a new weighted word error rate metric based on the ERP measurements: ERP-WWER. We re-evaluated ASR performance using the new ERP-WWER metric and compared and discussed the results against the standard WER.
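The abstract does not give the ERP-derived weights, but the general idea of a weighted WER can be sketched as a weighted Levenshtein alignment over word sequences. This is a minimal sketch: the function name and the per-error-type weights are illustrative placeholders, not the paper's ERP-WWER values.

```python
def weighted_wer(ref, hyp, w_sub=1.0, w_del=1.0, w_ins=1.0):
    """Weighted WER via a weighted Levenshtein alignment over word lists."""
    R, H = len(ref), len(hyp)
    # d[i][j] = minimum weighted edit cost aligning ref[:i] with hyp[:j]
    d = [[0.0] * (H + 1) for _ in range(R + 1)]
    for i in range(1, R + 1):
        d[i][0] = i * w_del          # all reference words deleted
    for j in range(1, H + 1):
        d[0][j] = j * w_ins          # all hypothesis words inserted
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            match = d[i - 1][j - 1] + (0.0 if ref[i - 1] == hyp[j - 1] else w_sub)
            d[i][j] = min(match, d[i - 1][j] + w_del, d[i][j - 1] + w_ins)
    return d[R][H] / R               # normalized by reference length
```

With all weights equal to 1.0 this reduces to the standard WER; the paper's finding suggests insertions would receive a smaller weight than substitutions and deletions.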
Title: "An event-related brain potential study on the impact of speech recognition errors"
Published in: Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific
Pub Date: 2014-12-01 | DOI: 10.1109/APSIPA.2014.7041635
Muhammad Sikandar Lal Khan, S. Réhman, P. L. Hera, Feng Liu, Haibo Li
In this work we present an interactive video conferencing system specifically designed to enhance the video teleconferencing experience of a pilot user. We use an Embodied Telepresence System (ETS), previously designed to enhance the teleconferencing experience of the collaborators, and deploy it in a novel scenario: improving the pilot user's experience during distance communication. The ETS adjusts the pilot user's view at the remote location (e.g., a remotely held conference or meeting). We developed a velocity-profile control for the ETS that is implicitly driven by the pilot user's head movements. An experiment tested whether the view-adjustment capability of the ETS increases the pilot user's collaboration experience in video conferencing. In the user study, participants (pilot users) interacted both through the ETS and through a traditional computer-based video conferencing tool. Overall, the study supports the effectiveness of our approach in enhancing the pilot user's video conferencing experience.
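The head-driven control could take many forms; the abstract gives no equations. As a hedged illustration only, a proportional mapping from head-yaw error to pan velocity with a dead zone and saturation is one common way to build such an implicit control (the function name, gain, and limits below are all hypothetical):

```python
def head_to_velocity(yaw_error_deg, v_max=30.0, gain=0.8, dead_zone_deg=2.0):
    """Map the pilot user's head-yaw error to a pan velocity (deg/s)."""
    if abs(yaw_error_deg) < dead_zone_deg:
        return 0.0                       # ignore small head jitter
    v = gain * yaw_error_deg             # proportional term
    return max(-v_max, min(v_max, v))    # saturate to the velocity limit
```

The dead zone keeps the remote view steady while the user's head is nearly still, and the saturation bounds the platform's motion.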
Title: "A pilot user's prospective in mobile robotic telepresence system"
Pub Date: 2014-12-01 | DOI: 10.1109/APSIPA.2014.7041695
Chen-Hao Wei, Chen-Kuo Chiang, S. Lai
We propose a novel depth-map refinement algorithm and generate multi-view video sequences from two-view video sequences for modern autostereoscopic displays. High-quality depth maps are critical for generating realistic content for virtual views. We propose an iterative depth refinement approach with joint error detection and correction to refine depth maps that are either estimated by an existing stereo matching method or provided by a depth-capturing device. Error detection targets two error types: across-view color-depth-inconsistency errors and local color-depth-inconsistency errors. The detected error pixels are then corrected by searching for appropriate candidates under several constraints. A trilateral filter in the refinement process incorporates intensity, spatial, and temporal terms into the filter weighting to enhance consistency across frames. The proposed view synthesis framework features a disparity-based view interpolation method to alleviate translucent artifacts and a directional filter to reduce aliasing around object boundaries. Experimental results show that the proposed algorithm effectively fixes errors in the depth maps, and that the refined depth maps, combined with the proposed view synthesis framework, significantly improve novel view synthesis on several benchmark datasets.
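The across-view consistency check can be sketched in one dimension: a left-view pixel should map, via its own disparity, to a right-view pixel carrying (nearly) the same disparity. This is a simplified illustration of the idea, not the paper's detector; the function name, the scanline simplification, and the threshold are assumptions.

```python
def detect_cross_view_errors(disp_left, disp_right, tau=1.0):
    """Flag left-view pixels whose disparity disagrees with the right view.

    disp_left / disp_right: per-pixel disparities along one scanline.
    Pixel x in the left view should correspond to x - d in the right view
    and find (nearly) the same disparity there; otherwise it is flagged.
    """
    errors = []
    for x, d in enumerate(disp_left):
        xr = int(round(x - d))
        if xr < 0 or xr >= len(disp_right) or abs(d - disp_right[xr]) > tau:
            errors.append(x)
    return errors
```

Flagged pixels would then be handed to the correction step, which searches candidate depths under the paper's constraints.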
Title: "Iterative depth recovery for multi-view video synthesis from stereo videos"
Pub Date: 2014-12-01 | DOI: 10.1109/APSIPA.2014.7041810
R. Banchuin, R. Chaisricharoen
This paper proposes an analytical formula for predicting the probability distribution of random variation in the high-frequency performance of scaled MOSFETs operated in the weak inversion region, taking into account physical defects induced by the manufacturing process. Furthermore, the correlations among the process-parameter-related random variables that contribute to such variation, and the effects of physical differences between N-type and P-type MOSFETs, are also considered. Since scaled MOSFETs are of interest, an up-to-date formula for the physical-defect-induced random variation in the parameters of such scaled devices is used as the basis. The proposed formula accurately predicts the probability distribution of random variation in the high-frequency performance of weak-inversion scaled MOSFETs with a confidence level of 99%. It is therefore an efficient alternative approach for the variability-aware design of various signal processing circuits and systems based on weak-inversion scaled MOSFETs.
Title: "Analytical prediction formula of random variation in high frequency performance of weak inversion scaled MOSFET"
Pub Date: 2014-12-01 | DOI: 10.1109/APSIPA.2014.7041643
Sudeng Hu, Lina Jin, C.-C. Jay Kuo
A method to adjust the mean-squared-error (MSE) value for coded video quality assessment is investigated in this work by incorporating subjective human visual experience. First, we propose a linear model between the mean opinion score (MOS) and a logarithmic function of the MSE of coded video over a range of coding rates. This model is validated by experimental data; after further simplification, it contains only one parameter, determined by video characteristics. Next, we adopt a machine learning method to learn this parameter. Specifically, we select features to classify video content into groups whose member videos are more homogeneous in their characteristics, so that a proper model parameter can be trained and predicted within each group. Experimental results on a coded video database demonstrate the effectiveness of the proposed algorithm.
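A one-parameter MOS-versus-log(MSE) model can be sketched and fit by least squares. The exact parameterization below (a fixed intercept of 5, base-10 logarithm, and the function names) is a hypothetical simplification for illustration; the paper's simplified model may be parameterized differently.

```python
import math

def predict_mos(mse, p, mos_max=5.0):
    """Hypothetical one-parameter model: MOS = mos_max - p * log10(MSE)."""
    return mos_max - p * math.log10(mse)

def fit_slope(mse_vals, mos_vals, mos_max=5.0):
    """Least-squares fit of the single slope parameter p from (MSE, MOS) pairs."""
    xs = [math.log10(m) for m in mse_vals]
    ys = [mos_max - s for s in mos_vals]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

In the paper's scheme, a slope of this kind would be predicted per content group from learned video features rather than fit directly on the test video.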
Title: "Compressed video quality assessment with modified MSE"
Pub Date: 2014-12-01 | DOI: 10.1109/APSIPA.2014.7041821
Mayoore S. Jaiswal, Jun Xie, Ming-Ting Sun
RGB-D (Kinect-style) cameras are novel low-cost sensing systems that capture RGB images along with per-pixel depth information. In this paper we investigate the use of such cameras for acquiring multiple images of an object from multiple viewpoints and building complete 3D object models. Such models have applications in a wide range of industries. We implemented a complete 3D object model construction process, with object segmentation, registration, global alignment, model denoising, and texturing, and studied the effects of these steps on the constructed 3D object models. We also developed a process for objective performance evaluation of the constructed models: we collected laser scan data as ground truth, using a Roland Picza LPX-600 laser scanner, to compare against the 3D models created by our process.
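One common way to score a reconstructed model against a ground-truth scan is a nearest-neighbor point-cloud RMSE. This is a brute-force sketch for illustration (the abstract does not specify the paper's evaluation metric, and real pipelines would use a spatial index rather than an exhaustive search):

```python
import math

def cloud_rmse(model_pts, gt_pts):
    """RMSE from each model point to its nearest ground-truth point."""
    total = 0.0
    for p in model_pts:
        # squared distance to the closest ground-truth point (brute force)
        nearest_sq = min(sum((a - b) ** 2 for a, b in zip(p, q)) for q in gt_pts)
        total += nearest_sq
    return math.sqrt(total / len(model_pts))
```

A symmetric variant (averaging both directions) is often preferred, since a model that covers only part of the object can still score well in one direction.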
Title: "3D object modeling with a Kinect camera"
Pub Date: 2014-12-01 | DOI: 10.1109/APSIPA.2014.7041598
K. Noro, Koichi Ito, Yukari Yanagisawa, M. Sakamoto, S. Mori, K. Shiga, T. Kodama, T. Aoki
This paper proposes a method for detecting contrast agents in ultrasound image sequences, toward developing diagnostic ultrasound imaging systems for tumor diagnosis. Conventional methods detect ultrasound contrast agents by simple subtraction of ultrasound images, and therefore require image sequences both with and without contrast agents; even slight subject motion introduces significant errors into their detection results. The proposed method instead employs spatio-temporal analysis of the pixel intensity variation over several frames, together with motion estimation to select optimal image frames for detecting contrast agents. Through a set of experiments using mice, we demonstrate that the proposed method performs efficiently compared with the conventional methods.
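The simplest spatio-temporal statistic over several frames is the per-pixel intensity variance, which is high where contrast agents flow and low in static tissue. The sketch below illustrates that idea only; the paper's actual analysis and its motion-compensated frame selection are more involved.

```python
def temporal_variance_map(frames):
    """Per-pixel intensity variance across a list of same-sized 2D grids."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    var = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [f[y][x] for f in frames]
            mean = sum(vals) / n
            var[y][x] = sum((v - mean) ** 2 for v in vals) / n
    return var
```

Thresholding such a map would yield candidate contrast-agent pixels without needing a separate agent-free reference sequence.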
Title: "Detecting contrast agents in ultrasound image sequences for tumor diagnosis"
Pub Date: 2014-12-01 | DOI: 10.1109/APSIPA.2014.7041587
Mading Li, Jiaying Liu, Chong Ruan, Lu Liu, Zongming Guo
In this paper, we introduce a novel block-based multiscale error concealment method using low-rank completion. The proposed method searches for similar blocks and uses low-rank completion to recover the missing pixels. To make full use of the hidden redundant information in images, we collect more similar blocks by building an image pyramid; the blocks collected from the pyramid are more similar to each other, which leads to more accurate recovery. Moreover, instead of recovering the missing block all at once, we propose a ring-like iterative process that progressively reduces the number of unknown pixels and further enhances the recovery result. Experimental results demonstrate the effectiveness of the proposed method.
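The ring-like iteration can be made concrete as a traversal order: process the missing block's pixels ring by ring from the boundary inward, so each newly recovered pixel neighbors already-known content. This sketch only generates that order (the function name is an assumption; the recovery of each pixel would be done by the low-rank completion step):

```python
def ring_order(h, w):
    """Visit the pixels of an h-by-w block ring by ring, outermost first."""
    order = []
    top, bottom, left, right = 0, h - 1, 0, w - 1
    while top <= bottom and left <= right:
        for x in range(left, right + 1):
            order.append((top, x))                 # top edge, left to right
        for y in range(top + 1, bottom + 1):
            order.append((y, right))               # right edge, downward
        if bottom > top:
            for x in range(right - 1, left - 1, -1):
                order.append((bottom, x))          # bottom edge, right to left
        if right > left:
            for y in range(bottom - 1, top, -1):
                order.append((y, left))            # left edge, upward
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return order
```

Processing in this order means the innermost pixels, which are hardest to infer, are recovered last, when the most surrounding information is available.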
Title: "Block-based multiscale error concealment using low-rank completion"
Pub Date: 2014-12-01 | DOI: 10.1109/APSIPA.2014.7041525
Twe Ta Oo, T. Onoye
In this paper, we first propose an effective time-domain audio scrambling method based on the pre-order traversal of a complete binary tree. The proposed method is fast and simple and achieves a good scrambling effect. Then, to strengthen the anti-decryption capability, we present a wavelet-domain scheme that uses not only pre-order but also in-order and post-order based scrambling. An audio signal is first wavelet-decomposed to obtain layers of wavelet coefficients; the coefficients in each layer are then scrambled by one of the three methods, chosen at random. Without knowledge of the correct wavelet decomposition parameters and the scrambling method used for each layer, no one can successfully descramble the signal. Moreover, the new scheme achieves progressive scrambling: it can generate audio outputs with different quality levels by controlling the scrambling degree as required.
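The pre-order idea can be sketched as a permutation: treat the sample indices as nodes of a complete binary tree (children of node i at 2i+1 and 2i+2, a common array layout assumed here for illustration) and read them out in pre-order. This is a minimal sketch of the principle, not the paper's full method:

```python
def preorder_perm(n):
    """Pre-order index permutation of a complete binary tree with n nodes."""
    perm, stack = [], [0]
    while stack:
        i = stack.pop()
        if i < n:
            perm.append(i)
            stack.append(2 * i + 2)   # right child, visited after the left
            stack.append(2 * i + 1)   # left child, visited next
    return perm

def scramble(samples):
    """Reorder samples by the pre-order permutation of their indices."""
    return [samples[i] for i in preorder_perm(len(samples))]

def descramble(samples):
    """Invert the pre-order permutation to recover the original order."""
    perm = preorder_perm(len(samples))
    out = [None] * len(samples)
    for k, i in enumerate(perm):
        out[i] = samples[k]
    return out
```

Because the traversal is a fixed bijection on indices, descrambling is exact for anyone who knows the tree layout; the in-order and post-order variants simply swap the visit order within the same structure.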
Title: "Progressive audio scrambling via complete binary tree's traversal and wavelet transform"
Pub Date: 2014-12-01 | DOI: 10.1109/APSIPA.2014.7041655
Ryan Paderna, T. Higashino, M. Okada
This paper proposes channel estimation for ISDB-T based on an oversampling Modified Orthogonal Matching Pursuit (MOMP) over a fractional-delay TU6 channel. Simulations show that the proposed method achieves better bit-error-rate performance than the conventional method. In addition, the oversampling MOMP requires less computational cost to improve performance under fractional delay.
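For context, the greedy pursuit family the paper builds on can be sketched with plain matching pursuit: repeatedly pick the unit-norm dictionary atom most correlated with the residual and subtract its contribution. This sketch omits the least-squares re-fit over all selected atoms that makes the pursuit "orthogonal", and says nothing about the paper's specific modification or oversampling; all names here are illustrative.

```python
def matching_pursuit(signal, dictionary, n_iter=3):
    """Greedy sparse approximation of `signal` over unit-norm atoms.

    dictionary: list of atoms (same length as signal, assumed unit norm).
    Returns {atom_index: coefficient} and the final residual.
    """
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        best, best_c = None, 0.0
        for k, atom in enumerate(dictionary):
            c = sum(r * a for r, a in zip(residual, atom))   # correlation
            if abs(c) > abs(best_c):
                best, best_c = k, c
        if best is None:
            break                                            # residual is orthogonal to all atoms
        coeffs[best] = coeffs.get(best, 0.0) + best_c
        residual = [r - best_c * a for r, a in zip(residual, dictionary[best])]
    return coeffs, residual
```

In a channel-estimation setting, the atoms would correspond to candidate delay taps, and an oversampled dictionary lets the pursuit resolve delays that fall between sample instants.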
Title: "Improved channel estimation for ISDB-T using Modified Orthogonal Matching Pursuit over fractional delay TU6 channel"