{"title":"A novel algorithm for synchronizing audio and video streams in MPEG-2 system layer","authors":"Shereen M. Mosharafa, G. A. Ebrahim, A. Zekry","doi":"10.1109/ICCES.2014.7030945","DOIUrl":null,"url":null,"abstract":"Audio-video synchronization is a very common quality issue that has a direct effect on the end-user's experience. Audio and video streams are supposed to be synchronized at the transmitter. However, they might be received out of synchronization. In general, loss of synchronization between audio and video streams may be attributed to several factors such as incorrect time-stamping at the transmitter, frame loss, and the different delays introduced by the signal processing and post-processing functions on each stream. Unfortunately, the video pipeline is much complicated as compared with the audio pipeline. Hence, during the journey of the streams from the transmitter to the presentation unit at the receiver, they may be susceptible to different processing delays. These different delays can make the two streams finally presented out of phase. Different algorithms were developed for preserving synchronization at the receiver side in order to provide the best possible perceptual quality. However, most of these algorithms rely on the encoded data of the bit streams. Hence, this paper introduces a new audio-video synchronization algorithm that exploits some features in MPEG-2 system layer. It addresses the misalignment of audio and video streams occurred due to the different processing and post-processing delays. The proposed algorithm relies on both the system clock reference and the relative alignment between the audio and video streams.","PeriodicalId":339697,"journal":{"name":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCES.2014.7030945","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Audio-video synchronization is a very common quality issue that has a direct effect on the end-user's experience. Audio and video streams are supposed to be synchronized at the transmitter. However, they might be received out of synchronization. In general, loss of synchronization between audio and video streams may be attributed to several factors, such as incorrect time-stamping at the transmitter, frame loss, and the different delays introduced by the signal-processing and post-processing functions applied to each stream. Unfortunately, the video pipeline is much more complicated than the audio pipeline. Hence, during their journey from the transmitter to the presentation unit at the receiver, the streams may be subjected to different processing delays. These differing delays can cause the two streams to be presented out of phase. Different algorithms have been developed for preserving synchronization at the receiver side in order to provide the best possible perceptual quality. However, most of these algorithms rely on the encoded data of the bit streams. Hence, this paper introduces a new audio-video synchronization algorithm that exploits features of the MPEG-2 system layer. It addresses the misalignment of audio and video streams that occurs due to the different processing and post-processing delays. The proposed algorithm relies on both the system clock reference and the relative alignment between the audio and video streams.
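To make the last point concrete, the following is a minimal sketch, not the authors' implementation, of how a receiver can combine the MPEG-2 system clock reference (SCR, which drives the receiver's 90 kHz system time clock) with the relative alignment of the audio and video presentation time stamps (PTS) to detect lip-sync skew. The function name, the 80 ms threshold, and the delay-one-stream correction policy are illustrative assumptions only.

```c
/*
 * Illustrative sketch: detect audio/video skew from PTS values and the
 * receiver's system time clock (STC), which is locked to the SCR.
 * All timestamps are in 90 kHz MPEG-2 system clock ticks.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define MPEG_TICKS_PER_MS 90   /* 90 kHz system clock              */
#define SKEW_THRESHOLD_MS 80   /* assumed perceptibility threshold */

typedef enum { SYNC_OK, DELAY_AUDIO, DELAY_VIDEO } sync_action_t;

/* Decide how to realign the streams given the PTS of the audio and
 * video units currently being presented.  Positive skew means video
 * is presented later than audio (audio leads), so audio is held back;
 * negative skew means video leads, so video is held back. */
static sync_action_t check_av_sync(int64_t audio_pts, int64_t video_pts,
                                   int64_t stc, int64_t *skew_ms)
{
    /* Relative alignment between the two streams. */
    *skew_ms = (video_pts - audio_pts) / MPEG_TICKS_PER_MS;

    (void)stc;  /* the STC gates *when* each unit is presented; the  */
                /* PTS difference gives the relative misalignment    */

    if (llabs((long long)*skew_ms) <= SKEW_THRESHOLD_MS)
        return SYNC_OK;
    return (*skew_ms > 0) ? DELAY_AUDIO : DELAY_VIDEO;
}

int main(void)
{
    int64_t skew;
    /* Example: video PTS lags audio PTS by 9000 ticks (100 ms). */
    sync_action_t a = check_av_sync(900000, 909000, 899000, &skew);
    printf("skew = %lld ms, action = %d\n", (long long)skew, (int)a);
    return 0;
}
```

In this sketch the correction acts on presentation scheduling rather than on the encoded bit streams, which mirrors the abstract's point that the proposed algorithm works at the system layer instead of relying on the encoded data.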