
Latest Publications: 2013 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)

Applications of Image Recognition for Real-Time Water Level and Surface Velocity
Franco Lin, Wen-Yi Chang, Lung-Cheng Lee, Hung-Ta Hsiao, W. Tsai, J. Lai
In this paper, we present two types of real-time water monitoring systems based on image processing: water level recognition and surface velocity recognition. According to bridge failure investigations, river floods often pose a potential risk to bridges, and scouring can undermine pier foundations and cause the structures to collapse. It is therefore very important to develop monitoring techniques for bridge safety in the field. In this study, we installed two high-resolution cameras at an in-situ bridge site to obtain real-time water level and surface velocity images. For water level recognition, we use image binarization, character recognition, and water line detection. For surface velocity recognition, the proposed system applies the particle image velocimetry (PIV) method, recognizing the water surface velocity through cross-correlation analysis. Finally, the proposed systems were used to record and measure variations of the water level and surface velocity over a period of three days. The results show that the proposed systems have the potential to provide real-time information on water level and surface velocity during flood periods.
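As a rough illustration of the cross-correlation step that PIV-style surface-velocity estimation relies on, the sketch below estimates the pixel displacement between two grayscale interrogation windows taken from consecutive frames. The function name and window handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def piv_displacement(patch_a, patch_b):
    """Estimate the shift between two 2-D interrogation windows from
    consecutive frames via FFT-based cross-correlation (peak = displacement)."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    corr = np.fft.fftshift(corr)
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    dy, dx = peak - np.array(corr.shape) // 2
    return dx, dy  # convert to m/s with the camera scale and frame interval
```

Dividing the water surface into such windows and converting the pixel shifts with the camera calibration and frame interval yields a surface-velocity field.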
{"title":"Applications of Image Recognition for Real-Time Water Level and Surface Velocity","authors":"Franco Lin, Wen-Yi Chang, Lung-Cheng Lee, Hung-Ta Hsiao, W. Tsai, J. Lai","doi":"10.1109/ISM.2013.49","DOIUrl":"https://doi.org/10.1109/ISM.2013.49","url":null,"abstract":"In this paper, we present two types of the real-time water monitoring system using the image processing technology, the water level recognition and the surface velocity recognition. According to the bridge failure investigation, floods in the river often pose potential risk to bridges, and scouring could undermine the pier foundation and cause the structures to collapse. It is very important to develop monitoring techniques for bridge safety in the field. In this study, we installed two high-resolution cameras on the in-situ bridge site to get the real-time water level and surface velocity image. For the water level recognition, we use the image processing techniques of the image binarization, character recognition, and water line detection. For the surface velocity recognition, the proposed system apply the PIV(Particle Image Velocimetry, PIV) method to obtain the recognition of the water surface velocity by the cross correlation analysis. Finally, the proposed systems are used to record and measure the variations of the water level and surface velocity for a period of three days. The good results show that the proposed systems have potential to provide real-time information of water level and surface velocity during flood periods.","PeriodicalId":6311,"journal":{"name":"2013 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)","volume":"9 1 1","pages":"259-262"},"PeriodicalIF":0.0,"publicationDate":"2013-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78266870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
Searching for Near-Duplicate Video Sequences from a Scalable Sequence Aligner
Leonardo S. de Oliveira, Zenilton K. G. Patrocínio, S. Guimarães, G. Gravier
Near-duplicate video sequence identification consists of identifying the real positions of a specific video clip in a video stream stored in a database. To address this problem, we propose a new approach based on a scalable sequence aligner borrowed from proteomics. Sequence alignment is performed on symbolic representations of features extracted from the input videos, based on an algorithm originally applied to bioinformatics. Experimental results demonstrate that our method achieved 94% recall with 100% precision, with an average search time of about 1 second.
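To make the alignment idea concrete, here is a minimal Smith-Waterman-style local alignment over symbolic sequences (e.g. quantized frame features), which is the general family of aligners borrowed from bioinformatics; it is not the paper's scalable aligner, and all names and scoring values are illustrative.

```python
import numpy as np

def local_align(query, stream, match=2, mismatch=-1, gap=-1):
    """Local alignment of a symbolic query clip against a longer symbolic
    video stream; returns the best score and the end position in the stream."""
    n, m = len(query), len(stream)
    H = np.zeros((n + 1, m + 1))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if query[i - 1] == stream[j - 1] else mismatch
            H[i, j] = max(0.0, H[i - 1, j - 1] + s,
                          H[i - 1, j] + gap, H[i, j - 1] + gap)
    i, j = np.unravel_index(np.argmax(H), H.shape)
    return H[i, j], j  # score and approximate end frame of the match
```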
{"title":"Searching for Near-Duplicate Video Sequences from a Scalable Sequence Aligner","authors":"Leonardo S. de Oliveira, Zenilton K. G. Patrocínio, S. Guimarães, G. Gravier","doi":"10.1109/ISM.2013.42","DOIUrl":"https://doi.org/10.1109/ISM.2013.42","url":null,"abstract":"Near-duplicate video sequence identification consists in identifying real positions of a specific video clip in a video stream stored in a database. To address this problem, we propose a new approach based on a scalable sequence aligner borrowed from proteomics. Sequence alignment is performed on symbolic representations of features extracted from the input videos, based on an algorithm originally applied to bio-informatics. Experimental results demonstrate that our method performance achieved 94% recall with 100% precision, with an average searching time of about 1 second.","PeriodicalId":6311,"journal":{"name":"2013 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)","volume":"21 1","pages":"223-226"},"PeriodicalIF":0.0,"publicationDate":"2013-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80337534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Towards More Robust Commutative Watermarking-Encryption of Images
R. Schmitz, Shujun Li, C. Grecos, Xinpeng Zhang
Histogram-based watermarking schemes are invariant against pixel permutations and can be combined with permutation-based ciphers. However, typical histogram-based watermarking schemes that compare histogram bins are prone to de-synchronization attacks, in which the whole histogram is shifted by a certain amount. In this paper we investigate the possibility of avoiding this kind of attack by synchronizing the embedding and detection processes, using the mean of the histogram as a calibration point. The resulting watermarking scheme is resistant to three common types of histogram shifts, while the advantages of previous histogram-based schemes, especially the commutativity of watermarking and permutation-based encryption, are preserved.
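The following simplified sketch illustrates the bin-comparison principle with the histogram mean as the calibration point: a bit is stored as the order relation between two grey-level bins placed symmetrically around the mean, and enforced by swapping the two grey levels. This is a toy version under that assumption, not the authors' scheme, and it assumes the mean lies away from the grey-level extremes.

```python
import numpy as np

def embed_bits(img, bits, offset=8):
    """Embed bits by comparing histogram bins placed symmetrically around the
    image mean (calibration point); enforce each bit by swapping grey levels."""
    out = img.copy()
    m = int(round(out.mean()))
    for i, bit in enumerate(bits):
        a, b = m - offset - i, m + offset + i          # bin pair for this bit
        holds = np.sum(out == a) >= np.sum(out == b)
        if holds != bool(bit):
            mask_a, mask_b = out == a, out == b
            out[mask_a], out[mask_b] = b, a            # swap the two bin counts
    return out

def extract_bits(img, n_bits, offset=8):
    m = int(round(img.mean()))                         # recomputed after any shift
    return [int(np.sum(img == m - offset - i) >= np.sum(img == m + offset + i))
            for i in range(n_bits)]
```

Because both embedding and extraction recompute the mean, a uniform shift of the whole histogram moves the calibration point with it, which is the intuition behind the scheme's resistance to shifts.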
{"title":"Towards More Robust Commutative Watermarking-Encryption of Images","authors":"R. Schmitz, Shujun Li, C. Grecos, Xinpeng Zhang","doi":"10.1109/ISM.2013.54","DOIUrl":"https://doi.org/10.1109/ISM.2013.54","url":null,"abstract":"Histogram-based watermarking schemes are invariant against pixel permutations and can be combined with permutation-based ciphers. However, typical histogram-based watermarking schemes based on comparison of histogram bins are prone to de-synchronization attacks, where the whole histogram is shifted by a certain amount. In this paper we investigate the possibility of avoiding this kind of attacks by synchronizing the embedding and detection processes, using the mean of the histogram as a calibration point. The resulting watermarking scheme is resistant to three common types of shifts of the histogram, while the advantages of previous histogram-based schemes, especially commutativity of watermarking and permutation-based encryption, are preserved.","PeriodicalId":6311,"journal":{"name":"2013 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)","volume":"40 1","pages":"283-286"},"PeriodicalIF":0.0,"publicationDate":"2013-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77697828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
The Impact of Video Transcoding Parameters on Event Detection for Surveillance Systems
E. Kafetzakis, Chris Xilouris, M. Kourtis, M. Nieto, Iveel Jargalsaikhan, S. Little
The process of transcoding videos, apart from being computationally intensive, can also be a rather complex procedure. The complexity lies in choosing appropriate parameters for the transcoding engine, with the aim of decreasing video sizes, transcoding times, and network bandwidth without degrading video quality beyond the threshold at which event detectors lose their accuracy. This paper explains the need for transcoding and then studies different video quality metrics. Commonly used algorithms for motion and person detection are briefly described, with emphasis on investigating the optimum transcoding configuration parameters. The analysis of the experimental results reveals that the existing video quality metrics are not suitable for automated systems, and that person detection is affected by reductions in bit rate and resolution, while motion detection is more sensitive to frame rate.
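A study of this kind boils down to sweeping transcoding parameters and re-running the detectors on each output. The sketch below builds such a sweep with standard ffmpeg options (-b:v, -s, -r); the parameter grid and file names are placeholders, not values from the paper.

```python
import itertools
import subprocess

bitrates = ["2M", "1M", "500k"]          # placeholder grid
resolutions = ["1280x720", "640x360"]
frame_rates = [25, 12, 6]

def transcode(src, dst, bitrate, resolution, fps):
    """Transcode one surveillance clip with a given parameter combination."""
    subprocess.run(["ffmpeg", "-y", "-i", src,
                    "-c:v", "libx264", "-b:v", bitrate,
                    "-s", resolution, "-r", str(fps), dst], check=True)

for b, res, fps in itertools.product(bitrates, resolutions, frame_rates):
    out = f"clip_{b}_{res}_{fps}fps.mp4"
    transcode("surveillance.mp4", out, b, res, fps)
    # run the motion / person detectors on `out` here and log their accuracy
```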
{"title":"The Impact of Video Transcoding Parameters on Event Detection for Surveillance Systems","authors":"E. Kafetzakis, Chris Xilouris, M. Kourtis, M. Nieto, Iveel Jargalsaikhan, S. Little","doi":"10.1109/ISM.2013.64","DOIUrl":"https://doi.org/10.1109/ISM.2013.64","url":null,"abstract":"The process of transcoding videos apart from being computationally intensive, can also be a rather complex procedure. The complexity refers to the choice of appropriate parameters for the transcoding engine, with the aim of decreasing video sizes, transcoding times and network bandwidth without degrading video quality beyond some threshold that event detectors lose their accuracy. This paper explains the need for transcoding, and then studies different video quality metrics. Commonly used algorithms for motion and person detection are briefly described, with emphasis in investigating the optimum transcoding configuration parameters. The analysis of the experimental results reveals that the existing video quality metrics are not suitable for automated systems and that the detection of persons is affected by the reduction of bit rate and resolution, while motion detection is more sensitive to frame rate.","PeriodicalId":6311,"journal":{"name":"2013 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)","volume":"15 1","pages":"333-338"},"PeriodicalIF":0.0,"publicationDate":"2013-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88966656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Multimodal Sparse Linear Integration for Content-Based Item Recommendation
Qiusha Zhu, Zhao Li, Haohong Wang, Yimin Yang, M. Shyu
Most content-based recommender systems focus on analyzing the textual information of items. For items with images, the images can be treated as another information modality. In this paper, an effective method called MSLIM is proposed to integrate multimodal information for content-based item recommendation. It formalizes the problem as a regularized optimization problem in the least-squares sense, and coordinate gradient descent is applied to solve it. The aggregation coefficients of the items are learned in an unsupervised manner during this process, based on which the k-nearest neighbor (k-NN) algorithm is used to generate the top-N recommendations for each item by finding its k nearest neighbors. A framework for using MSLIM for item recommendation is proposed accordingly. The experimental results on a self-collected handbag dataset show that MSLIM outperforms the selected comparison methods and illustrate how the model parameters affect the final recommendation results.
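As a rough sketch of the sparse-linear-integration idea (not the paper's exact MSLIM formulation), one can reconstruct each item's concatenated textual-plus-visual feature vector from the other items with a per-item elastic-net regression; the sparse coefficients then play the role of aggregation weights fed to the k-NN step. The matrix layout and regularization values below are assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def sparse_aggregation(F, l1=1e-3, l2=1e-3):
    """F: (n_features, n_items) matrix of concatenated textual and visual
    descriptors. Learn a sparse item-item weight matrix W with zero diagonal."""
    n_items = F.shape[1]
    W = np.zeros((n_items, n_items))
    reg = ElasticNet(alpha=l1 + l2, l1_ratio=l1 / (l1 + l2),
                     positive=True, fit_intercept=False, max_iter=2000)
    for j in range(n_items):
        X = F.copy()
        X[:, j] = 0.0                 # exclude the item itself -> zero diagonal
        reg.fit(X, F[:, j])
        W[:, j] = reg.coef_
    return W

# Top-N recommendations for item j: its k nearest neighbours among the
# columns of W (or simply the k largest weights in column j).
```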
{"title":"Multimodal Sparse Linear Integration for Content-Based Item Recommendation","authors":"Qiusha Zhu, Zhao Li, Haohong Wang, Yimin Yang, M. Shyu","doi":"10.1109/ISM.2013.37","DOIUrl":"https://doi.org/10.1109/ISM.2013.37","url":null,"abstract":"Most content-based recommender systems focus on analyzing the textual information of items. For items with images, the images can be treated as another information modality. In this paper, an effective method called MSLIM is proposed to integrate multimodal information for content-based item recommendation. It formalizes the probelm into a regularized optimization problem in the least-squares sense and the coordinate gradient descent is applied to solve the problem. The aggregation coefficients of the items are learned in an unsupervised manner during this process, based on which the k-nearest neighbor (k-NN) algorithm is used to generate the top-N recommendations of each item by finding its k nearest neighbors. A framework of using MSLIM for item recommendation is proposed accordingly. The experimental results on a self-collected handbag dataset show that MSLIM outperforms the selected comparison methods and show how the model parameters affect the final recommendation results.","PeriodicalId":6311,"journal":{"name":"2013 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)","volume":"185 1","pages":"187-194"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78050324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Efficient Implementation and Processing of a Real-Time Panorama Video Pipeline
Marius Tennøe, Espen Helgedagsrud, Mikkel Næss, Henrik Kjus Alstad, H. Stensland, V. Reddy, Dag Johansen, C. Griwodz, P. Halvorsen
High resolution, wide field of view video generated from multiple camera feeds has many use cases. However, processing the different steps of a panorama video pipeline in real-time is challenging due to the high data rates and the stringent requirements of timeliness. We use panorama video in a sport analysis system where video events must be generated in real-time. In this respect, we present a system for real-time panorama video generation from an array of low-cost CCD HD video cameras. We describe how we have implemented different components and evaluated alternatives. We also present performance results with and without co-processors like graphics processing units (GPUs), and we evaluate each individual component and show how the entire pipeline is able to run in real-time on commodity hardware.
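The paper describes a custom real-time, GPU-assisted pipeline; the basic multi-camera stitching concept, however, can be tried offline with OpenCV's high-level stitcher, as in the sketch below. The camera image paths are placeholders.

```python
import cv2

# One frame per camera in the array (paths are placeholders).
frames = [cv2.imread(p) for p in ["cam0.png", "cam1.png", "cam2.png", "cam3.png"]]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.png", panorama)
else:
    print("stitching failed with status", status)
```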
{"title":"Efficient Implementation and Processing of a Real-Time Panorama Video Pipeline","authors":"Marius Tennøe, Espen Helgedagsrud, Mikkel Næss, Henrik Kjus Alstad, H. Stensland, V. Reddy, Dag Johansen, C. Griwodz, P. Halvorsen","doi":"10.1109/ISM.2013.21","DOIUrl":"https://doi.org/10.1109/ISM.2013.21","url":null,"abstract":"High resolution, wide field of view video generated from multiple camera feeds has many use cases. However, processing the different steps of a panorama video pipeline in real-time is challenging due to the high data rates and the stringent requirements of timeliness. We use panorama video in a sport analysis system where video events must be generated in real-time. In this respect, we present a system for real-time panorama video generation from an array of low-cost CCD HD video cameras. We describe how we have implemented different components and evaluated alternatives. We also present performance results with and without co-processors like graphics processing units (GPUs), and we evaluate each individual component and show how the entire pipeline is able to run in real-time on commodity hardware.","PeriodicalId":6311,"journal":{"name":"2013 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)","volume":"131 1","pages":"76-83"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86834280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
An Automatic Object Retrieval Framework for Complex Background
Yimin Yang, Fausto Fleites, Haohong Wang, Shu‐Ching Chen
In this paper we propose a novel framework for object retrieval based on automatic foreground object extraction and multi-layer information integration. Specifically, objects of interest to the user are first detected in unconstrained videos via a multimodal-cue method; then an automatic object extraction algorithm based on GrabCut is applied to separate the foreground object from the background. The object-level information is enhanced in the feature extraction layer by assigning different weights to foreground and background pixels, and spatial color and texture information is integrated in the similarity calculation layer. Experimental results on both a benchmark dataset and a real-world dataset demonstrate the effectiveness of the proposed framework.
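For the extraction step, the sketch below shows a plain GrabCut call seeded with a bounding rectangle, which in the paper's framework would come from the object detector; the rectangle coordinates and file name are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

rect = (50, 50, 200, 300)   # x, y, w, h around the detected object (placeholder)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
foreground = img * fg[:, :, None]   # pixels outside the object are zeroed
```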
{"title":"An Automatic Object Retrieval Framework for Complex Background","authors":"Yimin Yang, Fausto Fleites, Haohong Wang, Shu‐Ching Chen","doi":"10.1109/ISM.2013.71","DOIUrl":"https://doi.org/10.1109/ISM.2013.71","url":null,"abstract":"In this paper we propose a novel framework for object retrieval based on automatic foreground object extraction and multi-layer information integration. Specifically, user interested objects are firstly detected from unconstrained videos via a multimodal cues method, then an automatic object extraction algorithm based on Grab Cut is applied to separate foreground object from background. The object-level information is enhanced during the feature extraction layer by assigning different weights to foreground and background pixels respectively, and the spatial color and texture information is integrated during the similarity calculation layer. Experimental results on both benchmark data set and real-world data set demonstrate the effectiveness of the proposed framework.","PeriodicalId":6311,"journal":{"name":"2013 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)","volume":"1 1","pages":"374-377"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84710378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Image Super-resolution Using Registration of Wavelet Multi-scale Components with Affine Transformation
Y. Matsuo, Ryoki Takada, Shinya Iwasaki, J. Katto
We propose a novel image super-resolution method, from digital cinema to 8K ultra-high-definition television, using registration of wavelet multi-scale components with affine transformation. In the proposed method, the original image is divided into signal and noise components by wavelet soft-shrinkage with detection of the white noise level. The resolution of the signal component is enhanced by registering it against its wavelet multi-scale components using affine transformation and parameter optimization; the affine transformation improves super-resolution image quality because it increases the number of registration candidates. The resolution of the noise component is enhanced with power control that takes the cinema noise representation into account. The super-resolution image is output by synthesizing the super-resolved signal and noise components. Experiments show that, compared with conventional super-resolution methods, the proposed method achieves objectively better PSNR measurements and subjectively better appearance.
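A minimal sketch of the signal/noise split via wavelet soft-shrinkage is shown below, using a standard universal-threshold noise estimate from the finest diagonal sub-band as a stand-in for the paper's white-noise-level detection; the wavelet choice and decomposition depth are assumptions.

```python
import numpy as np
import pywt

def split_signal_noise(img, wavelet="db4", levels=3):
    """Separate an image into signal and noise components with soft-shrinkage."""
    img = img.astype(float)
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745       # noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(img.size))            # universal threshold
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    signal = pywt.waverec2(shrunk, wavelet)[:img.shape[0], :img.shape[1]]
    return signal, img - signal        # signal component, noise component
```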
{"title":"Image Super-resolution Using Registration of Wavelet Multi-scale Components with Affine Transformation","authors":"Y. Matsuo, Ryoki Takada, Shinya Iwasaki, J. Katto","doi":"10.1109/ISM.2013.53","DOIUrl":"https://doi.org/10.1109/ISM.2013.53","url":null,"abstract":"We propose a novel image super-resolution method from digital cinema to 8K ultra high-definition television using registration of wavelet multi-scale components with affine transformation. The proposed method features that an original image is divided into signal and noise components by the wavelet soft-shrinkage with detection of white noise level. The signal component enhances resolution by registration between a signal component and its wavelet multi-scale components with affine transformation and parameters optimization. The affine transformation enhances super-resolution image quality because it increases registration candidates. The noise component enhances resolution with power control considering cinema noise representation. Super-resolution image outputs by synthesis of super-resolved signal and noise components. Experiments show that the proposed method has objectively better PSNR measurement and subjectively better appearance in comparison with conventional super-resolution methods.","PeriodicalId":6311,"journal":{"name":"2013 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)","volume":"36 3 1","pages":"279-282"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89589593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Foreground detection using background subtraction with histogram
M. Nawaz, J. Cosmas, A. Adnan, M. F. U. Haq, E. Alazawi
One of the core issues in the background subtraction method is how to set the threshold value precisely at run time, which can ultimately overcome several shortcomings of this approach to foreground detection. The proposed algorithm uses motion, the key feature of any foreground detection algorithm; since the threshold cannot be obtained from the raw motion histogram, a smoothed motion histogram is used in a systematic way to derive it. The main focus is to obtain a better threshold estimate, so that a dynamic value can be read from the histogram at run time. Used intelligently in terms of motion magnitude and direction, the algorithm can accurately distinguish between background and foreground, and between camera motion and object motion.
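As one concrete reading of "threshold from a smoothed motion histogram", the sketch below differences a frame against a background image, smooths the histogram of the difference magnitudes, and places the threshold at the first valley after the dominant (background) peak. The bin count and smoothing width are assumptions, not the paper's values.

```python
import cv2
import numpy as np

def motion_mask(frame, background, bins=64, smooth=7):
    """Foreground mask from a run-time threshold read off a smoothed histogram."""
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)).astype(float)
    hist, edges = np.histogram(diff, bins=bins, range=(0, 255))
    hist = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    k = int(np.argmax(hist))                 # dominant (background) peak
    while k < bins - 1 and hist[k + 1] < hist[k]:
        k += 1                               # walk downhill to the first valley
    return (diff > edges[k]).astype(np.uint8) * 255
```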
{"title":"Foreground detection using background subtraction with histogram","authors":"M. Nawaz, J. Cosmas, A. Adnan, M. F. U. Haq, E. Alazawi","doi":"10.1109/BMSB.2013.6621707","DOIUrl":"https://doi.org/10.1109/BMSB.2013.6621707","url":null,"abstract":"In the background subtraction method one of the core issue is; how to setup the threshold value precisely at run time, which can ultimately overcome several bugs of this approach in the foreground detection. In the proposed algorithm the key feature of any foreground detection algorithm; motion is used however getting the threshold value from the original motion histogram is not possible, so for the said purpose smooth motion histogram is used in a systematic way to obtain the threshold value. In the proposed algorithm the main focus is to get a better estimation of threshold so that to get a dynamic value, from histogram at run time. If the proposed algorithm is used intelligently in terms of motion magnitude and motion direction it can distinguish accurately between background and foreground, camera motion along with camera motion and object motion.","PeriodicalId":6311,"journal":{"name":"2013 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)","volume":"43 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2013-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73407698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
MMT, new alternative to MPEG-2 TS and RTP
Youngkwon Lim
During the last two decades MPEG has successfully developed standards for multimedia delivery such as MPEG-2 TS and the ISO Base Media File Format. Recent changes in the multimedia delivery environment, driven by the rapid increase of multimedia services over the Internet, have brought new requirements to multimedia delivery standards, namely (i) flexible and dynamic access to multimedia components, (ii) easy conversion between the format for storage and the format for packetized delivery, and (iii) mixed use of multimedia components from multiple sources, including caches and local storage. MPEG has started the development of the MPEG Media Transport (MMT) standard to respond to these new requirements. In this paper, the challenges these requirements pose to MPEG-2 TS and RTP are discussed, and brief descriptions of how MMT addresses them are provided.
{"title":"MMT, new alternative to MPEG-2 TS and RTP","authors":"Youngkwon Lim","doi":"10.1109/BMSB.2013.6621691","DOIUrl":"https://doi.org/10.1109/BMSB.2013.6621691","url":null,"abstract":"During the last two decades MPEG has successfully developed the standards for multimedia delivery such as MPEG-2 TS and ISO Base Media File Format. Recent changes of multimedia delivery environment due to the rapid increase of multimedia services over the Internet brought new requirements to the standards for multimedia delivery, namely (i) flexible and dynamic access to multimedia components, (ii) Easy conversion between the format for storage and the format for the packetized delivery and (iii) mixed use of multimedia components from multiple sources including the caches and the local storages. MPEG has started the development of MPEG Media Transport (MMT) standard to respond to such new requirements. In this paper, challenges to MPEG-2 TS and RTP regarding new requirements are discussed and brief descriptions about MMT showing how MMT provides solutions to those challenges are provided.","PeriodicalId":6311,"journal":{"name":"2013 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)","volume":"24 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2013-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74806526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14