
Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC): Latest Publications

A new GPR image de-noising method based on BEMD
Lu Gan, Long Zhou, Xinge You
This paper presents a new de-noising method for GPR images based on BEMD and wavelets. The method inherits the adaptability of BEMD: it decomposes the image into a series of IMF components and then applies wavelet threshold de-noising to the selected high-frequency IMF components. In the reconstruction stage, the de-noised IMFs are combined with the low-frequency IMFs. Experimental results show the effectiveness of the method on GPR images.
{"title":"A new GPR image de-nosing method based on BEMD","authors":"Lu Gan, Long Zhou, Xinge You","doi":"10.1109/SPAC.2014.6982709","DOIUrl":"https://doi.org/10.1109/SPAC.2014.6982709","url":null,"abstract":"This paper presents a new de-noising method for GPR image based on BEMD and wavelet. This method complies with the adaptability from BEMD. The method decomposes the image into a series of IMF components, then applies wavelet threshold de-noising on the selected high frequency IMF components for de-noising. In the reconstruction course, the de-noising IMF and low frequency IMF are combined. The experiment results shows the effectiveness of the method on GPR image.","PeriodicalId":326246,"journal":{"name":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114038809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
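As a rough illustration of the pipeline described in the abstract above, the following Python sketch decomposes an image with a caller-supplied BEMD routine, soft-thresholds the wavelet coefficients of the high-frequency IMFs, and sums the de-noised and untouched components back together. The `bemd_decompose` callable and the universal-threshold rule are assumptions made for illustration, not the authors' exact implementation; only NumPy and PyWavelets (`pywt`) are required.

```python
import numpy as np
import pywt

def wavelet_threshold_denoise(imf, wavelet="db4", level=2):
    """Soft-threshold the detail coefficients of one high-frequency IMF."""
    coeffs = pywt.wavedec2(imf, wavelet, level=level)
    # universal threshold estimated from the finest diagonal detail sub-band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(imf.size))
    denoised = [coeffs[0]]
    for detail in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
    rec = pywt.waverec2(denoised, wavelet)
    return rec[:imf.shape[0], :imf.shape[1]]   # crop possible 1-pixel padding

def denoise_gpr_image(image, bemd_decompose, n_high=2):
    """bemd_decompose(image) -> list of IMFs (highest frequency first) plus residue."""
    imfs = bemd_decompose(image)
    out = np.zeros_like(image, dtype=float)
    for k, imf in enumerate(imfs):
        out += wavelet_threshold_denoise(imf) if k < n_high else imf
    return out
```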
Pornographic image classification based on top down color-saliency based BoW representation
Chunna Tian, Xiangnan Zhang, Xinbo Gao, Wei Wei
Since color is an important visual cue in pornographic images, this study presents a new framework for pornographic image classification based on the fusion of color and shape information in a bag-of-words representation. The framework contains three fusion patterns, which are compared intensively: early fusion, late fusion, and top-down color-saliency based fusion. Based on this comparison, a top-down color-saliency fusion based pornographic image classification method is proposed, which uses the statistical class prior of each color word to weight the shape words. In the late fusion and the color-saliency based fusion, color names are adopted to represent the color information. To verify the effectiveness of spatial constraints on the words, we also compare shape features quantized by vector quantization and by locality-constrained linear coding. The experimental results show that our model combines shape and color information properly and is superior to popular methods at distinguishing normal and pornographic-like images from pornographic ones.
{"title":"Pornographic image classification based on top down color-saliency based BoW representation","authors":"Chunna Tian, Xiangnan Zhang, Xinbo Gao, Wei Wei","doi":"10.1109/SPAC.2014.6982698","DOIUrl":"https://doi.org/10.1109/SPAC.2014.6982698","url":null,"abstract":"Since color is an important visual clue of the pornographic image, this study presents a new framework for pornographic image classification based on the fusion of color and shape information for the bag of words representation. This framework contains three fusion patterns: The early fusion, late fusion and top down color-saliency based fusion, which are compared intensively. Based on the comparison, the top down color-saliency fusion based pornographic image classification method is proposed by using the statistical class prior of each color word to weight the shape word. In the late fusion and color-saliency based fusion, color name is adopt to represent the color information. To verify the effectiveness of spatial constrain on the words, we also compared the shape features quantized by vector quantization and locality-constrained linear coding. The experimental results show that our model combines the shape and color information properly and it is superior over the popular methods to distinguish the normal and pornographic-like images from the pornographic ones.","PeriodicalId":326246,"journal":{"name":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115576170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
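To make the top-down weighting idea concrete, here is a minimal sketch (not the authors' exact formulation) in which each local descriptor votes for its shape word with a weight given by the class prior of its colour word; `color_class_prior` would be estimated beforehand from labelled training images. The weighted histograms would then feed an ordinary classifier such as an SVM.

```python
import numpy as np

def color_saliency_bow(shape_words, color_words, color_class_prior, n_shape_words):
    """shape_words, color_words: per-descriptor visual-word indices (same length).
    color_class_prior[c] ~ P(pornographic class | colour word c), learned beforehand."""
    hist = np.zeros(n_shape_words)
    for s, c in zip(shape_words, color_words):
        hist[s] += color_class_prior[c]      # top-down colour-saliency weight
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```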
The robust patches-based tracking method via sparse representation
Yi Li, Zhenyu He, Shuangyan Yi, Wei-Guo Yang
Occlusion is an important problem in single-object tracking. Conventional methods cannot make full use of spatial information under occlusion, which may lead to drift. In this paper, we propose a robust patch-based tracking method via sparse representation, namely RPSR, which selects the unoccluded patches and adaptively assigns them larger contribution factors. Experimental results on popular benchmark video sequences show that our RPSR method is effective and outperforms state-of-the-art methods for single-object tracking.
{"title":"The robust patches-based tracking method via sparse representation","authors":"Yi Li, Zhenyu He, Shuangyan Yi, Wei-Guo Yang","doi":"10.1109/SPAC.2014.6982667","DOIUrl":"https://doi.org/10.1109/SPAC.2014.6982667","url":null,"abstract":"Occlusion is one important problem in single object tracking. However, conventional methods are not capable of making full use of the spatial information because of occlusion, which may lead to the drift. In this paper, we propose a robust patches-based tracking method via sparse representation, namely RPSR, which selects the unoccluded patches, and adaptively assigns larger contribution factors to them. The experimental results on popular benchmark video sequences show that our RPSR method is effective and outperforms the state-of-the-art methods for single object tracking.","PeriodicalId":326246,"journal":{"name":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115168372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
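The occlusion handling can be illustrated with the following hedged sketch: candidate patches are sparse-coded over a dictionary of template patches, and the reconstruction error of each patch is turned into a contribution factor, so well-explained (unoccluded) patches dominate the overall likelihood. This is a generic illustration using scikit-learn's `sparse_encode`, not the exact RPSR formulation.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def patch_weights(candidate_patches, template_dict, alpha=0.05, sigma=0.1):
    """candidate_patches: (n_patches, n_features); template_dict: (n_atoms, n_features)."""
    codes = sparse_encode(candidate_patches, template_dict,
                          algorithm="lasso_lars", alpha=alpha)
    recon = codes @ template_dict
    err = np.mean((candidate_patches - recon) ** 2, axis=1)
    weights = np.exp(-err / sigma)       # small error -> large contribution factor
    return weights / weights.sum()
```

In a particle filter, each candidate's observation likelihood would then aggregate per-patch similarities using these weights, so occluded patches are largely ignored.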
A knowledge acquisition model in maritime domain based on ontology
Dong Xia Zheng, Xue Da Sun
First, the current state of research on knowledge acquisition is analyzed. Second, an ontology-based knowledge acquisition model for the maritime domain in a semantic web environment is built; specifically, we study how to construct a maritime domain ontology and how to preprocess Chinese text, and we investigate how to acquire maritime domain knowledge from heterogeneous data sources on the network. Finally, ontology validation is discussed. The modeling method for knowledge acquisition presented in this study is not only applicable to the maritime domain but can also be extended to other fields.
{"title":"A knowledge acquisition model in maritime domain based on ontology","authors":"Dong Xia Zheng, Xue Da Sun","doi":"10.1109/SPAC.2014.6982718","DOIUrl":"https://doi.org/10.1109/SPAC.2014.6982718","url":null,"abstract":"First, the research status of knowledge acquisition is analyzed. Second, knowledge acquisition model in maritime domain based on ontology under semantic web environment is built, specifically, how to build maritime domain ontology and how to preprocess Chinese text are researched, the method of maritime domain knowledge acquisition from heterogeneous data sources in network is also researched; at last, ontology confirmation is discussed. In this study the modeling method of knowledge acquisition is not only applicable to the maritime domain, can also be extended to other fields learn to use.","PeriodicalId":326246,"journal":{"name":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132727021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
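As a small illustration of what building a domain ontology can look like in code, the sketch below declares a few maritime concepts, one relation and one instance with rdflib. All concept, property and instance names (and the namespace URL) are hypothetical examples, not taken from the paper.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

MAR = Namespace("http://example.org/maritime#")   # hypothetical namespace
g = Graph()
g.bind("mar", MAR)

# concepts (classes) and a subclass relation
for cls in ("Vessel", "Port", "CargoShip"):
    g.add((MAR[cls], RDF.type, OWL.Class))
g.add((MAR.CargoShip, RDFS.subClassOf, MAR.Vessel))

# an object property linking two concepts
g.add((MAR.dockedAt, RDF.type, OWL.ObjectProperty))
g.add((MAR.dockedAt, RDFS.domain, MAR.Vessel))
g.add((MAR.dockedAt, RDFS.range, MAR.Port))

# an instance that could be extracted from a preprocessed text source
g.add((MAR.MV_Example, RDF.type, MAR.CargoShip))
g.add((MAR.MV_Example, MAR.dockedAt, MAR.ShanghaiPort))
g.add((MAR.MV_Example, RDFS.label, Literal("MV Example")))

print(g.serialize(format="turtle"))
```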
Privacy preserving for human object in video surveillance via visual cryptography
Ling Du, Yuhang Li
This paper proposes a privacy-preserving scheme for data security in video surveillance. We first separate the foreground of each video frame and obscure the separated human objects by motion blur. For secure storage, each blurred foreground object is encrypted into N shares by visual cryptography and stored on different servers. Each share is fully confidential and conveys no meaningful information about the original video, so breaking into one storage server does not induce any compromise. To satisfy legal requirements, authorized users can recover the original content at better quality with a non-blind deblurring algorithm. Moreover, thanks to the foreground-based encoding scheme, the data expansion introduced by distributed storage is greatly reduced. Unauthorized users cannot recover the original content for the following reasons: 1) distributed video stream storage; 2) unknown blurring kernel; 3) inaccurate foreground content and mask. Performance evaluation on several surveillance scenarios demonstrates that the proposed method can effectively protect sensitive privacy information in surveillance videos.
{"title":"Privacy preserving for human object in video surveillance via visual cryptography","authors":"Ling Du, Yuhang Li","doi":"10.1109/SPAC.2014.6982661","DOIUrl":"https://doi.org/10.1109/SPAC.2014.6982661","url":null,"abstract":"This paper proposes a privacy preserving scheme for data security in the video surveillance. We firstly separate the foreground for each video frame, and obscure the separated human object by motion blur. For secure storage, each blurred foreground object is encrypted into N shares by visual cryptography, and stored into different servers. Each share is fully confidential and does not convey any meaningful information about the original video, so that breaking into one storage server do not induce any compromise. For legal requirement, the authorized users can recover the original content with better quality by non-blind deblurring algorithm. Moreover, thanks to our exploited foreground based encoding scheme, the data expansion introduced by distributed storage is greatly reduced. It is impossible for unauthorized users to recover the original content by the following reasons: 1) distributed video stream storage; 2) unknown blurring kernel; 3) inaccurate foreground content and mask. The performance evaluation on several surveillance scenarios demonstrates that our proposed method can effectively protect sensitive privacy information in surveillance videos.","PeriodicalId":326246,"journal":{"name":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132115016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
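The share-splitting step can be illustrated with an XOR-based (N, N) secret-sharing sketch: each individual share is uniformly random and reveals nothing on its own, and all N shares must be combined to recover the blurred foreground region. This is one common construction chosen for illustration; the paper's visual-cryptography scheme may differ in detail.

```python
import numpy as np

def make_shares(region, n_shares, rng=None):
    """region: uint8 image array (blurred foreground). Returns n_shares arrays of the same shape."""
    if rng is None:
        rng = np.random.default_rng()
    shares = [rng.integers(0, 256, size=region.shape, dtype=np.uint8)
              for _ in range(n_shares - 1)]
    last = region.copy()
    for s in shares:
        last ^= s                # XOR of the secret with all random shares
    shares.append(last)
    return shares

def recover(shares):
    """XOR all shares together to reconstruct the original region."""
    out = np.zeros_like(shares[0])
    for s in shares:
        out ^= s
    return out
```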
Class specific dictionary learning for face recognition
Baodi Liu, Bin Shen, Yu-Xiong Wang
Recently, sparse representation based classification (SRC) has been successfully used for visual recognition and has shown impressive performance. Given a testing sample, SRC computes its sparse linear representation with respect to all training samples and calculates the residual error for each class of training samples. However, SRC assumes that the training samples in each class contribute equally to that class's dictionary, i.e., the dictionary simply consists of the training samples of that class. This may lead to high residual error and instability. In this paper, a class-specific dictionary learning algorithm is proposed. First, by introducing the dual form of dictionary learning, an explicit relationship between the basis vectors and the original image features is represented, which enhances interpretability; SRC can thus be considered a special case of the proposed algorithm. Second, a blockwise coordinate descent algorithm and Lagrange multipliers are applied to optimize the corresponding objective function. Extensive experimental results on three benchmark face recognition datasets demonstrate that the proposed algorithm achieves superior performance compared with conventional classification algorithms.
{"title":"Class specific dictionary learning for face recognition","authors":"Baodi Liu, Bin Shen, Yu-Xiong Wang","doi":"10.1109/SPAC.2014.6982690","DOIUrl":"https://doi.org/10.1109/SPAC.2014.6982690","url":null,"abstract":"Recently, sparse representation based classification (SRC) has been successfully used for visual recognition and showed impressive performance. Given a testing sample, SRC computes its sparse linear representation with respect to all the training samples and calculates the residual error for each class of training samples. However, SRC considers the training samples in each class contributing equally to the dictionary in that class, i.e., the dictionary consists of the training samples in that class. This may lead to high residual error and instability. In this paper, a class specific dictionary learning algorithm is proposed. First, by introducing the dual form of dictionary learning, an explicit relationship between the bases vectors and the original image features is represented, which enhances the interpretability. SRC can be thus considered to be a special case of the proposed algorithm. Second, blockwise coordinate descent algorithm and Lagrange multipliers are then applied to optimize the corresponding objective function. Extensive experimental results on three benchmark face recognition datasets demonstrate that the proposed algorithm has achieved superior performance compared with conventional classification algorithms.","PeriodicalId":326246,"journal":{"name":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114397213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
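For reference, the SRC baseline that the abstract describes can be sketched as follows: the test sample is sparse-coded over the stacked training samples and assigned to the class whose training columns give the smallest reconstruction residual. The proposed class-specific dictionary learning step is not reproduced here; scikit-learn's `sparse_encode` stands in for the sparse solver.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def src_classify(x, train_feats, train_labels, alpha=0.01):
    """x: (n_features,); train_feats: (n_samples, n_features); train_labels: (n_samples,)."""
    code = sparse_encode(x[None, :], train_feats,
                         algorithm="lasso_lars", alpha=alpha)[0]
    residuals = {}
    for c in np.unique(train_labels):
        mask = train_labels == c
        recon = code[mask] @ train_feats[mask]      # reconstruction using class-c samples only
        residuals[c] = np.linalg.norm(x - recon)
    return min(residuals, key=residuals.get)
```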
Adaptive structured sub-blocks tracking
Liu Jing-Wen, Sun Wei-Ping, Xia Tao
Local features have been widely used in visual object tracking for their robustness to illumination change, deformation, rotation and partial occlusion. Traditional feature selection algorithms based on knowledge accumulated over previous frames usually assume continuity of changes, which can lead to degradation. Exploiting the discrimination and uniqueness of local sub-blocks, we build an automatic preselection mechanism for local features and propose a structured sub-blocks tracking algorithm under the particle filter framework. Optimal sub-blocks are chosen automatically according to the distribution of their discriminant function in the current frame. Furthermore, we reduce the block search cost with the help of historical prediction accuracy. Experiments validate the robustness of our algorithm in handling small deformation and partial occlusion.
{"title":"Adaptive structured sub-blocks tracking","authors":"Liu Jing-Wen, Sun Wei-Ping, Xia Tao","doi":"10.1109/SPAC.2014.6982651","DOIUrl":"https://doi.org/10.1109/SPAC.2014.6982651","url":null,"abstract":"Local features have been widely used in visual object tracking for their robustness in illumination, deformation, rotation and partial occlusion. Traditional feature selection algorithms based on accumulated knowledge of previous frames usually adopt the perspective of continuity of changes, which could lead to degradation. Exploiting discrimination and uniqueness of local sub-blocks, we build an automatic preselection mechanism for local features and propose the structured sub-blocks tracking algorithm under particle filter framework. Optimal sub-blocks are chosen automatically according to their discriminant function distribution in current frame. Furthermore, we reduce blocks search costs with help of historical prediction accuracy. Experiments validate the robustness of our algorithm in tackling with small deformation and partial occlusion.","PeriodicalId":326246,"journal":{"name":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114475127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
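A hedged sketch of the preselection mechanism: each candidate sub-block is scored by how well its feature histogram separates the target from the surrounding background, and only the top-scoring blocks are kept for the particle-filter likelihood. The log-likelihood-ratio variance score used below is a common choice assumed for illustration, not necessarily the paper's discriminant function.

```python
import numpy as np

def discriminability(target_hist, background_hist, eps=1e-6):
    """Score how strongly a sub-block's histogram separates target from background."""
    ratio = np.log((target_hist + eps) / (background_hist + eps))
    return float(np.var(ratio))      # informative blocks have strongly varying ratios

def select_subblocks(target_hists, background_hists, k=8):
    """*_hists: (n_blocks, n_bins) normalised histograms; returns indices of the k best blocks."""
    scores = np.array([discriminability(t, b)
                       for t, b in zip(target_hists, background_hists)])
    return np.argsort(scores)[::-1][:k]
```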
Robust inverse perspective mapping based on vanishing point
Daiming Zhang, Bin Fang, Weibin Yang, Xiaosong Luo, Yuanyan Tang
Vision-based road sign detection and recognition has been widely used in intelligent robotics and autonomous driving. Currently, a one-time calibration of inverse perspective mapping (IPM) parameters is employed to eliminate the effect of perspective projection, but it is not robust on uphill and downhill roads. We propose an automatic inverse perspective mapping method based on the vanishing point, which adapts to uphill and downhill roads even under slight rotation of the main road direction. The proposed algorithm consists of three steps: detecting the vanishing point, calculating the pitch and yaw angles, and applying inverse perspective mapping to obtain the "bird's eye view" image. Experimental results show that the adaptability of our inverse perspective mapping framework is comparable to existing state-of-the-art methods, which benefits the subsequent detection and recognition of road signs.
{"title":"Robust inverse perspective mapping based on vanishing point","authors":"Daiming Zhang, Bin Fang, Weibin Yang, Xiaosong Luo, Yuanyan Tang","doi":"10.1109/SPAC.2014.6982733","DOIUrl":"https://doi.org/10.1109/SPAC.2014.6982733","url":null,"abstract":"Vision-based road signs detection and recognition has been widely used in intelligent robotics and automotive autonomous driving technology. Currently, one-time calibration of inverse perspective mapping (IPM) parameters is employed to eliminate the effect of perspective mapping, but it is not robust to the uphill and downhill road. We propose an automatic inverse perspective mapping method based on vanishing point, which is adaptive to the uphill and downhill road even with slight rotation of the main road direction. The proposed algorithm is composed of the following three steps: detecting the vanishing point, calculating the pitch and yaw angles and adopting inverse perspective mapping to obtain the “bird's eye view” image. Experimental results show that the adaptability of our inverse perspective mapping framework is comparable to existing state-of-the-art methods, which is conducive to the subsequent detection and recognition of road signs.","PeriodicalId":326246,"journal":{"name":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134643061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
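Step two of the algorithm (recovering pitch and yaw from the vanishing point) can be sketched with the standard pinhole relations below. Sign conventions and the coupling between the two angles are simplified, the construction of the final IPM homography is omitted, and the intrinsics and example numbers are purely illustrative.

```python
import math

def angles_from_vanishing_point(u_v, v_v, fx, fy, cx, cy):
    """(u_v, v_v): road vanishing point in pixels; (fx, fy, cx, cy): camera intrinsics."""
    pitch = math.atan2(cy - v_v, fy)   # vanishing point above the principal point -> camera tilted down
    yaw = math.atan2(u_v - cx, fx)
    return pitch, yaw

# Example: a 1280x720 camera with fx = fy = 700 looking slightly downhill
pitch, yaw = angles_from_vanishing_point(660, 310, 700, 700, 640, 360)
print(math.degrees(pitch), math.degrees(yaw))
```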
STFT-like time frequency representations for nonstationary signal — From evenly sampled data to arbitrary nonuniformly sampled data
Shujian Yu, Xinge You, Kexin Zhao, Xiubao Jiang, Yi Mou, Jie Zhu
Spectrograms provide an effective way of producing time-frequency representations (TFRs). Among these, short-time Fourier transform (STFT) based spectrograms are extensively used in various applications. However, the STFT spectrogram and its revised versions suffer from two main issues: (1) there is a trade-off between time resolution and frequency resolution, and (2) almost all existing TFR methods, including the STFT spectrogram, are not suitable for nonuniformly sampled data. In this paper, we address these two problems by presenting alternative approaches, namely short-time amplitude and phase estimation (ST-APES) and short-time sparse learning via iterative minimization (ST-SLIM), which improve the resolution of the STFT-based spectrogram and extend its applicability to signals with arbitrary sampling patterns. Apart from evenly sampled data, we also consider missing data as well as arbitrarily nonuniformly sampled data. Simulation results demonstrate the superiority of the proposed algorithms in terms of resolution, sidelobe suppression and applicability to signals with arbitrary sampling patterns.
{"title":"STFT-like time frequency representations for nonstationary signal — From evenly sampled data to arbitrary nonuniformly sampled data","authors":"Shujian Yu, Xinge You, Kexin Zhao, Xiubao Jiang, Yi Mou, Jie Zhu","doi":"10.1109/SPAC.2014.6982725","DOIUrl":"https://doi.org/10.1109/SPAC.2014.6982725","url":null,"abstract":"Spectrograms provide an effective way for time-frequency representation (TFR). Among these, short-time Fourier transform (STFT) based spectrograms are extensively used for various applications. However, STFT spectrogram and its revised versions suffer from two main issues: (1) there is a trade-off between time resolution and frequency resolution, and (2) almost all existing TFR methods, including STFT spectrogram, are not suitable to deal with nonuniformly sampled data. In this paper, we address these two problems by presenting alternative approaches, namely short-time amplitude and phase estimation (ST-APES) and short-time sparse learning via iterative minimization (ST-SLIM), to improve the resolution of STFT based spectrogram, and extend the applicability of our approaches to signals with arbitrary sampling patterns. Apart from evenly sampled data, we will consider missing data as well as arbitrary nonuniformly sampled data, at the same time. We will demonstrate via simulation results the superiority of our proposed algorithms in terms of resolution, sidelobe suppression and applicability to signals with arbitrary sampling patterns.","PeriodicalId":326246,"journal":{"name":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132749659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
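As a point of reference for the nonuniform-sampling setting, the sketch below computes an STFT-like time-frequency picture by running a Lomb-Scargle periodogram over sliding windows of irregularly sampled data. It is only a baseline for comparison; the ST-APES and ST-SLIM estimators proposed in the paper are iterative and aim at better resolution and sidelobe suppression.

```python
import numpy as np
from scipy.signal import lombscargle

def sliding_lombscargle(t, x, freqs, win_len, hop):
    """t: sample times (not necessarily uniform); x: samples; freqs: positive angular frequencies."""
    frames = []
    centers = np.arange(t[0] + win_len / 2, t[-1] - win_len / 2, hop)
    for c in centers:
        m = (t >= c - win_len / 2) & (t < c + win_len / 2)
        if m.sum() > 2:
            frames.append(lombscargle(t[m], x[m] - x[m].mean(), freqs))
        else:
            frames.append(np.zeros_like(freqs))   # too few samples in this window
    return centers, np.array(frames)              # (n_frames, n_freqs) spectrogram-like matrix
```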
Fast mode selection algorithm based on derived layer
Yingyi Liang, Zhenyu He, Yi Li
Scalable Video Coding (SVC) provides different resolutions, video qualities and streaming rates from a single compression pass, according to the varying requirements of users. This characteristic conveniently and effectively solves a series of video transmission problems encountered in today's complex and heterogeneous network environments, and provides an efficient solution for new video networks. Because of issues such as multilayer coding efficiency and coding cost, research on SVC focuses mainly on improving the coding speed of the algorithm (fast SVC algorithms). For macroblock mode selection in H.264/SVC, this paper adopts a fast algorithm based on the macroblock in the derived layer.
{"title":"Fast mode selection algorithm based on derived layer","authors":"Yingyi Liang, Zhenyu He, Yi Li","doi":"10.1109/SPAC.2014.6982678","DOIUrl":"https://doi.org/10.1109/SPAC.2014.6982678","url":null,"abstract":"Scalable Video Coding (SVC), provides different resolutions, different video quality and different video streaming rate after once compression according to various requirements of users. The characteristic performance can solve a series of video transmission problems encountered in the current complex and heterogeneous network environment conveniently and effectively, and provide a highly efficient solution for the new video network. Because of problems such as the SVC coding efficiency in multilayer and the coding cost, the research on SVC is mainly focused on how to improve the coding speed of the algorithm (fast SVC algorithm). For the macroblock mode selection in H.264/SVC, the paper selects the fast algorithm based on the macroblock in derived layer.","PeriodicalId":326246,"journal":{"name":"Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130589230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
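A schematic sketch of the fast mode-selection idea: rather than evaluating every H.264/SVC macroblock mode in the enhancement layer, the candidate set is pruned according to the mode of the co-located macroblock in the derived (reference) layer, and rate-distortion optimisation runs only over that subset. The candidate table and the `rd_cost` callback below are illustrative assumptions, not the paper's exact rules.

```python
ALL_MODES = ["SKIP", "INTER_16x16", "INTER_16x8", "INTER_8x16", "INTER_8x8",
             "INTRA_16x16", "INTRA_4x4", "BL_PRED"]

# hypothetical pruning table: coarse base-layer modes rarely refine into fine partitions
CANDIDATES = {
    "SKIP":        ["SKIP", "BL_PRED", "INTER_16x16"],
    "INTER_16x16": ["BL_PRED", "INTER_16x16", "INTER_16x8", "INTER_8x16"],
    "INTER_8x8":   ["BL_PRED", "INTER_8x8", "INTER_16x8", "INTER_8x16"],
    "INTRA_16x16": ["BL_PRED", "INTRA_16x16", "INTRA_4x4"],
}

def fast_mode_decision(base_layer_mode, rd_cost):
    """rd_cost(mode) -> float is supplied by the encoder; lower is better."""
    candidates = CANDIDATES.get(base_layer_mode, ALL_MODES)
    return min(candidates, key=rd_cost)
```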