
2011 First International Conference on Data Compression, Communications and Processing: Latest Publications

Asymptotic Optimal Lossless Compression via the CSE Technique
H. Yokoo
A novel lossless compression algorithm known as compression by substring enumeration (CSE) is analyzed and modified. The CSE compression algorithm is a block-based, off-line method, as is the case with enumerative codes and the block-sorting compression scheme. First, we propose an encoding model that achieves asymptotic optimality for stationary ergodic sources. The codeword length attained by the proposed model converges almost surely to the entropy rate of the source as the length of the string it generates tends to infinity. Then, we propose a novel decoding algorithm that requires fewer codewords than the original CSE.
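The statistics that CSE encodes are, level by level, the occurrence counts of all substrings of each length. A minimal sketch of collecting those counts (ignoring the paper's circular-string handling and the actual enumerative coding; `substring_counts` is an illustrative helper, not taken from the paper):

```python
from collections import Counter

def substring_counts(s: str, k: int) -> Counter:
    """Occurrence counts of every length-k substring of s."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

# A CSE-style encoder would describe these counts level by level
# (k = 1, 2, ...), each level refining the previous one.
for k in (1, 2, 3):
    print(k, dict(substring_counts("banana", k)))
```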
DOI: 10.1109/CCP.2011.32 · Published 2011-06-21 · Citations: 15
Successive Normalization of Rectangular Arrays: Rates of Convergence
R. Olshen, B. Rajaratnam
In this note we illustrate, with examples and heuristic mathematics, figures that are given throughout the earlier paper by the same authors [1]. Thus, we deal with successive iterations applied to rectangular arrays of numbers, where to avoid technical difficulties an array has at least three rows and at least three columns. Without loss of generality, an iteration begins with operations on columns: first subtract the mean of each column, then divide by its standard deviation. The iteration continues with the same two operations done successively for rows. These four operations applied in sequence complete one iteration, which is then repeated again and again. In [1] it was argued that if arrays are made up of real numbers, then the set for which convergence of these successive iterations fails has Lebesgue measure 0. The limiting array has row and column means 0 and row and column standard deviations 1. Moreover, many graphics given in [1] suggest that, except on a set of arrays of Lebesgue measure 0, convergence is very rapid, eventually exponentially fast in the number of iterations. Here a mathematical reason for this is suggested. More importantly, the rapidity of convergence is illustrated by numerical examples.
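The iteration described above is easy to state in code. A minimal numpy sketch (the function name `normalize_iteration` is illustrative, not the authors' implementation):

```python
import numpy as np

def normalize_iteration(x):
    # Column step: subtract each column's mean, then divide by its std.
    x = (x - x.mean(axis=0)) / x.std(axis=0)
    # Row step: the same two operations applied to rows.
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))        # at least three rows and three columns
for _ in range(200):
    x = normalize_iteration(x)
# In the limit, rows AND columns both have mean ~0 and standard deviation ~1.
print(np.abs(x.mean(axis=0)).max(), np.abs(x.std(axis=0) - 1).max())
```

Running it on a random Gaussian array shows the column residuals shrinking toward machine precision, consistent with the exponentially fast convergence the note discusses.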
DOI: 10.1109/CCP.2011.48 · Published 2011-06-21 · Citations: 1
The Evolution of Data Center Networking Technologies
Antonio Scarfò
The emerging challenges of simplicity, efficiency and agility, together with new optically empowered technologies, are driving the innovation of networking in the data center. Virtualization, consolidation and, more generally, the cloud-oriented approach are the pillars of the new technological wave. A few key technologies, FCoE, TRILL and OTV, are leading this evolution, fostering the development of new networking architectures, models and communication paradigms. In this scenario, both the design models and the power/footprint ratios of data centers are changing significantly. This work presents the state of the art of the different technologies driving the data center evolution, focusing on the most novel and rapidly evolving aspects of networking architectures, protocols and standards.
DOI: 10.1109/CCP.2011.30 · Published 2011-06-21 · Citations: 13
Low Complexity, High Efficiency Probability Model for Hyper-spectral Image Coding
Francesc Aulí Llinàs, Joan Bartrina-Rapesta, J. Serra-Sagristà, M. Marcellin
This paper describes a low-complexity, high-efficiency lossy-to-lossless coding scheme for hyper-spectral images. Using only a 2D wavelet transform on individual image components, the proposed scheme achieves coding performance similar to that of a 3D transform strategy that adds one level of wavelet decomposition along the depth axis of the volume. The proposed scheme operates by means of a probability model for the symbols emitted by the bit-plane coding engine. This probability model captures the statistical behavior of hyper-spectral images with high precision. The proposed method is implemented in the core coding system of JPEG2000, reducing computational costs by 25%.
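Mechanically, a probability model of this kind keeps per-context statistics of the bits seen so far and hands the resulting probability to the arithmetic coder. The sketch below is the generic adaptive counting estimator, an assumption for illustration, not the hyper-spectral-specific model of the paper:

```python
class AdaptiveBitModel:
    """Laplace-smoothed adaptive estimate of P(bit = 1): the kind of
    per-context statistic a bit-plane coder hands to its arithmetic coder."""

    def __init__(self):
        self.counts = [1, 1]          # virtual counts for bits 0 and 1

    def p_one(self):
        return self.counts[1] / (self.counts[0] + self.counts[1])

    def update(self, bit):
        self.counts[bit] += 1         # learn from each coded symbol

model = AdaptiveBitModel()
for bit in (1, 1, 0, 1):
    model.update(bit)
print(model.p_one())  # 4/6: three observed 1s plus one virtual count, out of six
```

The better the model tracks the true symbol statistics, the shorter the arithmetic-coded output, which is why a more precise model translates directly into coding efficiency.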
DOI: 10.1109/CCP.2011.10 · Published 2011-06-21 · Citations: 1
Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding
Jose Enrique Sánchez, E. Auge, J. Santaló, Ian Blanes, J. Serra-Sagristà, A. Kiely
A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, adapted to better cope with the constraints of onboard scenarios. In this paper, we present a review of the state of the art in this field and provide an experimental comparison of the coding performance of the emerging standard with other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.
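The flavor of an adaptive linear predictive compressor can be sketched in one dimension: predict each sample from recent history with a sign-LMS weight update, emit the residual, and let the decoder replay the identical updates to recover the samples. This is an illustrative stand-in under stated assumptions, not the CCSDS "Fast Lossless" predictor, which works across spectral bands with integer arithmetic:

```python
def _adapt(w, hist, mu, e):
    # Identical sign-LMS update on both sides keeps encoder/decoder in sync.
    step = mu if e > 0 else -mu
    return [wi + step * hi for wi, hi in zip(w, hist)]

def encode(samples, order=2, mu=0.01):
    """Predict each sample from recent history; emit prediction residuals."""
    w, hist, res = [0.0] * order, [0.0] * order, []
    for s in samples:
        pred = sum(a * b for a, b in zip(w, hist))
        e = s - pred
        res.append(e)
        w = _adapt(w, hist, mu, e)
        hist = [s] + hist[:-1]
    return res

def decode(res, order=2, mu=0.01):
    """Replay the same predictions and updates to recover the samples."""
    w, hist, out = [0.0] * order, [0.0] * order, []
    for e in res:
        pred = sum(a * b for a, b in zip(w, hist))
        s = pred + e
        out.append(s)
        w = _adapt(w, hist, mu, e)
        hist = [s] + hist[:-1]
    return out

print(decode(encode([3, 1, 4, 1, 5, 9, 2, 6])))
```

A good predictor leaves small, sharply peaked residuals, which an entropy coder then compresses; the lossless property comes from encoder and decoder sharing the exact same adaptation rule.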
DOI: 10.1109/CCP.2011.17 · Published 2011-06-21 · Citations: 41
Saving Energy in Data Center Infrastructures
S. Ricciardi, D. Careglio, G. Santos-Boada, J. Solé-Pareta, Ugo Fiore, F. Palmieri
At present, data centers consume a considerable percentage of the electrical energy produced worldwide, equivalent to the output of 26 nuclear power plants, and this energy demand is growing at a fast pace due to the ever-increasing data volumes to be processed, stored and accessed every day in modern grid and cloud infrastructures. Such a growth scenario is clearly not sustainable, and it is necessary to limit the data center power budget by controlling the absorbed energy while keeping the desired level of service. In this paper, we describe Energy Farm, a data center energy manager that exploits load fluctuations to save as much energy as possible while satisfying quality-of-service requirements. Energy Farm achieves energy savings by aggregating traffic during low-load periods and temporarily turning off a subset of computing resources. It respects the logical and physical dependencies of the interconnected devices in the data center and performs automatic shutdown even in emergency cases such as temperature peaks and power leakages. Results show that high resource utilization efficiency is possible in data center infrastructures and that huge savings in terms of energy (MWh), emissions (tons of CO2) and costs (k€) are achievable.
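The consolidation idea can be illustrated with a deliberately simple policy sketch (an assumption for illustration, not Energy Farm's actual algorithm): keep only enough servers powered on to cover the current load plus a quality-of-service headroom, and power the rest down during low-load periods.

```python
import math

def servers_needed(load, capacity_per_server, headroom=0.2):
    """Servers to keep powered on: cover the offered load plus a QoS
    headroom margin; the remaining machines can be switched off."""
    return max(1, math.ceil(load * (1 + headroom) / capacity_per_server))

hourly_load = [120, 80, 40, 30, 90, 150]           # offered load per hour
on = [servers_needed(l, 50) for l in hourly_load]  # 50 load units per server
print(on)  # [3, 2, 1, 1, 3, 4]: idle capacity is powered down off-peak
```

A real manager such as Energy Farm must additionally respect device dependencies (a switch cannot go down before the servers behind it), which this sketch ignores.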
DOI: 10.1109/CCP.2011.9 · Published 2011-06-21 · Citations: 45
Fast Implementation of Block Motion Estimation Algorithms in Video Encoders
N. Koduri, M. Dlodlo, G. D. Jager, K. Ferguson
Block matching algorithms (BMA) are central to optimal frame prediction for motion estimation in video compression. This paper focuses on the efficiency of Hierarchical Search (HS) algorithms. The research proposes two new combinations of fast algorithms, the Small Diamond-Shaped Search Pattern (SDSP) and the Square-Shaped Search Pattern (SSSP), with a three-level hierarchical algorithm at different levels of the hierarchy. The computational complexity and efficiency of each combination were of interest. Simulation results show that the developed combinations, hierarchical search with SDSP (HSD) and hierarchical search with SDSP and SSSP (HSD+SQ), are around 10% faster than the classic hierarchical algorithm, with slight improvement or no significant change in video quality compared to the general HS algorithm.
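The SDSP step used inside such combinations can be sketched as a greedy walk over the four diamond neighbours of the current motion vector, moving whenever a neighbour has lower block-matching cost. The helper names and plain-Python pixel arrays below are illustrative assumptions; the hierarchy levels and the SSSP stage are omitted:

```python
def sad(cur, ref, bx, by, dx, dy, B=4):
    """Sum of absolute differences between the BxB block of cur at (bx, by)
    and the block of ref displaced by (dx, dy)."""
    h, w = len(ref), len(ref[0])
    total = 0
    for y in range(B):
        for x in range(B):
            ry, rx = by + dy + y, bx + dx + x
            if not (0 <= ry < h and 0 <= rx < w):
                return float("inf")      # candidate falls outside the frame
            total += abs(cur[by + y][bx + x] - ref[ry][rx])
    return total

def sdsp_search(cur, ref, bx, by, B=4):
    """Greedy SDSP refinement: keep moving to a cheaper diamond neighbour
    until the current motion vector is a local minimum."""
    mv, best = (0, 0), sad(cur, ref, bx, by, 0, 0, B)
    improved = True
    while improved:
        improved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (mv[0] + dx, mv[1] + dy)
            cost = sad(cur, ref, bx, by, cand[0], cand[1], B)
            if cost < best:
                mv, best, improved = cand, cost, True
    return mv, best

# Toy frames: cur is ref shifted by one pixel in both x and y.
ref = [[x * x + y * y for x in range(10)] for y in range(10)]
cur = [[(x + 1) ** 2 + (y + 1) ** 2 for x in range(10)] for y in range(10)]
print(sdsp_search(cur, ref, 2, 2))  # recovers the (1, 1) shift with cost 0
```

Because each pass tests only four candidates, the pattern is cheap; in the paper's HSD scheme it serves as the refinement step at each level of the hierarchy.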
DOI: 10.1109/CCP.2011.19 · Published 2011-06-21 · Citations: 2
The E-healthcare Point of Diagnosis Implementation as a First Instance
C. Ravariu, F. Babarada
A new challenge in biomedical engineering is remote patient monitoring through web applications. This paper proposes a first-instance diagnosis application that guides patients to a first medical point, with the advantage of a flexible structure suitable for future web developers, such as students from bioinformatics programs. These kinds of applications form a sub-component of telemedicine, acting as web-assisted medical e-healthcare diagnosis points. A diseases-symptoms database was uploaded in order to establish a final diagnosis. The main aim of this article is to offer a web protocol for medical diagnosis within educational projects, with applications to learning, bioinformatics and telemedicine. The paper presents original software developed in HTML and related technologies as a flexible student environment.
DOI: 10.1109/CCP.2011.23 · Published 2011-06-21 · Citations: 4
A Method to Ensure the Confidentiality of the Compressed Data
M. O. Kulekci
The usual way of ensuring the confidentiality of compressed data is to encrypt it with a standard encryption algorithm such as the AES. However, encryption not only brings additional computational complexity, but also loses the flexibility to perform pattern matching on the compressed data, which is an active research topic in stringology. In this study, we investigate secure compression solutions and propose a practical method to keep the contents of the compressed data hidden. The method is based on the Burrows-Wheeler transform (BWT), where a randomly selected permutation of the input symbols is used as the lexicographical ordering during the construction. The motivation is the observation that, on the BWT of an input, it is not possible to perform a successful search, nor to reconstruct any part of the data, without the correct knowledge of the character ordering, and capturing that secret ordering from the BWT is hard. The proposed method is an elegant alternative to standard encryption approaches, with the advantage of supporting compressed pattern matching while still preserving confidentiality. When the input data is homophonic, such that the frequencies of the symbols are flat and the alphabet is sufficiently large, the proposed technique makes it possible to unify compression and security in a single framework instead of the two-level compress-then-encrypt paradigm.
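The core construction, a BWT built over a secret symbol ordering, can be sketched directly. This is illustrative code under stated assumptions, not the paper's implementation; `key_seed` stands in for shared key material:

```python
import random

def bwt_with_secret_order(s, key_seed):
    """BWT built over a secret lexicographic ordering of the alphabet.
    Without the permutation, neither searching the transform nor
    reconstructing the input is feasible."""
    alphabet = sorted(set(s))
    order = alphabet[:]
    random.Random(key_seed).shuffle(order)        # the secret ordering
    rank = {c: i for i, c in enumerate(order)}
    rotations = sorted((s[i:] + s[:i] for i in range(len(s))),
                       key=lambda r: [rank[c] for c in r])
    return "".join(r[-1] for r in rotations)

# Same input, different keys: typically different transforms, and each is
# still just a permutation of the input's symbols, so it stays compressible.
print(bwt_with_secret_order("banana", 1), bwt_with_secret_order("banana", 2))
```

Note that the output is always a rearrangement of the input's symbols; the security argument rests on the attacker not knowing which of the |Σ|! orderings produced the arrangement.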
DOI: 10.1109/CCP.2011.28 · Published 2011-06-21 · Citations: 1
QoS Performance Testing of Multimedia Delivery over WiMAX Networks
D. Reid, A. Srinivasan, W. Almuhtadi
This paper addresses the important performance issues that arise when multimedia traffic is carried over WiMAX systems. In the future, WiMAX will be used in conjunction with other wireless systems to bring a variety of multimedia services. This paper presents the results of application testing using commercially available WiMAX products. The main focus is to show the effectiveness of QoS capabilities in delivering streaming multimedia such as IPTV and similar media content. The results provide a good indication of the applicability of WiMAX for multimedia applications. These findings will be followed up by field trials with IPTV and other live-stream media.
DOI: 10.1109/CCP.2011.26 · Published 2011-06-21 · Citations: 5