Asymptotic Optimal Lossless Compression via the CSE Technique. H. Yokoo. doi:10.1109/CCP.2011.32

A novel lossless compression algorithm known as compression by substring enumeration (CSE) is analyzed and modified. The CSE compression algorithm is a block-based, off-line method, as is the case with enumerative codes and the block-sorting compression scheme. First, we propose an encoding model that achieves asymptotic optimality for stationary ergodic sources. The codeword length attained by the proposed model converges almost surely to the entropy rate of the source as the length of the string generated by the source tends to infinity. Then, we propose a novel decoding algorithm that requires fewer codewords than the original CSE.

Successive Normalization of Rectangular Arrays: Rates of Convergence. R. Olshen, B. Rajaratnam. doi:10.1109/CCP.2011.48

In this note we illustrate, with examples and heuristic mathematics, figures given throughout the earlier paper by the same authors [1]. Thus, we deal with successive iterations applied to rectangular arrays of numbers, where, to avoid technical difficulties, an array has at least three rows and at least three columns. Without loss of generality, an iteration begins with operations on columns: first subtract the mean of each column, then divide by its standard deviation. The iteration continues with the same two operations applied successively to rows. These four operations applied in sequence complete one iteration, and one then iterates again and again. In [1] it was argued that if arrays are made up of real numbers, then the set for which convergence of these successive iterations fails has Lebesgue measure 0. The limiting array has row and column means 0 and row and column standard deviations 1. Moreover, many graphics in [1] suggest that, apart from a set of arrays of Lebesgue measure 0, convergence is very rapid, eventually exponentially fast in the number of iterations. Here a mathematical reason for this is suggested. More importantly, the rapidity of convergence is illustrated by numerical examples.

The Evolution of Data Center Networking Technologies. Antonio Scarfò. doi:10.1109/CCP.2011.30

The emerging challenges, such as simplicity, efficiency and agility, together with new optical-empowered technologies, are driving innovation in data center networking. Virtualization, consolidation and, more generally, the cloud-oriented approach are the pillars of the new technological wave. A few key technologies, FCoE, TRILL and OTV, are leading this evolution, fostering the development of new networking architectures, models and communication paradigms. In this scenario, both the design models and the power/footprint ratios for data centers are changing significantly. This work presents the state of the art of the different technologies driving data center evolution, focusing mainly on the most novel and evolutionary issues in networking architectures, protocols and standards.

Low Complexity, High Efficiency Probability Model for Hyper-spectral Image Coding. Francesc Aulí Llinàs, Joan Bartrina-Rapesta, J. Serra-Sagristà, M. Marcellin. doi:10.1109/CCP.2011.10

This paper describes a low-complexity, high-efficiency lossy-to-lossless coding scheme for hyper-spectral images. Using only a 2D wavelet transform on individual image components, the proposed scheme achieves coding performance similar to that of a 3D transform strategy that adds one level of wavelet decomposition along the depth axis of the volume. The proposed scheme operates by means of a probability model for the symbols emitted by the bit-plane coding engine. This probability model captures the statistical behavior of hyper-spectral images with high precision. The proposed method is implemented in the core coding system of JPEG2000, reducing computational costs by 25%.

Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding. Jose Enrique Sánchez, E. Auge, J. Santaló, Ian Blanes, J. Serra-Sagristà, A. Kiely. doi:10.1109/CCP.2011.17

A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor and is adapted to better address the constraints of onboard scenarios. In this paper, we present a review of the state of the art in this field and provide an experimental comparison of the coding performance of the emerging standard against other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of CCSDS-MHDC test images.

Saving Energy in Data Center Infrastructures. S. Ricciardi, D. Careglio, G. Santos-Boada, J. Solé-Pareta, Ugo Fiore, F. Palmieri. doi:10.1109/CCP.2011.9

At present, data centers consume a considerable percentage of the electrical energy produced worldwide, equivalent to the output of 26 nuclear power plants, and this energy demand is growing at a fast pace due to the ever-increasing data volumes to be processed, stored and accessed every day in modern grid and cloud infrastructures. Such growth in energy consumption is clearly not sustainable, and it is necessary to limit the data center power budget by controlling the absorbed energy while keeping the desired level of service. In this paper, we describe Energy Farm, a data center energy manager that exploits load fluctuations to save as much energy as possible while satisfying quality-of-service requirements. Energy Farm achieves energy savings by aggregating traffic during low-load periods and temporarily turning off a subset of computing resources. It respects the logical and physical dependencies of the interconnected devices in the data center and performs automatic shutdown even in emergency cases such as temperature peaks and power leakages. Results show that high resource utilization efficiency is possible in data center infrastructures and that huge savings in terms of energy (MWh), emissions (tons of CO2) and costs (k€) are achievable.

Fast Implementation of Block Motion Estimation Algorithms in Video Encoders. N. Koduri, M. Dlodlo, G. D. Jager, K. Ferguson. doi:10.1109/CCP.2011.19

Block matching algorithms (BMA) are central to optimal frame prediction for motion estimation in video compression. This paper focuses on the efficiency of Hierarchical Search (HS) algorithms. The research proposes two new combinations of fast algorithms, the Small Diamond-Shaped Search Pattern (SDSP) and the Square-Shaped Search Pattern (SSSP), with a three-level hierarchical algorithm at different levels of the hierarchy. The computational complexity and efficiency of each combination were of interest. Simulation results show that the developed combination algorithms, hierarchical search with SDSP (HSD) and hierarchical search with SDSP and SSSP (HSD+SQ), are around 10% faster than the classic hierarchical algorithm, with either a slight improvement or no significant change in video quality compared to the general HS algorithm.

The E-healthcare Point of Diagnosis Implementation as a First Instance. C. Ravariu, F. Babarada. doi:10.1109/CCP.2011.23

A new challenge in biomedical engineering is remote patient monitoring through web applications. This paper proposes a first instance of diagnosis that guides patients to a first medical point, with the advantage of a flexible structure suitable for future web developers, such as students from bioinformatics programs. These kinds of applications form a sub-component of telemedicine, acting as web-assisted medical e-healthcare diagnosis points. A diseases-symptoms database was uploaded in order to establish a final diagnosis. The main aim of this article is to offer a web protocol for medical diagnosis within educational projects, as an application for learning, bioinformatics and telemedicine. The paper presents original software developed in HTML and related technologies, as a flexible student environment.

A Method to Ensure the Confidentiality of the Compressed Data. M. O. Kulekci. doi:10.1109/CCP.2011.28

The usual way of ensuring the confidentiality of compressed data is to encrypt it with a standard encryption algorithm such as AES. However, encryption not only adds computational complexity, but also removes the flexibility to perform pattern matching on the compressed data, which is an active research topic in stringology. In this study, we investigate secure compression solutions and propose a practical method to keep the contents of compressed data hidden. The method is based on the Burrows-Wheeler transform (BWT): a randomly selected permutation of the input symbols is used as the lexicographical ordering during construction. The motivation is the observation that, on the BWT of input data, it is not possible to perform a successful search, nor to reconstruct any part of the data, without correct knowledge of the character ordering, and capturing that secret ordering from the BWT is hard. The proposed method is an elegant alternative to standard encryption approaches, with the advantage of supporting compressed pattern matching while still preserving confidentiality. When the input data is homophonic, such that the frequencies of the symbols are flat and the alphabet is sufficiently large, the proposed technique makes it possible to unify compression and security in a single framework instead of the two-level compress-then-encrypt paradigm.

QoS Performance Testing of Multimedia Delivery over WiMAX Networks. D. Reid, A. Srinivasan, W. Almuhtadi. doi:10.1109/CCP.2011.26

This paper addresses the important performance issues that arise when multimedia traffic is carried over WiMAX systems. In the future, WiMAX will be used in conjunction with other wireless systems to deliver a variety of multimedia services. This paper presents the results of application testing using commercially available WiMAX products. The main focus is to show the effectiveness of QoS capabilities in delivering streaming multimedia such as IPTV and similar media content. The results provide a good indication of the applicability of WiMAX to multimedia applications. These findings will be followed up by field trials with IPTV and other live streaming media.