
2011 First International Conference on Data Compression, Communications and Processing: Latest Publications

Lossless Compression of Hyperspectral Imagery
Raffaele Pizzolante
In this paper we review the Spectral oriented Least SQuares (SLSQ) algorithm, an efficient, low-complexity algorithm for lossless hyperspectral image compression presented in [2]. Subsequently, we consider two important measures, Pearson's correlation and the Bhattacharyya distance, and describe a band ordering approach based on these distances. Finally, we report experimental results achieved with a Java-based implementation of SLSQ on data cubes acquired by NASA JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
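As a concrete illustration of the band-ordering idea mentioned in the abstract, the following Python fragment is a minimal, hypothetical sketch (not the authors' SLSQ code): it computes Pearson's correlation between the bands of a toy data cube and greedily orders the bands so that each band follows its most correlated predecessor. The greedy heuristic, the synthetic cube and all function names are assumptions made for illustration.

```python
import numpy as np

def pearson(band_a, band_b):
    """Pearson correlation between two bands, flattened to 1-D."""
    a = band_a.ravel().astype(float)
    b = band_b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def order_bands(cube):
    """Greedy band ordering (hypothetical heuristic): start from band 0 and
    repeatedly append the unused band most correlated with the last one chosen."""
    n_bands = cube.shape[0]
    order, remaining = [0], set(range(1, n_bands))
    while remaining:
        last = order[-1]
        best = max(remaining, key=lambda b: abs(pearson(cube[last], cube[b])))
        order.append(best)
        remaining.remove(best)
    return order

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.integers(0, 4096, size=(8, 16, 16))  # toy cube: (bands, rows, cols)
    print(order_bands(cube))
```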
{"title":"Lossless Compression of Hyperspectral Imagery","authors":"Raffaele Pizzolante","doi":"10.1109/CCP.2011.31","DOIUrl":"https://doi.org/10.1109/CCP.2011.31","url":null,"abstract":"In this paper we review the Spectral oriented Least SQuares (SLSQ) algorithm : an efficient and low complexity algorithm for Hyper spectral Image loss less compression, presented in [2]. Subsequently, we consider two important measures : Pearson's Correlation and Bhattacharyya distance and describe a band ordering approach based on this distances. Finally, we report experimental results achieved with a Java-based implementation of SLSQ on data cubes acquired by NASA JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123078323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Evaluating New Cluster Setup on 10Gbit/s Network to Support the SuperB Computing Model
D. D. Prete, S. Pardi, G. Russo
The new era of particle physics poses strong constraints on computing and storage availability for data analysis and data distribution. The SuperB project plans to produce and analyze datasets two times bigger than those of the current HEP experiments. In this scenario, one of the main issues is to create a new cluster setup that can scale over the next ten years and take advantage of new fabric technologies, including multicore processors and graphics processing units (GPUs). In this paper we propose a new site-wide cluster setup for Tier1 computing facilities, aimed at integrating storage and computing resources through a mix of high-density storage solutions, a cluster file system and Nx10Gbit/s network interfaces. The main idea is to overcome the bottleneck due to storage-computing decoupling through a scalable model composed of nodes with many cores and several disks in JBOD configuration. Preliminary tests made on a 10Gbit/s cluster with a real SuperB use case show the validity of our approach.
{"title":"Evaluating New Cluster Setup on 10Gbit/s Network to Support the SuperB Computing Model","authors":"D. D. Prete, S. Pardi, G. Russo","doi":"10.1109/CCP.2011.33","DOIUrl":"https://doi.org/10.1109/CCP.2011.33","url":null,"abstract":"The new era of particle physics poses strong constraints on computing and storage availability for data analysis and data distribution. The SuperB project plans to produce and analyzes bulk of dataset two times bigger than the actual HEP experiment. In this scenario one of the main issues is to create a new cluster setup, able to scale for the next ten years and to take advantage from the new fabric technologies, included multicore and graphic programming units (GPUs). In this paper we propose a new site-wide cluster setup for Tier1 computer facilities, aimed to integrate storage and computing resources through a mix of high density storage solutions, cluster file system and Nx10Gbit/s network interfaces. The main idea is overcome the bottleneck due to the storage-computing decoupling through a scalable model composed by nodes with many cores and several disks in JBOD configuration. Preliminary tests made on 10Gbit/s cluster with a real SuperB use case, show the validity of our approach.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125806623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
CoTracks: A New Lossy Compression Schema for Tracking Logs Data Based on Multiparametric Segmentation
W. Balzano, M. D. Sorbo
The massive diffusion of positioning devices and services, transmitting and producing spatio-temporal data, has raised space complexity problems and pulled the research focus toward efficient, specialized algorithms to compress these huge amounts of stored or flowing data. The CoTracks algorithm has been designed for lossy compression of GPS data, exploiting analogies between all of their spatio-temporal features. The original contribution of this algorithm is the consideration of the altitude of the track, the elaboration of 3D data, and a dynamic view of the moving point, since speed, tightly linked to time, is assumed to be one of the significant parameters in the uniformity search. The Minimum Bounding Box is the tool employed to group data points and to generate the key points of the approximated trajectory. The compression ratio, obtained after a further Huffman coding stage, is attractively high, suggesting interesting new developments of this technique.
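To make the Minimum Bounding Box segmentation idea concrete, here is a small hedged Python sketch, not the CoTracks implementation: it grows a segment while the bounding box of its (lat, lon, alt) points stays within per-axis tolerances, and keeps one key point per segment. The tolerances and the choice of the first point as the key point are illustrative assumptions.

```python
def segment_track(points, tol=(1e-4, 1e-4, 5.0)):
    """Grow a segment while the minimum bounding box of its (lat, lon, alt)
    points stays within the per-axis tolerances `tol` (illustrative values)."""
    segments, current = [], [points[0]]
    for p in points[1:]:
        lo = [min(c) for c in zip(*(current + [p]))]
        hi = [max(c) for c in zip(*(current + [p]))]
        if all(hi[i] - lo[i] <= tol[i] for i in range(3)):
            current.append(p)            # bounding box still small enough
        else:
            segments.append(current)     # close the segment, start a new one
            current = [p]
    segments.append(current)
    return segments

def key_points(points, tol=(1e-4, 1e-4, 5.0)):
    """Lossy approximation: represent each segment by its first point."""
    return [seg[0] for seg in segment_track(points, tol)]

if __name__ == "__main__":
    track = [(45.0 + i * 1e-5, 9.0 + i * 1e-5, 120.0 + (i % 4)) for i in range(50)]
    print(len(track), "points ->", len(key_points(track)), "key points")
```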
{"title":"CoTracks: A New Lossy Compression Schema for Tracking Logs Data Based on Multiparametric Segmentation","authors":"W. Balzano, M. D. Sorbo","doi":"10.1109/CCP.2011.37","DOIUrl":"https://doi.org/10.1109/CCP.2011.37","url":null,"abstract":"A massive diffusion of positioning devices and services, transmitting and producing spatio-temporal data, raised space complexity problems and pulled the research focus toward efficient and specific algorithms to compress these huge amount of stored or flowing data. Co Tracks algorithm has been projected for a lossy compression of GPS data, exploiting analogies between all their spatio-temporal features. The original contribution of this algorithm is the consideration of the altitude of the track, an elaboration of 3D data and a dynamic vision of the moving point, because the speed, tightly linked to the time, is supposed to be one of the significant parameters in the uniformity search. Minimum Bounding Box has been the tool employed to group data points and to generate the key points of the approximated trajectory. The compression ratio, resulting also after a further Huffman coding, appears attractively high, suggesting new interesting developments of this new technique.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114504084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Straight-Line Programs: A Practical Test
I. Burmistrov, Lesha Khvorost
We present an improvement of Rytter's algorithm that constructs a straight-line program for a given text, and show that the improved algorithm is optimal in the worst case with respect to the number of AVL-tree rotations. We also compare Rytter's algorithm and ours on various data sets and provide a comparative analysis of the compression ratios achieved by these algorithms, by LZ77 and by LZW.
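For readers unfamiliar with straight-line programs, the following Python sketch (illustrative only, unrelated to Rytter's construction or to the AVL-grammar improvement) shows the object being built: a grammar in which every nonterminal has exactly one binary rule and the whole program derives a single string.

```python
def expand(symbol, rules):
    """Expand an SLP symbol: terminals map to themselves, nonterminals to the
    concatenation of their two children's expansions."""
    if symbol not in rules:              # terminal character
        return symbol
    left, right = rules[symbol]
    return expand(left, rules) + expand(right, rules)

if __name__ == "__main__":
    # A tiny SLP: five binary rules derive the 13-character string
    # "abaababaabaab"; for highly repetitive texts the number of rules can be
    # exponentially smaller than the length of the derived string.
    rules = {
        "X1": ("a", "b"),
        "X2": ("X1", "a"),
        "X3": ("X2", "X1"),
        "X4": ("X3", "X2"),
        "X5": ("X4", "X3"),
    }
    print(expand("X5", rules))
```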
{"title":"Straight-Line Programs: A Practical Test","authors":"I. Burmistrov, Lesha Khvorost","doi":"10.1109/CCP.2011.8","DOIUrl":"https://doi.org/10.1109/CCP.2011.8","url":null,"abstract":"We present an improvement of Rytter's algorithm that constructs a straight-line program for a given text and show that the improved algorithm is optimal in the worst case with respect to the number of AVL-tree rotations. Also we compare Rytter's and ours algorithms on various data sets and provide a comparative analysis of compression ratio achieved by these algorithms, by LZ77 and by LZW.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126666876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Overload Control through Multiprocessor Load Sharing in ATCA Architecture
S. Montagna, M. Pignolo
This work deals with overload control schemes within ATCA modules that implement IMS functionalities and exploit the cooperation between processors. A performance evaluation is carried out on two algorithms aimed at optimizing the workload of multiple processors within ATCA boards performing incoming traffic control. The driving policy of the first algorithm is a continuous estimation of the mean processor workload, while the second algorithm performs load balancing based on a queue estimation. The key performance indicator is the throughput, i.e. the number of sessions managed within a fixed time period.
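A minimal Python sketch of the two dispatching policies described above, under assumed parameters and a toy traffic model (it is not the paper's ATCA implementation): one policy picks the processor with the lowest smoothed load estimate, the other picks the one with the shortest queue.

```python
import random

def pick_by_load(load):
    """Policy 1: send the session to the processor with the lowest smoothed load."""
    return min(range(len(load)), key=lambda i: load[i])

def pick_by_queue(queue):
    """Policy 2: send the session to the processor with the shortest queue."""
    return min(range(len(queue)), key=lambda i: queue[i])

def simulate(n_proc=4, n_sessions=1000, alpha=0.2, policy="queue", seed=1):
    """Toy traffic model: sessions arrive one by one; each processor completes a
    queued session with fixed probability per step (all parameters illustrative)."""
    rng = random.Random(seed)
    load = [0.0] * n_proc               # exponentially smoothed load estimates
    queue = [0] * n_proc                # outstanding sessions per processor
    for _ in range(n_sessions):
        cost = rng.uniform(0.5, 1.5)    # work carried by the incoming session
        i = pick_by_queue(queue) if policy == "queue" else pick_by_load(load)
        queue[i] += 1
        load[i] = (1 - alpha) * load[i] + alpha * cost
        for j in range(n_proc):         # each processor may finish one session
            if queue[j] and rng.random() < 0.8:
                queue[j] -= 1
    return load, queue

if __name__ == "__main__":
    for policy in ("load", "queue"):
        load, queue = simulate(policy=policy)
        print(policy, "-> queues:", queue)
```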
{"title":"Overload Control through Multiprocessor Load Sharing in ATCA Architecture","authors":"S. Montagna, M. Pignolo","doi":"10.1109/CCP.2011.13","DOIUrl":"https://doi.org/10.1109/CCP.2011.13","url":null,"abstract":"This work will deal with overload control schemes within ATCA modules achieving IMS functionalities and exploiting the cooperation between processors. A performance evaluation will be carried out on two algorithms aimed at optimizing multiple processors workload within ATCA boards performing incoming traffic control. The driving policy of the first algorithm consists in a continuous estimation of the mean processors workload, while the gear of the other algorithm is a load balancing following a queue estimation. The Key Performance Indicator will be represented by the throughput, i.e. the number of sessions managed within a fixed time period.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124088201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cache Friendly Burrows-Wheeler Inversion
Juha Kärkkäinen, S. Puglisi
The Burrows-Wheeler transform permutes the symbols of a string such that the permuted string can be compressed effectively with fast, simple techniques. Inversion of the transform is a bottleneck in practice. Inversion takes linear time, but, for each symbol decoded, folklore says that a random access into the transformed string (and so a CPU cache miss) is necessary. In this paper we show how to mitigate cache misses and so speed up inversion. Our main idea is to modify the standard inversion algorithm to detect and record repeated substrings in the original string as it is recovered. Subsequent occurrences of these repetitions are then copied in a cache-friendly way from the already recovered portion of the string, short-cutting a series of random accesses made by the standard inversion algorithm. We show experimentally that this approach leads to faster runtimes in general, and can drastically reduce inversion time for highly repetitive data.
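For reference, the following Python sketch implements the standard (cache-unfriendly) inversion via the LF mapping that the paper takes as its baseline; the cache-friendly repeat-copying optimization itself is not reproduced here. The sentinel handling and function names are illustrative.

```python
def bwt(text):
    """Burrows-Wheeler transform via naive rotation sorting (for demonstration)."""
    text += "$"                       # unique sentinel, assumed absent from text
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(last_column):
    """Standard linear-time inversion: walk the LF mapping backwards through the text."""
    n = len(last_column)
    # Stable sort of positions gives, for each row of the first column F,
    # the position of the same character occurrence in the last column L.
    order = sorted(range(n), key=lambda i: last_column[i])
    lf = [0] * n                      # LF mapping: L-row -> F-row of same occurrence
    for f_pos, l_pos in enumerate(order):
        lf[l_pos] = f_pos
    j = last_column.index("$")        # row holding the original text
    out = []
    for _ in range(n):                # each lf[j] lookup is the random access
        out.append(last_column[j])    # that makes inversion cache-unfriendly
        j = lf[j]
    return "".join(reversed(out))[:-1]  # drop the sentinel

if __name__ == "__main__":
    s = "mississippi"
    encoded = bwt(s)
    assert inverse_bwt(encoded) == s
    print(encoded, "->", inverse_bwt(encoded))
```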
{"title":"Cache Friendly Burrows-Wheeler Inversion","authors":"Juha Kärkkäinen, S. Puglisi","doi":"10.1109/CCP.2011.15","DOIUrl":"https://doi.org/10.1109/CCP.2011.15","url":null,"abstract":"The Burrows-Wheeler transform permutes the symbols of a string such that the permuted string can be compressed effectively with fast, simple techniques. Inversion of the transform is a bottleneck in practice. Inversion takes linear time, but, for each symbol decoded, folklore says that a random access into the transformed string (and so a CPU cache-miss) is necessary. In this paper we show how to mitigate cache misses and so speed inversion. Our main idea is to modify the standard inversion algorithm to detect and record repeated sub strings in the original string as it is recovered. Subsequent occurrences of these repetitions are then copied in a cache friendly way from the already recovered portion of the string, short cutting a series of random accesses by the standard inversion algorithm. We show experimentally that this approach leads to faster runtimes in general, and can drastically reduce inversion time for highly repetitive data.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125845245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Cataloga: A Software for Semantic-Based Terminological Data Mining
A. Elia, Mario Monteleone, Alberto Postiglione
This paper is focused on Cataloga, a software package based on the Lexicon-Grammar theoretical and practical analytical framework and embedding a lingware module built on compressed terminological electronic dictionaries. We show how Cataloga can be used to achieve efficient data mining and information retrieval by means of a lexical ontology associated with terminology-based automatic textual analysis. We also show how accurate data compression is necessary to build efficient textual analysis software. We therefore discuss the creation and functioning of a software for semantic-based terminological data mining, in which a crucial role is played by Italian simple-word and compound-word electronic dictionaries. Lexicon-Grammar is one of the most profitable and consistent methods for natural language formalization and automatic textual analysis; it was set up by the French linguist Maurice Gross during the '60s, and subsequently developed for and applied to Italian by Annibale Elia, Emilio D'Agostino and Maurizio Martinelli. Basically, Lexicon-Grammar establishes morphosyntactic and statistical sets of analytic rules to read and parse large textual corpora. The analytical procedure described here is appropriate for any type of digitalized text, and represents relevant support for building and implementing Semantic Web (SW) interactive platforms.
{"title":"Cataloga: A Software for Semantic-Based Terminological Data Mining","authors":"A. Elia, Mario Monteleone, Alberto Postiglione","doi":"10.1109/CCP.2011.42","DOIUrl":"https://doi.org/10.1109/CCP.2011.42","url":null,"abstract":"This paper is focused on Catalog a, a software package based on Lexicon-Grammar theoretical and practical analytical framework and embedding a ling ware module built on compressed terminological electronic dictionaries. We will here show how Catalog a can be used to achieve efficient data mining and information retrieval by means of lexical ontology associated to terminology-based automatic textual analysis. Also, we will show how accurate data compression is necessary to build efficient textual analysis software. Therefore, we will here discuss the creation and functioning of a software for semantic-based terminological data mining, in which a crucial role is played by Italian simple and compound-word electronic dictionaries. Lexicon-Grammar is one of the most profitable and consistent methods for natural language formalization and automatic textual analysis it was set up by French linguist Maurice Gross during the '60s, and subsequently developed for and applied to Italian by Annibale Elia, Emilio D'Agostino and Maurizio Martin Elli. Basically, Lexicon-Grammar establishes morph syntactic and statistical sets of analytic rules to read and parse large textual corpora. The analytical procedure here described will prove itself appropriate for any type of digitalized text, and will represent a relevant support for the building and implementing of Semantic Web (SW) interactive platforms.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123407640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Combining Non-stationary Prediction, Optimization and Mixing for Data Compression
Christopher Mattern
In this paper an approach to modelling non-stationary binary sequences, i.e., predicting the probability of upcoming symbols, is presented. After studying the prediction model, we evaluate its performance in two non-artificial test cases. First, the model is compared to the Laplace and Krichevsky-Trofimov estimators. Secondly, a statistical ensemble model for compressing Burrows-Wheeler transform output is worked out and evaluated. A systematic approach to the parameter optimization of an individual model and of the ensemble model is given.
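As background for the comparison mentioned above, here is a small Python sketch of the Laplace and Krichevsky-Trofimov estimators for binary sequences, plus an exponentially decayed-count variant as one generic way to track non-stationary statistics; the decay parameter and the variant itself are assumptions for illustration, not the paper's model.

```python
def laplace(n0, n1, bit):
    """Laplace estimator: add-one smoothing over the bit counts seen so far."""
    return ((n0, n1)[bit] + 1) / (n0 + n1 + 2)

def kt(n0, n1, bit):
    """Krichevsky-Trofimov estimator: add-one-half smoothing."""
    return ((n0, n1)[bit] + 0.5) / (n0 + n1 + 1)

def predict_decayed(bits, beta=0.98):
    """Sequential P(bit=1) with exponentially decayed counts, so recent symbols
    dominate when the source drifts (decay factor is an illustrative choice)."""
    n0 = n1 = 0.0
    probs = []
    for b in bits:
        probs.append((n1 + 0.5) / (n0 + n1 + 1.0))   # KT form on decayed counts
        n0, n1 = beta * n0, beta * n1
        n1 += b
        n0 += 1 - b
    return probs

if __name__ == "__main__":
    print("KT:", kt(3, 1, 1), "Laplace:", laplace(3, 1, 1))
    drift = [0] * 50 + [1] * 50          # the source switches from 0s to 1s
    print("P(1) after the switch:", round(predict_decayed(drift)[-1], 3))
```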
{"title":"Combining Non-stationary Prediction, Optimization and Mixing for Data Compression","authors":"Christopher Mattern","doi":"10.1109/CCP.2011.22","DOIUrl":"https://doi.org/10.1109/CCP.2011.22","url":null,"abstract":"In this paper an approach to modelling nonstationary binary sequences, i.e., predicting the probability of upcoming symbols, is presented. After studying the prediction model we evaluate its performance in two non-artificial test cases. First the model is compared to the Laplace and Krichevsky-Trofimov estimators. Secondly a statistical ensemble model for compressing Burrows-Wheeler-Transform output is worked out and evaluated. A systematic approach to the parameter optimization of an individual model and the ensemble model is stated.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"2011 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121792579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
A Patch for Squashfs to Improve the Compressed Files Contents Search: HSFS
N. Corriero
SquashFS is a compressed Linux file system. Hixosfs is a file system that improves file content search by using metadata information. In this paper we propose to apply the Hixosfs idea in the SquashFS context by creating a new file system, HSFS. HSFS is a compressed Linux file system that stores metadata within its nodes. We compare our idea with other common solutions and test it with DICOM files used to store medical images.
{"title":"A Patch for Squashfs to Improve the Compressed Files Contents Search: HSFS","authors":"N. Corriero","doi":"10.1109/CCP.2011.34","DOIUrl":"https://doi.org/10.1109/CCP.2011.34","url":null,"abstract":"Squash FS is a Linux compress file system. Hixosfs is a file system to improve file content search by using metadata information's. In this paper we propose to use Hixosfs idea in Squash FS context by creating a new file system HSFS. HSFS is a compress Linux file system to store metadata within nodes. We compare our idea with other common solutions. We test our idea with DICOM file used to store medical images.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121068281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A Novel Approach to QoS Monitoring in the Cloud
L. Romano, D. Mari, Zbigniew Jerzak, C. Fetzer
The availability of a dependable (i.e. reliable and timely) QoS monitoring facility is key for the real take-up of cloud computing, since - by allowing organizations to receive the full value of cloud computing services - it would increase the level of trust they place in this emerging technology. In this paper, we present a dependable QoS monitoring facility which relies on the "as a Service" paradigm, and can thus be made available to virtually all cloud users in a seamless way. Such a facility is called QoS-MONaaS, which stands for "Quality of Service MONitoring as a Service". Details are given about the internal design, current implementation, and experimental validation of the service.
{"title":"A Novel Approach to QoS Monitoring in the Cloud","authors":"L. Romano, D. Mari, Zbigniew Jerzak, C. Fetzer","doi":"10.1109/CCP.2011.49","DOIUrl":"https://doi.org/10.1109/CCP.2011.49","url":null,"abstract":"The availability of a dependable (i.e. reliable and timely) QoS monitoring facility is key for the real take up of cloud computing, since - by allowing organizations to receive the full value of cloud computing services - it would increase the level of trust they would place in this emerging technology. In this paper, we present a dependable QoS monitoring facility which relies on the \"as a Service\" paradigm, and can thus be made available to virtually all cloud users in a seamless way. Such a facility is called QoS-MONaaS, which stands for \"Quality of Service MONitoring as a Service\". Details are given about the internal design, current implementation, and experimental validation of the service.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126627524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 48