In this paper we review the Spectral-oriented Least SQuares (SLSQ) algorithm, an efficient, low-complexity algorithm for lossless hyperspectral image compression presented in [2]. Subsequently, we consider two important measures, Pearson's correlation and the Bhattacharyya distance, and describe a band-ordering approach based on these distances. Finally, we report experimental results achieved with a Java-based implementation of SLSQ on data cubes acquired by NASA JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
{"title":"Lossless Compression of Hyperspectral Imagery","authors":"Raffaele Pizzolante","doi":"10.1109/CCP.2011.31","DOIUrl":"https://doi.org/10.1109/CCP.2011.31","url":null,"abstract":"In this paper we review the Spectral oriented Least SQuares (SLSQ) algorithm : an efficient and low complexity algorithm for Hyper spectral Image loss less compression, presented in [2]. Subsequently, we consider two important measures : Pearson's Correlation and Bhattacharyya distance and describe a band ordering approach based on this distances. Finally, we report experimental results achieved with a Java-based implementation of SLSQ on data cubes acquired by NASA JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123078323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The new era of particle physics places strong constraints on the computing and storage available for data analysis and data distribution. The SuperB project plans to produce and analyze datasets twice as large as those of current HEP experiments. In this scenario, one of the main issues is to create a new cluster setup that can scale over the next ten years and take advantage of new fabric technologies, including multicore processors and graphics processing units (GPUs). In this paper we propose a new site-wide cluster setup for Tier-1 computing facilities, aimed at integrating storage and computing resources through a mix of high-density storage solutions, a cluster file system, and Nx10 Gbit/s network interfaces. The main idea is to overcome the bottleneck caused by storage-computing decoupling through a scalable model composed of nodes with many cores and several disks in JBOD configuration. Preliminary tests on a 10 Gbit/s cluster with a real SuperB use case show the validity of our approach.
{"title":"Evaluating New Cluster Setup on 10Gbit/s Network to Support the SuperB Computing Model","authors":"D. D. Prete, S. Pardi, G. Russo","doi":"10.1109/CCP.2011.33","DOIUrl":"https://doi.org/10.1109/CCP.2011.33","url":null,"abstract":"The new era of particle physics poses strong constraints on computing and storage availability for data analysis and data distribution. The SuperB project plans to produce and analyzes bulk of dataset two times bigger than the actual HEP experiment. In this scenario one of the main issues is to create a new cluster setup, able to scale for the next ten years and to take advantage from the new fabric technologies, included multicore and graphic programming units (GPUs). In this paper we propose a new site-wide cluster setup for Tier1 computer facilities, aimed to integrate storage and computing resources through a mix of high density storage solutions, cluster file system and Nx10Gbit/s network interfaces. The main idea is overcome the bottleneck due to the storage-computing decoupling through a scalable model composed by nodes with many cores and several disks in JBOD configuration. Preliminary tests made on 10Gbit/s cluster with a real SuperB use case, show the validity of our approach.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125806623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The massive diffusion of positioning devices and services that transmit and produce spatio-temporal data has raised space-complexity problems and pulled research focus toward efficient, specialized algorithms for compressing these huge amounts of stored or streaming data. The CoTracks algorithm has been designed for lossy compression of GPS data, exploiting analogies among all of their spatio-temporal features. The original contribution of this algorithm is its consideration of the altitude of the track, its processing of 3D data, and its dynamic view of the moving point: speed, tightly linked to time, is assumed to be one of the significant parameters in the search for uniformity. The minimum bounding box is the tool used to group data points and to generate the key points of the approximated trajectory. The compression ratio, measured also after a further Huffman coding step, appears attractively high, suggesting interesting further developments of this new technique.
{"title":"CoTracks: A New Lossy Compression Schema for Tracking Logs Data Based on Multiparametric Segmentation","authors":"W. Balzano, M. D. Sorbo","doi":"10.1109/CCP.2011.37","DOIUrl":"https://doi.org/10.1109/CCP.2011.37","url":null,"abstract":"A massive diffusion of positioning devices and services, transmitting and producing spatio-temporal data, raised space complexity problems and pulled the research focus toward efficient and specific algorithms to compress these huge amount of stored or flowing data. Co Tracks algorithm has been projected for a lossy compression of GPS data, exploiting analogies between all their spatio-temporal features. The original contribution of this algorithm is the consideration of the altitude of the track, an elaboration of 3D data and a dynamic vision of the moving point, because the speed, tightly linked to the time, is supposed to be one of the significant parameters in the uniformity search. Minimum Bounding Box has been the tool employed to group data points and to generate the key points of the approximated trajectory. The compression ratio, resulting also after a further Huffman coding, appears attractively high, suggesting new interesting developments of this new technique.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114504084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an improvement of Rytter's algorithm that constructs a straight-line program for a given text and show that the improved algorithm is optimal in the worst case with respect to the number of AVL-tree rotations. We also compare Rytter's algorithm and ours on various data sets and provide a comparative analysis of the compression ratios achieved by these algorithms, by LZ77 and by LZW.
{"title":"Straight-Line Programs: A Practical Test","authors":"I. Burmistrov, Lesha Khvorost","doi":"10.1109/CCP.2011.8","DOIUrl":"https://doi.org/10.1109/CCP.2011.8","url":null,"abstract":"We present an improvement of Rytter's algorithm that constructs a straight-line program for a given text and show that the improved algorithm is optimal in the worst case with respect to the number of AVL-tree rotations. Also we compare Rytter's and ours algorithms on various data sets and provide a comparative analysis of compression ratio achieved by these algorithms, by LZ77 and by LZW.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126666876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This work deals with overload control schemes within ATCA modules that provide IMS functionality and exploit cooperation between processors. A performance evaluation is carried out on two algorithms aimed at optimizing the workload of multiple processors within ATCA boards performing incoming traffic control. The driving policy of the first algorithm is a continuous estimation of the mean processor workload, while the second algorithm balances load according to a queue estimation. The key performance indicator is the throughput, i.e., the number of sessions managed within a fixed time period.
{"title":"Overload Control through Multiprocessor Load Sharing in ATCA Architecture","authors":"S. Montagna, M. Pignolo","doi":"10.1109/CCP.2011.13","DOIUrl":"https://doi.org/10.1109/CCP.2011.13","url":null,"abstract":"This work will deal with overload control schemes within ATCA modules achieving IMS functionalities and exploiting the cooperation between processors. A performance evaluation will be carried out on two algorithms aimed at optimizing multiple processors workload within ATCA boards performing incoming traffic control. The driving policy of the first algorithm consists in a continuous estimation of the mean processors workload, while the gear of the other algorithm is a load balancing following a queue estimation. The Key Performance Indicator will be represented by the throughput, i.e. the number of sessions managed within a fixed time period.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124088201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Burrows-Wheeler transform permutes the symbols of a string such that the permuted string can be compressed effectively with fast, simple techniques. Inversion of the transform is a bottleneck in practice. Inversion takes linear time, but, for each symbol decoded, folklore says that a random access into the transformed string (and so a CPU cache miss) is necessary. In this paper we show how to mitigate cache misses and so speed up inversion. Our main idea is to modify the standard inversion algorithm to detect and record repeated substrings in the original string as it is recovered. Subsequent occurrences of these repetitions are then copied in a cache-friendly way from the already recovered portion of the string, shortcutting a series of random accesses by the standard inversion algorithm. We show experimentally that this approach leads to faster runtimes in general, and can drastically reduce inversion time for highly repetitive data.
{"title":"Cache Friendly Burrows-Wheeler Inversion","authors":"Juha Kärkkäinen, S. Puglisi","doi":"10.1109/CCP.2011.15","DOIUrl":"https://doi.org/10.1109/CCP.2011.15","url":null,"abstract":"The Burrows-Wheeler transform permutes the symbols of a string such that the permuted string can be compressed effectively with fast, simple techniques. Inversion of the transform is a bottleneck in practice. Inversion takes linear time, but, for each symbol decoded, folklore says that a random access into the transformed string (and so a CPU cache-miss) is necessary. In this paper we show how to mitigate cache misses and so speed inversion. Our main idea is to modify the standard inversion algorithm to detect and record repeated sub strings in the original string as it is recovered. Subsequent occurrences of these repetitions are then copied in a cache friendly way from the already recovered portion of the string, short cutting a series of random accesses by the standard inversion algorithm. We show experimentally that this approach leads to faster runtimes in general, and can drastically reduce inversion time for highly repetitive data.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125845245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper focuses on Cataloga, a software package based on the Lexicon-Grammar theoretical and practical analytical framework and embedding a lingware module built on compressed terminological electronic dictionaries. We show how Cataloga can be used to achieve efficient data mining and information retrieval by means of a lexical ontology associated with terminology-based automatic textual analysis. We also show how accurate data compression is necessary to build efficient textual analysis software. We therefore discuss the creation and functioning of a software tool for semantic-based terminological data mining, in which a crucial role is played by Italian simple-word and compound-word electronic dictionaries. Lexicon-Grammar is one of the most profitable and consistent methods for natural language formalization and automatic textual analysis; it was established by the French linguist Maurice Gross during the 1960s and subsequently developed for and applied to Italian by Annibale Elia, Emilio D'Agostino and Maurizio Martinelli. Basically, Lexicon-Grammar establishes morphosyntactic and statistical sets of analytic rules to read and parse large textual corpora. The analytical procedure described here proves appropriate for any type of digitized text and represents relevant support for building and implementing Semantic Web (SW) interactive platforms.
{"title":"Cataloga: A Software for Semantic-Based Terminological Data Mining","authors":"A. Elia, Mario Monteleone, Alberto Postiglione","doi":"10.1109/CCP.2011.42","DOIUrl":"https://doi.org/10.1109/CCP.2011.42","url":null,"abstract":"This paper is focused on Catalog a, a software package based on Lexicon-Grammar theoretical and practical analytical framework and embedding a ling ware module built on compressed terminological electronic dictionaries. We will here show how Catalog a can be used to achieve efficient data mining and information retrieval by means of lexical ontology associated to terminology-based automatic textual analysis. Also, we will show how accurate data compression is necessary to build efficient textual analysis software. Therefore, we will here discuss the creation and functioning of a software for semantic-based terminological data mining, in which a crucial role is played by Italian simple and compound-word electronic dictionaries. Lexicon-Grammar is one of the most profitable and consistent methods for natural language formalization and automatic textual analysis it was set up by French linguist Maurice Gross during the '60s, and subsequently developed for and applied to Italian by Annibale Elia, Emilio D'Agostino and Maurizio Martin Elli. Basically, Lexicon-Grammar establishes morph syntactic and statistical sets of analytic rules to read and parse large textual corpora. The analytical procedure here described will prove itself appropriate for any type of digitalized text, and will represent a relevant support for the building and implementing of Semantic Web (SW) interactive platforms.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123407640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the Remote Objective Monitoring of Bio-Signals (ROMOBS) project, an automated near real-time remote health-monitoring device is being developed. The goal of this device is to measure blood flow parameters (systolic/diastolic blood pressure, heart rate, etc.), report the measurement results to a medical centre, and get the response back to the outpatient, all in an autonomous fashion. The objective of this paper is to develop a communication protocol that enables the measurement device to be efficiently and constantly connected to a server the medical staff works on. Steps toward this goal include determining a network scheme that does the job effectively while maintaining a low level of complexity and complying with the requirements set by the project. The result is a hybrid Bluetooth/cellular wireless system that emerges as the primary choice of connectivity medium, with an application on a Bluetooth- and Java-enabled cell phone serving as the data carrier. This paper discusses the development progress, the technologies involved, and the creation process of an interactive and user-friendly ROMOBS application.
{"title":"Wireless Connectivity for Remote Objective Monitoring of Bio-signals","authors":"A. Aristama, W. Almuhtadi","doi":"10.1109/CCP.2011.27","DOIUrl":"https://doi.org/10.1109/CCP.2011.27","url":null,"abstract":"In Remote Objective Monitoring of Bio-Signals(ROMOBS) project, an automated near real-time remote health-monitoring device is being developed. The goal of this device is to measure blood flow parameters (systolic/diastolic blood pressure, heart rate, etc.), report the measurement results to a medical centre, and get the response back to the outpatient, all in an autonomous fashion. The objective of this paper is to develop a communication protocol that will enable the measurement device to be efficiently and constantly connected to a server the medical staff works on. Steps toward completing this goal include figuring out the network scheme that would effectively do the job, while maintaining low level of complexity and complying with the requirements set by the project. It results in a hybrid Bluetooth/cellular wireless system that emerges as the primary choice of connectivity medium with an application that sits on the Bluetooth- and Java-enabled cell phone as the data carrier. This paper discusses the development progress, the technologies involved, and the creation process of an interactive and user-friendly ROMOBS application.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133271752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Burrows-Wheeler Transform (BWT) is the basis for many of the most effective compression and self-indexing methods used today. A key to the versatility of the BWT is the ability to search for patterns directly in the transformed text. A backwards search for a pattern P can be performed on a transformed text by iteratively determining the range of suffixes that match P. The search can be further enhanced by constructing a wavelet tree over the output of the BWT in order to emulate a suffix array. In this paper, we investigate new search algorithms derived from a variation of the BWT in which rotations are sorted only to a depth k, commonly referred to as a context-bound transform. Interestingly, this BWT variant can be used to mimic a k-gram index, which is used in a variety of applications that need to efficiently return occurrences in text-position order. In this paper, we present the first backwards search algorithms on the k-BWT, and show how to construct a self-index with many of the attractive properties of a k-gram index.
{"title":"Backwards Search in Context Bound Text Transformations","authors":"M. Petri, G. Navarro, J. Culpepper, S. Puglisi","doi":"10.1109/CCP.2011.18","DOIUrl":"https://doi.org/10.1109/CCP.2011.18","url":null,"abstract":"The Burrows-Wheeler Transform (bwt) is the basis for many of the most effective compression and self-indexing methods used today. A key to the versatility of the bwt is the ability to search for patterns directly in the transformed text. A backwards search for a pattern P can be performed on a transformed text by iteratively determining the range of suffixes that match P. The search can be further enhanced by constructing a wavelet tree over the output of the bwt in order to emulate a suffix array. In this paper, we investigate new algorithms for search derived from a variation of the bwt whereby rotations are only sorted to a depth k, commonly referred to as a context bound transform. Interestingly, this bwt variant can be used to mimic a k-gram index, which are used in a variety of applications that need to efficiently return occurrences in text position order. In this paper, we present the first backwards search algorithms on the k-bwt, and show how to construct a self-index containing many of the attractive properties of a k-gram index.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131219201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Batch verification is a method devised to verify multiple signatures simultaneously as a whole. In the literature, we can see that some conventional batch verification schemes cannot effectively and efficiently identify bad signatures. The Small Exponent Test (SET), a popular batch verification method, has its own problems; e.g., after a test, bad signatures may still remain with some escape probability. In this paper, we propose a batch verification approach, called the Matrix-Detection Algorithm (MDA for short), with which all bad signatures can be identified whenever a batch contains fewer than four bad signatures or an odd number of bad signatures. Given 1024 signatures containing 4 bad signatures, the maximum escape probability p_max of the MDA is 5.3×10⁻⁵, and p_max decreases as the number of digital signatures or bad signatures increases. Analytic results show that the MDA is more secure and efficient than the SET.
{"title":"Verification of a Batch of Bad Signatures by Using the Matrix-Detection Algorithm","authors":"Yi-Li Huang, Chu-Hsing Lin, Fang-Yie Leu","doi":"10.1109/CCP.2011.46","DOIUrl":"https://doi.org/10.1109/CCP.2011.46","url":null,"abstract":"Batch verification is a method devised to verify multiple signatures as a whole simultaneously. In literatures, we can see that some conventional batch verification schemes cannot effectively and efficiently identity bad signatures. Small Exponent test, a popular batch verification method, has its own problems, e.g., after a test, bad signatures still exist with some escape probabilities. In this paper, we propose a batch verification approach, called Matrix-Detection Algorithm (MDA for short), with which when a batch of signatures has less than four bad signatures or odd number of bad signatures, all bad signatures can be identified. Given 1024 signatures with 4 bad signatures, the maximum escape probability pmax of the MDA is 5.3×10-5 , and max p decreases as digital signatures or bad signatures increase. Analytic results show that the MDA is more secure and efficient than the SET.","PeriodicalId":167131,"journal":{"name":"2011 First International Conference on Data Compression, Communications and Processing","volume":"46 42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131190658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}