Building a private cloud with Eucalyptus
C. Baun, M. Kunze
Pub Date: 2009-12-01 | DOI: 10.1109/ESCIW.2009.5408006
Scientists often have specific requirements for the IT services that support their research, and the standardized offerings of their service providers frequently do not fit. At KIT we aim to build a private cloud to offer flexible infrastructure services that can easily be utilized and managed by end users according to their needs. Eucalyptus is an open-source solution that is fully compatible with Amazon EC2, S3 and EBS. This paper compares the performance of a cloud computing infrastructure implemented with Eucalyptus to Amazon EC2/S3/EBS and presents the lessons learned while building a private cloud with Eucalyptus.
Collaborative map annotation in the context of historical GIS
Rainer Simon, Joachim Korb, C. Sadilek, R. Schmidt
Pub Date: 2009-12-01 | DOI: 10.1109/ESCIW.2009.5407977
In this paper we present work in progress on a system for exploring and geo-referencing scanned historical maps based on collaborative user annotations. We discuss the feature set of our current implementation and identify use cases for our system. Furthermore, we place our work in the larger context of an “open historical GIS”: we explain our view of such a system as a platform for capturing, aggregating, analyzing and publishing free historical geographical data on the World Wide Web; and we argue that our prototype system addresses core functionality, in particular with regard to user-driven capturing and aggregation of historical geospatial data.
GeoTwain: Geospatial analysis and visualization for researchers of transculturality
Matthias Arnold, Konrad Berner, Peter Gietz, K. Schultes, R. Wenzlhuemer
Pub Date: 2009-12-01 | DOI: 10.1109/ESCIW.2009.5407969
In our research cluster on transculturality, many projects deal with geo-referenced data. This paper introduces the new project GeoTwain, which works on visualization techniques for such data. Based on an analysis of the global telegraph network as an example of transcultural research using geo-referenced data, the paper derives user requirements by combining experiences gained from previous projects and specifies a set of solutions. GeoTwain aims to provide easy visualization of 4-D information based on Google Earth and to grasp spatial relationships embedded in historical evidence, allowing researchers to analyse, recombine and disaggregate geo-referenced historical data without having to use more specialized and highly complex GIS tools. The envisioned visualization with GeoTwain allows for fast and efficient assessment of the analytical potential of geo-referencing in any given case; it also allows the user to carefully weigh further investments in data enrichment against the expected findings. Both the development and application of GeoTwain are embedded in a broader research infrastructure, the Heidelberg Research Architecture (HRA).
Workflow composition through design suggestions using design-time provenance information
M. Junaid, Max Berger, T. Vitvar, Kassian Plankensteiner, T. Fahringer
Pub Date: 2009-12-01 | DOI: 10.1109/ESCIW.2009.5407989
With the increasing complexity of Grid-based application workflows, the workflow design process is also becoming more complex. Many workflow design tools provide mechanisms to ease the design process and make life easier for the workflow designer. In this paper, we present a provenance-based workflow design suggestion system for the quick and easy creation of error-free workflows. In our approach, the provenance system intercepts users' actions, processes and stores these actions in the provenance store, and provides suggestions about possible subsequent actions for the workflow design. These suggested actions are based on the current user actions and are calculated from the provenance information available in the provenance store. The design suggestions partially automate the design process, providing ease of use, reliability and correctness during workflow design. Creating error-free workflows is of pivotal importance in distributed execution environments, and the increasing complexity of these workflows makes the design process more error-prone and tedious. Taking into account the significance of the correctness of Grid-based workflows, and recognizing the importance of design time in the life of a workflow-based application, we present a novel approach that uses recorded provenance information.
Scalability of efficient parallel K-Means
David Pettinger, G. Di Fatta
Pub Date: 2009-12-01 | DOI: 10.1109/ESCIW.2009.5407991
Clustering is defined as the grouping of similar items in a set, and is an important process within the field of data mining. As the amount of data for various applications continues to increase, in terms of both size and dimensionality, efficient clustering methods become necessary. A popular clustering algorithm is K-Means, which adopts a greedy approach to produce a set of K clusters with associated centres of mass, and uses a squared error distortion measure to determine convergence. Methods for improving the efficiency of K-Means have largely been explored in two main directions. The amount of computation can be significantly reduced by adopting a more efficient data structure, notably a multi-dimensional binary search tree (KD-Tree), to store either centroids or data points. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient K-Means techniques in parallel computational environments. In this work, we provide a parallel formulation for the KD-Tree based K-Means algorithm and address its load balancing issues.
Integrating archaeological literature into resource discovery interfaces using natural language processing and name authority services
S. Jeffrey, J. Richards, F. Ciravegna, S. Waller, S. Chapman, Z. Zhang
Pub Date: 2009-12-01 | DOI: 10.1109/ESCIW.2009.5407967
There exists a large and underutilized body of archaeological literature, both formal, such as scholarly journals, and less formal, in the form of ‘grey literature’. In the archaeological domain the vast majority of this literature contains some geo-spatial element as well as the expected temporal information, so its discovery would be greatly enhanced were it accessible via a geo-spatially enabled search mechanism. Consequently, geo-referencing these types of material and integrating them with other resources, such as monument inventories, is seen as a desirable enhancement for digital archives serving the archaeological research community. This paper provides an overview of a number of approaches to the integration of such legacy literature into geo-spatial search mechanisms in an archaeological context. In particular, efforts to achieve this via the Archaeotools e-Science project and its use of natural language processing and a geo-spatial cross-walk service are discussed, as well as potential future enhancements to the process.
Neuroimaging analysis using grid aware planning and optimisation techniques
I. Habib, A. Anjum, P. Bloodsworth, R. McClatchey
Pub Date: 2009-12-01 | DOI: 10.1109/ESCIW.2009.5407988
Neuroimaging research is increasingly shifting towards distributed computing architectures for the processing of ever-growing neuroimaging datasets. At present, compute- and data-intensive neuroimaging workflows often use cluster-based resources to analyse datasets. For increased scalability, however, distributed grid-based analysis platforms may be required. Such an analysis infrastructure necessitates robust methods of grid-aware planning and optimisation in order to execute often highly complex workflows efficiently. This paper presents the approaches used in neuGRID to plan workflow gridification and enactment for neuroimaging research. Experiments show that grid-aware workflow planning techniques can achieve significant performance gains: the turn-around time of a typical neuroimaging workflow is reduced by 30% compared to the same workflow enacted without grid-aware planning, and data efficiency increases by more than 25%. The use of workflow planning techniques in the neuGRID infrastructure may enable it to process larger neuroimaging datasets and therefore allow researchers to carry out more statistically significant analyses.
Shared understanding of end-users' requirements in e-Science projects
P. Darch, A. Carusi, M. Jirotka
Pub Date: 2009-12-01 | DOI: 10.1109/ESCIW.2009.5407963
It has been recognized that, for e-Science applications to be usable, their developers must acquire a thorough understanding of end-users' requirements. However, there is another dimension to such an understanding that also plays an important role, namely the extent to which these developers converge on a shared understanding of those requirements. This paper considers why such a shared understanding is important, and highlights possible obstacles to it that may arise in the context of e-Science projects. A research project, consisting of qualitative case studies of two projects, is being undertaken with the goal of producing recommendations for improving shared understanding of end-users' requirements amongst developers of e-Science applications. Although data collection is still ongoing, it is anticipated that the research will be completed, and recommendations developed, in time for the IEEE e-Science conference 2009.
Developing tools and visualisation techniques to assist users in e-Science technologies
Sue Fenley
Pub Date: 2009-12-01 | DOI: 10.1109/ESCIW.2009.5407960
The paper outlines a series of tools and techniques for visualising data. These tools are being developed for use in large multimedia and Internet resources, and are intended to be generic, easily adaptable, and transferable across different resources. The paper demonstrates a series of navigation patterns, newer navigation tools such as breadcrumb trails and beacons, and different types of visualisation, some of which originated in virtual-world and gaming software. Using these tools and techniques would reduce the time lost in learning new tools for each new resource; used in conjunction with an intelligent tutor that tracks the user, their tool usage and their academic progress, they would also develop an individual's searching and researching skills in e-Science.
Towards job-specific service level agreements in the cloud
Bin Li, Lee Gillam
Pub Date: 2009-11-09 | DOI: 10.1109/ESCIW.2009.5408003
To attract more users to commercially available computing, services have to specify clearly the charges, duties, liabilities and penalties in Service Level Agreements (SLAs). In this paper, we build on our existing work on SLAs by making simple measurements of a specific application run within a commercial Cloud. An outcome of this work is that certain applications may run better there than in a Grid or HPC environment, which supports a recent hypothesis [7].