An Autonomous Security Storage Solution for Data-Intensive Cooperative Cloud Computing
Pub Date: 2013-10-22 | DOI: 10.1109/eScience.2013.31
Wenchao Jiang, Zhiming Zhao, C. D. Laat
To reduce mistrust between cloud users and the underlying cloud storage platform, a novel cloud security storage solution is proposed based on autonomous data storage, management, and access control. The roles of users are re-evaluated, and the knowledge provided by the users is incorporated into the cloud storage model. The solution combines the strength of the public cloud in large-scale data storage with the advantage of the private cloud in privacy preservation. The main advantages of our approach include avoiding the superposition of complex security policies and overcoming the mistrust between the users and the platform. Furthermore, our security storage service can be easily integrated into a cooperative cloud computing environment. A prototype system is developed, and a use case is also presented.
{"title":"An Autonomous Security Storage Solution for Data-Intensive Cooperative Cloud Computing","authors":"Wenchao Jiang, Zhiming Zhao, C. D. Laat","doi":"10.1109/eScience.2013.31","DOIUrl":"https://doi.org/10.1109/eScience.2013.31","url":null,"abstract":"In order to reduce untrustworthy between cloud users and the underlying cloud storage platform, a novel cloud security storage solution is proposed based on autonomous data storage, management, and access control. The roles of users are re-evaluated, and the knowledge provided by the users is incorporated into the cloud storage model. Both the superiority of the public cloud in large scale data storage and the advantages of the private cloud in privacy preserving can be obtained. The main advantages of our approach include avoiding the superposition of complex security policies and overcoming the mistrust between the users and the platform. Furthermore, our security storage service can be easily integrated into the cooperative cloud computing environment. A prototype system is developed, and a use case is also presented.","PeriodicalId":325272,"journal":{"name":"2013 IEEE 9th International Conference on e-Science","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116124200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Using Link Prediction to Estimate the Collaborative Influence of Researchers
Pub Date: 2013-10-22 | DOI: 10.1109/eScience.2013.32
Evelyn Perez Cervantes, J. Mena-Chalco, Maria Cristina Ferreira de Oliveira, R. M. C. Junior
The influence of a particular individual in a scientific collaboration network can be measured in several ways. Estimating influence commonly requires calculating computationally costly global measures, which may be impractical on networks with hundreds of thousands of vertices. In this paper, we introduce new local measures to estimate the collaborative influence of individual researchers in a collaboration network. Our approach is based on link prediction, and its underlying rationale is to assess how the presence or absence of a researcher affects the link prediction outcome in the network as a whole. It is natural to assume that the absence of a researcher with strong influence in the network will negatively impact correct link prediction. Scientists are represented as vertices in the collaboration graph, and a vertex removal and corresponding link prediction process are performed iteratively for all vertices, each vertex being handled independently. An SVM supervised learning model is adopted as the link predictor. The proposed approach has been tested on real collaboration networks over multiple time periods, processing the networks so as to assign more relevance to recent than to older collaborations. The experimental tests suggest that our measure of impact on link prediction has a high negative correlation with standard vertex importance measures such as betweenness and closeness centrality.
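
To make the removal-based rationale concrete, here is a minimal sketch in Python. It is not the authors' implementation: it substitutes a simple common-neighbors score for their SVM predictor so the example stays self-contained, and `held_out`, `k`, and the O(n²)-per-removal scoring are illustrative choices suited only to small graphs.

```python
# Sketch of influence-by-removal link prediction (illustrative, not the
# paper's SVM-based implementation). held_out is a set of frozenset edges
# hidden from G; the predictor is a plain common-neighbors ranking.
import itertools
import networkx as nx

def top_k_predictions(G, k):
    """Rank all non-adjacent pairs by common-neighbor count, return top k."""
    scores = {
        frozenset((u, v)): len(list(nx.common_neighbors(G, u, v)))
        for u, v in itertools.combinations(G, 2)
        if not G.has_edge(u, v)
    }
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

def influence_by_removal(G, held_out, k=100):
    """Impact of each vertex: how many fewer held-out collaborations the
    predictor recovers once that vertex is removed from the graph."""
    baseline = len(top_k_predictions(G, k) & held_out)
    impact = {}
    for v in list(G):
        H = G.copy()
        H.remove_node(v)
        survivors = {e for e in held_out if v not in e}
        impact[v] = baseline - len(top_k_predictions(H, k) & survivors)
    return impact  # larger value = stronger collaborative influence
```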
{"title":"Using Link Prediction to Estimate the Collaborative Influence of Researchers","authors":"Evelyn Perez Cervantes, J. Mena-Chalco, Maria Cristina Ferreira de Oliveira, R. M. C. Junior","doi":"10.1109/eScience.2013.32","DOIUrl":"https://doi.org/10.1109/eScience.2013.32","url":null,"abstract":"The influence of a particular individual in a scientific collaboration network could be measured in several ways. Estimating influence commonly requires calculating computationally costly global measures, which may be impractical on networks with hundreds of thousands of vertices. In this paper, we introduce new local measures to estimate the collaborative influence of individual researchers in a collaboration network. Our approach is based on the link prediction technique, and its underlying rationale is to assess how the presence/absence of a researcher affects the link prediction outcome in the network as a whole. It is natural to assume that the absence of a researcher with strong influence in the network will cause negative impact in the correct link prediction. Scientists are represented as vertices in the collaboration graph, and a vertex removal and corresponding link prediction process are performed iteratively for all vertices, each vertex being handled independently. The SVM supervised learning model has been adopted as link predictor. The proposed approach has been tested on real collaboration networks relative to multiple time periods, processing the networks in order to assign more relevance to recent than to older collaborations. The experimental tests suggest that our measure of impact on link prediction has high negative correlation with standard vertex importance measures such as between ness and closeness centrality.","PeriodicalId":325272,"journal":{"name":"2013 IEEE 9th International Conference on e-Science","volume":"1997 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128227823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A Robust and Scalable Solution for Interpolative Multidimensional Scaling with Weighting
Pub Date: 2013-10-22 | DOI: 10.1109/eScience.2013.30
Yang Ruan, G. Fox
Advances in modern bio-sequencing techniques have led to a proliferation of raw genomic data that enables an unprecedented opportunity for data mining. To analyze such large-volume, high-dimensional scientific data, many high performance dimension reduction and clustering algorithms have been developed. Among the known algorithms, we use Multidimensional Scaling (MDS) to reduce the dimension of the original data and Pairwise Clustering to classify the data. We have shown that interpolative MDS, an online technique for real-time streaming in Big Data, can be applied to obtain better performance on massive data. However, the SMACOF MDS approach is directly applicable only to cases where all pairwise distances are used and the weight of each term is one. In this paper, we propose a robust and scalable MDS and interpolation algorithm using a Deterministic Annealing technique to solve problems with either missing distances or a non-trivial weight function. We compared our method with three state-of-the-art techniques. Experiments on three common types of bioinformatics datasets show that the precision of our algorithms is better than that of the other algorithms, and the weighted solutions also have a lower computational time cost.
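
The SMACOF limitation mentioned above is easiest to see in the standard weighted MDS stress function, reproduced below in its textbook form: plain SMACOF assumes unit weights on every pair, while the problems targeted here correspond to general weights, with a zero weight encoding a missing distance (the deterministic annealing schedule is the paper's contribution and is not shown).

```latex
% Weighted MDS stress over N points: d_{ij}(X) is the distance between
% points i and j in the embedding X, and \delta_{ij} the observed
% dissimilarity. SMACOF handles w_{ij} = 1 for all pairs; the paper
% targets general w_{ij}, with w_{ij} = 0 for missing distances.
\sigma(X) = \sum_{i < j \le N} w_{ij} \left( d_{ij}(X) - \delta_{ij} \right)^{2}
```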
{"title":"A Robust and Scalable Solution for Interpolative Multidimensional Scaling with Weighting","authors":"Yang Ruan, G. Fox","doi":"10.1109/eScience.2013.30","DOIUrl":"https://doi.org/10.1109/eScience.2013.30","url":null,"abstract":"Advances in modern bio-sequencing techniques have led to a proliferation of raw genomic data that enables an unprecedented opportunity for data mining. To analyze such large volume and high-dimensional scientific data, many high performance dimension reduction and clustering algorithms have been developed. Among the known algorithms, we use Multidimensional Scaling (MDS) to reduce the dimension of original data and Pair wise Clustering, and to classify the data. We have shown that interpolative MDS, which is an online technique for real-time streaming in Big Data, can be applied to get better performance on massive data. However, SMACOF MDS approach is only directly applicable to cases where all pair wise distances are used and where weight is one for each term. In this paper, we proposed a robust and scalable MDS and interpolation algorithm using Deterministic Annealing technique, to solve problems with either missing distances or a non-trivial weight function. We compared our method to three state-of-art techniques. By experimenting on three common types of bioinformatics dataset, the results illustrate that the precision of our algorithms are better than other algorithms, and the weighted solutions has a lower computational time cost as well.","PeriodicalId":325272,"journal":{"name":"2013 IEEE 9th International Conference on e-Science","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131331917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Plant Species Identification with Phenological Visual Rhythms
Pub Date: 2013-10-22 | DOI: 10.1109/ESCIENCE.2013.43
J. Almeida, J. A. D. Santos, Bruna Alberton, L. Morellato, R. Torres
Plant phenology studies recurrent plant life cycle events and is a key component of climate change research. To increase the accuracy of observations, new technologies have been applied for phenological observation; among the most successful are digital cameras, used as multi-channel imaging sensors to estimate color changes that are related to phenological events. We monitored leaf-changing patterns of a cerrado-savanna vegetation by taking daily digital images. We extracted individual plant color information and correlated it with leaf phenological changes. To do so, time series associated with plant species were obtained, raising the need for appropriate tools for mining patterns of interest. In this paper, we present a novel approach for representing phenological patterns of plant species derived from digital images. The proposed method is based on encoding time series as a visual rhythm, which is characterized by image description algorithms. A comparative analysis of different descriptors is conducted and discussed. Experimental results show that our approach achieves high accuracy in identifying plant species.
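
As a rough illustration of the encoding step (a sketch under assumed inputs, not the authors' code), a per-plant visual rhythm can be built by collapsing each day's image crop to a single pixel column and stacking the columns in time order; image descriptors are then computed on the resulting 2-D texture.

```python
# Illustrative visual-rhythm encoding: one column per day, so temporal
# color change becomes horizontal texture that image descriptors can read.
# daily_crops and the column summary (mean over width) are assumptions.
import numpy as np

def visual_rhythm(daily_crops):
    """daily_crops: list of H x W x 3 uint8 crops of one plant, one per day.
    Returns an H x T x 3 image whose column t summarizes day t."""
    columns = [crop.mean(axis=1) for crop in daily_crops]   # each is H x 3
    return np.stack(columns, axis=1).astype(np.uint8)       # H x T x 3
```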
{"title":"Plant Species Identification with Phenological Visual Rhythms","authors":"J. Almeida, J. A. D. Santos, Bruna Alberton, L. Morellato, R. Torres","doi":"10.1109/ESCIENCE.2013.43","DOIUrl":"https://doi.org/10.1109/ESCIENCE.2013.43","url":null,"abstract":"Plant phenology studies recurrent plant life cycles events and is a key component of climate change research. To increase accuracy of observations, new technologies have been applied for phenological observation, and one of the most successful are digital cameras, used as multi-channel imaging sensors to estimate color changes that are related to phenological events. We monitored leaf-changing patterns of a cerrado-savanna vegetation by taken daily digital images. We extract individual plant color information and correlated with leaf phenological changes. To do so, time series associated with plant species were obtained, raising the need of using appropriate tools for mining patterns of interest. In this paper, we present a novel approach for representing phenological patterns of plant species derived from digital images. The proposed method is based on encoding time series as a visual rhythm, which is characterized by image description algorithms. A comparative analysis of different descriptors is conducted and discussed. Experimental results show that our approach presents high accuracy on identifying plant species.","PeriodicalId":325272,"journal":{"name":"2013 IEEE 9th International Conference on e-Science","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130471554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Accelerating Astronomical Image Subtraction on Heterogeneous Processors
Pub Date: 2013-10-22 | DOI: 10.1109/eScience.2013.23
Yan Zhao, Qiong Luo, Senhong Wang, Chao Wu
Image subtraction is an effective method used in astronomy to search for transient objects or identify objects that have time-varying brightness. State-of-the-art astronomical image subtraction methods work by taking two aligned images of the same observation area, calculating a space-varying convolution kernel for the two images, and finally obtaining the difference image using the convolution kernel. Motivated by the need for fast image subtraction in astronomy projects, we study the parallelization of HOTPANTS, a popular astronomical image subtraction package by Andrew Becker, on multicore CPUs and GPUs. Specifically, we identify the components in HOTPANTS that are data parallel and parallelize these components on the GPU and multicore CPU. We divide the work between the CPU and the GPU to minimize the overall time. In the GPU-based components, we investigate a suitable setup of the GPU thread structure for the computation and optimize data access on the GPU memory hierarchy. Consequently, P-HOTPANTS (our parallelized HOTPANTS) achieves a 4x speedup over the original HOTPANTS running on a desktop with an Intel i7 CPU and an NVIDIA GTX580 GPU.
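
For readers unfamiliar with the pipeline being parallelized, the sketch below shows the tile-wise form of the space-varying convolution and subtraction. It assumes the per-tile kernels have already been fitted (the fitting is HOTPANTS's core job and is omitted here), uses SciPy convolution as a stand-in for the real kernel code, and ignores tile-overlap blending.

```python
# Schematic space-varying subtraction: convolve each reference tile with its
# locally fitted kernel, then subtract from the science image. Per-tile
# kernels are assumed precomputed; real code blends overlapping tiles.
import numpy as np
from scipy.signal import fftconvolve

def subtract(reference, science, kernels, tile=256):
    """kernels[(i, j)] is the PSF-matching kernel fitted for tile (i, j)."""
    diff = np.empty_like(science, dtype=np.float64)
    H, W = science.shape
    for i in range(0, H, tile):
        for j in range(0, W, tile):
            k = kernels[(i // tile, j // tile)]
            conv = fftconvolve(reference[i:i + tile, j:j + tile], k, mode="same")
            diff[i:i + tile, j:j + tile] = science[i:i + tile, j:j + tile] - conv
    return diff
```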
{"title":"Accelerating Astronomical Image Subtraction on Heterogeneous Processors","authors":"Yan Zhao, Qiong Luo, Senhong Wang, Chao Wu","doi":"10.1109/eScience.2013.23","DOIUrl":"https://doi.org/10.1109/eScience.2013.23","url":null,"abstract":"Image subtraction is an effective method used in astronomy to search transient objects or identify objects that have time-varying brightness. The state-of-the-art astronomical image subtraction methods work by taking two aligned images of the same observation area, calculating a space-varying convolution kernel for the two images, and finally obtaining the difference image using the convolution kernel. With the need for fast image subtraction in astronomy projects, we study the parallelization of HOTPANTS, a popular astronomical image subtraction package by Andrew Becker, on multicore CPUs and GPUs. Specifically, we identify the components in HOTPANTS that are data parallel and parallelize these components on the GPU and multicore CPU. We divide the work between the CPU and the GPU to minimize the overall time. In the GPU-based components, we investigate the suitable setup of the GPU thread structure for the computation, and optimize data access on the GPU memory hierarchy. Consequently, P-HOTPANTS (our parallel zed HOTPANTS), achieves a 4-times speedup over the original HOTPANTS running on a desktop with an Intel i7 CPU and an NVIDIA GTX580 GPU.","PeriodicalId":325272,"journal":{"name":"2013 IEEE 9th International Conference on e-Science","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116828395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Automatic Outlier Detection for Genome Assembly Quality Assessment
Pub Date: 2013-10-22 | DOI: 10.1109/eScience.2013.49
T. Samak, R. Egan, Brian Bushnell, D. Gunter, A. Copeland, Zhong Wang
In this work we describe a method to automatically detect errors in de novo assembled genomes. The method extends a Bayesian assembly quality evaluation framework, ALE, which computes the likelihood of an assembly given a set of unassembled data. Starting from ALE output, this method applies outlier detection algorithms to identify the precise locations of assembly errors. We show results from a microbial genome with manually curated assembly errors. Our method detects all deletions, 82.3% of insertions, and 88.8% of single base substitutions. It was also able to detect an inversion error that spans more than 400 bases.
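
A minimal sketch of the flagging step, assuming a per-base log-likelihood track like the one ALE emits; the robust z-score threshold used here is an illustrative detector, not necessarily the paper's exact algorithm.

```python
# Flag positions whose ALE-style per-base log-likelihood is anomalously low.
# The median/MAD robust z-score and the 3.5 cutoff are illustrative choices.
import numpy as np

def flag_outliers(loglik, z_thresh=3.5):
    med = np.median(loglik)
    mad = np.median(np.abs(loglik - med)) or 1e-9  # guard against zero MAD
    robust_z = 0.6745 * (loglik - med) / mad
    return np.flatnonzero(robust_z < -z_thresh)    # errors => low likelihood
```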
{"title":"Automatic Outlier Detection for Genome Assembly Quality Assessment","authors":"T. Samak, R. Egan, Brian Bushnell, D. Gunter, A. Copeland, Zhong Wang","doi":"10.1109/eScience.2013.49","DOIUrl":"https://doi.org/10.1109/eScience.2013.49","url":null,"abstract":"In this work we describe a method to automatically detect errors in de novo assembled genomes. The method extends a Bayesian assembly quality evaluation framework, ALE, which computes the likelihood of an assembly given a set of unassembled data. Starting from ALE output, this method applies outlier detection algorithms to identify the precise locations of assembly errors. We show results from a microbial genome with manually curated assembly errors. Our method detects all deletions, 82.3% of insertions, and 88.8% of single base substitutions. It was also able to detect an inversion error that spans more than 400 bases.","PeriodicalId":325272,"journal":{"name":"2013 IEEE 9th International Conference on e-Science","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129075849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Benchmarking Gender Differences in Volunteer Computing Projects
Pub Date: 2013-10-22 | DOI: 10.1109/eScience.2013.29
Trilce Estrada, K. Pusecker, Manuel R. Torres, J. Cohoon, M. Taufer
Volunteer Computing (VC) uses the computational resources of volunteers with Internet-connected personal computers to address fundamental problems in science. Docking@Home (D@H) is a VC project targeting drug discovery through high-throughput docking simulations, i.e., docking small molecules (ligands) into target proteins associated with diseases. Currently there are more than 27,000 volunteers (and 70,000 computers) worldwide supporting D@H. Mirroring national trends in STEM fields, the vast majority of volunteers engaged in VC projects, and in D@H in particular, are Caucasian males. This paper aims to characterize the current VC community supporting D@H and uses that information to define strategies that can help attract and retain female and ethnic minority volunteers.
{"title":"Benchmarking Gender Differences in Volunteer Computing Projects","authors":"Trilce Estrada, K. Pusecker, Manuel R. Torres, J. Cohoon, M. Taufer","doi":"10.1109/eScience.2013.29","DOIUrl":"https://doi.org/10.1109/eScience.2013.29","url":null,"abstract":"Volunteer Computing (VC) uses the computational resources of volunteers with Internet-connected personal computers to address fundamental problems in science. Docking Home (D@H) is a VC project targeting drug discovery through high throughput docking simulations i.e., by docking small molecules (ligands) into target proteins associated to diseases. Currently there are more than 27,000 volunteers (and 70,000 computers) worldwide supporting D@H. Similar to national trends in STEM fields, in general, the huge majority of volunteers engaged in VC projects, and in D@H in particular, are Caucasian males. This paper aims to characterize the current VC community supporting D@H and uses the information to define strategies that can help attract and retain female and ethnic minority volunteers.","PeriodicalId":325272,"journal":{"name":"2013 IEEE 9th International Conference on e-Science","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132113047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Accelerating In-memory Cross Match of Astronomical Catalogs
Pub Date: 2013-10-22 | DOI: 10.1109/eScience.2013.9
Senhong Wang, Yan Zhao, Qiong Luo, Chao Wu, Yang Xv
New astronomy projects generate observation images continuously, and these images are converted into tabular catalogs online. Each such new table, called a sample table, is compared against a reference table on the same patch of sky to annotate the stars that match those in the reference and to identify transient objects that have no matches. This cross match must be done within a few seconds to enable timely issuance of alerts as well as shipping of the data products off the pipeline. To perform the online cross match of tables of celestial objects, we propose two parallel algorithms, zoneMatch and gridMatch, both of which divide up celestial objects by their locations in the spherical coordinate system. Specifically, zoneMatch divides the observation area by the declination coordinate of the celestial sphere, whereas gridMatch utilizes a two-dimensional grid on the declination and the right ascension. With the reference table indexed by zones or grid cells, we match the stars in the sample table through parallel index probes on the reference. We implemented these algorithms on a multicore CPU as well as a desktop GPU, and evaluated their performance on both synthetic data and real-world astronomical data. Our results show that gridMatch is faster than zoneMatch at the cost of memory space, and that parallelization achieves speedups of orders of magnitude.
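
The zone idea is simple enough to sketch in a few lines (serial Python; the paper's contribution is parallelizing the probe loop on multicore CPUs and GPUs). The zone height, match radius, flat small-angle distance, and ignored RA wrap-around are all simplifications.

```python
# Serial sketch of zone-based cross match: bucket reference stars by
# declination zone, then probe a sample star's zone and its two neighbors.
# Degrees throughout; flat small-angle distance; RA wrap-around ignored.
from collections import defaultdict
import math

def build_zone_index(ref_stars, zone_h=0.01):
    index = defaultdict(list)
    for ra, dec in ref_stars:
        index[int(dec // zone_h)].append((ra, dec))
    return index

def cross_match(sample, index, zone_h=0.01, radius=0.001):
    matches = []
    for ra, dec in sample:
        z, best = int(dec // zone_h), None
        for zz in (z - 1, z, z + 1):
            for rra, rdec in index.get(zz, ()):
                d = math.hypot((ra - rra) * math.cos(math.radians(dec)), dec - rdec)
                if d <= radius and (best is None or d < best[0]):
                    best = (d, rra, rdec)
        matches.append(((ra, dec), best))  # best is None => transient candidate
    return matches
```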
{"title":"Accelerating In-memory Cross Match of Astronomical Catalogs","authors":"Senhong Wang, Yan Zhao, Qiong Luo, Chao Wu, Yang Xv","doi":"10.1109/eScience.2013.9","DOIUrl":"https://doi.org/10.1109/eScience.2013.9","url":null,"abstract":"New astronomy projects generate observation images continuously and these images are converted into tabular catalogs online. Furthermore, each such new table, called a sample table, is compared against a reference table on the same patch of sky to annotate the stars that match those in the reference and to identify transient objects that have no matches. This cross match must be done within a few seconds to enable timely issuance of alerts as well as shipping of the data products off the pipeline. To perform the online cross match of tables on celestial objects, we propose two parallel algorithms, zone Match and grid Match, both of which divide up celestial objects by their locations in the spherical coordinate system. Specifically, zone Match divides the observation area by the declination coordinate of the celestial sphere whereas grid Match utilizes a two-dimensional grid on the declination and the right ascension. With the reference table indexed by zones or grid, we match the stars in the sample table through parallel index probes on the reference. We implemented these algorithms on a multicore CPU as well as a desktop GPU, and evaluated their performance on both synthetic data and real world astronomical data. Our results show that grid Match is faster than zone Match at the cost of memory space and that parallelization achieves speedups of orders of magnitude.","PeriodicalId":325272,"journal":{"name":"2013 IEEE 9th International Conference on e-Science","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115324756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

MagicView: An Optimized Ultra-Large Scientific Image Viewer for SAGE Tiled-Display Environment
Pub Date: 2013-10-22 | DOI: 10.1109/eScience.2013.16
Yihua Lou, Haikuo Zhang, Wenjun Wu, Zhenghui Hu
Massive amounts of scientific data often need to be visualized as ultra-large images for scientific discovery. Although ultra-high-resolution tiled-display environments have been widely used, there is still a lack of image viewers that can display ultra-large images with billions of pixels in tiled-display environments. To address this problem, we propose MagicView, an optimized ultra-large scientific image viewer for the SAGE tiled-display environment. It achieves real-time interactive performance in viewing images with billions of pixels. Our experiments show that the performance of MagicView is at least 8x better than that of JuxtaView, another ultra-large image viewer for SAGE.
{"title":"Magic View: An Optimized Ultra-Large Scientific Image Viewer for SAGE Tiled-Display Environment","authors":"Yihua Lou, Haikuo Zhang, Wenjun Wu, Zhenghui Hu","doi":"10.1109/eScience.2013.16","DOIUrl":"https://doi.org/10.1109/eScience.2013.16","url":null,"abstract":"Massive amount scientific data often need to be visualized in ultra-large images for scientific discovery. Although ultra-high resolution tiled-display environments have been widely used, there still lacks of proper image viewers that can display ultra-large images with billions of pixels in tiled-display environments. To address the problem, we propose Magic View, an optimized ultra-large scientific image viewer for SAGE tiled-display environment. It can achieve real-time interactive performance in viewing images with billions of pixels. Our experiments show that the performance of Magic View are at lease 8x better than Juxta View, another ultra-large image viewer for SAGE.","PeriodicalId":325272,"journal":{"name":"2013 IEEE 9th International Conference on e-Science","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124098370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Malleable Access Rights to Establish and Enable Scientific Collaboration
Pub Date: 2013-10-22 | DOI: 10.1109/eScience.2013.26
Ferry Hendrikx, K. Bubendorfer
Collaborative systems require access control to prevent unauthorised access and change. Access control has a number of issues, including administration and maintenance overheads. In this paper we argue that it is time to reconsider how access controls work, particularly in scientific and data-related domains, and to this end we propose a new paradigm based on a user's demographics and behaviour, rather than simply their identity. In essence, it is both who you are and what you do that matters. We introduce Graft, our Generalised Recommendation Architecture, which supports a range of different recommendation models, and provide case studies to illustrate the usefulness of our architecture.
{"title":"Malleable Access Rights to Establish and Enable Scientific Collaboration","authors":"Ferry Hendrikx, K. Bubendorfer","doi":"10.1109/eScience.2013.26","DOIUrl":"https://doi.org/10.1109/eScience.2013.26","url":null,"abstract":"Collaborative systems require access control to prevent unauthorised access and change. Access control has a number of issues, including administration and maintenance overheads. In this paper we argue that it is time to reconsider how access controls work, particularly with scientific and data related domains, and to this end we propose a new paradigm based on a user's demographics and behaviour, rather than simply their identity. In essence, it is both who you are and what you do that is important. We introduce Graft, our Generalised Recommendation Architecture that allows us to support a range of different recommendation models, and provide case studies to illustrate the usefulness of our architecture.","PeriodicalId":325272,"journal":{"name":"2013 IEEE 9th International Conference on e-Science","volume":"386 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115981111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}