In recent years, various Social Network Services (SNS) have gained considerable popularity among Internet users. The Activity Social Network (ASN), a new kind of SNS centered on shared interests, is organized around activities and connects users more closely through social activities. However, cheating behaviors appear frequently in SNS, and especially in ASN, because of user anonymity; the resulting lack of trust has become a stumbling block that hinders the development of ASN. Research on trust mechanisms has become a key issue in recent years, but most studies focus on e-commerce and traditional social networks, and the existing models are not fully suitable for social activities. Motivated by the way PeerTrust computes trust values, we propose the ActivityTrust model, which adapts PeerTrust to the unequal interaction characteristics of ASN in order to ensure the security and reliability of the activity social platform. We also build a simulated ASN platform in NetLogo and run comparative experiments on it, verifying the effectiveness and adaptability of the trust model with regard to activity success rate and trust evaluation rate.
{"title":"A Novel Trust Model for Activity Social Network Based on PeerTrust","authors":"Limei Xu, Yining Ma, Kai Lei","doi":"10.1109/PDCAT.2016.065","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.065","url":null,"abstract":"Recently, kinds of Social Network Services (SNS) have gained enough popularity among Internet users. Activity Social Network (ASN) service, as a new kind of SNS with interest as the core, is dominated by the activities and it connects users much closer through social activities. However, cheating behaviors appear frequently in SNS especially ASN because of their anonymity, which makes a large lack of trust in ASN and becomes a stumbling block that hinders the development of ASN. The researches on trust mechanism have become a key issue in recent years, but most studies focused on E-commerce and traditional social networks. The existing models are not completely suitable for social activities. Motivated by the idea of PeerTrust to compute trust values, we propose ActivityTrust model based on PeerTrust according to its unequal interaction characteristics to ensure the security and reliability of the activity social platform. Meanwhile, we build a simulative ASN platform on NetLogo, and make contrast experiments on it. We verify the effectiveness and adaptability of trust model with regards to activity success rate and trust evaluation rate.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"125 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126282113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploiting temporal and spatial locality is a way to improve the performance of data compression and deduplication in a storage system. Through our evaluation, we find that content-level similarity measures, such as similar photo tags, have a measurable correlation with data compressibility. Raw images with similar tags can therefore be compressed together to achieve better storage space savings. Furthermore, storing similar raw images together enables rapid data sorting, searching, and retrieval with reduced fragmentation when the images are stored in a large-scale distributed environment. In this paper, we present the correlation results between content similarity and data compressibility using a dataset built from Flickr. Based on this evaluation, we propose a system design that optimizes storage efficiency for the Top-N relevant images sharing the same tag: it saves storage space and may also accelerate query performance for Top-N relevance search.
{"title":"Improving Storage Efficiency for Raw Image Photo Repository by Exploiting Similarity","authors":"Binqi Zhang, Chen Wang, B. Zhou, Albert Y. Zomaya","doi":"10.1109/PDCAT.2016.045","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.045","url":null,"abstract":"Exploiting temporal and spatial locality is a way to improve the performance of data compression and deduplication in a storage system. Through our evaluation, we find that content level similarity measures such as similar tags of photos have a certain correlation to data compressibility. Raw images with similar tags can be compressed together to get better storage space savings. Furthermore, storing similar raw images together enables rapid data sorting, searching, and retrieval if the images are stored in a distributed and large-scale environment with reduced fragmentation. In this paper, we present the correlation results between content similarity and data compressibility using a dataset built from Flickr. The system design we proposed has been based on the evaluation and it optimizes storage efficiency for Top-N relevant images with the same tag. On one hand, the storage space is saved. On the other hand, the design may accelerate the query performance for Top-N relevance search.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130475087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The high processing capabilities of today's large systems are also used for real-time applications, where executing tasks before their deadlines is essential. On the other hand, as processing capability increases, so does the energy consumption of such systems, so energy-efficient execution of real-time tasks on large systems has become a promising research area. Scheduling tasks in such systems using only low-level power constructs such as DVFS is not efficient. In this paper, we exploit the power consumption patterns of recent commercial processors and derive a simple, higher-granularity power model for systems with a large number of processors, each supporting multi-threading. We then propose an energy-efficient scheduling technique, the smart allocation policy, for executing a set of aperiodic, independent real-time tasks on a large system such that no task misses its deadline. We analyze the instantaneous power consumption and the overall energy consumption of the proposed policy along with five other baseline policies for a wide variety of synthetic data sets and real trace data. Because task execution time has a significant impact on scheduling and on the overall performance of the system, we consider six different task execution-time models in our experiments. Experimental evaluation reveals that our proposed policy performs significantly better than the baseline policies for all variations of the synthetic data and for the real trace data.
{"title":"Energy Efficient Scheduling of Real Time Tasks on Large Systems","authors":"Manojit Ghose, A. Sahu, S. Karmakar","doi":"10.1109/PDCAT.2016.035","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.035","url":null,"abstract":"High processing capabilities of today's large systems are also used for real time applications, where executing tasks before their deadline is essential. On the other hand, with increase in the processing capability, energy consumption also increases for such systems. Thus energy efficient execution of real time tasks in such large systems has found to be promising research area in recent time. Scheduling tasks in such large systems using only low level power construct like DVFS is not efficient. In this paper, we have exploited the power consumption pattern of the recent commercial processors and derived a simple power model with a higher granularity for systems have large number of processor with each processor having multi-threading feature. We have then proposed an energy efficient scheduling technique namely, smart allocation policy for executing a set of aperiodic independent real time tasks on large system such that no task misses it deadline. We have analyzed the instantaneous power consumption and the overall energy consumption of the proposed policy along with other five baseline policies for a wide variety of synthetic data sets and real trace data. As execution time of tasks has a significant impact on scheduling and on the overall performance of the system, we have considered six different execution time models of task for our experiment. Experimental evaluation reveals that our proposed policy performs significantly better than baseline policies for all the variations of synthetic data and for real trace data.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128032513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hadoop is a popular cloud computing framework, and its major component, MapReduce, can efficiently perform parallel computation in homogeneous environments. In practice, however, heterogeneous clusters are common and are prone to load imbalance. To solve this problem, this paper proposes a model for heterogeneous Hadoop clusters based on dynamic load balancing. The model starts from MapReduce and tracks node information in real time through a monitoring module. A maximum node hit rate priority algorithm (MNHRPA) is designed and implemented; it achieves load balancing by dynamically adjusting data allocation according to each node's computing power and load. Experimental results show that, compared with Hadoop's default algorithm, the proposed algorithm effectively reduces task completion time and balances the load across the cluster.
{"title":"An Optimization Algorithm for Heterogeneous Hadoop Clusters Based on Dynamic Load Balancing","authors":"Wei Yan, Chunlin Li, Shumeng Du, XiJun Mao","doi":"10.1109/PDCAT.2016.061","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.061","url":null,"abstract":"Hadoop is a popular cloud computing software, and its major component MapReduce can efficiently complete parallel computing in homogeneous environment. But in practical application heterogeneous cluster is a common phenomenon. In this case, it's prone to unbalance load. To solve this problem, a model of heterogeneous Hadoop cluster based on dynamic load balancing is proposed in this paper. This model starts from MapReduce and tracks node information in real time by using its monitoring module. A maximum node hit rate priority algorithm (MNHRPA) is designed and implemented in the paper, and it can achieve load balancing by dynamic adjustment of data allocation based on nodes' computing power and load. The experimental results show that the algorithm can effectively reduce tasks' completion time and achieve load balancing of the cluster compared with Hadoop's default algorithm.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133496725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big data applications store data sets in shared data centers under the cloud computing environment, but their demand for data sets changes dynamically over time. With multiple data centers, such applications face new challenges in data migration, mainly how to reduce the number of network accesses, how to reduce the overall time consumption, and how to improve efficiency while balancing the global load during the migration process. Facing these challenges, we first build the problem model and describe the dynamic migration method, and then solve for three parameters: the global time consumption of data migration, the number of network accesses, and the global load balance. Finally, we conduct cloud computing simulation experiments on the CloudSim platform. The results show that the proposed method reduces task completion time by 10% and lowers the proportion of total time spent on data transmission; as the number of data sets increases, this proportion drops to 50% or less. The number of network accesses is lower than that of the Zipf-based baseline and stabilizes, and for the global load the variance of the nodes' storage space usage approaches zero.
{"title":"A Dynamic Migration Method for Big Data in Cloud","authors":"Ding Jiaman, Wang Sichen, Du Yi, Jia Lianyin","doi":"10.1109/PDCAT.2016.034","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.034","url":null,"abstract":"Big data applications store data sets through sharing data center under the Cloud computing environment, but the need of data set in big data applications is dynamic change over time. In face of multiple data centers, such applications meet new challenges in data migration which mainly include how to how to reduce the number of network access, how to reduce the overall time consumption, and how to improve the efficiency by the time of balancing the global load in the migration process. Facing these challenges, we first build the problem model and descript the dynamic migration method, then solve the global time consumption of data migration, the number of network access and global load balancing these three parameters. Finally, do the cloud computing simulation experiment under the Cloudsim experiment platform. The result shows that the proposed method makes the task completion time reduced by 10% and the data transmission time accounts for the roportion of the total time is reduced. When the amount of data sets is increase, the proportion can reduces to 50% or less. Network access number lower than Zipf and reached stable, in global load, the variance of the node's store space closed to zero.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131549268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a system that predicts the load status of distribution transformers in the smart grid. At present, the load status of distribution transformers is generally handled in a post-processing stage, and forecasting of transformer operation and load status is lacking. Given these issues, and in order to reduce costs, ensure the security of the power supply, and improve emergency response capabilities, we present a prediction system that forecasts the load status of distribution transformers using data mining algorithms. The system also provides a platform for managing and maintaining power grid information, on which users can conveniently manage the vast and diverse data sets.
{"title":"A Learning-Based System for Monitoring Electrical Load in Smart Grid","authors":"S. Ding, Yidong Li, Xiaolin Xu, Hongwei Xing, Zhen Wang, Liang Chen, G. Wang, Yu Meng","doi":"10.1109/PDCAT.2016.080","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.080","url":null,"abstract":"This paper mainly presented a system which can make a prediction to the distribution transformer's load status in smart grid. Since the operation of distribution transformer's load status is generally in the post processing stage at the current stage, lacking forecasting work on distribution transformer's operation and load status. Given the issues above, to reduce costs, ensure the security of power supply, and improve the emergency response capabilities, we presented a prediction system, which can predict the load status of distribution transformer by utilising the data mining algorithm. Besides, the system also provides a platform for the management and maintenance of electrified wire netting's information. In this system, users can conveniently manage the vast and multifarious data sets.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":" 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113948699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rapid development of cloud computing brings great convenience to developers. Resource management of cloud platforms has recently become a hot research topic, and the load balancing problem in data centers is particularly important for cloud providers. In this paper, a load balancing framework for cloud platforms is proposed; it uses a threshold window strategy and an advanced autoregressive (AR) prediction model to reduce VM migrations. Experiments show that the method effectively achieves load balancing, improves the utilization of physical machines, and significantly reduces the frequent migrations caused by high instantaneous peak values.
{"title":"An Advanced Load Balancing Strategy for Cloud Environment","authors":"Jiadong Zhang, Qiongxin Liu, Jiayu Chen","doi":"10.1109/PDCAT.2016.059","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.059","url":null,"abstract":"The rapid development of cloud computing, bring great convenience to developers. Recently the resource management of the cloud platform has become a hot research topic, especially the load balancing problem in data center is very important for cloud provider. In this paper, a load balancing framework is proposed for cloud platform, it use the threshold window strategy and an advanced AR prediction model to reduce the migration of VMs. Experiments show that this method can effectively achieve load balancing, promote the utilization of the physical machines, and solve the frequent migration problem caused by high instantaneous peak values significantly.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"42 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125999338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To meet the increasing demand for data rates, next-generation mobile communication (5G) networks are becoming more heterogeneous, irregular, and complex, which leads to more complex interference environments; the traditional fixed-geometry hexagonal model is therefore no longer applicable. To evaluate the performance of 5G heterogeneous networks more accurately, this paper analyzes a downlink two-tier heterogeneous cellular network (HCN) based on stochastic geometry, considering the inter-tier and intra-tier spatial correlation between base stations. We present an empirical study of the average SINR and average throughput for edge and hotspot areas. Compared with the traditional fixed-geometry hexagonal model, the stochastic geometry model is more suitable and accurate for actual 5G heterogeneous cellular networks.
{"title":"Stochastic Geometry Interference Model for 5G Heterogeneous Network","authors":"Cuili Wang, Chao Yang, Lin Liu, Ping Wang, Heng Liu","doi":"10.1109/PDCAT.2016.067","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.067","url":null,"abstract":"In order to meet the increasing demand of data rate, the next generation Mobile Communications (5G) are becoming more heterogeneous, irregular and complex. This will lead to more complex interference environment. So the traditional fixed geometry hexagon model is no longer applicable. In order to more accurately evaluate the performance of the 5G heterogeneous network, in this paper, we proposes to analyze a downlink two-tiers heterogeneous cellular network (HCN) based on the stochastic geometry, which considers the inter-layer and intra-layer spatial correlation between the BSs. We present our empirical study on average SINR and average throughput for edge and hotspot areas. By comparing with the traditional fixed geometry hexagon model, the stochastic geometry model is more suitable and accurate for the actual 5G heterogeneous cellular networks.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127250416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the development of wireless sensor networks, the limited battery capacity of sensor nodes has become an energy bottleneck that restricts their wide application. In recent years, wireless rechargeable sensor networks have attracted much attention due to their potential to solve this energy bottleneck. In this paper, we study the scheduling strategy of the mobile charger in on-demand mobile charging wireless sensor networks. The proposed strategy divides the sensor nodes in the service pool into two categories, so that during the charging tour the mobile charger can serve nodes with priorities that reflect the urgency of their charging requests. The proposed algorithm reduces the charging miss ratio by 89 percent while keeping the decline in charging throughput below 9.91 percent.
{"title":"Efficient Scheduling Strategy for Mobile Charger in Wireless Rechargeable Sensor Networks","authors":"Shanhua Zhan, Jigang Wu, L. Qu, Dan Xin","doi":"10.1109/PDCAT.2016.023","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.023","url":null,"abstract":"With the development of wireless sensor networks, the limited battery capacity of sensor nodes has become one of energy bottleneck problem that dominates the wide application of wireless sensor networks. In recent years, wireless rechargeable sensor networks have attracted much attention due to their potential in solving the energy bottleneck problem. In this paper, we study the scheduling strategy of mobile charger in the on-demand mobile charging wireless sensor networks. The proposed strategy divides the sensor nodes of service pool into two categories, such that the mobile charger can provide charging service in some priority according to the degree of charging request urgency during the charging tour. The proposed algorithm successfully reduces charging missing ratio by 89 percent, and it can keep the charging throughput decline rate less than 9.91 percent.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121643195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Microarrays are one of the latest breakthroughs in experimental molecular biology and already provide a huge amount of valuable gene expression data. Biclustering algorithms were introduced to capture the coherence of a subset of genes under a subset of conditions. In this paper, we present the MIWB algorithm for finding biclusters in gene expression data. MIWB uses weighted mutual information as its similarity measure, which can simultaneously detect complex linear and nonlinear relationships between genes. The algorithm first uses weighted mutual information to construct the seed gene set of each bicluster; it then calculates each gene's probability of belonging to each bicluster and completes the initial partition of the gene set using a given threshold; next, by optimizing the objective function, it updates the weights and selects the condition sets; finally, by repartitioning the entire dataset and refining the biclusters, it obtains the final biclusters. We evaluate the algorithm on a yeast gene expression dataset, and experimental results show that MIWB generates large biclusters with a lower mean squared residue.
{"title":"An Efficient Weighted Biclustering Algorithm for Gene Expression Data","authors":"Y. Jia, Yidong Li, Weihua Liu, Hai-rong Dong","doi":"10.1109/PDCAT.2016.078","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.078","url":null,"abstract":"Microarrays are one of the latest breakthroughs in experimental molecular biology, which already provide huge amount of valuable gene expression data. Biclustering algorithm was introduced to capture the coherence of a subset of genes and a subset of conditions. In this paper, we presented a MIWB algorithm to find biclusters of gene expression data. MIWB algorithm uses the weighted mutual information as similarity measure which can be simultaneously detected complex linear and nonlinear relationships between genes. Our algorithm first used the weighted mutual information to construct the seed gene set of each biculster, then we calculated each gene's probability belonging to each bicluster and complete the initial partition of genes set utilizing the given threshold, then by optimising the objective function we completed weights update and conditions set selection, by further repartition of the entire dataset and optimization of biclusters we obtained the final biclusters. We evaluated our algorithm on yeast gene expression dataset, and experimental results show that MIWB algorithm can generate large capacity biclusters with lower mean squared residue.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114872401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}