For parallel dam break simulations in OpenFOAM (Open Source Field Operation and Manipulation), the core procedure is solving linear systems with iterative methods, and the iterative convergence rate is critical to the overall efficiency. A dynamic mesh repartitioning scheme, DMRPar (Dynamic Mesh Re-Partitioning), which takes the iterative convergence feature into account, is implemented in OpenFOAM. Given that the numerical characteristics of the linear systems change substantially as the complex flow field evolves, DMRPar takes linear system information from the previous timestep into account when repartitioning at the current timestep. The implementation reuses the existing mesh topology in OpenFOAM and builds a distributed adjacency graph structure for the mesh. The repartitioning heuristic is based on ParMetis, a general-purpose multilevel parallel graph partitioning package. Numerical results on two typical dam break simulations show that DMRPar significantly outperforms the traditional static partitioning method in total simulation time.
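The adjacency graph the abstract mentions can be sketched as follows. This is our illustration, not the authors' code: internal mesh faces, each connecting an owner cell and a neighbour cell, are converted into the CSR-style arrays (`xadj`, `adjncy`) that ParMetis-style graph partitioners consume.

```python
# Illustrative sketch (not DMRPar's implementation): derive the CSR
# adjacency arrays of a cell-connectivity graph from a list of internal
# faces, each given as an (owner, neighbour) pair of cell indices.
def mesh_to_csr(num_cells, faces):
    neighbours = [[] for _ in range(num_cells)]
    for owner, nbr in faces:
        neighbours[owner].append(nbr)
        neighbours[nbr].append(owner)
    xadj, adjncy = [0], []
    for cell in range(num_cells):
        adjncy.extend(sorted(neighbours[cell]))  # edges of this cell
        xadj.append(len(adjncy))                 # prefix-sum offsets
    return xadj, adjncy

# A 2x2 block of cells: 0-1 and 2-3 side by side, 0-2 and 1-3 stacked.
xadj, adjncy = mesh_to_csr(4, [(0, 1), (2, 3), (0, 2), (1, 3)])
```

In a real run each MPI rank would hold only its local slice of these arrays (plus a `vtxdist` offset array) before calling the ParMetis repartitioning routine.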
{"title":"DMRPar: A Dynamic Mesh Repartitioning Scheme for Dam Break Simulations in OpenFOAM","authors":"Miao Wang, Xiaoguang Ren, Chao Li, Zhiling Li","doi":"10.1109/PDCAT.2016.054","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.054","url":null,"abstract":"For parallel dam break simulations in OpenFOAM (Open Source Field Operation and Manipulation), the core procedure is solving linear systems using iterative methods and the iterative convergence rate is significant to the overall efficiency. A dynamic mesh repartitioning scheme DMRPar (Dynamic Mesh Re-Partitioning) considering the iterative convergence feature is implemented in OpenFOAM. Given that the numerical characteristics of linear systems change a lot along with the complex flow field, DMRPar takes linear system information from the previous timestep into account for the repartitioning at the current timestep. The implementation reuses current mesh topology in OpenFOAM and calculates distributed adjacency graph structure for the mesh. The repartitioning heuristic is based on a general multi-level parallel graph partitioning package called ParMetis. Numerical results on two typical dam break simulations show that DMRPar outperforms the traditional static partitioning method significantly in the total simulation time.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121554334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As exploiting vulnerabilities in user applications becomes increasingly difficult, vulnerabilities in the Linux kernel have attracted more and more attention, with use-after-free (UAF) vulnerabilities receiving the most focus. However, a complete theory for exploiting use-after-free vulnerabilities is still lacking. The key to exploiting a UAF vulnerability is how to refill the freed object, because it can no longer be assumed that the just-freed space will be the first to be reoccupied. We propose a strategy for exploiting use-after-free vulnerabilities by continuously allocating objects. To improve efficiency and the success rate, we present a technique that refills objects in parallel using multiple threads and a monitor. We also conduct a simulation experiment to verify the effectiveness of our theory. Finally, we present some mitigations against this attack.
{"title":"Parallelly Refill SLUB Objects Freed in Slow Paths: An Approach to Exploit the Use-After-Free Vulnerabilities in Linux Kernel","authors":"Liu Song, Qin Xiao-Jun","doi":"10.1109/PDCAT.2016.088","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.088","url":null,"abstract":"Recently since exploiting vulnerabilities in user application is becoming very difficult, vulnerabilities in Linux kernel have been paid more and more attention, especially the use-after-free vulnerabilities gained the most focus. However, there lacks a completion theory to exploit use-after-free vulnerabilities. The key to exploit UAF vulnerability is how to refill the freed object, because those days that the space just freed will be occupied firstly is gone. We propose a strategy to exploit the use-after-free vulnerabilities by continuously allocating objects. And to promote the efficiency and success rate, we present a technique by parallelly refilling objects with multiple threads and monitor. We also make a simulation experiment to verify the effectiveness of our theory. At last we give some mitigations to this attack.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132660747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectrally Efficient Frequency Division Multiplexing (SEFDM) systems provide better spectrum utilization than Orthogonal Frequency Division Multiplexing (OFDM) systems by relaxing the orthogonality condition among sub-carriers. However, in SEFDM systems the loss of orthogonality results in inter-carrier interference (ICI) and thus reduces transmission reliability. To alleviate the ICI and achieve better error performance, this paper proposes a novel SEFDM transmission scheme, called SEFDM with index modulation (SEFDM-IM). Index modulation, originally proposed for OFDM systems, performs an additional modulation beyond conventional M-ary modulation by selecting the indices of the active sub-carriers. Since a number of sub-carriers are switched off in index modulation, applying it to SEFDM systems reduces the ICI and yields better error performance than SEFDM systems using conventional M-ary modulation alone. Simulation results confirm that the joint application of SEFDM and index modulation can effectively increase transmission reliability.
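The extra bits carried by the sub-carrier indices can be made concrete with a small sketch (ours, not the paper's transmitter): within a group of n sub-carriers, only k are active, so the choice of which k carries floor(log2(C(n, k))) additional bits on top of the M-ary symbols.

```python
from itertools import combinations
from math import comb, floor, log2

# Illustration of index modulation (our sketch, not SEFDM-IM's exact
# mapping): the number of extra bits per group, and a lookup from an
# index-bit value to the set of active sub-carrier positions.
def index_bits(n, k):
    return floor(log2(comb(n, k)))

def select_active(n, k, idx):
    patterns = list(combinations(range(n), k))  # all C(n, k) activation patterns
    return patterns[idx]

# With n=4 sub-carriers and k=2 active there are C(4,2)=6 patterns,
# enough to carry 2 extra index bits per group.
```

A practical system would use only the first 2**index_bits(n, k) patterns and a constant-time combinadic mapping instead of enumerating all combinations.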
{"title":"Spectrally Efficient Nonorthogonal Frequency Division Multiplexing with Index Modulation","authors":"Heng Liu, Lin Liu, Ping Wang","doi":"10.1109/PDCAT.2016.068","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.068","url":null,"abstract":"Spectrally Efficient Frequency Division Multiplexing (SEFDM) systems provide enhanced spectrum utilization compared with Orthogonal Frequency Division Multiplexing (OFDM) systems by relaxing the orthogonality condition among sub-carriers. However, in the SEFDM systems, the loss of orthogonality results in the inter-carrier-interference (ICI) thus reduces the transmission reliability. To alleviate the ICI and achieve better error performance, this paper proposes a novel SEFDM transmission scheme, called SEFDM with index modulation (SEFDM-IM). The index modulation, which was originally proposed for OFDM systems, performs an additional modulation besides conventional M-ary modulation by selecting the indices of the sub-carriers. Since a number of sub-carriers are switched off in index modulation, when the index modulation is applied in SEFDM systems, the ICI is reduced and better error performance can be obtained in comparison with the SEFDM systems using conventional Mary modulation. Simulation results confirm this conclusion that the joint applications of SEFDM and index modulation can effectively increase the transmission reliability.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130255053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhixiang Liu, E. Sha, Xianzhang Chen, Weiwen Jiang, Qingfeng Zhuge
The growing demand for high-performance data processing stimulates the development of in-memory file systems, which exploit the advanced features of emerging non-volatile memory technologies to achieve high-speed file accesses. Existing in-memory file systems, however, are all designed for systems with uniform memory accesses. Their performance is poor on Non-Uniform Memory Access (NUMA) machines because they consider neither the asymmetric memory access speeds nor the multi-node architecture. In this paper, we propose a new design for NUMA-aware in-memory file systems. We propose a distributed file system layout for balancing the load of in-memory file accesses across nodes, along with a thread-file binding algorithm and a buffer assignment technique for increasing local memory accesses at run time. Based on the proposed techniques, we implement a functional NUMA-aware in-memory file system, HydraFS, in the Linux kernel. Extensive experiments are conducted with a standard benchmark. The experimental results show that HydraFS significantly outperforms typical existing in-memory file systems, including EXT4-DAX, PMFS, and SIMFS.
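The load-balancing idea behind a distributed layout can be sketched with a simple greedy placement (names and policy are ours, not HydraFS's internals): each file goes to the NUMA node that currently holds the fewest assigned bytes, so accesses are spread across nodes and, with thread-file binding, mostly served from local memory.

```python
import heapq

# Hypothetical sketch of load-balanced file placement across NUMA nodes:
# greedily assign the largest files first to the least-loaded node.
def assign_files(file_sizes, num_nodes):
    heap = [(0, node) for node in range(num_nodes)]  # (bytes assigned, node)
    heapq.heapify(heap)
    placement = {}
    for name, size in sorted(file_sizes.items(), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(heap)   # least-loaded node
        placement[name] = node
        heapq.heappush(heap, (load + size, node))
    return placement

placement = assign_files({"a": 4, "b": 3, "c": 2, "d": 1}, 2)
```

Largest-first greedy placement is the classic heuristic for minimizing the maximum per-node load; a real file system would also weigh access frequency, not just size.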
{"title":"Performance Optimization for In-Memory File Systems on NUMA Machines","authors":"Zhixiang Liu, E. Sha, Xianzhang Chen, Weiwen Jiang, Qingfeng Zhuge","doi":"10.1109/PDCAT.2016.018","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.018","url":null,"abstract":"The growing demand for high-performance data processing stimulates the development of in-memory file systems, which exploit the advanced features of emerging non-volatile memory techniques for achieving high-speed file accesses. Existing in-memory file systems, however, are all designed for the systems with uniformed memory accesses. Their performance is poor on Non-Uniform Memory Access (NUMA) machines as they do not consider the asymmetric memory access speed and the architecture of multiple nodes. In this paper, we propose a new design of NUMA-aware in-memory file systems. We propose a distributed file system layout for leveraging the loads of in-memory file accesses on different nodes, a thread-file binding algorithm and a buffer assignment technique for increasing local memory accesses during run-time. Based on the proposed techniques, we implement a functional NUMA-aware in-memory file system, HydraFS, in Linux kernel. Extensive experiments are conducted with the standard benchmark. The experimental results show that HydraFS significantly outperforms typical existing in-memory file systems, including EXT4-DAX, PMFS, and SIMFS.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131748945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuedan Chen, Kenli Li, Xiongwei Fei, Zhe Quan, Kuan-Ching Li
With the rapid development of information technology, the security of massive amounts of digital data has attracted great attention in recent years. In this paper, we provide an efficient parallel implementation of the Advanced Encryption Standard (AES) algorithm, a widely used symmetric block cipher, on the Sunway TaihuLight, China's independently developed heterogeneous supercomputer with a peak performance of over 100 PFlops. We further optimize the parallel AES implementation on the Sunway TaihuLight, including within a single SW26010 node. Scaling to 1024 nodes, we achieve a throughput of about 63.91 GB/s (511.28 Gbit/s). Our parallel implementation of the AES algorithm exhibits good parallel scalability, and the speedup remains high as the number of nodes increases.
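The two throughput figures are consistent with each other, as a quick unit check shows (the block-rate line is our own derived quantity, not a number from the paper):

```python
# Sanity-check the reported AES throughput: bytes-to-bits conversion,
# and the implied rate of 16-byte (128-bit) AES blocks per second.
throughput_GBps = 63.91
throughput_Gbps = throughput_GBps * 8            # 511.28 Gbit/s, as reported
blocks_per_second = throughput_GBps * 1e9 / 16   # ~4.0e9 AES blocks/s
```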
{"title":"Implementation and Optimization of AES Algorithm on the Sunway TaihuLight","authors":"Yuedan Chen, Kenli Li, Xiongwei Fei, Zhe Quan, Kuan-Ching Li","doi":"10.1109/PDCAT.2016.062","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.062","url":null,"abstract":"With the rapid development of information technology, the security of massive amounts of digital data has attracted huge attention in recent years. In this paper, we provide an efficient parallel implementation of the Advanced Encryption Standard (AES) algorithm, a widely used symmetrical block encryption algorithm, based on the Sunway TaihuLight. The Sunway TaihuLight is a China's independently developed heterogeneous supercomputer with peak performance over 100 PFlops. We also optimize the parallel implementation of the AES algorithm based on the Sunway TaihuLight to achieve more optimized performance. The optimization of the parallel AES algorithm in a single SW26010 node is provided. Specifically, we expand the scale to 1024 nodes and achieve the throughput of about 63.91 GB/s (511.28 Gbits/s). Our parallel implementation of the AES algorithm has great parallel scalability and the speedup ratio can be very high with the number of nodes increasing.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134391075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present an accurate saliency detection algorithm for 3D images based on depth features. We first compute a depth cue based on the positions of sharp regions within the depth range. Then, a coarse saliency map is computed based on background and location priors. Finally, we employ the contrast information in the coarse saliency map to obtain the final result. Experimental evaluation against existing methods verifies the effectiveness of the proposed algorithm in terms of precision, recall, and F-measure.
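For reference, the F-measure used in such evaluations combines precision and recall as a weighted harmonic mean. The weight beta^2 = 0.3, which emphasizes precision, is the convention common in saliency-detection papers; the abstract does not state the value, so it is an assumption here.

```python
# Weighted F-measure over precision and recall; beta2 = 0.3 is the usual
# saliency-benchmark choice (assumed, not stated in the abstract).
def f_measure(precision, recall, beta2=0.3):
    if beta2 * precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```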
{"title":"Depth Feature Based Accurate Saliency Detection for 3D Images","authors":"Bing Yan, Haoqian Wang, Xingzheng Wang, Yongbing Zhang","doi":"10.1109/PDCAT.2016.047","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.047","url":null,"abstract":"In this paper, we present an accurate saliency detection algorithm based on depth feature for 3D images. We first calculate depth cue based on the sharp regions' positions within the depth ranges. Then, the coarse saliency map is computed based on the background and location prior. Finally, we employ the contrast information in the coarse saliency map to obtain the final result. Experimental evaluation by comparison with existed methods verifies the effectiveness of our proposed algorithm in terms of precision, recall and F-Measure.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"601 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116327529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Because the number of candidate nodes in opportunistic routing can be too large, this paper proposes a distance-based estimation method for the number of candidate nodes (DBNCE). The method sets the number of candidate nodes for each node participating in packet forwarding according to the distance between the current node and the destination, while also combining two further factors: the network density and the number of neighbor nodes of the current node. Simulation results show that using DBNCE in opportunistic routing effectively reduces the number of candidate nodes while guaranteeing the data transmission rate, thereby improving network performance.
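The abstract does not give DBNCE's exact formula, so the following is only a hypothetical illustration of the idea: scale a candidate budget by the node's relative distance to the destination, then clamp it by how many neighbors the node actually has.

```python
# Hypothetical sketch of distance-based candidate-set sizing (our own
# formula, not DBNCE's): nodes closer to the destination keep fewer
# candidates, and the budget never exceeds the neighbor count.
def candidate_count(dist_to_dest, max_dist, num_neighbors, cap=8):
    budget = max(1, round(cap * dist_to_dest / max_dist))
    return min(budget, num_neighbors)
```

A density term could further shrink `cap` in dense regions, where even a small candidate set gives enough forwarding diversity.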
{"title":"An Estimation Method on the Number of Candidate Nodes in Opportunistic Routing","authors":"Xinyou Zhang, Chen Lei-yi, Huanlai Xing","doi":"10.1109/PDCAT.2016.072","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.072","url":null,"abstract":"Because of the number of candidate nodes in opportunistic routing is too large, this paper proposed an estimation method on the number of candidate nodes based on distance (DBNCE). This method sets the number of candidate nodes for each node which participates in forwarding data package according to the distance between current node and the destination, also combines the two factors: network density and the number of neighbor nodes in current node. Simulation results show that using DBNCE in opportunistic routing will reduce the number of candidate nodes effectively while guarantee the rate of data transmission, and improve the performance of the network.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127102954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The optimization of task scheduling in the Hadoop environment is an important research topic, as the result of task scheduling affects system performance and resource utilization. Existing task scheduling algorithms lack consideration of the cache level, which greatly affects task performance. Therefore, this paper proposes an improved task scheduling algorithm based on cache locality and data locality. First, a section matrix and a weighted bipartite graph are constructed according to the relation between resources and tasks. Then, bipartite graph matching is used to realize map task scheduling, optimizing local cache use and data locality and reducing the amount of data transferred during task execution. The experimental results show that the proposed algorithm effectively improves data locality and system performance, outperforming two other algorithms.
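The matching step can be sketched as a standard augmenting-path maximum matching (our simplification of the paper's weighted matching): link each map task to the nodes holding its data or a warm cache, then match so that as many tasks as possible run locally.

```python
# Sketch of locality-driven map-task assignment via maximum bipartite
# matching (augmenting paths). task_edges[t] lists the node indices on
# which task t would be cache- or data-local.
def max_matching(task_edges, num_nodes):
    match = [-1] * num_nodes  # node -> task currently assigned to it

    def augment(task, seen):
        for node in task_edges[task]:
            if node in seen:
                continue
            seen.add(node)
            # Take a free node, or evict the occupant if it can move.
            if match[node] == -1 or augment(match[node], seen):
                match[node] = task
                return True
        return False

    return sum(augment(t, set()) for t in range(len(task_edges)))
```

The paper's version is weighted (cache hits score higher than mere data locality), which replaces this maximum-cardinality matching with a maximum-weight one.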
{"title":"An Improved Task Scheduling Algorithm Based on Cache Locality and Data Locality in Hadoop","authors":"P. Zhang, Chunlin Li, Yahui Zhao","doi":"10.1109/PDCAT.2016.060","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.060","url":null,"abstract":"The optimization of task scheduling in Hadoop environment is an important research topic. The result of task scheduling affects the system performance and resource utilization. The existing task scheduling algorithm is lack of consideration at the cache level, which makes the performance of the task greatly affected. Therefore, this paper proposes an improved task scheduling algorithm based on cache locality and data locality. Firstly section matrix and weighted bipartite graph are constructed according to the relation between resources and tasks. Then the bipartite graph matching is used to realize map task scheduling for optimizing the local cache and data locality and reducing the data transmission amount during task execution process. The experimental results show that the proposed algorithm can effectively improve the data locality and system performance, which is better than other two algorithms.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128676198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Network simulation is an important technique for verifying new algorithms, analyzing network performance, and planning the deployment of practical networks. Different network simulators suit different scenarios, and this paper discusses their performance in different applications in detail. Three major network simulators are introduced: OPNET, Network Simulator (NS), and Objective Modular Network Testbed in C++ (OMNeT++). NS is widely used in network research, and how to apply NS to network traffic analysis is also discussed.
{"title":"Comparison on Network Simulation Techniques","authors":"Xiaoping Zhou, Hui Tian","doi":"10.1109/PDCAT.2016.073","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.073","url":null,"abstract":"Network simulation is an important technique for verifying new algorithms, analyzing network performance and deploying the practical networks. Different network simulation softwares are applied for different scenarios. In this paper, their performance in different applications are discussed in detail. Three kinds of main network simulation softwares are introduced in this paper: OPNET, Network Simulator (NS) and Objective Modular Network Testbed in C++ (OMNeT++). NS is widely used in network research. How to apply NS in network traffic analysis is discussed in this paper.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128859324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Array-based storage and retrieval systems are in demand in many high-dimensional applications, such as big data, because they are easy to maintain. However, conventional approaches lack scalability as data sets grow dynamically, since they entail reallocation in order to accommodate the expanded data. To keep up with the velocity of data, a storage system must be scalable enough to allow expansion along the boundary of any array dimension. Moreover, for an array-based storage system, if the number of dimensions and the length of each dimension are very large, the required address space overflows, and it becomes impossible to allocate such a large array in memory. The index array offers a dynamic storage scheme that accommodates expanding data by employing an index for each dimension. In this paper we demonstrate a scalable array storage scheme that divides the expanded data into segments; it can therefore handle overflow and achieves better storage utilization than the conventional approach. The system converts the n dimensions of the array into 2 dimensions, so it involves only 2 indices, which ensures a lower cost of index computation and higher data locality.
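One simple way to realize an n-dimensions-to-2 conversion is to split the dimension lengths into two groups and flatten each group in row-major order; any n-d subscript then becomes a (row, column) pair. This is our own construction to illustrate the idea, not necessarily the paper's exact mapping.

```python
# Illustrative n-d -> 2-d index conversion: dims[:split] form the row
# index, dims[split:] form the column index, each flattened row-major.
def to_2d(subscript, dims, split):
    def flatten(idx, lens):
        pos = 0
        for i, n in zip(idx, lens):
            pos = pos * n + i   # Horner-style row-major accumulation
        return pos
    return (flatten(subscript[:split], dims[:split]),
            flatten(subscript[split:], dims[split:]))

# A 2x3x4x5 array viewed as a 6x20 plane: subscript (1,2,3,4)
# maps to row 1*3+2 = 5, column 3*5+4 = 19.
```

Each resulting axis has length equal to the product of its group's dimensions, so choosing `split` to balance the two products keeps both indices small and avoids a single-axis address-space overflow.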
{"title":"Towards an Efficient Maintenance of Address Space Overflow for Array Based Storage System","authors":"M. Omar, K. Hasan","doi":"10.1109/PDCAT.2016.040","DOIUrl":"https://doi.org/10.1109/PDCAT.2016.040","url":null,"abstract":"Array based storage and retrieval systems are demanded in many high dimensional systems like Big data for their easy maintenance. However, the lack of scalability of the conventional approaches degrades with the dynamic size of data sets as they entail reallocation in order to preserve expanded data velocity. To maintain the velocity of data, the storage system must be scalable enough by allowing subjective expansion on the boundary of array dimension. Again, for an array based storage system, if the number of dimension and length of each dimension of the array is very high then the required address space overflows and hence it is impossible to allocate such a big array in the memory. The index array offers a dynamic storage scheme for preserving expanded data velocity by employing indices for each dimension. In this paper we demonstrate a scalable array storage scheme that divides expanded data size into segments. Hence it is able to maintain overflow and can improve the storage utilization than the conventional one. The system converts the n dimensions of the array into 2 dimensions, hence it involves only 2 indices which ensures lower cost of index computation and higher data locality.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121557858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}