A smoothing based task scheduling algorithm for heterogeneous multi-cloud environment
Pub Date: 2014-12-01 | DOI: 10.1109/PDGC.2014.7030716
S. K. Panda, Subhrajit Nag, P. K. Jana
Task scheduling in a heterogeneous multi-cloud environment is a well-known NP-complete problem. Due to the exponential increase in client applications (i.e., workloads), cloud providers need to adopt an efficient task scheduling algorithm to handle these workloads. Furthermore, a cloud provider may need to collaborate with other providers to cope with fluctuations in demand. This workload-sharing problem is referred to as the heterogeneous multi-cloud task scheduling problem. In this paper, we propose a task scheduling algorithm for the heterogeneous multi-cloud environment. The algorithm organizes the tasks using a smoothing concept. We perform rigorous experiments on synthetic and benchmark datasets and compare the results with two well-known multi-cloud algorithms, namely CMMS and CMAXMS. The comparison shows the superiority of the proposed algorithm in terms of two evaluation metrics: makespan and average cloud utilization.
{"title":"A smoothing based task scheduling algorithm for heterogeneous multi-cloud environment","authors":"S. K. Panda, Subhrajit Nag, P. K. Jana","doi":"10.1109/PDGC.2014.7030716","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030716","url":null,"abstract":"Task scheduling for heterogeneous multi-cloud environment is a well-known NP-complete problem. Due to exponential increase of client applications (i.e., workloads), cloud providers need to adopt an efficient task scheduling algorithm to handle workloads. Furthermore, the cloud provider may require to collaborate with other cloud providers to avoid fluctuation of demands. This workload sharing problem is referred as heterogeneous multi-cloud task scheduling problem. In this paper, we propose a task scheduling algorithm for heterogeneous multi-cloud environment. The algorithm is based on smoothing concept to organize the tasks. We perform rigorous experiments on synthetic and benchmark datasets and compare their results with two well-known multi-cloud algorithms namely, CMMS and CMAXMS. The comparison results show the superiority of the proposed algorithm in terms of two evaluation metrics, makespan and average cloud utilization.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117181236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Acoustic communication characteristics in UWDBCSN
Pub Date: 2014-12-01 | DOI: 10.1109/PDGC.2014.7030737
S. Saxena, Deepak Mehta, Jasminder Kaur, Himanshu Jindal
An underwater ad-hoc sensor network differs from a terrestrial network in terms of energy consumption, communication and topology. Acoustic communication has been identified as an energy-efficient means of communication in such networks. Further, multi-hop acoustic communication from the bottom of the water to the surface goes a long way toward saving the sensors' energy. To avoid battery recharging or replacement, a better topology and an improved communication method need to be developed. In order to exploit the advantages of multi-hopping and to save energy, we earlier proposed the Underwater Node-Density Based Clustering Sensor Network protocol (UWDBCSN). In this paper we give a glimpse of that protocol and discuss some consequences of acoustic communication in the protocol, such as spreading loss, absorption loss and path loss.
{"title":"Acoustic communication characteristics in UWDBCSN","authors":"S. Saxena, Deepak Mehta, Jasminder Kaur, Himanshu Jindal","doi":"10.1109/PDGC.2014.7030737","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030737","url":null,"abstract":"An underwater ad-hoc sensor network differs from terrestrial network in terms of energy consumption, communication and topology. An acoustic communication is identified as energy efficient way of communication in such networks. Further a multi-hop acoustic communication from bottom of the water to the surface adds many folds to save sensors' energy. To avoid battery recharging or replacement a better topology and improved communication method needs to be developed. In order to exploit multi-hoping advantages and to save energy we have proposed earlier an Under Water Node-Density Based Clustering Sensor Network Protocol (UWDBCSN). In this paper we will give a glimpse of that protocol and discuss some consequences of Acoustic Communication like spreading loss, absorption loss and path loss in the protocol.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115013807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incremental learning in students classification system with efficient knowledge transformation
Pub Date: 2014-12-01 | DOI: 10.1109/PDGC.2014.7030738
Roshani Ade, P. R. Deshmukh
The amount of student data in educational databases is growing day by day, so the knowledge extracted from these data needs to be updated continuously. Where a continuous flow of student data must be handled, the challenge is how to turn this massive amount of data into information and how to accommodate the new knowledge introduced with the new data. In this paper, an adaptive incremental learning algorithm for a student classification system is proposed, which competently transforms the knowledge throughout the system and also detects new concept classes efficiently. A conceptual view of the system is designed along with the algorithm, and experimental results on the student data as well as some publicly available datasets are used to demonstrate the efficiency of the proposed algorithm.
{"title":"Incremental learning in students classification system with efficient knowledge transformation","authors":"Roshani Ade, P. R. Deshmukh","doi":"10.1109/PDGC.2014.7030738","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030738","url":null,"abstract":"The amount of students data in the educational databases is growing day by day, so the knowledge taken out from these data need to be updated continuously. In the circumstances, where there is a need of handling continuous flow of student's data, there is a challenge of how to handle this massive amount of data into the information and how to accommodate new knowledge introduces with the new data. In this paper, the adaptive incremental learning algorithm for Students classification system is proposed, which competently transforms the knowledge throughout the system and also detects the new concept class efficiently. In this paper, conceptual view of the system is designed with the algorithm and experimental results on the student's data as well as some available data sets are used to prove the efficiency of the proposed algorithm.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116335386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance evaluation of Static Level based Batch Scheduling Strategy (SLBBS) for computational grid
Pub Date: 2014-12-01 | DOI: 10.1109/PDGC.2014.7030736
Mohammad Shahid, Z. Raza
A computational grid is a parallel and distributed infrastructure involving heterogeneous resources that provides dependable, pervasive and consistent access to compute-intensive resources from multiple organizations not subject to centralized administrative control, while delivering optimized QoS parameters. Job scheduling (i.e., mapping) is the core issue in a computational grid. The problem of mapping jobs onto heterogeneous computational resources to optimize one or more QoS parameters has been proven to be NP-complete. This work discusses the Static Level based Batch Scheduling Strategy (SLBSS) and other state-of-the-art batch scheduling algorithms, viz. Min-Min, Max-Min, Sufferage and LJFR-SJFR, along with their design motivation and limitations. A performance evaluation and analysis of SLBSS against its peers is carried out to assess its significance in the middleware.
{"title":"Performance evaluation of Static Level based Batch Scheduling Strategy (SLBBS) for computational grid","authors":"Mohammad Shahid, Z. Raza","doi":"10.1109/PDGC.2014.7030736","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030736","url":null,"abstract":"Computational grid is a parallel and distributed infrastructure involving heterogeneous resources providing dependable, pervasive and consistent access of compute intensive resources from multiple organizations that are not subject to centralization at the administrative level delivering optimized QoS parameters. Job scheduling (i.e. mapping) is the core issue in computational grid. The problem of mapping jobs onto heterogeneous computational resources to optimize one or more QoS parameters has been proven to be NP-Complete. This work discusses the Static Level based Batch Scheduling Strategy (SLBSS) and other state of art batch scheduling algorithms viz. Min Min, Max Min, Sufferage and LJFR-SJFR along with their design motivation and limitations. A performance evaluation and analysis of SLBSS with its peers is done to evaluate its significance in the middleware.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121590912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An effective approach for finding periodicity of a subject in video data
Pub Date: 2014-12-01 | DOI: 10.1109/PDGC.2014.7030746
Pushplata Mishra, S. Samantaray, A. Bist
Video-based face recognition is an emerging research issue that has received much attention in recent years. This research presents an effective approach for calculating the periodicity of a subject, i.e., the exact appearances of a subject at different times in a video data stream. The system combines two stages: face detection and face recognition. Face detection is performed on the video frames. The Local Binary Pattern (LBP) is studied and implemented for the (4, 0.5), (8, 1), (8, 2), (16, 2) and (24, 3) operators, where the first value is the number of neighbouring pixels and the second is the radius from the centre pixel to the neighbours. LBP, HOG and Gradientface methods are implemented to compare the results and to show how well these methods handle variations in expression, pose and illumination. For the captured videos under consideration, the most effective results are 92.3% using LBP(24, 3), 97% using HOG and 100% using the Gradientface method. For noisy images, Gradientface achieves 95.7%, which shows that the method is more robust to noise than LBP and HOG.
{"title":"An effective approach for finding periodicity of a subject in video data","authors":"Pushplata Mishra, S. Samantaray, A. Bist","doi":"10.1109/PDGC.2014.7030746","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030746","url":null,"abstract":"Video based Face Recognition is an emerging research issue which has received much attention during the recent years. In this research, an effective approach for calculating the periodicity of a subject i.e. exact appearance of a subject in different time in video data stream is presented. The system is the combination of two studies: face detection and face recognition. The face detection is performed on video frames. There is a study and implementation of Local Binary Pattern for (4,.5), (8,1), (8,2), (16,2) and (24,3) operators where first value defines neighboring pixels and second denotes radius from centre pixel to neighbor pixels. LBP, HOG and Gradientface methods are implemented for comparing the results and also to compare to show how well these methods can handle variations in expression, pose and illumination. Finally the efficient approach evolved that gives the most effective results 92.3 % result using LBP(24,3), 97 % result using HOG and 100% results by using Gradientface method for captured videos under considerations. For noisy images, Gradientface has achieved 95.7 % result which shows that the method is robust to noise in comparison to LBP and HOG.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121702805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi model Personal Authentication using Finger vein and Face Images (MPAFFI)
Pub Date: 2014-12-01 | DOI: 10.1109/PDGC.2014.7030767
B. E. Manjunathswamy, J. Thriveni, K. Venugopal, L. Patnaik
Biometric-based identification is widely adopted for personnel identification. Unimodal recognition systems currently suffer from noisy data, spoofing attacks, poor biometric sensor data quality and more. Robust personnel recognition can be achieved by considering multimodal biometric traits. This paper introduces Multimodal Personnel Authentication using Finger vein and Face Images (MPAFFI), which considers the finger vein and face biometric traits. Magnitude and phase features obtained from Gabor kernels are used to describe the biometric traits. The biometric feature space is reduced using the Fisher score and Linear Discriminant Analysis. Personnel recognition is achieved using a weighted k-nearest neighbour classifier. The experimental study presented in the paper uses the SDUMLA-HMT (Group of Machine Learning and Applications, Shandong University - Homologous Multimodal Traits) multimodal biometric dataset. The performance of MPAFFI is compared with existing recognition systems, and the improvement is demonstrated through the results obtained.
{"title":"Multi model Personal Authentication using Finger vein and Face Images (MPAFFI)","authors":"B. E. Manjunathswamy, J. Thriveni, K. Venugopal, L. Patnaik","doi":"10.1109/PDGC.2014.7030767","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030767","url":null,"abstract":"Biometric based identifications are widely adopted for personnel identification. The unimodal recognition systems currently suffer from noisy data, spoofing attacks, biometric sensor data quality and many more. Robust personnel recognition considering multimodal biometric traits can be achieved. This paper introduces the Multimodal Personnel Authentication using Finger vein and Face Images (MPAFFI) considering the Finger Vein and Face biometric traits. The use of Magnitude and Phase features obtained from Gabor Kernels is considered to define the biometric traits of personnel. The biometric feature space is reduced using Fischer Score and Linear Discriminate Analysis. Personnel recognition is achieved using the weighted K-nearest neighbor classifier. The experimental study presented in the paper considers the (Group of Machine Learning and Applications, Shandong University-Homologous Multimodal Traits) SDUMLA - HMT multimodal biometric dataset. The performance of the MPAFFI is compared with the existing recognition systems and the performance improvement is proved through the results obtained.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127925795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reengineering process of legacy systems for the cloud: An overview
Pub Date: 2014-12-01 | DOI: 10.1109/PDGC.2014.7030735
Suman Jain, Inderveer Chana
Cloud computing has given expression to the global vision of utility computing, with promising trends toward a new world of information and communication technology. The transformation of existing systems into new ones compatible with a target cloud computing framework is one of the most important cloud computing issues. Reengineering and migration are popular and efficient approaches for altering legacy systems. The need for efficient reengineering approaches is intensified by rapidly changing user demands and technological advances.
{"title":"Reengineering process of legacy systems for the cloud: An overview","authors":"Suman Jain, Inderveer Chana","doi":"10.1109/PDGC.2014.7030735","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030735","url":null,"abstract":"Cloud computing has expressed the global vision of utility computing with promising trends to a new world of information and communication technology. The transformation of the already existing systems into new ones compatible with target cloud computing framework is one of the most important cloud computing issues. Reengineering and migration are the popular and efficient approaches to be deployed for the alteration of the legacy systems. The need of efficient reengineering approaches is very intensive with the rapid change in user demands and technological advances.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121626731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mutated firefly algorithm
Pub Date: 2014-12-01 | DOI: 10.1109/PDGC.2014.7030711
Sankalap Arora, Sarbjeet Singh, Satvir Singh, Bhanu Sharma
In the standard firefly algorithm, every firefly has the same parameter settings, and their values change from iteration to iteration. The solutions keep changing as the optima are approached, which means the algorithm may fall into a local optimum. Furthermore, the underlying strength of the algorithm lies in the attraction of a less bright firefly towards a brighter one, which affects the convergence speed and precision. To keep the algorithm from falling into local optima and to reduce the impact of the maximum number of iterations, a mutated firefly algorithm is proposed in this paper. The proposed algorithm monitors the movement of the fireflies by assigning a different probability to each firefly and then mutates each firefly according to its probability. Simulations comparing the proposed algorithm with the standard firefly algorithm are performed on ten standard benchmark functions. The results reveal that the proposed algorithm improves convergence speed and accuracy and prevents premature convergence.
{"title":"Mutated firefly algorithm","authors":"Sankalap Arora, Sarbjeet Singh, Satvir Singh, Bhanu Sharma","doi":"10.1109/PDGC.2014.7030711","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030711","url":null,"abstract":"In the standard firefly algorithm, every firefly has same parameter settings and its value changes from iteration to iteration. The solutions keeps on changing as the optima are approaching which results that it may fall into local optimum. Furthermore, the underlying strength of the algorithm lies in the attractiveness of less brighter firefly towards the brighter firefly which has an impact on the convergence speed and precision. So to avoid the algorithm to fall into local optimum and reduce the impact of maximum of iteration, a mutated firefly algorithm is proposed in this paper. The proposed algorithm is based on monitoring the movement of fireflies by using different probability for each firefly and then perform mutation on each firefly according to its probability. Simulations are performed to show the performance of proposed algorithm with standard firefly algorithm, based on ten standard benchmark functions. The results reveals that proposed algorithm improves the convergence speed, accurateness and prevent the premature convergence.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116956343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Behavior analysis of LEACH protocol
Pub Date: 2014-12-01 | DOI: 10.1109/PDGC.2014.7030717
P. Maurya, Amanpreet Kaur, Rohit Choudhary
A wireless sensor network (WSN) is an emerging field comprising sensor nodes with limited resources such as power and memory. It is used to monitor remote areas where recharging or replacing the batteries of sensor nodes is not possible, so energy is the most challenging issue in a WSN. Low-Energy Adaptive Clustering Hierarchy (LEACH) is the first significant protocol that consumes little energy while routing data to the base station. In this paper, the LEACH protocol is analysed for different percentages of cluster heads and for different locations of the base station in the network.
{"title":"Behavior analysis of LEACH protocol","authors":"P. Maurya, Amanpreet Kaur, Rohit Choudhary","doi":"10.1109/PDGC.2014.7030717","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030717","url":null,"abstract":"A wireless sensor network (WSN) is an emerging field comprising of sensor nodes with limited resources like power, memory etc. It is used to monitor the remote areas where recharging or replacing the battery power of sensor nodes is not possible. So, energy is a most challenging issue in case of WSN. Low-Energy Adaptive Clustering Hierarchy (LEACH) is the first significant protocol which consumes less amount of energy while routing the data to the base station. In this paper LEACH protocol has been analyzed with different percentage of cluster heads at different locations of base station in the network.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"35 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130147336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallelizing doolittle algorithm using TBB
Pub Date: 2014-12-01 | DOI: 10.1109/PDGC.2014.7030707
S. Sah, Dinesh Naik
This paper presents a different approach to parallelizing the Doolittle algorithm with the help of Intel Threading Building Blocks (TBB), allowing users to exploit the multiple cores present in modern CPUs. The Parallel Doolittle Algorithm (PDA) is divided into three parts: decomposing the data, processing the data in parallel, and finally composing the data. Using the PDA, we can solve a linear system of equations in considerably less time than with the Serial Doolittle Algorithm (SDA). The PDA has been implemented in C++ using the TBB library, which makes it highly efficient, cross-platform compatible and scalable. The efficiency of PDA over SDA has been verified by comparing running times on matrices of different orders. Experiments show that PDA outperforms SDA by utilizing all the cores present in the CPU.
{"title":"Parallelizing doolittle algorithm using TBB","authors":"S. Sah, Dinesh Naik","doi":"10.1109/PDGC.2014.7030707","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030707","url":null,"abstract":"This paper presents a different approach for parallelizing the Doolittle Algorithm with the help of Intel Threading Building Blocks (TBB) allowing the users to utilize the power of multiple cores present in the modern CPUs. Parallel Doolittle Algorithm (PDA) has been divided into 3 parts: Decomposing the data, Parallely processing the data, finally Composing the data. Using the PDA we can solve the linear system of equations in considerably lesser amount time as compare to Serial Doolittle Algorithm (SDA). The PDA has been implemented in C++ using TBB library which makes it highly efficient, cross-platform compatible, and scalable. The efficiency of PDA over SDA has been verified by comparing the running time on different order of matrices. Experiments proved that PDA outperformed SDA by utilizing all the cores present in the CPU.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"35 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120915942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}