Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315748
Rakhee, Archana Singh, Mamta Mittal
A timely and accurate prediction of solar radiation supports proper plant growth, seed germination, and the stages of flowering and fruiting. Neural networks are becoming popular for designing predictive models; however, issues such as identifying the importance of variables and the long training process have limited their accuracy. The objective of this study is to explore the performance of a predictive model that integrates a neural network with traditional step-wise discriminant analysis to form a hybrid model. Feeding the features selected by discriminant analysis into the neural network improves the accuracy of the resulting predictive model. The paper also shows that the hybrid approach outperforms the plain neural network across different network architectures.
Title: Prediction of Solar Radiation using Hybrid Discriminant-Neural Network (2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC))
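The hybrid pipeline described above, feature selection before a neural network, can be sketched as follows. The paper uses step-wise discriminant analysis; here a per-feature Fisher discriminant ratio serves as a simplified stand-in (not the authors' exact procedure) to rank features on synthetic data, and only the top-ranked features would be fed to the network.

```python
import random

random.seed(0)

# Synthetic two-class data: feature 0 is informative, features 1-3 are noise.
def sample(cls):
    return [random.gauss(3.0 if cls else 0.0, 1.0)] + [random.gauss(0, 1) for _ in range(3)]

X = [sample(0) for _ in range(100)] + [sample(1) for _ in range(100)]
y = [0] * 100 + [1] * 100

def fisher_ratio(col, labels):
    # J = (mean difference)^2 / (sum of within-class variances)
    g0 = [v for v, c in zip(col, labels) if c == 0]
    g1 = [v for v, c in zip(col, labels) if c == 1]
    mean = lambda g: sum(g) / len(g)
    var = lambda g: sum((v - mean(g)) ** 2 for v in g) / (len(g) - 1)
    return (mean(g0) - mean(g1)) ** 2 / (var(g0) + var(g1))

scores = [fisher_ratio([row[j] for row in X], y) for j in range(4)]
selected = sorted(range(4), key=lambda j: scores[j], reverse=True)[:2]
print(selected)  # feature 0 should rank first
```

In the full hybrid model, only the `selected` columns would be passed on as inputs to the neural network, shrinking its input layer and training time.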
Pub Date: 2020-11-06 | DOI: 10.1109/tale.2016.7851755
S. Siengchin
I am glad to learn that the Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC) in Virtual Mode is being organized by the Department of Computer Science & Engineering and Information Technology at Jaypee University of Information Technology, Waknaghat, Himachal Pradesh from 6th to 8th November, 2020.
Title: Message
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315761
Ishpreet Kaur, Jasleen Kaur
Customer churn, also known as customer attrition, is a growing problem: nearly 1.5 million customers churn every year, and the number keeps rising. The banking industry faces challenges in retaining clients, who may shift to different banks for various reasons, for example, better financial services at lower charges, bank branch location, or lower interest rates. Prediction models are therefore used to identify clients who are likely to churn in the future, because serving long-term customers is less costly than losing a client, which reduces the bank's profit; moreover, old customers generate higher profits and provide new referrals. In this paper, machine learning models such as logistic regression (LR), decision tree (DT), K-nearest neighbors (KNN), and random forest (RF) are applied to a bank dataset to predict the probability that a customer will churn. A comparison of their performance in terms of accuracy, recall, and other metrics is presented.
Title: Customer Churn Analysis and Prediction in Banking Industry using Machine Learning
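A minimal sketch of one of the compared models, K-nearest neighbors, on a synthetic bank dataset (hypothetical features and churn rule, not the paper's data), reporting the accuracy and recall metrics the paper compares:

```python
import random

random.seed(1)

# Hypothetical churn generator: [balance_low, products, inactive] -> churn flag.
def customer():
    inactive = random.random() < 0.5
    balance_low = random.random() < 0.5
    churn = 1 if (inactive and balance_low and random.random() < 0.9) \
        else (1 if random.random() < 0.1 else 0)
    return ([1.0 if balance_low else 0.0, random.uniform(1, 4),
             1.0 if inactive else 0.0], churn)

data = [customer() for _ in range(300)]
train, test = data[:200], data[200:]

def knn_predict(x, k=5):
    # Majority vote among the k nearest training customers.
    nearest = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[:k]
    votes = [c for _, c in nearest]
    return 1 if sum(votes) * 2 > len(votes) else 0

preds = [knn_predict(x) for x, _ in test]
actual = [c for _, c in test]
accuracy = sum(p == a for p, a in zip(preds, actual)) / len(test)
tp = sum(p == 1 and a == 1 for p, a in zip(preds, actual))
recall = tp / max(1, sum(actual))
print(round(accuracy, 2), round(recall, 2))
```

Swapping `knn_predict` for logistic regression, a decision tree, or a random forest and comparing these same metrics mirrors the evaluation the paper performs.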
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315796
N. Pradhan, Vijaypal Singh Dhaka
In the medical field, new technologies are introduced day by day to reduce the effort required of doctors as well as patients. Before actual treatment, the defect in the affected body part must be accurately diagnosed. The current techniques for precisely locating a fractured or damaged bone are the Computerized Tomography (CT) scan and the Magnetic Resonance Imaging (MRI) scan; these are either unavailable in rural areas or costly compared to X-ray imaging. This motivates the design of a technique that converts a 2-Dimensional (2-D) image into its equivalent 3-Dimensional (3-D) views. For this purpose, the authors use a Generative Adversarial Network to implement a technique that takes an X-ray image as input and generates equivalent views from 0° to 360°.
Title: A Deep Learning Technique for Multi-view Prediction of Bone
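The adversarial setup behind the X-ray-to-multi-view idea can be illustrated with a deliberately tiny GAN: an affine generator tries to map noise to a target 1-D distribution while a logistic discriminator scores samples. This is a pure-Python toy on hypothetical scalar data, not images, showing the standard non-saturating GAN losses and hand-derived gradient updates.

```python
import math
import random

random.seed(0)
sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))

# Generator x = a*z + b (starts far from target); discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.05

for step in range(2000):
    x_real = random.gauss(4.0, 1.0)      # target distribution N(4, 1)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b

    # Discriminator step: minimize -log D(real) - log(1 - D(fake))
    dr, df = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr * ((dr - 1.0) * x_real + df * x_fake)
    c -= lr * ((dr - 1.0) + df)

    # Generator step: minimize -log D(fake); gradient flows through x_fake
    df = sigmoid(w * x_fake + c)
    dx = -(1.0 - df) * w
    a -= lr * dx * z
    b -= lr * dx

fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(round(fake_mean, 2))  # generator output drifts toward the target mean
```

The paper's model replaces the affine generator and logistic discriminator with convolutional networks over X-ray images, but the adversarial training loop has this same shape.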
Understanding the actual meaning of a natural-language document is easy for a human, but enabling a machine to do the same requires an accurate document representation, since a machine does not have the common sense a human has. For document classification, text must be converted into numerical vectors; recently, word-embedding approaches have given acceptable results for word representation at the global context level. In this study, the authors experiment with a multi-domain news dataset and compare the classification performance of a traditional bag-of-words model with a word2vec model. They find that word2vec gives promising results for large vocabularies at low dimensionality, which helps classify the data dynamically, as demonstrated in the experimental results section.
Title: Text document representation and classification using Convolution Neural Network
Authors: Shikha Mundra, Ankit Mundra, Anshul Saigal, Punit Gupta | DOI: 10.1109/PDGC50313.2020.9315752 | Pub Date: 2020-11-06
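The dimensionality contrast the study draws, bag-of-words growing with the vocabulary versus a fixed low-dimensional word2vec-style representation, can be sketched as follows. The 8-dimensional random word vectors below are a hypothetical stand-in for trained word2vec embeddings; a document vector is the average of its word vectors.

```python
import random

random.seed(0)

corpus = [
    "stocks rally as markets close higher",
    "team wins the final match of the season",
    "new phone launch draws large crowds",
]

# Bag-of-words: one dimension per vocabulary word, so size grows with vocabulary
vocab = sorted({w for doc in corpus for w in doc.split()})
def bow(doc):
    return [doc.split().count(w) for w in vocab]

# Word2vec-style: every word gets a fixed low-dimensional dense vector
DIM = 8
embedding = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in vocab}
def doc_vector(doc):
    words = doc.split()
    return [sum(embedding[w][i] for w in words) / len(words) for i in range(DIM)]

print(len(vocab), len(bow(corpus[0])), len(doc_vector(corpus[0])))
```

With a real news corpus the vocabulary runs to tens of thousands, so the bag-of-words vector explodes while the dense document vector stays at `DIM`; that compact, fixed-size vector is what a downstream CNN classifier consumes.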
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315786
H. Garg, A. Agrawal
The population is increasing day by day, and vehicles have become an important part of people's lives. For lack of time and for greater convenience, people generally prefer owning a vehicle, which has led to a sharp rise in the number of vehicles. As that number has grown, ensuring the safety of all vehicles has become a tedious task. Vehicle safety is an emerging issue, for which many complex and advanced systems have been created. These systems help address the growing problem of vehicle theft, which exploits the weaknesses of existing vehicle safety systems. Many vehicle safety systems have been proposed to date to secure vehicles without loopholes. This paper presents a comparative study and analysis of the various works and approaches proposed so far to address this threat.
Title: A Comparative Study on Vehicles Safety Systems
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315760
Hemant Petwal, Rinkle Rani
A multispecialty hospital (MSH) is a healthcare facility that provides medical and surgical services to patients. Multispecialty hospitals providing surgical care differ in performance, such as patient care and satisfaction, success rate, mortality rate, surgical complication rate, and waiting time. Since multispecialty hospitals are numerous and vary widely, it is challenging for a patient to select the best MSH providing quality surgical services. In this paper, selecting the best MSH is addressed as a multi-criteria decision-making (MCDM) problem, and an optimal MCDM framework for selecting the best-quality MSH for surgery is proposed. The framework is divided into two phases: an optimization phase and a decision-making phase. In the optimization phase, the multi-objective water cycle algorithm (MOWCA) is used to obtain Pareto-optimal MSHs; subsequently, in the decision-making phase, the Analytic Hierarchy Process (AHP) is used to select the best MSH from the obtained Pareto-optimal set. The proposed framework is compared with existing MCDM methods in terms of accuracy and validated through a case study on a real multispecialty hospital dataset from the Dehradun district of Uttarakhand, India. The results show that the proposed framework obtains more accurate results and outperforms the existing MCDM methods.
Title: An optimal Multi-Criteria Decision-Making Framework to select best Multispecialty Hospital for surgery
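The decision-making phase uses AHP; its core step, deriving priority weights from a pairwise comparison matrix, can be sketched with hypothetical judgments over three criteria (say, success rate, waiting time, and cost), using the common geometric-mean approximation of the principal eigenvector and a consistency check:

```python
# Pairwise comparisons on the Saaty 1-9 scale (hypothetical judgments):
# criterion 0 is moderately-to-strongly preferred over criteria 1 and 2.
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
n = 3

# Geometric-mean approximation of the principal eigenvector
gm = [(row[0] * row[1] * row[2]) ** (1.0 / n) for row in M]
weights = [g / sum(gm) for g in gm]

# Consistency ratio: lambda_max from M @ weights; random index RI = 0.58 for n = 3
Mw = [sum(M[i][j] * weights[j] for j in range(n)) for i in range(n)]
lambda_max = sum(Mw[i] / weights[i] for i in range(n)) / n
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.58
print([round(w, 3) for w in weights], round(CR, 3))
```

In the full framework, these criterion weights would score each Pareto-optimal MSH returned by MOWCA, and the highest-scoring hospital is selected; a CR below 0.1 indicates the judgments are acceptably consistent.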
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315834
M. Behera, S. Chakravarty, Apurwa Gourav, S. Dash
Nuclear cataract is a common eye disease that generally occurs at an older age; if it is not detected at an early stage, it can affect vision and the damage can become permanent. In this work, an automated model for detecting cataract is proposed based on image processing and machine learning techniques. The input to the proposed model is a set of retinal fundus images. For training, the image dataset consists of two types of images: healthy and cataract-affected. From each input retinal image, a binary image of the blood vessels is generated using image processing techniques such as filtering, segmentation, and thresholding. This set of binary images is used as the feature matrix for training a classifier based on the well-known Support Vector Machine (SVM). For validation and comparison of the model, different SVM kernels, linear, polynomial, and RBF, are applied and tested. Of these, the Radial Basis Function (RBF) based SVM performs best, with an overall accuracy of 95.2%, and is able to produce results in real time.
Title: Detection of Nuclear Cataract in Retinal Fundus Image using Radial Basis Function based SVM
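The RBF kernel behind the winning SVM variant measures similarity as exp(-γ·||x − y||²); a sketch on toy feature vectors (hypothetical numbers, not the paper's vessel features) shows the kernel-matrix properties an SVM relies on:

```python
import math

gamma = 0.5

def rbf(x, y):
    # K(x, y) = exp(-gamma * squared Euclidean distance)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

# Toy two-dimensional feature vectors; the first two are close, the third is far
samples = [[0.2, 0.9], [0.3, 0.8], [0.9, 0.1]]
K = [[rbf(a, b) for b in samples] for a in samples]

# Each point is maximally similar to itself, the matrix is symmetric,
# and nearby points score higher than distant ones.
print(round(K[0][1], 3), round(K[0][2], 3))
```

In the paper's pipeline, `samples` would be the binarized vessel images flattened into feature vectors, and the SVM optimizes over exactly this kind of kernel matrix; γ is a tuning parameter.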
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315784
Kashish Bhatia, B. Chhabra, Manish Kumar
The field of data science is expanding day by day, and more and more areas are applying its concepts. This paper applies data science to analyzing patterns of terrorism globally. We use the Global Terrorism Database (GTD), which holds information on terrorist attacks around the world from 1970 to 2017. The data was preprocessed, and we use the Hive Query Language (HiveQL) and Hadoop concepts to draw various predictions from the database. HiveQL runs integrated with Hadoop, which is installed on a Linux system. Various interesting findings were made from this database, expressed as queries run against it; the queries were framed by posing a few questions and finding suitable answers. The results obtained are presented graphically using Tableau and Python for the reader's better understanding. In the last section, various inferences are drawn from the results obtained.
Title: Data Analysis of Various Terrorism Activities Using Big Data Approaches on Global Terrorism Database
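The paper runs its questions as HiveQL queries on Hadoop; the same aggregation pattern can be illustrated with Python's built-in sqlite3 as a lightweight stand-in (toy rows, not the real GTD), since the SQL shape, a GROUP BY with an aggregate and an ORDER BY, is what HiveQL executes at cluster scale.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Tiny stand-in for the GTD schema: year, country, and casualties per attack
cur.execute("CREATE TABLE gtd (year INTEGER, country TEXT, killed INTEGER)")
rows = [(1970, "A", 2), (1971, "A", 5), (1971, "B", 1), (2017, "B", 4)]
cur.executemany("INSERT INTO gtd VALUES (?, ?, ?)", rows)

# "Which countries saw the most casualties?" as a GROUP BY aggregation,
# the same query shape the paper issues in HiveQL
cur.execute(
    "SELECT country, SUM(killed) FROM gtd "
    "GROUP BY country ORDER BY SUM(killed) DESC"
)
result = cur.fetchall()
print(result)  # [('A', 7), ('B', 5)]
```

Each of the paper's framed questions (attacks per year, deadliest groups, most-targeted regions) maps onto a query of this form, with the results then visualized in Tableau or Python.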
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315813
K. Sudheer Reddy, C. Santhosh Kumar, K. Mamatha
The success of a software project is determined by how well the initial estimates were made; hence, effort and schedule estimates are essential in the software project planning stages. Portal and Content Management (PCM) projects have faced critical challenges in estimating effort, and the organization adopted the Function Point Analysis (FP) technique to overcome them, further developing guidelines for applying FP to PCM projects. The key objective of this paper is to provide guidelines for estimating the effort of PCM projects using FP, so as to avoid cost overruns and unproductive use of resources. Experimental results show that the proposed methodology yields better results by addressing the potential challenges, leading to better estimates of project cost, optimal resource utilization, on-time project delivery, and other benefits.
Title: Function Point Estimation for Portal and Content Management Projects
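The standard Function Point computation such guidelines build on can be sketched as follows, with hypothetical counts for a PCM project. The weights are the usual IFPUG average-complexity weights, and the value adjustment factor (VAF) follows the standard formula 0.65 + 0.01 × (sum of the 14 general system characteristics); the paper's PCM-specific guidance concerns how to arrive at such counts, which this sketch simply assumes.

```python
# Unadjusted function points: component counts x average-complexity weights
weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
counts  = {"EI": 10, "EO": 8, "EQ": 6, "ILF": 5, "EIF": 3}  # hypothetical project
ufp = sum(counts[k] * weights[k] for k in weights)

# Value adjustment factor from the 14 general system characteristics (rated 0-5)
gsc_total = 30                      # hypothetical ratings summing to 30
vaf = 0.65 + 0.01 * gsc_total
afp = ufp * vaf                     # adjusted function points

# Effort then follows by multiplying AFP with an organization-specific
# productivity rate (hours per function point).
print(ufp, vaf, round(afp, 2))     # 175 0.95 166.25
```

Because AFP is derived from counted components rather than guessed lines of code, it gives the planning-stage estimate the abstract argues is needed to avoid cost overruns.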