Advanced Image Segmentation Technique using Improved K Means Clustering Algorithm with Pixel Potential
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315743
Pranab Sharma
Image segmentation is the process of partitioning an image into disjoint segments such that the elements within each segment are similar. It has wide applications in medicine and in the photography industry. Among the many ways to perform image segmentation, the K-Means clustering algorithm is well known for its simplicity and effectiveness. This paper presents an improved variant of the K-Means clustering algorithm. The approach rests on applying partial contrast stretching, eliminating the randomness in choosing the initial cluster centres for K-Means, and removing unwanted noise with median filtering to obtain a high-quality output image.
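A minimal sketch of the pipeline the abstract describes, assuming OpenCV and scikit-learn; the stretch percentiles, the percentile-based seeding rule, and k = 4 are illustrative stand-ins, not the paper's actual choices:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def segment(path, k=4):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Partial contrast stretching: map the [p5, p95] intensity band to [0, 255].
    lo, hi = np.percentile(img, (5, 95))
    stretched = np.clip((img - lo) * 255.0 / (hi - lo + 1e-9), 0, 255).astype(np.uint8)

    # Median filtering to suppress salt-and-pepper noise before clustering.
    denoised = cv2.medianBlur(stretched, 3)

    # Deterministic initial centres: k evenly spaced intensity percentiles,
    # one plausible way to remove the randomness of standard K-Means seeding.
    pixels = denoised.reshape(-1, 1).astype(np.float64)
    init = np.percentile(pixels, np.linspace(0, 100, k)).reshape(-1, 1)

    labels = KMeans(n_clusters=k, init=init, n_init=1).fit_predict(pixels)
    return labels.reshape(img.shape)
```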
{"title":"Advanced Image Segmentation Technique using Improved K Means Clustering Algorithm with Pixel Potential","authors":"Pranab Sharma","doi":"10.1109/PDGC50313.2020.9315743","DOIUrl":"https://doi.org/10.1109/PDGC50313.2020.9315743","url":null,"abstract":"Image segmentation is the method of partitioning, or segmenting, different parts of the image in such a way that all segments are disjoint and each has similar elements. This process has wide applications in the field of medicine and photography industry. There are many ways in which image segmentation can be performed, from which K-Means clustering algorithm is well renowned due to its simplicity and effectiveness to perform the task. In this paper, an improved variant of K-Means Clustering algorithm is presented. The algorithm rests on applying partial contrast stretching, eliminating randomness in choosing the initial cluster centres for K-means algorithm, and removing the unwanted noise from median filters to obtain a high-quality image output.","PeriodicalId":347216,"journal":{"name":"2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128581283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survey on Recent Cluster Originated Energy Efficiency Routing Protocols For Air Pollution Monitoring Using WSN
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315827
Ekta Dixit, Vandana Jindal
Wireless sensor networks (WSNs) are presently an active area of interest owing to their many applications; environmental monitoring schemes built on this emerging technology help detect and identify harmful agents. Air pollution is a major problem affecting living creatures. This paper surveys the use of WSNs for air pollution monitoring, focusing on approaches to detecting air pollution and the methods that support such detection. The architecture of a wireless air pollution monitoring system is described along with its interrelated components, and an energy-efficient routing protocol for such a system is discussed. A comparative analysis of heterogeneous and homogeneous protocols for improving the network lifetime of a WSN is also presented. Energy efficiency remains the major constraint behind the restricted lifespan of a WSN, so the main goal of this survey is to find ways to reduce energy consumption and to improve the network lifetime under both types of protocol.
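For orientation, the sketch below shows the first-order radio energy model commonly used when comparing the network lifetime of cluster-based WSN routing protocols; the parameter values are the usual textbook ones, not figures taken from this survey:

```python
# First-order radio model: transmit cost grows as d^2 (free space) below the
# crossover distance d0 and as d^4 (multipath) above it.
E_ELEC = 50e-9        # J/bit spent by the transmit/receive electronics
EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier (d < D0)
EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier (d >= D0)
D0 = (EPS_FS / EPS_MP) ** 0.5   # crossover distance, roughly 87 m

def tx_energy(bits: int, d: float) -> float:
    """Energy to transmit `bits` over a distance of d metres."""
    if d < D0:
        return bits * (E_ELEC + EPS_FS * d ** 2)
    return bits * (E_ELEC + EPS_MP * d ** 4)

def rx_energy(bits: int) -> float:
    """Energy to receive `bits`."""
    return bits * E_ELEC

# Example: cost of one 4000-bit packet sent 60 m to a cluster head.
print(tx_energy(4000, 60))  # ~0.000344 J
```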
{"title":"Survey on Recent Cluster Originated Energy Efficiency Routing Protocols For Air Pollution Monitoring Using WSN","authors":"Ekta Dixit, Vandana Jindal","doi":"10.1109/PDGC50313.2020.9315827","DOIUrl":"https://doi.org/10.1109/PDGC50313.2020.9315827","url":null,"abstract":"Presently, the sensor network is an active region of interest due to various applications. The assistance and identification of the harmful objects are assisted by the generation of the environmental monitoring schemes in emerging technology. Air Pollution is the main problem that affects living creatures. In this paper, the research on the use of WSN in air pollution monitoring has been done. The main focus of the research has been done on the idea of the detection of air pollution and related methods that helped in the detection of air pollution. Moreover, the architecture of the wireless air pollution monitoring system has been described along with the interrelated components. Also, an energy-efficient routing protocol in the wireless air pollution monitoring system has been discussed. Additionally, the comparative analysis of heterogeneous and homogeneous protocol for improving the network lifetime of WSN has been done. However, energy efficiency is the maj or restraint of the restricted lifespan of WSN. Consequently, the main goal of the current research is to find the solution to decrease the energy consumption issue and a way to improve the network lifetime of both the protocols.","PeriodicalId":347216,"journal":{"name":"2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121606429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Text document representation and classification using Convolution Neural Network
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315752
Shikha Mundra, Ankit Mundra, Anshul Saigal, Punit Gupta
Understanding the actual meaning of a written natural-language document is easy for a human, but enabling a machine to do the same task requires an accurate document representation, since a machine does not have the common sense a human has. For document classification, text must be converted into a numerical vector, and word-embedding approaches have recently given acceptable results for word representation at the global context level. In this study, the authors experiment with a multi-domain news dataset and compare the classification performance of the traditional bag-of-words model against the word2vec model, finding that word2vec gives promising results for a large vocabulary at low dimensionality, which helps classify the data dynamically, as demonstrated in the experimental-results section.
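A minimal sketch of the bag-of-words versus word2vec comparison, assuming scikit-learn and gensim; the toy snippets, the vector size, and the logistic-regression classifier are hypothetical stand-ins (the paper itself classifies with a CNN):

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical two-domain news snippets (0 = sport, 1 = finance).
docs = ["the team won the final match",
        "striker scores twice in the derby",
        "coach praises the squad after the win",
        "stocks rallied after the earnings report",
        "the central bank raised interest rates",
        "shares fell as inflation data surprised markets"]
labels = [0, 0, 0, 1, 1, 1]

# Bag of words: one sparse dimension per vocabulary term.
bow = CountVectorizer().fit_transform(docs)

# word2vec: dense 50-d embeddings; a document becomes the mean of its word vectors.
tokens = [d.split() for d in docs]
w2v = Word2Vec(tokens, vector_size=50, min_count=1, seed=0)
doc_vecs = np.array([w2v.wv[ts].mean(axis=0) for ts in tokens])

for name, X in (("bag-of-words", bow), ("word2vec", doc_vecs)):
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=3)
    print(f"{name}: dims={X.shape[1]}, accuracy={acc.mean():.2f}")
```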
{"title":"Text document representation and classification using Convolution Neural Network","authors":"Shikha Mundra, Ankit Mundra, Anshul Saigal, Punit Gupta","doi":"10.1109/PDGC50313.2020.9315752","DOIUrl":"https://doi.org/10.1109/PDGC50313.2020.9315752","url":null,"abstract":"Understanding Actual meaning of a natural written language document is easy for a human but to enable a machine to do the same task require an accurate document representation as a machine do not have the same common sense as human have. For the task of document classification, it is required that text must be converted to numerical vector and recently, word embedding approaches are giving acceptable results in terms of word representation at global context level. In this study author has experimented with news dataset of multiple domain and compared the classification performance obtained from traditional bag of word model to word2vec model and found that word2vec is giving promising results in case of large vocabulary with low dimensionality which will help to classify the data dynamically as demonstrated in section experimental result.","PeriodicalId":347216,"journal":{"name":"2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126810939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Comparative Study on Vehicles Safety Systems
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315786
H. Garg, A. Agrawal
As the population increases day by day, vehicles have become an important part of a person's life; for lack of time and for convenience, people generally prefer owning a vehicle, which has led to a great rise in the number of vehicles on the road. With numbers this large, ensuring the safety of every vehicle has become a tedious task. Vehicle safety is thus an emerging issue for which many complex and advanced systems have been created. These systems help tackle the growing problem of vehicle theft, which exploits the weaknesses of existing vehicle safety systems. Many vehicle safety systems have been proposed to date to secure vehicles without leaving loopholes. This paper presents a comparative study and analysis of the various works and approaches proposed so far to address this threat.
{"title":"A Comparative Study on Vehicles Safety Systems","authors":"H. Garg, A. Agrawal","doi":"10.1109/PDGC50313.2020.9315786","DOIUrl":"https://doi.org/10.1109/PDGC50313.2020.9315786","url":null,"abstract":"As population is increasing day by day and vehicles have become an important part of a person's life. Due to lack of time and for more convenience, generally people prefer owning a vehicle. This has lead to a great rise in number of vehicles. As the number has increased to a larger extent, ensuring safety of all the vehicles have become a tedious task. Vehicle safety has become an emerging issue, for which many complex and advanced systems are created. These systems helps to handle a large problem of vehicle theft that is increasing day by day, exploiting the weaknesses of the vehicle safety systems. Many vehicle safety systems have been proposed till date to ensure the safety of the vehicles without any loopholes. This paper presents a comparative study and analysis of various works and approaches that are proposed till date to address this threat to a great extent.","PeriodicalId":347216,"journal":{"name":"2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127799367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Forgery Detection For High-Resolution Digital Images Using FCM And PBFOA Algorithm
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315780
S. Kaur, Nidhi Bhatla
Image forgery detection is an area of research in the fields of biometrics and forensics. Digital pictures are a resource of data, and in today's world of technology, image-processing software tools have developed to the point of easily generating and modifying digital images. With current technology it is simple to create an image forgery by adding or removing components of a picture, which leads to image tampering. Copy-move image forgery is created by copying an element and pasting it within the same image; copy-move forgery has therefore become a research focus of the image-forensics community. Various methods have been implemented to detect digital image forgery, but some issues, such as time complexity and fake or blurred images, remain to be resolved. Existing research used block- and feature-based approaches to extract the forged area from an image with the SIFT and RANSAC algorithms; a forgery dataset of 80 pictures was collected, achieving accuracy of up to 95%. In this work, the PBFOA method is implemented to optimize and extract features using a component-analysis method, while FCM is used to segment the input image. PBFOA is an optimization process that selects valuable features based on the calculation of a fitness function. The method re-verifies the feature instances under two conditions, slower and faster: the feature set is first spread across the whole system, the fast condition selects and eliminates valuable features one at a time, and a reproduction phase then uses the fitness function to recover the feature values and detect the forgery information in the uploaded image; the BFOA steps are described in detail in this paper. The simulation is set up in MATLAB 2016a and improves the accuracy rate and image-quality parameters. Performance is analysed with the FAR, FRR, ACC, precision, and recall metrics and compared against existing methods.
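A minimal NumPy sketch of fuzzy c-means (FCM), the segmentation step named above; m = 2 and the tolerance are conventional defaults, and the PBFOA feature-selection stage is not reproduced here:

```python
import numpy as np

def fcm(X, c=3, m=2.0, tol=1e-5, max_iter=100, seed=0):
    """Cluster the rows of X into c fuzzy clusters; returns (memberships, centres)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every point to every centre; epsilon avoids divide-by-zero.
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-10
        # Standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1)).
        p = 2.0 / (m - 1.0)
        U_new = (d ** -p) / (d ** -p).sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            break
        U = U_new
    return U_new, centres

# Example: three well-separated grayscale intensity groups.
pixels = np.concatenate([np.full(50, 30.0), np.full(50, 120.0), np.full(50, 220.0)])
U, centres = fcm(pixels[:, None])
print(centres.ravel())   # centres land near 30, 120, 220
```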
{"title":"Forgery Detection For High-Resolution Digital Images Using FCM And PBFOAAlgorithm","authors":"S. Kaur, Nidhi Bhatla","doi":"10.1109/PDGC50313.2020.9315780","DOIUrl":"https://doi.org/10.1109/PDGC50313.2020.9315780","url":null,"abstract":"Image forgery detection is the area of research in the field of biometric and forensics. Digital pictures are the resource of data. In the present world of technology, image processing software tools have developed to generate and modify digital images from one location to another. With the current technology, it is simple to establish image forgery by addition and subtraction of the components from the pictures that lead to image interfering. Copy-move image forgery is created by copying and pasting the element in a similar image. Hence, copy-move forgery has become an area of research in the image forensic unit. Various methods have been implemented to detect digital image forgery. Some issues still required to resolve like time complexity, fake, and blurred image. In existing research, the block and feature-based approach used to remove a forged area from the image using SIFT and RANSAC algorithm. The forgery dataset of the 80 pictures collected to achieve accuracy of up to 95%. In the research work, the PBFOA method has been implemented to optimize and extract the features using the component analysis method. FCM is used for image segmentation in the input image. PBFOA is based on an optimization process to select valuable features based on the calculation of the fitness function. In this method, two steps are used to re-verify the instance, features (i) Slower and faster condition. BFOA steps are described in detail in this research paper. Initial steps, Spread the feature set in the whole system. In the rapid condition selected and to eliminate the valuable features one at a time, then reproduction phase is implemented with the help of the fitness function to recover the feature values and detect the forgery information in the uploaded image. The simulation setup using MATLAB 2016a version and improve the accuracy rate and image quality parameter. Performance analysis depends on the proposed metrics FAR, FRR, ACC, Precision, Recall, and compared with the existing methods.","PeriodicalId":347216,"journal":{"name":"2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130639518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparative Analysis of Clustering Techniques for Deployment of Roadside Units
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315327
Kumar Satyajeet, Kavita Pandey
Today, with the ever-growing demand for the internet and the constant transition to new technology, in-vehicle systems also require upgrading. This study explores finding the optimal positioning of roadside units in a vehicular ad hoc network (VANET) using artificial intelligence, which is transforming every domain to a new level. Machine learning can help predict the optimal position of a roadside unit from the volume of vehicles and the longitude and latitude of the traffic. Various clustering techniques, namely K-Means, Mean Shift, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Expectation-Maximization clustering (GMM), and agglomerative hierarchical clustering, are applied to vehicle data consisting of taxi longitude, latitude, and volume. The data were collected from NYC taxi records (New York) from January 2016 to June 2016. The results show that machine learning provides excellent position predictions.
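A minimal sketch of the clustering comparison, assuming scikit-learn; the synthetic pickup points and parameter settings are illustrative, not the paper's NYC-taxi configuration:

```python
import numpy as np
from sklearn.cluster import DBSCAN, AgglomerativeClustering, KMeans, MeanShift
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for (longitude, latitude) pickup records around five hotspots.
hotspots = rng.uniform([-74.02, 40.70], [-73.93, 40.80], size=(5, 2))
points = np.vstack([h + rng.normal(0, 0.005, size=(200, 2)) for h in hotspots])

models = {
    "K-Means": KMeans(n_clusters=5, n_init=10),
    "Mean Shift": MeanShift(),
    "DBSCAN": DBSCAN(eps=0.01, min_samples=20),
    "GMM (EM)": GaussianMixture(n_components=5),
    "Agglomerative": AgglomerativeClustering(n_clusters=5),
}
for name, model in models.items():
    cluster_ids = model.fit_predict(points)
    # Candidate RSU sites: mean coordinate of each discovered cluster
    # (DBSCAN's label -1 marks noise and is skipped).
    sites = [points[cluster_ids == c].mean(axis=0) for c in set(cluster_ids) if c != -1]
    print(f"{name}: {len(sites)} candidate RSU positions")
```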
An optimal Multi-Criteria Decision-Making Framework to select best Multispecialty Hospital for surgery
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315760
Hemant Petwal, Rinkle Rani
A multispecialty hospital (MSH) is a healthcare facility that provides medical and surgical services to patients. Multispecialty hospitals providing surgical care differ in performance measures such as patient care and satisfaction, success rate, mortality rate, surgical complication rate, and waiting time. Since multispecialty hospitals are numerous and vary widely, it becomes challenging for a patient to select the best MSH providing quality surgical services. In this paper, selecting the best MSH is addressed as a multicriteria decision-making (MCDM) problem, and an optimal MCDM framework for selecting the best, high-quality MSH for surgery is proposed. The framework is divided into two phases, optimization and decision-making. In the optimization phase, the multi-objective water cycle algorithm (MOWCA) is used to obtain Pareto-optimal MSHs; in the decision-making phase, AHP is utilized to select the best MSH from the obtained Pareto-optimal set. The proposed framework is compared with existing MCDM methods in terms of accuracy and is validated through a case study on a real multispecialty hospital dataset from the Dehradun district of Uttarakhand, India. The results show that the framework obtains more accurate results and outperforms the existing MCDM methods.
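A minimal sketch of the AHP step in the decision-making phase, assuming NumPy; the 3x3 pairwise-comparison matrix over hypothetical criteria (cost, success rate, waiting time) is invented for illustration:

```python
import numpy as np

# Pairwise-comparison matrix on Saaty's 1-9 scale; A must be reciprocal,
# i.e. A[j, i] = 1 / A[i, j].
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
n = A.shape[0]

# Criterion weights = principal eigenvector of A, normalised to sum to 1.
vals, vecs = np.linalg.eig(A)
idx = np.argmax(np.real(vals))
w = np.abs(np.real(vecs[:, idx]))
w /= w.sum()

# Consistency check: CR < 0.1 is the usual acceptability threshold
# (RI = 0.58 is Saaty's random index for n = 3).
lam = np.real(vals[idx])
cr = ((lam - n) / (n - 1)) / 0.58
print("weights:", w.round(3), "CR:", round(cr, 3))
```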
{"title":"An optimal Multi-Criteria Decision-Making Framework to select best Multispecialty Hospital for surgery","authors":"Hemant Petwal, Rinkle Rani","doi":"10.1109/PDGC50313.2020.9315760","DOIUrl":"https://doi.org/10.1109/PDGC50313.2020.9315760","url":null,"abstract":"A multispecialty hospital (MSH) is a healthcare facility that provides medical and surgical services to patients. Multispecialty hospitals providing surgical care differ in their performance, such as patient care and satisfaction, success rate, mortality rate, surgical complication rate, waiting time, etc. Since multispecialty hospitals vary in large numbers, it becomes challenging for a patient to select the best MSH providing quality surgical services. In this paper, the challenge of selecting the best MSH is addressed as a problem of multicriteria decision-making (MCDM). This paper proposes an optimal MCDM framework for selecting the best and quality MSH for surgery. The proposed framework is divided into two phases: The optimization phase and the decision-making phase. In the optimization phase, the multi-objective water cycle algorithm (MOWCA) is used to obtain Pareto-optimal MSHs. Subsequently, in the decision-making phase, AHP is utilized to select the best MSH from the obtained Pareto-optimal MSHs. The proposed framework is compared with existing MCDM methods in terms of accuracy. Finally, the proposed framework is validated through a case study of a real multispecialty hospital dataset obtained from the Dehradun district of Uttarakhand, India. The results show that the proposed framework obtained more accurate results and outperforms the existing MCDM method.","PeriodicalId":347216,"journal":{"name":"2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129204830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection of Nuclear Cataract in Retinal Fundus Image using Radial Basis Function based SVM
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315834
M. Behera, S. Chakravarty, Apurwa Gourav, S. Dash
Nuclear cataract is a common eye disease that generally occurs at an older age; if it is not detected at an early stage, it may impair vision, and the impairment can become permanent. In this work, an automated model based on image processing and machine learning techniques is proposed to detect cataract. The input to the model is a set of retinal fundus images; for training, the dataset consists of two types of images, healthy and cataract-affected. From each input retinal image, a binary image of the blood vessels is generated using image-processing techniques such as filtering, segmentation, and thresholding. This set of binary images is used as the feature matrix for a classifier built with the well-known support vector machine (SVM). For validation and comparison of the model, different SVM kernels, namely linear, polynomial, and RBF, are applied and tested. Of these, the radial basis function (RBF) SVM performs best, with an overall accuracy of 95.2%, and is able to produce results in real time.
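A minimal sketch of the kernel comparison, assuming scikit-learn; the random feature vectors are hypothetical stand-ins for the flattened binary vessel maps, which are not available here:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-in: 200 flattened 32x32 binary vessel maps and labels
# (0 = healthy, 1 = cataract-affected).
X = rng.integers(0, 2, size=(200, 32 * 32)).astype(float)
y = rng.integers(0, 2, size=200)

for kernel in ("linear", "poly", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{kernel}: mean accuracy = {scores.mean():.3f}")
```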
{"title":"Detection of Nuclear Cataract in Retinal Fundus Image using RadialBasis FunctionbasedSVM","authors":"M. Behera, S. Chakravarty, Apurwa Gourav, S. Dash","doi":"10.1109/PDGC50313.2020.9315834","DOIUrl":"https://doi.org/10.1109/PDGC50313.2020.9315834","url":null,"abstract":"Nuclear Cataract is a common eye disease that generally occurs at elder age. But if it's not detected at its earlier state, then it may affect vision and can live permanently. In this work, to detect the cataract an automated model proposed based on image processing and machine learning techniques. The input to the proposed model is, a set of fundus retinal images. For training the model, the image dataset consists of two types ofimages healthy and cataract affected. From each input retinal image a binary image, consisting of blood vessels is generated, using image processing techniques like image Filtration, segmentation and thresholding. These set of binary images are used as the feature matrix for defining the classifier by using a well-known machine learning technique Support vector machine (SVM). For validation and compression of the model, different kernels of SVM like linear, polynomial and RBF are applied and tested. Out of all, Radial Basis Function (RBF) based SVM performs good with an overall accuracy of 95.2 % and able to produce result in real time.","PeriodicalId":347216,"journal":{"name":"2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115776622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Analysis of Various Terrorism Activities Using Big Data Approaches on Global Terrorism Database
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315784
Kashish Bhatia, B. Chhabra, Manish Kumar
The field of data science is widening day by day, and more areas are adopting it. This paper applies data science to analyze patterns of terrorism globally, using the Global Terrorism Database (GTD), which holds information on terrorist attacks around the world from 1970 to 2017. The data were preprocessed, and Hive Query Language (HiveQL) and Hadoop concepts were used to derive various predictions from the database; HiveQL runs integrated with Hadoop installed on a Linux system. Various interesting findings were made from this database, expressed as queries run against it; the queries were decided upon by framing a few questions and finding suitable answers. The results are presented graphically using Tableau and Python for the reader's better understanding, and in the last section various inferences are drawn from them.
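A minimal sketch of running one such question as a HiveQL query from Python, assuming the PyHive client and a Hive table named gtd with the GTD's iyear and country_txt columns (an assumed setup, not the paper's exact environment):

```python
from pyhive import hive

# Connect to HiveServer2 running on the Hadoop box (host/port assumed).
conn = hive.Connection(host="localhost", port=10000)
cur = conn.cursor()

# Example question: which ten countries saw the most attacks, 1970-2017?
cur.execute("""
    SELECT country_txt, COUNT(*) AS attacks
    FROM gtd
    WHERE iyear BETWEEN 1970 AND 2017
    GROUP BY country_txt
    ORDER BY attacks DESC
    LIMIT 10
""")
for country, attacks in cur.fetchall():
    print(country, attacks)
```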
Function Point Estimation for Portal and Content Management Projects
Pub Date: 2020-11-06 | DOI: 10.1109/PDGC50313.2020.9315813
K. Sudheer Reddy, C. Santhosh Kumar, K. Mamatha
The success of a software project is determined by how accurate its initial estimates are; effort and schedule estimates are therefore essential in the software project planning stages. Portal and Content Management (PCM) projects have faced critical challenges in estimating effort, and the organization studied here adopted the Function Point Analysis (FPA) technique to overcome them, developing guidelines for applying FPA to PCM projects. The key objective of this paper is to provide guidelines for estimating the effort of PCM projects with FPA so as to avoid cost overruns and unproductive use of resources. Experimental results show that the proposed methodology yields better results by addressing the potential challenges, and that it leads to better estimates of project cost, optimal resource utilization, on-time project delivery, and more.
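A minimal sketch of standard IFPUG-style function point arithmetic, for orientation; the component counts, the average weights, the GSC ratings, and the hours-per-FP productivity rate are illustrative, not the paper's PCM-specific guidelines:

```python
# Average complexity weights per function type (IFPUG's usual average values).
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

# Hypothetical counts for a small portal project.
counts = {"EI": 12, "EO": 8, "EQ": 6, "ILF": 5, "EIF": 3}

ufp = sum(counts[t] * WEIGHTS[t] for t in counts)        # unadjusted FP

# Value adjustment factor from the 14 general system characteristics,
# each rated 0-5: VAF = 0.65 + 0.01 * sum(ratings).
gsc_ratings = [3] * 14                                   # hypothetical ratings
vaf = 0.65 + 0.01 * sum(gsc_ratings)

afp = ufp * vaf                                          # adjusted FP
effort_hours = afp * 8                                   # assumed 8 h per FP

print(f"UFP={ufp}, VAF={vaf:.2f}, AFP={afp:.1f}, effort~{effort_hours:.0f} h")
```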
{"title":"Function Point Estimation for Portal and Content Management Projects","authors":"K. Sudheer Reddy, C. Santhosh Kumar, K. Mamatha","doi":"10.1109/PDGC50313.2020.9315813","DOIUrl":"https://doi.org/10.1109/PDGC50313.2020.9315813","url":null,"abstract":"The success of a Software project is determined by how well the initial estimates have done. Hence, the effort and schedule estimates are essential in Software Project planning stages. Portal and Content Management (PCM) projects have been facing critical challenges while estimating the effort. The Organization has adopted the Function Point Analysis technique (FP) to overcome such challenges. Further, the organization has developed guidelines to apply FP on PCM projects. The key objective of this paper is to provide guidelines to estimate the effort of PCM projects by employing FP to avoid cost overruns and unproductive use of resources. Experimental results are proved that the proposed methodology is yielding better results by fixing the potential challenges. It is further ensured that the methodology leads to better estimate of the project cost, optimal resource utilization, on-time project delivery and others.","PeriodicalId":347216,"journal":{"name":"2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115291264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}