
2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC): Latest Publications

Advanced Image Segmentation Technique using Improved K Means Clustering Algorithm with Pixel Potential
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315743
Pranab Sharma
Image segmentation is the process of partitioning an image into disjoint segments such that the pixels within each segment share similar characteristics. The technique has wide applications in medicine and in the photography industry. Among the many approaches to image segmentation, the K-Means clustering algorithm is well known for its simplicity and effectiveness. In this paper, an improved variant of the K-Means clustering algorithm is presented. The algorithm applies partial contrast stretching, eliminates the randomness in choosing the initial cluster centres for K-Means, and removes unwanted noise with median filters to obtain a high-quality segmented image.
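A minimal sketch of the pipeline described above, assuming a grayscale image and using scikit-learn's KMeans with deterministic, evenly spaced intensity seeds in place of random initialisation; the percentile range, filter size, and function names are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

def segment_image(gray, k=3):
    """Illustrative pipeline: contrast stretch -> denoise -> K-Means on pixel intensities."""
    # Partial contrast stretching: map the 2nd-98th percentile range to [0, 255].
    lo, hi = np.percentile(gray, (2, 98))
    stretched = np.clip((gray - lo) * 255.0 / (hi - lo + 1e-9), 0, 255)

    # Median filtering removes salt-and-pepper noise before clustering.
    denoised = median_filter(stretched, size=3)

    # Deterministic initial centres spread evenly over the intensity range,
    # one simple way to remove the randomness of default initialisation.
    init = np.linspace(denoised.min(), denoised.max(), k).reshape(-1, 1)
    km = KMeans(n_clusters=k, init=init, n_init=1).fit(denoised.reshape(-1, 1))

    # Each pixel gets the label of its cluster; reshape back to the image grid.
    return km.labels_.reshape(gray.shape)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64)).astype(float)  # stand-in for a real image
    labels = segment_image(img, k=3)
    print(labels.shape, np.unique(labels))
```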
Citations: 0
Survey on Recent Cluster Originated Energy Efficiency Routing Protocols For Air Pollution Monitoring Using WSN
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315827
Ekta Dixit, Vandana Jindal
Presently, the sensor network is an active area of research interest because of its wide range of applications. Environmental monitoring schemes built on this emerging technology assist in detecting and identifying harmful substances, and air pollution in particular is a major problem affecting living creatures. This paper surveys the use of WSN in air pollution monitoring. The main focus is on approaches to detecting air pollution and the related methods that support such detection. The architecture of a wireless air pollution monitoring system and its interrelated components are described, and energy-efficient routing protocols for such systems are discussed. A comparative analysis of heterogeneous and homogeneous protocols for improving the network lifetime of a WSN is also presented. Energy efficiency remains the major constraint behind the restricted lifespan of a WSN; consequently, the main goal of current research is to reduce energy consumption and improve the network lifetime under both classes of protocols.
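The survey centres on cluster-based, energy-efficient routing. As a concrete point of reference, below is a small sketch of the classic LEACH-style cluster-head election rule that such protocols commonly build on; LEACH is not named in the abstract, so treat this purely as an illustrative baseline with assumed parameter values.

```python
import random

def leach_threshold(p, current_round):
    """LEACH threshold T(n) for nodes that have not yet been cluster heads
    in the current epoch of 1/p rounds; p is the desired head fraction."""
    return p / (1 - p * (current_round % int(1 / p)))

def elect_cluster_heads(node_ids, p=0.05, current_round=0):
    # Each eligible node draws a uniform random number and becomes a
    # cluster head for this round if the draw falls below the threshold.
    t = leach_threshold(p, current_round)
    return [n for n in node_ids if random.random() < t]

if __name__ == "__main__":
    heads = elect_cluster_heads(list(range(100)), p=0.05, current_round=3)
    print("cluster heads this round:", heads)
```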
Citations: 1
Text document representation and classification using Convolution Neural Network
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315752
Shikha Mundra, Ankit Mundra, Anshul Saigal, Punit Gupta
Understanding the actual meaning of a document written in natural language is easy for a human, but enabling a machine to do the same requires an accurate document representation, since a machine does not have the common sense a human has. For document classification, the text must be converted into numerical vectors, and word embedding approaches have recently given acceptable results for representing words in their global context. In this study, the authors experiment with a news dataset spanning multiple domains and compare the classification performance of the traditional bag-of-words model with that of a word2vec model. The results show that word2vec gives promising results for large vocabularies with low dimensionality, which helps classify the data dynamically, as demonstrated in the experimental results section.
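A minimal sketch of the representation comparison described above, on a toy stand-in for the news dataset. The paper's classifier is a convolutional neural network; to keep the example short, a logistic regression classifier is used here purely to contrast the two feature spaces, and the documents, dimensions, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

docs = ["stocks rally on strong earnings", "central bank raises interest rates",
        "shares fall after weak profit report", "team wins the championship final",
        "player scores twice in derby", "coach praises squad after victory"]
labels = [0, 0, 0, 1, 1, 1]  # 0 = business, 1 = sports (toy stand-in for the news data)

# Bag-of-words representation: high-dimensional sparse word counts.
bow = CountVectorizer().fit_transform(docs).toarray()

# word2vec representation: average the word vectors of each document.
tokenized = [d.split() for d in docs]
w2v = Word2Vec(sentences=tokenized, vector_size=25, window=3, min_count=1, seed=1)
emb = np.array([np.mean([w2v.wv[w] for w in toks], axis=0) for toks in tokenized])

for name, X in [("bag-of-words", bow), ("word2vec", emb)]:
    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=1/3,
                                          random_state=0, stratify=labels)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    print(name, "dimensionality:", X.shape[1], "test accuracy:", clf.score(Xte, yte))
```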
Citations: 0
A Comparative Study on Vehicles Safety Systems
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315786
H. Garg, A. Agrawal
As the population increases day by day, vehicles have become an important part of people's lives. For lack of time and for greater convenience, people generally prefer owning a vehicle, which has led to a great rise in the number of vehicles. As that number has grown, ensuring the safety of all these vehicles has become a tedious task. Vehicle safety is thus an emerging issue for which many complex and advanced systems have been created. These systems help tackle the growing problem of vehicle theft, which exploits weaknesses in existing vehicle safety systems. Many vehicle safety systems have been proposed to date to secure vehicles without leaving loopholes. This paper presents a comparative study and analysis of the various works and approaches proposed so far to address this threat.
Citations: 3
Forgery Detection for High-Resolution Digital Images Using FCM and PBFOA Algorithm
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315780
S. Kaur, Nidhi Bhatla
Image forgery detection is an area of research in the fields of biometrics and forensics. Digital pictures are a resource of data, and in the present world of technology, image processing software tools can generate and modify digital images and move content from one location to another. With current technology, it is simple to create an image forgery by adding or removing components of a picture, leading to image tampering. Copy-move image forgery is created by copying an element and pasting it elsewhere in the same image; hence, copy-move forgery has become an area of research for image forensics. Various methods have been implemented to detect digital image forgery, but issues such as time complexity and fake or blurred images still need to be resolved. In existing research, block- and feature-based approaches using the SIFT and RANSAC algorithms have been applied to locate the forged area in an image, and a forgery dataset of 80 pictures was collected, achieving accuracy of up to 95%. In this work, the PBFOA method is implemented to optimize and extract features using component analysis, and FCM is used to segment the input image. PBFOA is an optimization process that selects valuable features based on the calculation of a fitness function. The method re-verifies instances and features in two steps, under slower and faster conditions, and the BFOA steps are described in detail in this paper. In the initial step, the feature set is spread across the whole system; in the rapid condition, valuable features are selected and eliminated one at a time, after which a reproduction phase driven by the fitness function recovers the feature values and detects the forgery information in the uploaded image. The simulation is set up in MATLAB 2016a and improves the accuracy rate and image quality parameters. Performance is analysed using the metrics FAR, FRR, ACC, precision, and recall, and compared with existing methods.
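The segmentation stage of the pipeline uses fuzzy c-means (FCM). Below is a compact NumPy sketch of plain FCM on 1-D pixel intensities for illustration; the paper's PBFOA feature-selection and fitness-function stages are not reproduced, and the fuzzifier, iteration count, and toy data are assumptions.

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50, eps=1e-6):
    """Plain fuzzy c-means on a 1-D array of pixel intensities (illustrative only)."""
    x = x.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(0)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]   # membership-weighted centres
        d = np.abs(x - centers.T) + eps                   # pixel-to-centre distances
        # Standard membership update: u_ik proportional to d_ik^(-2/(m-1)).
        u_new = 1.0 / (d ** (2 / (m - 1)) *
                       np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.max(np.abs(u_new - u)) < eps:
            u = u_new
            break
        u = u_new
    return u.argmax(axis=1), centers.ravel()

if __name__ == "__main__":
    # Two intensity populations standing in for background and forged region.
    img = np.concatenate([np.full(100, 40.0), np.full(100, 200.0)]) + np.random.randn(200)
    labels, centers = fuzzy_cmeans(img, c=2)
    print("cluster centres:", np.round(centers, 1))
```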
Citations: 0
Comparative Analysis of Clustering Techniques for Deployment of Roadside Units
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315327
Kumar Satyajeet, Kavita Pandey
Today, with the ever-growing demand for the internet and a constant transition to new technology, in-vehicle systems also require upgrading. This study explores finding the optimal positioning of roadside units in a vehicular ad hoc network (VANET) using artificial intelligence, which is transforming every domain to a new level. Machine learning can help predict the optimal position of a roadside unit from the volume of vehicles and the longitude and latitude of the traffic. Various clustering techniques, namely K-Means, Mean Shift, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Expectation-Maximization clustering (GMM), and Agglomerative Hierarchical clustering, were applied to vehicle data consisting of the longitude, latitude, and volume of taxi trips. The data were collected from NYC taxis (New York) from January 2016 to June 2016. Our results show that machine learning provides excellent results in terms of position prediction.
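A minimal sketch of the comparison described above, using scikit-learn's implementations of the five clustering techniques on synthetic (longitude, latitude, volume) rows standing in for the NYC taxi data; the parameter values and the rule of taking each cluster's mean location as a candidate RSU position are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans, MeanShift, DBSCAN, AgglomerativeClustering
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Toy stand-in for the taxi data: rows of (longitude, latitude, trip volume).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([-73.98, 40.75, 120.0], 0.02, (200, 3)),
               rng.normal([-73.95, 40.78, 60.0], 0.02, (200, 3))])
Xs = StandardScaler().fit_transform(X)  # put lon/lat/volume on a comparable scale

models = {
    "K-Means": KMeans(n_clusters=2, n_init=10),
    "Mean Shift": MeanShift(),
    "DBSCAN": DBSCAN(eps=0.5, min_samples=10),
    "GMM (EM)": GaussianMixture(n_components=2, random_state=0),
    "Agglomerative": AgglomerativeClustering(n_clusters=2),
}

for name, model in models.items():
    labels = model.fit_predict(Xs)
    # One candidate RSU position per cluster: the mean lon/lat of its members
    # (DBSCAN noise points, labelled -1, are skipped).
    centres = [X[labels == c, :2].mean(axis=0) for c in set(labels) if c != -1]
    print(f"{name}: {len(centres)} candidate RSU positions")
```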
Citations: 2
An optimal Multi-Criteria Decision-Making Framework to select best Multispecialty Hospital for surgery
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315760
Hemant Petwal, Rinkle Rani
A multispecialty hospital (MSH) is a healthcare facility that provides medical and surgical services to patients. Multispecialty hospitals providing surgical care differ in their performance, for example in patient care and satisfaction, success rate, mortality rate, surgical complication rate, and waiting time. Since multispecialty hospitals exist in large numbers, it becomes challenging for a patient to select the MSH that provides the best-quality surgical services. In this paper, the challenge of selecting the best MSH is addressed as a problem of multi-criteria decision-making (MCDM), and an optimal MCDM framework for selecting the best-quality MSH for surgery is proposed. The framework has two phases: an optimization phase and a decision-making phase. In the optimization phase, the multi-objective water cycle algorithm (MOWCA) is used to obtain Pareto-optimal MSHs; in the decision-making phase, AHP is used to select the best MSH from the Pareto-optimal set. The proposed framework is compared with existing MCDM methods in terms of accuracy and is validated through a case study on a real multispecialty hospital dataset obtained from the Dehradun district of Uttarakhand, India. The results show that the proposed framework obtains more accurate results and outperforms the existing MCDM methods.
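Only the AHP decision-making phase lends itself to a short sketch; the MOWCA optimization phase is not reproduced here. The pairwise-comparison matrix and the three criteria below are hypothetical, and the weights come from the standard principal-eigenvector procedure.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via the
    principal eigenvector, plus the consistency ratio (Saaty's method)."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                       # normalised priority weights
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)                  # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)      # random index (Saaty's table)
    return w, ci / ri

# Hypothetical criteria: success rate, complication rate, waiting time.
A = [[1, 3, 5],
     [1 / 3, 1, 3],
     [1 / 5, 1 / 3, 1]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```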
Citations: 1
Detection of Nuclear Cataract in Retinal Fundus Image using Radial Basis Function based SVM
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315834
M. Behera, S. Chakravarty, Apurwa Gourav, S. Dash
Nuclear cataract is a common eye disease that generally occurs at an older age. If it is not detected at an early stage, it may impair vision permanently. In this work, an automated model based on image processing and machine learning techniques is proposed to detect cataract. The input to the proposed model is a set of fundus retinal images; the training dataset consists of two types of images, healthy and cataract-affected. From each input retinal image, a binary image of the blood vessels is generated using image processing techniques such as filtering, segmentation, and thresholding. This set of binary images is used as the feature matrix for a classifier built with the well-known support vector machine (SVM). For validation and comparison of the model, different SVM kernels, namely linear, polynomial, and RBF, are applied and tested. Of these, the Radial Basis Function (RBF) based SVM performs best, with an overall accuracy of 95.2%, and is able to produce results in real time.
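A minimal end-to-end sketch of the described pipeline on synthetic stand-in data: a crude denoise-and-threshold step plays the role of vessel extraction, and the flattened binary maps feed an RBF-kernel SVM. The threshold, image size, and dataset are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def vessel_mask(gray, thresh=0.5):
    """Crude stand-in for the vessel-extraction step: denoise, then threshold."""
    smoothed = median_filter(gray, size=3)
    return (smoothed > thresh * smoothed.max()).astype(np.uint8)

# Toy stand-in for the fundus dataset: flattened binary vessel maps plus labels.
rng = np.random.default_rng(0)
images = rng.random((60, 32, 32))
labels = rng.integers(0, 2, 60)          # 0 = healthy, 1 = cataract-affected
X = np.array([vessel_mask(im).ravel() for im in images])

Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)   # RBF kernel, as in the paper
print("test accuracy on toy data:", clf.score(Xte, yte))
```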
Citations: 5
Data Analysis of Various Terrorism Activities Using Big Data Approaches on Global Terrorism Database
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315784
Kashish Bhatia, B. Chhabra, Manish Kumar
The field of data science is broadening day by day, and more areas are adopting its concepts. This paper applies data science to analyse patterns of terrorism globally, using the Global Terrorism Database (GTD), which records terrorist attacks around the world from 1970 to 2017. The data were preprocessed, and Hive Query Language (HiveQL) together with Hadoop concepts was used to make various predictions from the database; HiveQL runs on top of Hadoop installed on a Linux system. Several interesting findings were drawn from this database, expressed as queries run against it. The queries were decided upon by framing a few questions and finding suitable answers. The results obtained are presented graphically using Tableau and Python for a better understanding by the reader, and the last section draws various inferences from these results.
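The paper's queries are written in HiveQL on Hadoop. To keep the examples in one language, the sketch below shows pandas equivalents of two representative aggregations on a handful of toy rows; the column names follow the public GTD codebook (e.g. iyear, region_txt, attacktype1_txt) but should be treated as illustrative, and these are not the paper's actual queries.

```python
import pandas as pd

# Toy rows shaped like the GTD export (GTD-style column names, illustrative values).
gtd = pd.DataFrame({
    "iyear": [2014, 2014, 2015, 2016, 2016, 2017],
    "region_txt": ["South Asia", "Middle East & North Africa", "South Asia",
                   "Sub-Saharan Africa", "South Asia", "Western Europe"],
    "attacktype1_txt": ["Bombing/Explosion", "Armed Assault", "Bombing/Explosion",
                        "Hostage Taking", "Armed Assault", "Bombing/Explosion"],
})

# Equivalent of: SELECT iyear, region_txt, COUNT(*) FROM gtd GROUP BY iyear, region_txt;
attacks_per_year_region = (gtd.groupby(["iyear", "region_txt"])
                              .size().rename("attacks").reset_index())
print(attacks_per_year_region)

# Equivalent of: SELECT attacktype1_txt, COUNT(*) FROM gtd GROUP BY attacktype1_txt
#                ORDER BY COUNT(*) DESC;
print(gtd["attacktype1_txt"].value_counts())
```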
Citations: 0
Function Point Estimation for Portal and Content Management Projects
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315813
K. Sudheer Reddy, C. Santhosh Kumar, K. Mamatha
The success of a software project is determined by how well the initial estimates are made; hence, effort and schedule estimates are essential in the software project planning stages. Portal and Content Management (PCM) projects have faced critical challenges in estimating effort, and the organization adopted the Function Point Analysis (FPA) technique to overcome them, developing guidelines for applying function points to PCM projects. The key objective of this paper is to provide guidelines for estimating the effort of PCM projects using function points, so as to avoid cost overruns and unproductive use of resources. Experimental results show that the proposed methodology yields better results by addressing the potential challenges, and further ensures better estimates of project cost, optimal resource utilization, on-time project delivery, and other benefits.
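A small sketch of the standard IFPUG-style function point arithmetic that FPA-based estimation rests on; the average-complexity weights are the commonly cited values, while the component counts, GSC ratings, and hours-per-FP rate are hypothetical and do not reflect the organization-specific guidelines described in the paper.

```python
# Average-complexity weights for the five function point components.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, gsc_scores, hours_per_fp=8.0):
    # Unadjusted function points: weighted sum of the five component counts.
    ufp = sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)
    # Value adjustment factor from the 14 general system characteristics (rated 0-5).
    vaf = 0.65 + 0.01 * sum(gsc_scores)
    afp = ufp * vaf
    return afp, afp * hours_per_fp     # adjusted FP and a rough effort estimate

counts = {"EI": 20, "EO": 12, "EQ": 8, "ILF": 6, "EIF": 3}   # hypothetical PCM project
gsc = [3] * 14                                               # hypothetical GSC ratings
afp, effort_hours = function_points(counts, gsc)
print(f"adjusted FP: {afp:.1f}, estimated effort: {effort_hours:.0f} hours")
```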
Citations: 0