Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943188
Ashwdeep Singh, Vikas Verma, G. Raj
With the advancement of mobile technology, QR (Quick Response) codes have become popular. QR codes are widely used in daily life, from social media websites and cashless shopping wallets to ERP (Enterprise Resource Planning) implementations, display advertising and digital marketing. In this paper we focus on one major issue with QR codes: the various techniques used to increase their data storage capacity. The paper is divided into five parts. The first part introduces the basics of QR codes, their versions, the creation and scanning process, and their various applications. The second part describes the features that have made QR codes so popular and discusses their structure to explain their basic functionality. The third part compares three kinds of codes (bar code, quick response code and color quick response code) on the basis of storage capacity, error resistance, 360° reading and other factors. The fourth part reviews the literature and the techniques researchers have used to increase the data storage capacity of QR codes. In the fifth part we propose encoding and decoding algorithms that yield a high storage capacity color QR code. Finally, we discuss future directions for increasing the storage capacity of QR codes and making the stored information more secure and reliable for end users.
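The RGB-layering idea behind color QR codes can be sketched as follows. This is a toy illustration of how packing three binary module layers into the color channels of one image triples capacity; the 4x4 matrices are hypothetical stand-ins, not real QR symbols, and this is not the paper's actual algorithm.

```python
# Three binary module matrices (stand-ins for three monochrome QR codes)
# are packed into the R, G and B channels of a single color image.

def encode_rgb(layer_r, layer_g, layer_b):
    """Combine three binary matrices into one matrix of (r, g, b) pixels."""
    size = len(layer_r)
    return [[(255 * layer_r[y][x], 255 * layer_g[y][x], 255 * layer_b[y][x])
             for x in range(size)] for y in range(size)]

def decode_rgb(color_matrix):
    """Recover the three binary layers by thresholding each channel."""
    split = lambda ch: [[1 if px[ch] > 127 else 0 for px in row]
                        for row in color_matrix]
    return split(0), split(1), split(2)

r = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
g = [[0, 1, 1, 0], [1, 0, 0, 1], [0, 1, 0, 1], [1, 0, 1, 0]]
b = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]]

color = encode_rgb(r, g, b)
assert decode_rgb(color) == (r, g, b)  # round trip recovers all three layers
```

A real implementation would additionally carry QR finder patterns and error correction per layer; the round trip above only shows the channel packing.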
Title: A novel approach for encoding and decoding of high storage capacity color QR code (2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence, pp. 425-430)
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943169
Shubham Gupta, R. Johari
The paper explains the benefits of cloud computing, a distributed-computing-based and highly resourceful technology. It shows how cloud computing is changing the way data are obtained, shared and used effectively through a unique identification number (UID) application designed and developed with the power of cloud computing in mind. The proposed UID application has not previously been used by any other individual or organization; it is discussed and successfully implemented here for the first time. It combines the different identity proofs of an individual into a UID number that contains information about all the other identity proofs. Visual Studio and the Aneka platform are the tools used to build this application.
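The core idea of combining several identity proofs into one UID can be sketched as below. The field names, the hashing scheme and the 12-digit format are illustrative assumptions, not the application's actual design.

```python
import hashlib

def derive_uid(proofs: dict) -> str:
    """Derive a single UID from several identity proofs (hypothetical scheme)."""
    # Sort keys so the UID is independent of insertion order.
    canonical = "|".join(f"{k}={proofs[k]}" for k in sorted(proofs))
    digest = hashlib.sha256(canonical.encode()).hexdigest()
    return str(int(digest, 16))[-12:]  # take 12 digits as the UID

# Hypothetical identity proofs for one individual.
person = {"passport": "P1234567", "driving_licence": "DL-998877",
          "voter_id": "VOT445566"}
uid = derive_uid(person)
assert len(uid) == 12 and uid.isdigit()
# The same proofs presented in any order yield the same UID.
assert derive_uid(dict(reversed(list(person.items())))) == uid
```

A deployed system would also need collision handling and a lookup from the UID back to the underlying proofs, which a one-way hash alone does not give.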
Title: UIDC: Cloud based UID application (Confluence 2017, pp. 319-324)
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943181
S. Jafar, Pankaj Kumar, Ranjana Rajnish, Minsa Jafar
Embedded system design is at the core of many time-critical application domains such as avionics and railways. These systems employ multi-core architectures for fast, time-critical applications. Using multiple cores as the processing element remains challenging due to the complexities of their design and memory architecture, synchronization issues between cores, and problems such as deadlock between executing cores. Moreover, as per Moore's Law, the number of cores on a single processing element grows exponentially, doubling roughly every 18 months. At this pace, the time is not far off when there will be hundreds or thousands of cores on a single chip, bringing bigger challenges of heat dissipation, concurrency control and fast communication between cores, without compromising the performance and output of the embedded systems employing them. In this paper we study some pre-existing protocols and technologies for handling concurrency in systems with large numbers of cores, and we propose a framework for concurrency control with a routing protocol for a 64-core system. We then propose to scale this system to up to 100 cores and will study its performance on an embedded system.
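For a 64-core system of the size the paper targets, deterministic XY routing on an 8x8 mesh is one standard deadlock-free communication scheme; the sketch below is a generic textbook routine, not the specific protocol the paper proposes.

```python
MESH = 8  # 8 x 8 = 64 cores

def core_xy(core_id):
    """Map a core id (0..63) to its (x, y) mesh coordinate."""
    return core_id % MESH, core_id // MESH

def xy_route(src, dst):
    """Route fully in X first, then in Y: the fixed dimension order
    avoids cyclic channel dependencies and hence routing deadlock."""
    (x, y), (dx, dy) = core_xy(src), core_xy(dst)
    path = [(x, y)]
    while x != dx:
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

hops = xy_route(0, 63)          # corner to corner across the mesh
assert hops[0] == (0, 0) and hops[-1] == (7, 7)
assert len(hops) == 15          # 7 X-hops + 7 Y-hops + the start node
```

Scaling to 100 cores would need either a 10x10 mesh or a non-square topology, at which point only the `MESH` constant and the id mapping change.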
Title: Architectural scheme for future embedded systems involving large number of processing cores (Confluence 2017, pp. 392-396)
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943255
Praman Deep Singh, A. Chug
Software quality is the most important aspect of software, and software defect prediction, which can directly affect quality, has gained significant popularity in the last few years. Defective software modules have a massive impact on software quality, leading to cost overruns, delayed timelines and much higher maintenance costs. In this paper we analyze the most popular and widely used machine learning algorithms: ANN (Artificial Neural Network), PSO (Particle Swarm Optimization), DT (Decision Trees), NB (Naive Bayes) and LC (Linear Classifier). The five algorithms were analyzed using the KEEL tool and validated using k-fold cross validation. The datasets used in this research were obtained from the open source NASA Promise dataset repository; seven datasets were selected for defect prediction analysis, and classification on them was validated using 10-fold cross validation. The results demonstrate the dominance of the Linear Classifier over the other algorithms in terms of defect prediction accuracy.
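The 10-fold cross-validation protocol the study relies on can be sketched as follows. The threshold "classifier" and the synthetic one-dimensional data are stand-ins for illustration, not KEEL or the NASA Promise datasets.

```python
import random

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross validation."""
    idx = list(range(n))
    random.Random(42).shuffle(idx)        # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# Synthetic 1-D data: modules with metric > 0.5 are labelled defective.
X = [i / 100 for i in range(100)]
y = [1 if v > 0.5 else 0 for v in X]
classify = lambda v: 1 if v > 0.5 else 0  # trivial stand-in classifier

accs = []
for train, test in k_fold_indices(len(X), 10):
    correct = sum(classify(X[i]) == y[i] for i in test)
    accs.append(correct / len(test))
assert len(accs) == 10
assert sum(accs) / 10 == 1.0  # perfectly separable toy data
```

In the actual study, each of the five learners would replace `classify`, and the mean and spread of the ten per-fold accuracies would be compared across algorithms.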
Title: Software defect prediction analysis using machine learning algorithms (Confluence 2017, pp. 775-781)
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943230
Anita Thakur, Deepak Mishra
The human eye perceives information from visible light in terms of bands of three colors (red, green, blue), so digital images are generally stored in three dimensions, i.e., R, G and B. Hyperspectral imaging, by contrast, perceives information from across the electromagnetic spectrum, splitting it into many more bands, including parts of the invisible spectrum. Hence hyperspectral images can be considered n-dimensional matrices, and each pixel can be regarded as an n-dimensional vector. These images contain various areas with similar characteristics, such as crop fields, forests and deserts. To classify such regions, one has to look for certain features in the captured images and apply similarity measures to cluster areas with similar characteristics. Finding relative similarities as numerical scores can be carried out with a standard algorithm, so feature classification on the basis of relative pixel similarity is a robust method. In this paper we propose classifying hyperspectral images using a Multilayer Perceptron Artificial Neural Network (MLPANN) and a Functional Link Artificial Neural Network (FLANN), and we compare their performance in terms of accuracy rate.
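The functional-link expansion that distinguishes FLANN from an MLP can be sketched as below: each input component is expanded with trigonometric basis terms, so a single-layer network can fit nonlinear boundaries without hidden layers. The expansion order and basis choice here are common options assumed for illustration, not necessarily the paper's settings.

```python
import math

def functional_link_expand(x, order=2):
    """Expand a feature vector with sin/cos harmonics of each component."""
    expanded = list(x)
    for v in x:
        for n in range(1, order + 1):
            expanded.append(math.sin(n * math.pi * v))
            expanded.append(math.cos(n * math.pi * v))
    return expanded

pixel = [0.2, 0.7, 0.1]            # toy 3-band spectral vector
features = functional_link_expand(pixel)
# 3 original values + 3 components * 2 orders * 2 functions = 15 features
assert len(features) == 15
```

A hyperspectral pixel with hundreds of bands expands the same way; the trade-off is that the feature count grows linearly with both the band count and the expansion order.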
Title: Hyper spectral image classification using multilayer perceptron neural network & functional link ANN (Confluence 2017, pp. 639-642)
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943184
Jayita Saha, C. Chowdhury, Supama Biswas
Sensors embedded in smartphones and tablets can be extremely useful in providing reliable information on people's activities and behaviors, thereby helping ensure a safe and sound living environment. Activity monitoring through posture identification is increasingly used in medical, surveillance and entertainment (gaming) applications. Major challenges for this task include making it device independent, using a minimal number of sensors, handling the position of the device, and extracting features efficiently. Existing works mostly use one or more specific devices for activity monitoring and do not focus on device independence; ensuring energy efficiency through inexpensive feature extraction is another motivation. Consequently, this paper proposes a machine learning based activity monitoring framework that provides device independence using inexpensive time-domain features. An implementation of the framework on real devices achieves 96% accuracy with logistic regression when time-domain features are used.
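The kind of inexpensive time-domain features such a framework computes per sensor window might look like the sketch below; the window length and the exact feature set are assumptions for illustration, not the paper's specification.

```python
import math

def time_domain_features(window):
    """Mean, standard deviation and RMS of one sensor window:
    cheap statistics that avoid any frequency-domain transform."""
    n = len(window)
    mean = sum(window) / n
    var = sum((v - mean) ** 2 for v in window) / n
    rms = math.sqrt(sum(v * v for v in window) / n)
    return {"mean": mean, "std": math.sqrt(var), "rms": rms}

# Toy accelerometer-magnitude window (units of g).
window = [1.0, 1.2, 0.9, 1.1, 0.8, 1.0]
feats = time_domain_features(window)
assert abs(feats["mean"] - 1.0) < 1e-9
assert feats["rms"] >= feats["mean"]  # RMS >= mean for nonnegative samples
```

Using signal magnitude rather than raw axes is one simple way to reduce sensitivity to device orientation, which supports the device-independence goal.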
Title: Device independent activity monitoring using smart handhelds (Confluence 2017, pp. 406-411)
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943192
Monali Mavani, Krishna Asawa
6LoWPAN is a communication protocol for the Internet of Things: IPv6 adapted for low-power and lossy personal area networks. 6LoWPAN inherits threats from its predecessors IPv4 and IPv6. IP spoofing is a known attack prevalent in IPv4 and IPv6 networks, but new vulnerabilities create new paths leading to the attack. This study experimentally checks the feasibility of performing an IP spoofing attack on a 6LoWPAN network: the intruder misuses 6LoWPAN control messages, resulting in a wrong IPv6-to-MAC binding in the router. The attack is also simulated in the Cooja simulator, and the simulation results are analyzed to find the cost to the attacker in terms of energy and memory consumption.
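The wrong IPv6-to-MAC binding at the heart of the attack can be illustrated with a toy neighbor cache: if address registrations are accepted without authentication, the latest registration wins, so an intruder can overwrite a victim's entry. Real 6LoWPAN neighbor discovery is of course far richer than this dict model.

```python
class NeighborCache:
    """Toy model of a router's IPv6 -> MAC neighbor cache."""

    def __init__(self):
        self.bindings = {}

    def register(self, ipv6, mac):
        # No authentication of the registration message:
        # whoever registered last owns the address.
        self.bindings[ipv6] = mac

cache = NeighborCache()
cache.register("fe80::1", "00:11:22:33:44:55")   # legitimate node
cache.register("fe80::1", "66:77:88:99:aa:bb")   # intruder's spoofed message
# Traffic for the victim's address now resolves to the attacker's MAC.
assert cache.bindings["fe80::1"] == "66:77:88:99:aa:bb"
```

Defenses discussed in the 6LoWPAN neighbor discovery literature (e.g. binding an address to its registering node's identity) amount to rejecting the second `register` call above.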
Title: Experimental study of IP spoofing attack in 6LoWPAN network (Confluence 2017, pp. 445-449)
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943139
B. Mishra, K. Dahal, Zeeshan Pervez
Relief logistics distribution to disaster-affected areas is crucial and needs quick, effective action; distributing relief through an efficient method is essential for easing the impact of a disaster on the affected areas. Disasters are non-deterministic, highly complex and uncertain in nature, which makes relief logistics distribution a challenging task. Relief items can be distributed either from a single node or from multiple distributed nodes; when only single-node distribution is used, the resources available at distributed nodes go unutilized. This paper presents a two-phase bounded heuristic approach for logistics distribution in post-disaster relief operations. The proposed approach focuses on two major objectives: minimizing unmet demand and minimizing travel distance. A simulated disaster scenario is synthesized as a case study for the distribution of relief items. The results indicate that the proposed approach is effective in logistics scheduling: it improves relief distribution in disaster-affected areas by utilizing the resources available at distributed nodes, leading to lower unmet demand with minimum travel time.
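The two objectives, unmet demand first and travel distance second, can be sketched as a simple greedy allocation from distributed supply nodes. The paper's bounded heuristic is more involved than this, and the demands, supplies and distances below are synthetic.

```python
def allocate(demands, supplies, dist):
    """Serve each affected area from the nearest supply node with stock left."""
    plan, unmet = [], 0
    for area, need in demands.items():
        # Visit supply nodes in order of distance from this area.
        for node in sorted(supplies, key=lambda n: dist[(n, area)]):
            if need == 0:
                break
            shipped = min(need, supplies[node])
            if shipped:
                plan.append((node, area, shipped))
                supplies[node] -= shipped
                need -= shipped
        unmet += need
    return plan, unmet

demands = {"A": 50, "B": 70}
supplies = {"N1": 60, "N2": 80}
dist = {("N1", "A"): 5, ("N2", "A"): 9, ("N1", "B"): 8, ("N2", "B"): 3}
plan, unmet = allocate(demands, supplies, dist)
assert unmet == 0                       # total supply covers total demand
assert ("N1", "A", 50) in plan          # A is served from its nearest node N1
```

Because multiple nodes hold stock, area B is fully served by N2 even after N1 has partially emptied, which is exactly the benefit over single-node distribution the abstract points to.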
Title: Post-disaster relief distribution using a two phase bounded heuristic approach (Confluence 2017, pp. 143-148)
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943220
Neha Janu, Pratistha Mathur, S. Gupta, S. Agrwal
Facial expression recognition is a vital research topic in the current scenario, with many applications such as machine-based HR interviews and human-machine interaction; it is applied to identify expressions from a person's face. Researchers have proposed many techniques for facial expression recognition, but accuracy, illumination and occlusion remain open research issues. A key issue is improving the accuracy of the system, which is measured in terms of recognition rate, and feature extraction is the main stage on which that accuracy depends. In this paper we analyze different frequency-domain feature extraction techniques, namely the Discrete Wavelet Transform, the Discrete Cosine Transform and Gabor filters, along with the feature reduction techniques developed so far, and discuss future aspects.
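The DCT step among the surveyed techniques can be sketched with a direct (unoptimized) 2-D DCT-II, after which the low-frequency coefficients typically serve as the expression features; the 4x4 constant block below is a toy stand-in for a face image region.

```python
import math

def dct2(block):
    """Direct 2-D DCT-II of a square block (O(n^4); fine for a toy)."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

flat = [[8.0] * 4 for _ in range(4)]    # constant block: all energy at DC
coeffs = dct2(flat)
assert abs(coeffs[0][0] - 32.0) < 1e-9  # DC term = n * mean = 4 * 8
assert all(abs(coeffs[u][v]) < 1e-9
           for u in range(4) for v in range(4) if (u, v) != (0, 0))
```

The energy compaction shown here (everything in the DC term for a flat block) is the reason a small top-left corner of DCT coefficients makes a compact feature vector.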
Title: Performance analysis of frequency domain based feature extraction techniques for facial expression recognition (Confluence 2017, pp. 591-594)
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943165
Stobak Dutta, S. Sengupta
In today's scenario, cloud computing technology has emerged to manage large data sets efficiently. A large amount of data is created every day, so there is demand for running data mining algorithms on very large data sets, and with the recent rapid increase in the number of clouds and their services, cloud computing technology has gained further importance. Performing data mining requires merging distributed data and running the mining algorithm on it. This paper presents a way to implement the K-means clustering algorithm for service discovery in the Enterprise Cloud Bus (ECB) architecture.
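The K-means step can be sketched in plain Python as Lloyd's algorithm. The 2-D points are toy stand-ins for cloud service feature vectors, and the deterministic farthest-first seeding is an illustrative choice, not necessarily the paper's.

```python
import math

def init_centroids(points, k):
    """Greedy farthest-first seeding: deterministic and well spread."""
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(
            math.dist(p, c) for c in centroids)))
    return centroids

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: alternate assignment and centroid update."""
    centroids = init_centroids(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return centroids, clusters

points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),   # one service group
          (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]   # a second group
centroids, clusters = kmeans(points, k=2)
assert sorted(len(c) for c in clusters) == [3, 3]  # both groups recovered
```

In an ECB setting, each point would be a feature vector describing a registered cloud service, and each resulting cluster a group of similar services to search within during discovery.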
Title: Implementation of K-means clustering in ECB framework of cloud computing environment (Confluence 2017, pp. 293-297)