Pub Date: 2018-02-01 | DOI: 10.1109/IC3IOT.2018.8668139
A New Fuzzy Clustering Algorithm for Brain MR Image Segmentation Using Gaussian Probabilistic and Entropy-Based Likelihood Measures
Sayan Kahali, J. Sing, P. Saha
Medical image segmentation plays a crucial role in medical image analysis, computer-guided surgical planning, abnormality detection, etc. Magnetic resonance (MR) image segmentation is particularly challenging because the contours of the soft-tissue regions are vague or uncertain. This paper presents a new fuzzy clustering algorithm to address the class uncertainty associated with each pixel in the image region. In particular, the class uncertainty is handled by integrating Shannon's entropy within the objective function. In addition, the objective function includes a Gaussian probabilistic measure to estimate the membership function. The proposed algorithm is validated on several synthetic brain MR images with varying noise and inhomogeneity, as well as on in-vivo (real-patient) human brain MR images. The empirical results of the proposed algorithm are compared with several competing image segmentation methods and found to be superior.
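The abstract does not state the exact objective; a minimal sketch of an entropy-regularized fuzzy objective with a Gaussian likelihood term, consistent with the description above, is as follows (the symbols $u_{ik}$, $v_k$, $\sigma_k$ and $\lambda$ are assumed notation, not taken from the paper):

$$
J = \sum_{i=1}^{N}\sum_{k=1}^{C} u_{ik}\, d_{ik} \;+\; \lambda \sum_{i=1}^{N}\sum_{k=1}^{C} u_{ik}\ln u_{ik},
\qquad
d_{ik} = -\ln\!\left[\frac{1}{\sqrt{2\pi}\,\sigma_k}\exp\!\left(-\frac{(x_i-v_k)^2}{2\sigma_k^2}\right)\right],
\quad \text{s.t. } \sum_{k} u_{ik} = 1 .
$$

Minimizing $J$ over the memberships under the constraint yields a softmax-like solution $u_{ik} \propto \exp(-d_{ik}/\lambda)$, so the Shannon-entropy term directly controls how crisp or fuzzy each pixel's class assignment becomes.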
{"title":"A New Fuzzy Clustering Algorithm for Brain MR Image Segmentation Using Gaussian Probabilistic and Entropy-Based Likelihood Measures","authors":"Sayan Kahali, J. Sing, P. Saha","doi":"10.1109/IC3IOT.2018.8668139","DOIUrl":"https://doi.org/10.1109/IC3IOT.2018.8668139","url":null,"abstract":"Medical image segmentation plays a crucial role in medical image analyses, computer-guided surgical planning, abnormality detection, etc. The magnetic resonance (MR) image segmentation process is much more challenging as the contour of the soft tissue regions are vague or uncertain. This paper presents a new fuzzy clustering algorithm to address the class uncertainty associated with each pixel in the image region. In particular, the class uncertainty is handled by integrating the Shannon’s entropy within the objective function. In addition, the objective function also includes Gaussian probabilistic measure to estimate the membership function. The proposed algorithm is validated on several synthetic brain MR images with varying noise and inhomogeneity. Additionally, we have also validated the method on in-vivo (real-patient) human brain MR images. The empirical results of the proposed algorithm are compared with some competent image segmentation methods and found superior to them.","PeriodicalId":155587,"journal":{"name":"2018 International Conference on Communication, Computing and Internet of Things (IC3IoT)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127130312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-02-01 | DOI: 10.1109/IC3IOT.2018.8668174
Modified Binary Dragonfly Algorithm for Feature Selection in Human Papillomavirus-Mediated Disease Treatment
Ramit Sawhney, Roopal Jain
Diseases caused by Human Papillomavirus (HPV) have surged in recent decades. While there is a large number of treatment methods, medical data is often voluminous, high-dimensional and redundant, which makes selecting a particular method difficult. Wrapper feature selection methods aim to extract a subset of features to improve computational efficiency as well as classification accuracy. To address this, we propose a modification to a relatively new evolutionary computation technique, the Binary Dragonfly Algorithm (BDFA), by incorporating a penalty function for optimal feature selection. This wrapper-based method, using BDFA with a random forest classifier, is employed on two treatment methods, immunotherapy and cryotherapy, and shows an increase in both classification accuracy and feature reduction compared with fuzzy rule-based systems, genetic algorithms and random forest classifiers.
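A minimal sketch of such a penalised wrapper fitness (in Python, with scikit-learn's random forest standing in for the paper's classifier; the weighting alpha and the exact penalty form are assumptions, not taken from the paper):

```python
# Hedged sketch of a penalised wrapper fitness for feature selection.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, alpha=0.99):
    """mask: binary vector selecting feature columns (1 = keep); lower is better."""
    if mask.sum() == 0:                      # penalise empty subsets outright
        return 1.0
    Xs = X[:, mask.astype(bool)]
    acc = cross_val_score(RandomForestClassifier(n_estimators=100),
                          Xs, y, cv=5).mean()
    # weighted sum of classification error and relative subset size
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / mask.size
```

A binary dragonfly search would then evolve candidate masks so as to minimise this fitness, trading accuracy against the number of selected features.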
{"title":"Modified Binary Dragonfly Algorithm for Feature Selection in Human Papillomavirus-Mediated Disease Treatment","authors":"Ramit Sawhney, Roopal Jain","doi":"10.1109/IC3IOT.2018.8668174","DOIUrl":"https://doi.org/10.1109/IC3IOT.2018.8668174","url":null,"abstract":"Diseased caused through the rapid mediation of Human Papillomavirus (HPV) have surged in the recent decades. While there are a large amount of treatment methods, medical data is often voluminous, high dimensional and often has redundancy which make selection of a particular method difficult. Wrapper feature selection methods aim to extract a subset of features to improve computability as well as classification accuracy. To address this, we propose a modification to a relatively new evolutionary computation technique, the Binary Dragonfly algorithm (BDFA), by incorporating a penalty function for optimal feature selection. This wrapper based method using BDFA and Random forest classifier is employed on two treatment methods, Immunotherapy and Cryotherapy, showing an increase in both classification accuracy as well as feature reduction as compared to fuzzy rule based systems, genetic algorithms and random forest classifiers","PeriodicalId":155587,"journal":{"name":"2018 International Conference on Communication, Computing and Internet of Things (IC3IoT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131048074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-02-01 | DOI: 10.1109/ic3iot.2018.8668208
Far Infrared based Life Detection System to Rescue Alive Victims under Debris
Jaisa Mariam Jacob, M. Methini
A brief review of conventional life detection systems is carried out, and the design of a novel system using the far-infrared (FIR) region of the infrared spectrum is proposed. FIR rays, with wavelengths in the range of 1 mm to 1 μm, are currently being explored by scientists for their biological effects. This paper puts forth the idea of using these rays in rescue operation systems following disasters such as earthquakes to save people trapped under wreckage. Detection of life can be accomplished based on the changes in the properties exhibited by FIR rays when they are absorbed and re-radiated by living matter and reflected by non-living materials. By detecting the presence of life in victims, considerable rescue time can be saved and the mortality rate can be significantly reduced.
{"title":"Far Infrared based Life Detection System to Rescue Alive Victims under Debris","authors":"Jaisa Mariam Jacob, M. Methini","doi":"10.1109/ic3iot.2018.8668208","DOIUrl":"https://doi.org/10.1109/ic3iot.2018.8668208","url":null,"abstract":"A brief review of the conventional life detection systems is carried out and the design of a novel system using Far Infrared (FIR) region of the infrared spectrum is proposed. FIR rays with the wavelength range 1mm to 1μm are currently being explored by scientists for its biological effects. This paper puts forth the idea of using these rays in rescue operation systems subsequent to disasters like earthquakes to save the people trapped under the wreckage. Detection of life can be accomplished based on the changes in the properties exhibited by FIR rays when they are absorbed and radiated back by the living matter and reflected by the nonliving materials. By detecting the presence of life in the victims, a lot of time for rescue can be saved and mortality rate can be significantly reduced.","PeriodicalId":155587,"journal":{"name":"2018 International Conference on Communication, Computing and Internet of Things (IC3IoT)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131695567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-02-01 | DOI: 10.1109/IC3IOT.2018.8668166
An Automated Approach to Weed Out Prosopis Juliflora (Karuvelam) Trees
M. Devadharshini, M. Rajeswari, S. Sumathi
Computing technology is crucial not only in itself but also to environmentalists and other related authorities. Agriculture is the backbone of India and is greatly affected by insufficient water supply. Prosopis juliflora (the karuvelam tree), which is used only as firewood, has considerable environmental impacts: it absorbs water from the soil and moisture from the atmosphere, and emits comparatively little oxygen and more carbon dioxide. This project is designed to give a complete solution for the eradication of karuvelam trees, since manual removal or heavy machinery increases the labour and cost of removing these trees with their long, penetrating roots, whereas far more trees can be eradicated using technical methods. In this project, we propose a cloud-based approach for identifying karuvelam trees affecting agricultural lands using deep learning. The system consists of a setup that can identify karuvelam leaves from images, retrieve the GPS location and update the cloud. The data are processed in the cloud, and the instruction is delivered to the chemical-spraying setup through an IoT board to kill the weeds. Our project aims at designing a prototype to prove that the code we have developed for the identification of karuvelam trees works well in a real system. The concept can be deployed in the real world using the OPTiM agri-drone (an automatic weedicide sprayer targeting selected weeds) designed in Japan.
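The paper gives no implementation details; the sketch below illustrates one way the detect-locate-report step of such a pipeline might look, where the CNN model, the cloud endpoint URL and the payload format are all hypothetical placeholders:

```python
# Illustrative sketch only: model, endpoint and payload are hypothetical,
# not details taken from the paper.
import requests

def report_if_karuvelam(image, gps, model, endpoint="https://example.com/api/weeds"):
    """image: preprocessed HxWx3 array; gps: (lat, lon); model: assumed Keras-style CNN."""
    prob = float(model.predict(image[None, ...])[0][0])   # P(leaf is karuvelam)
    if prob > 0.5:
        # push the detection to the cloud; the IoT spraying unit is assumed
        # to act on the records stored there
        requests.post(endpoint, json={"lat": gps[0], "lon": gps[1], "prob": prob})
    return prob
```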
{"title":"An Automated Approach to Weed Out Prosopis Juliflora(Karuvelam) Trees","authors":"M. Devadharshini, M. Rajeswari, S. Sumathi","doi":"10.1109/IC3IOT.2018.8668166","DOIUrl":"https://doi.org/10.1109/IC3IOT.2018.8668166","url":null,"abstract":"Computing technology is not only crucial but also related to environmentalists and other related authorities. Agriculture is the backbone of India, which is greatly affected due to the insufficient water supply. Prosopis juliflora (karuvelam trees) which is used only as a firewood has greater environmental impacts such as water absorption from the soil and moisture from atmosphere, emits low oxygen and more carbon dioxide comparatively. This project has been designed to give a complete solution for the eradication of karuvelam trees, as the manual removal and using heavy machines may increase the number of labours and cost for removing these trees with long penetrating roots. Much more number of trees can be eradicated using technical methods.In this project, we propose a cloud based approach for identification of karuvelam trees affecting agricultural lands using Deep learning. The system consists of a setup which can identify karuvelam leaves from images and retrieve the GPS location and update the cloud. The Data is processed in the cloud and the instruction is delivered to the chemical spraying setup through IOT board to kill the weeds. Our project aims at designing a prototype to prove that the code that we have developed for identification of karuvelam trees works well in real system. The concept can be incorporated in real world using OPTiM agri drone(automatic weedicide sprayer on selected weeds) designed by Japan.","PeriodicalId":155587,"journal":{"name":"2018 International Conference on Communication, Computing and Internet of Things (IC3IoT)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116378985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-02-01 | DOI: 10.1109/IC3IOT.2018.8668183
Computational Analysis of the Effect of Change in Leading Edge Radius, Blending Distance from the leading edge and their combined effect on a standard NACA 0012 aerofoil
P. Kumar, Rasamalla Charan Kumar, Pavneet Singh Sahi, P. Goyal
An aerofoil is the cross-sectional shape obtained when a wing is cut by a plane perpendicular to its span. The aerofoil used here is the NACA 0012 symmetric aerofoil; alterations are made in order to analyse the flow pattern, such as changing the leading-edge radius, the blending distance, or both simultaneously, in addition to changing the angle of attack. This paper deals with the fundamental equations associated with the generation of NACA aerofoils. The aerofoil generation is done using MATLAB. Certain elementary changes are then made in the wing geometry and their effects on the characteristic aerodynamic configuration of the aerofoil are studied. Since the chosen aerofoil is symmetric, the changes in the aerodynamic properties can be easily studied and their variation easily understood. These aerodynamic characteristics are studied via the non-dimensional flow parameters Cl, Cd and Cm (the lift, drag and pitching-moment coefficients).
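For reference, the thickness distribution that a MATLAB generator for a NACA four-digit symmetric section such as the 0012 would typically implement is

$$
y_t(x) = 5\,t\,c\left[0.2969\sqrt{\tfrac{x}{c}} - 0.1260\,\tfrac{x}{c} - 0.3516\left(\tfrac{x}{c}\right)^{2} + 0.2843\left(\tfrac{x}{c}\right)^{3} - 0.1015\left(\tfrac{x}{c}\right)^{4}\right],
$$

where $c$ is the chord and $t$ the maximum thickness as a fraction of chord ($t = 0.12$ for NACA 0012); the nominal leading-edge radius is $r_{LE} \approx 1.1019\,t^{2}c$, which is the baseline quantity the paper perturbs. This is the standard NACA four-digit formulation; the paper's exact implementation may differ.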
{"title":"Computational Analysis of the Effect of Change in Leading Edge Radius, Blending Distance from the leading edge and their combined effect on a standard NACA 0012 aerofoil","authors":"P. Kumar, Rasamalla Charan Kumar, Pavneet Singh Sahi, P. Goyal","doi":"10.1109/IC3IOT.2018.8668183","DOIUrl":"https://doi.org/10.1109/IC3IOT.2018.8668183","url":null,"abstract":"aerofoil is the cross-sectional shape obtained when a wing is brought in perpendicularity with a plane. The aerofoil used is the NACA 0012 symmetric aerofoil, alterations are duly made in order to analyse the flow pattern such as altering the leading-edge radius or the blending distance or both simultaneously in addition to change in angle of attack. This paper deals with the fundamental equations associated with the generation of NACA aerofoils. The aerofoil generation is done using MATLAB. Certain elementary changes are then made in the wing geometry and their respective changes are studied through their effects on the characteristic aerodynamic configuration of the aerofoil. Since the aerofoil chosen is symmetric the changes in the aerodynamic properties can be easily studied and their variation can be easily understood. These aerodynamic characteristics are studied by the non-dimensional flow parameters such as Cl, Cd and Cm","PeriodicalId":155587,"journal":{"name":"2018 International Conference on Communication, Computing and Internet of Things (IC3IoT)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115502171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-02-01 | DOI: 10.1109/IC3IOT.2018.8668132
Study on H.265/HEVC against VP9 and H.264: On Space and Time Complexity for Codecs
Jay Sharma, T. Choudhury, S. Satapathy, A. Sabitha
The study analyzes the H.265 video compression standard against its predecessor H.264 and Google's VP9. A review of recent developments in H.265/HEVC standardization is provided. The analysis uses samples of uncompressed video frames encoded with these three standards using the FFmpeg command-line tool with different containers. These encoded, compressed video samples are then analyzed on the basis of SCR, SS, bandwidth, size, time, efficiency and practicality. High Efficiency Video Coding (HEVC) is the proposed new generation of video compression standard, which enables substantially higher compression capability than existing standards and exhibits superior coding-performance improvements over its predecessors. Its compression performance is attributable to the quad-tree structure, parallel encoding/decoding (codec), intra-prediction modes, integer transforms, etc. With the right codec, multimedia companies could save millions on storage while keeping quality intact, thereby increasing throughput and profits. Cost efficiency is evaluated with respect to business development.
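As an illustration, encodings of the kind compared in the study can be produced by driving FFmpeg from Python; the encoders named below exist in standard FFmpeg builds, but the rate-control values and containers are placeholders, not the authors' settings:

```python
# Sketch of the kind of FFmpeg invocations compared in such a study.
import subprocess

def encode(src, out, codec_args):
    # -y overwrites existing output; codec_args selects the encoder and quality
    subprocess.run(["ffmpeg", "-y", "-i", src, *codec_args, out], check=True)

encode("input.y4m", "out_h264.mp4", ["-c:v", "libx264", "-crf", "23"])                   # H.264/AVC
encode("input.y4m", "out_h265.mp4", ["-c:v", "libx265", "-crf", "28"])                   # H.265/HEVC
encode("input.y4m", "out_vp9.webm", ["-c:v", "libvpx-vp9", "-crf", "31", "-b:v", "0"])   # VP9
```

File size, encoding time and decoded quality of the three outputs can then be compared for a fixed source clip.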
Pub Date: 2018-02-01 | DOI: 10.1109/IC3IOT.2018.8668197
CBIR Using SVM, Genetic Algorithm, Neural Network, Fuzzy Logic, Neuro-fuzzy Technique: A Survey
R. Vani, Tithi Vyas, Nikunj Tahilramani
Content-based image retrieval (CBIR), also known as query by image content, is the application of computer vision to retrieving a particular image from a large image database. The paper discusses different techniques used for CBIR, such as SVM, genetic algorithms, neural networks, fuzzy logic and neuro-fuzzy systems. These techniques are extensively used for the retrieval of images whose texture, shape and colour features are similar to those of the input image. The paper also compares the techniques discussed.
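A minimal sketch of the feature-extraction and ranking core that the surveyed classifiers (SVM, GA, neural network, fuzzy, neuro-fuzzy) build on, here using a simple joint colour histogram and histogram-intersection similarity (an illustrative choice, not one prescribed by the survey):

```python
# Minimal colour-histogram CBIR sketch.
import numpy as np

def colour_histogram(img, bins=8):
    """img: HxWx3 uint8 array; returns a normalised joint RGB histogram."""
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def retrieve(query_hist, db_hists, k=5):
    """Rank database images by histogram-intersection similarity to the query."""
    sims = [np.minimum(query_hist, h).sum() for h in db_hists]
    return np.argsort(sims)[::-1][:k]
```

In the surveyed methods, feature vectors of this kind (plus texture and shape descriptors) are fed to the SVM, neural or fuzzy stage rather than ranked by a fixed similarity measure.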
{"title":"CBIR Using SVM, Genetic algorithm, Neural Network, Fuzzy Logic, Neuro-fuzzy Technique:A Survey","authors":"R. Vani, Tithi Vyas, Nikunj Tahilramani","doi":"10.1109/IC3IOT.2018.8668197","DOIUrl":"https://doi.org/10.1109/IC3IOT.2018.8668197","url":null,"abstract":"Content based image recovery [CBIR] is known as Query by image content based visual data recovery. CBIR is the use of PC vision used to recovering the image for finding a particular image from expansive database of image. The paper discusses about the different techniques of CBIR like SVM, genetic algorithm, neural networks, fuzzy logic and neuro-fuzzy. These techniques are extensively used for retrieval of images which have similar texture, shape and color features to the input image. The paper also attempts on comparing the techniques mentioned in this paper.","PeriodicalId":155587,"journal":{"name":"2018 International Conference on Communication, Computing and Internet of Things (IC3IoT)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130550047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-02-01 | DOI: 10.1109/IC3IOT.2018.8668179
150° Bend Half Mode Substrate Integrated Waveguide (HMSIW) Band-pass Filter using T-shaped Periodic Elements
S. Moitra, P. Bhowmik, A. Muvvala
The design of a 150° bent HMSIW filter using T-slots is presented in this paper. A microstrip-to-SIW transitional feeding technique is used at both the input and output ports on a material with a dielectric constant of 3.2 and a thickness of 0.8. Arrays of five and nine T-slots are integrated onto the HMSIW structure to achieve the band-pass property. The insertion loss is found to be 0.84 dB. The design steps are discussed, and it is shown that the transmission pass-band can be shifted to other frequency bands by changing the parameters of the T-slots. A wideband response is obtained by using the technique within the microwave X and Ku frequency bands. The filter configurations presented in this paper are compact, cost-effective and easily fabricable.
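For context, a first-order relation commonly used when sizing SIW/HMSIW structures (not stated in the abstract) links the TE10 cutoff to an equivalent rectangular-waveguide width:

$$
f_{c,\mathrm{TE}_{10}} \approx \frac{c_0}{2\,a_{\mathrm{eff}}\sqrt{\varepsilon_r}}, \qquad a_{\mathrm{eff}} \approx a - \frac{d^{2}}{0.95\,p},
$$

where $a$ is the via-row spacing, $d$ the via diameter and $p$ the via pitch. Bisecting the SIW along its magnetic symmetry wall yields the HMSIW, which preserves roughly the same cutoff at half the width; the T-slot dimensions then set the pass-band within that guided-wave band.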
{"title":"150° Bend Half Mode Substrate Integrated Waveguide (HMSIW) Band-pass Filter using T-shaped Periodic Elements","authors":"S. Moitra, P. Bhowmik, A. Muvvala","doi":"10.1109/IC3IOT.2018.8668179","DOIUrl":"https://doi.org/10.1109/IC3IOT.2018.8668179","url":null,"abstract":"Design of a 150° bent HMSIW filter using T-slots is furnished in this paper. Microstrip to SIW transitional feeding technique is used at both at input and output ports over a material with dielectric constant of 3.2 and thickness 0.8. Array of five and nine T-slots are integrated over the HMSIW structure to achieve the band-pass property. The insertion loss is found to be 0.84dB. The design steps are discussed and it has been shown that the transmission pass-band can be shifted at other frequency bands by changing the parameters of the T-slots. Wideband response is obtained by using the technique within the microwave X and Ku frequency bands. The filter configurations presented in this paper are concise, cost effective and easily fabricable.","PeriodicalId":155587,"journal":{"name":"2018 International Conference on Communication, Computing and Internet of Things (IC3IoT)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122176643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-02-01 | DOI: 10.1109/IC3IOT.2018.8668110
Chronic Kidney Disease Detection Using Back Propagation Neural Network Classifier
B. Ravindra, N. Sriraam, M. Geetha
Chronic kidney disease (CKD), often referred to as chronic kidney failure, is considered a life-threatening disease due to the excessive accumulation of electrolytes, fluids and waste in the body. There is a strong need to differentiate non-chronic kidney disease (NCKD) from CKD to assess the health status of any subject visiting a nephrology clinic. This study uses artificial neural network (ANN) based classification of CKD and NCKD. Four attributes, creatinine, urea, sodium and potassium, were considered to diagnose whether a patient is suffering from CKD. A dataset of n = 230 subjects collected from a local general hospital was used. A feedforward back-propagation neural network (BPNN) model was employed for classification, and the performance of the BPNN classifier was evaluated using sensitivity, specificity and classification accuracy. The dataset was initially mined using clustering to determine the valid attribute values. The simulation results show an overall classification accuracy of 95.3% in distinguishing subjects suffering from CKD from those with NCKD.
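A hedged sketch of the described pipeline, using scikit-learn's MLPClassifier as a stand-in for the feedforward BPNN (the network size, train/test split and synthetic placeholder data are assumptions, not the paper's):

```python
# Sketch of BPNN-style classification of CKD vs NCKD with sensitivity/specificity.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Placeholder data standing in for the n=230 clinical records:
# columns are [creatinine, urea, sodium, potassium]; y: 1 = CKD, 0 = NCKD
X = np.random.rand(230, 4)
y = np.random.randint(0, 2, 230)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), solver="sgd", max_iter=2000)
clf.fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy    = (tp + tn) / (tp + tn + fp + fn)
```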
{"title":"Chronic Kidney Disease Detection Using Back Propagation Neural Network Classifier","authors":"B. Ravindra, N. Sriraam, M. Geetha","doi":"10.1109/IC3IOT.2018.8668110","DOIUrl":"https://doi.org/10.1109/IC3IOT.2018.8668110","url":null,"abstract":"Chronic kidney disease (CKD) often referred to as chronic kidney failure considered to be life threatening disease due to huge deposition of electrolytes, fluids and waste in our body. There is a huge need to differentiate non chronic disease (NCKD) from CKD to recognize the health status of any subject visiting the nephrology clinics. This study makes use of artificial neural network (ANN) based classification of CKD and NCKD. Four attributes, Creatinine, Urea, Sodium and potassium were considered to diagnose the patient suffering from CKD or not. Datasets collected from local general hospital with n=230 was used. A feedforward back propagation neural network (BPNN) model was employed for classification and the performance of BPNN classifier was evaluated using sensitivity, specificity and classification accuracy. The Datasets was initially mined using clustering to decide the valid attribute values. The simulation results shows an overall classification accuracy 95.3% to distinguish subject suffering with CKD from NCKD.","PeriodicalId":155587,"journal":{"name":"2018 International Conference on Communication, Computing and Internet of Things (IC3IoT)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124449958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-02-01 | DOI: 10.1109/IC3IOT.2018.8668136
Addressing Challenges of Hadoop for BIG Data Analysis
R. Khullar, Tushar Sharma, T. Choudhury, R. Mittal
Data has become a necessary part of every individual, industry, economy, business function and organization. Various industries, machines and institutions are expanding their analytical data in the digital world at a very high rate. As these datasets grow, selecting the relevant information becomes a laborious task. The on-command, on-demand nature of the digital universe has therefore given rise to a data category called Big Data, characterized by its sheer velocity, volume and variety. The term is used to describe datasets whose size exceeds the ability of conventional database software tools to store, manage and evaluate. Big Data poses distinctive computational and analytical challenges, including measurement errors, scalability and storage bottlenecks, and noise accumulation. Because of these characteristics, Big Data is typically stored in the Hadoop Distributed File System (HDFS). However, Hadoop is fairly complex, and as it is new to many users, this paper discusses the important challenges and issues faced during data mining and deployment of the file system. The aim of this paper is to make users comfortable with Hadoop.
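For readers new to Hadoop, the canonical first exercise on HDFS-resident data is a Hadoop Streaming word count driven by two small Python scripts (a generic illustration, not an example from the paper):

```python
# mapper.py and reducer.py for Hadoop Streaming word count; in practice each
# function lives in its own executable script read by the streaming jar.
import sys

def mapper():
    # emit <word, 1> for every token on stdin
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # input arrives sorted by key, so counts can be accumulated per word
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

# Typically submitted with something like:
#   hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py \
#     -mapper mapper.py -reducer reducer.py -input /data/in -output /data/out
```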
{"title":"Addressing Challenges of Hadoop for BIG Data Analysis","authors":"R. Khullar, Tushar Sharma, T. Choudhury, R. Mittal","doi":"10.1109/IC3IOT.2018.8668136","DOIUrl":"https://doi.org/10.1109/IC3IOT.2018.8668136","url":null,"abstract":"Data has become necessary part of every individual, industry, economy, business function and organization. Miscellaneous industries, machines and institutions are expanding their analytical data at digital world at a very high rate. As this data set increases, selecting the relevant information becomes a laborious task. Therefore, this on-command and on-demand nature of digital universe gives creation of a data category called the Big-Data because of its sheer velocity, volume and variety. It is basically employed to differentiate the various datasets and their sizes are above the ability of the database software tools to manage, evaluate and store. It proposes exclusive computational and analytical challenges which includes measurement errors, scalability and storage bottleneck and noise accumulation.Because of a specific characteristic of the Big-Data they are put in a distributed file system Hadoop (HDFS). However, Hadoop is impartially complex. As Hadoop is new to users, this research paper discusses the important challenges and issues faced during the data mining and deployment of the file system. Aim of this paper is to make user comfortable with Hadoop.","PeriodicalId":155587,"journal":{"name":"2018 International Conference on Communication, Computing and Internet of Things (IC3IoT)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129080442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}