Sasmita Dash, Biraja Prasad Nayak, B. P. Mishra, Amulya Ratna Swain
A Wireless Sensor Network (WSN) is mainly composed of a number of sensor nodes whose prime responsibility is to sense events in the surroundings, process the sensed data, and finally propagate the meaningful information to the observer through multiple intermediate nodes. Area coverage is one of the key issues in WSNs: it guarantees that the active nodes selected from among all deployed nodes cover every point of the deployed area. The objective of complete area coverage is to identify redundant nodes and deactivate them so that the remaining active nodes still cover the whole area. Among the various existing approaches to area coverage in WSNs, grid-based approaches select active nodes, or deactivate redundant ones, per grid cell rather than over the entire deployed area. However, the existing grid-based approach has certain limitations, such as deciding how many grid cells should be created for a given area and number of nodes, and what percentage of nodes should be selected from each cell. To avoid these limitations, this paper proposes a randomized grid-based approach that splits the intended area into small grid cells according to the node density at different locations of the deployed area. Moreover, rather than selecting a fixed percentage of nodes from each cell, only one node is selected from each cell, and the process is repeated until the selected nodes achieve complete area coverage. The proposed work is evaluated using the MATLAB simulator, and the results show that the proposed randomized grid-based approach outperforms the existing grid-based approach with respect to both energy consumption and throughput.
Randomized Grid-Based Approach for Complete Area Coverage in WSN. DOI: 10.1109/IACC.2017.0073
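To make the selection loop concrete, the following Python sketch illustrates a randomized grid-based selection of the kind described above, under simplifying assumptions: uniform square cells rather than density-dependent ones, a fixed sensing radius, and coverage checked on a coarse lattice of sample points. All parameter values and helper names are illustrative, not taken from the paper.

```python
import random
import math

def covered(point, active, sensing_radius):
    """Return True if the point lies within the sensing radius of any active node."""
    return any(math.dist(point, node) <= sensing_radius for node in active)

def randomized_grid_selection(nodes, area=100.0, cell=10.0, sensing_radius=12.0, samples=20):
    """Pick one random node per grid cell per round until all sampled points are covered."""
    # Group deployed nodes by the grid cell they fall into.
    cells = {}
    for node in nodes:
        key = (int(node[0] // cell), int(node[1] // cell))
        cells.setdefault(key, []).append(node)

    # Coarse lattice of sample points used to test area coverage.
    step = area / samples
    targets = [(i * step, j * step) for i in range(samples + 1) for j in range(samples + 1)]

    active = []
    remaining = {k: list(v) for k, v in cells.items()}
    while any(not covered(p, active, sensing_radius) for p in targets):
        progressed = False
        for members in remaining.values():
            if members:
                # One randomly chosen node per non-empty cell in this round.
                active.append(members.pop(random.randrange(len(members))))
                progressed = True
        if not progressed:  # no nodes left anywhere: full coverage is impossible
            break
    return active

if __name__ == "__main__":
    random.seed(1)
    deployed = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(300)]
    active = randomized_grid_selection(deployed)
    print(f"{len(active)} of {len(deployed)} nodes kept active")
```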
This paper proposes a new approach based on neutrosophic logic to help patients take proper decisions through the analysis of pathological test reports. The neutrosophic set is used to represent uncertain data. Fuzzy sets handle incomplete data using only a truth value, and vague sets handle uncertain data using truth and falsity values, but neither can adequately handle uncertainty in an analytical system. The neutrosophic set, which additionally captures indeterminacy, is therefore used to handle uncertain data in the form of neutrosophic data for decision making based on pathological test reports.
To Handle Uncertain Data for Medical Diagnosis Purpose Using Neutrosophic Set. Soumitra De, Jaydev Mishra. DOI: 10.1109/IACC.2017.0183
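As a rough illustration of the data representation involved, the Python sketch below models a pathological reading as a neutrosophic triple of truth, indeterminacy, and falsity degrees and applies a toy decision rule. The score formula is a commonly used single-valued neutrosophic ranking function, and the membership values, helper names, and threshold are invented for the example; none of this is taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicValue:
    """A neutrosophic value: degrees of truth (T), indeterminacy (I), and falsity (F).
    Unlike fuzzy (T only) or vague (T, F) representations, I captures uncertainty explicitly.
    Each component lies in [0, 1], and the three are not required to sum to 1."""
    t: float
    i: float
    f: float

    def score(self) -> float:
        # A commonly used ranking score: higher truth, lower indeterminacy/falsity rank higher.
        return (2.0 + self.t - self.i - self.f) / 3.0

def assess(readings, threshold=0.6):
    """Toy decision rule over a pathological report expressed as neutrosophic values."""
    avg = sum(v.score() for v in readings.values()) / len(readings)
    return "consistent with the suspected condition" if avg >= threshold else "inconclusive / unlikely"

if __name__ == "__main__":
    report = {
        "blood_sugar": NeutrosophicValue(t=0.8, i=0.1, f=0.1),
        "blood_pressure": NeutrosophicValue(t=0.5, i=0.4, f=0.2),
    }
    print(assess(report))
```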
Vibin Vijay, P. RaghunathV., Amarjot Singh, S. N. Omkar
Clustering is a useful exploratory data analysis method with wide applicability across many fields. However, data clustering relies heavily on the initialization of cluster centers, which can result in large intra-cluster variance and dead centers and therefore lead to sub-optimal solutions. This paper proposes a novel variance-based version of the conventional Moving K-Means (MKM) algorithm, called Variance Based Moving K-Means (VMKM), that can partition data into optimal homogeneous clusters irrespective of cluster initialization. The algorithm utilizes a novel distance metric and a unique data-element selection criterion to transfer selected elements between clusters, achieving low intra-cluster variance and thereby avoiding dead centers. Quantitative and qualitative comparisons with various clustering techniques are performed on four datasets drawn from image processing, bioinformatics, remote sensing, and the stock market. An extensive analysis highlights the superior performance of the proposed method over the other techniques.
Variance Based Moving K-Means Algorithm. DOI: 10.1109/IACC.2017.0173
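The sketch below conveys the general flavour of a variance-aware moving k-means refinement: after each assignment step, elements of the highest-variance cluster that lie closest to the lowest-variance (or empty) cluster are transferred to it before the centers are recomputed. This is a simplified approximation written for illustration; the exact VMKM distance metric and selection criterion are defined in the paper, not here.

```python
import numpy as np

def kmeans_assign(X, centers):
    """Assign each point to its nearest center (squared Euclidean distance)."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def variance_moving_kmeans(X, k=3, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = kmeans_assign(X, centers)
        # Per-cluster variance; empty clusters get 0 so they become transfer targets
        # (this is what avoids dead centers in this simplified version).
        var = np.array([X[labels == j].var() if np.any(labels == j) else 0.0 for j in range(k)])
        hi, lo = int(var.argmax()), int(var.argmin())
        members = np.where(labels == hi)[0]
        if hi != lo and len(members) > 2:
            # Move the quarter of the high-variance cluster closest to the low-variance center.
            d_lo = ((X[members] - centers[lo]) ** 2).sum(axis=1)
            labels[members[d_lo.argsort()[: len(members) // 4]]] = lo
        # Recompute centers from the (possibly adjusted) assignment.
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, kmeans_assign(X, centers)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (0, 4, 8)])
    centers, labels = variance_moving_kmeans(X, k=3)
    print(np.round(centers, 2))
```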
Content Based Image Retrieval (CBIR) deals with the automatic extraction of images from a database based on a query. For efficient retrieval, CBIR requires the support of scene classification algorithms. Cognitive psychology suggests that basic-level classification is efficient with global features, whereas a detailed classification requires a combination of global and local features. In this paper, we propose a decision fusion of the classification results based on local and global features. The proposed algorithm is a multi-stage approach: in stage 1, the algorithm separates the complete database into natural and artificial images using spectral features; in stage 2, texture and color features are used to further classify the image database into subcategories. The results of the proposed decision fusion algorithm give 5% better classification accuracy than the single best classifier.
A Feature Subset Based Decision Fusion Approach for Scene Classification Using Color, Spectral, and Texture Statistics. A. Turlapaty, Hema Kumar Goru, B. Gokaraju. DOI: 10.1109/IACC.2017.0132
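A minimal scikit-learn sketch of the two-stage decision fusion wiring is given below, using synthetic feature vectors in place of real spectral, texture, and color statistics. The classifier choice (SVM), feature dimensions, and labels are illustrative assumptions; only the stage-1 routing into per-class stage-2 classifiers reflects the structure described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
# Assume precomputed per-image features: spectral (4-d), texture (6-d), color (3-d).
spectral = rng.normal(size=(n, 4))
texture = rng.normal(size=(n, 6))
color = rng.normal(size=(n, 3))
coarse = rng.integers(0, 2, size=n)           # 0 = natural, 1 = artificial
fine = coarse * 2 + rng.integers(0, 2, n)     # four subcategories, two per coarse class

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Stage 1: spectral features separate natural vs. artificial scenes.
stage1 = SVC().fit(spectral[idx_train], coarse[idx_train])
coarse_pred = stage1.predict(spectral[idx_test])

# Stage 2: one texture+color classifier per coarse class refines the label.
tc = np.hstack([texture, color])
stage2 = {c: SVC().fit(tc[idx_train][coarse[idx_train] == c],
                       fine[idx_train][coarse[idx_train] == c])
          for c in (0, 1)}

# Decision fusion: the stage-1 decision routes each test image to the matching stage-2 classifier.
fine_pred = np.array([stage2[int(c)].predict(tc[i:i + 1])[0]
                      for i, c in zip(idx_test, coarse_pred)])
print("subcategory accuracy (random synthetic data, so near chance):",
      (fine_pred == fine[idx_test]).mean())
```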
B. Ahmed, Fathima Jabeen
This paper presents the simulation of blind adaptive beamforming using the Normalized Constant Modulus Algorithm (NCMA) for smart antenna systems. The significance and basics of smart antenna design are discussed in terms of a mathematical model. In this work, 16-point QAM (Quadrature Amplitude Modulation) data is used for the simulation, as it is one of the preferred modulation formats in the design of modems and other fixed-location wireless applications in industry. Simulation results in terms of the array factor and the antenna array response are presented, and significant improvement is observed compared to other related work.
Blind Adaptive Beamforming Simulation Using NCMA for Smart Antenna. DOI: 10.1109/INDICON.2016.7839146
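For reference, here is a compact NumPy sketch of a normalized constant-modulus adaptation of the kind used for blind beamforming: the weight vector is updated along the constant-modulus error gradient with a step size normalized by the instantaneous input power. The array geometry, step size, noise level, and signal model are illustrative assumptions, not the paper's simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 8                                # antenna elements (half-wavelength-spaced ULA)
N = 5000                             # snapshots
theta_sig, theta_int = 20.0, -40.0   # arrival angles in degrees (desired signal, interferer)

def steering(theta_deg, m=M):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

# 16-QAM source with unit average power, plus a Gaussian interferer and receiver noise.
qam = (rng.integers(0, 4, N) * 2 - 3) + 1j * (rng.integers(0, 4, N) * 2 - 3)
qam = qam / np.sqrt((np.abs(qam) ** 2).mean())
interf = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = np.outer(steering(theta_sig), qam) + np.outer(steering(theta_int), interf) + noise

# Normalized constant-modulus adaptation.
R2 = (np.abs(qam) ** 4).mean() / (np.abs(qam) ** 2).mean()   # dispersion constant
w = np.zeros(M, dtype=complex)
w[0] = 1.0                                                   # simple initialization
mu, eps = 0.05, 1e-6
for n in range(N):
    x = X[:, n]
    y = np.vdot(w, x)                      # array output y = w^H x
    e = y * (R2 - np.abs(y) ** 2)          # constant-modulus error term
    w = w + (mu / (np.vdot(x, x).real + eps)) * x * np.conj(e)   # normalized update

# Array response (beam pattern) after adaptation.
angles = np.linspace(-90, 90, 361)
pattern = np.array([np.abs(np.vdot(w, steering(a))) for a in angles])
print("peak response near (deg):", angles[pattern.argmax()])
```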
In medical diagnostic applications, early defect detection is a crucial task, as it provides critical insight for diagnosis. Medical imaging is an actively developing field of engineering, and Magnetic Resonance Imaging (MRI) is one of the reliable imaging techniques on which medical diagnosis is based. Manual inspection of these images is a tedious job, since the amount of data and the minute details involved are hard for a human to recognize, so automating these techniques is very important. In this paper, we propose a method that can be used to make tumor detection easier. MRI-based brain tumor detection is a complicated problem, and due to its complexity and variance, obtaining better accuracy is a challenge. Using the AdaBoost machine learning algorithm, we can improve the accuracy. The proposed system consists of three parts: preprocessing, feature extraction, and classification. Preprocessing removes noise from the raw data, features are extracted using the GLCM (Gray Level Co-occurrence Matrix), and classification is performed with the AdaBoost boosting technique.
MR Image Classification Using Adaboost for Brain Tumor Type. Astina Minz, Chandrakant Mahobiya. DOI: 10.1109/IACC.2017.0146
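A compact sketch of the preprocessing, GLCM feature extraction, and AdaBoost classification pipeline is shown below using scikit-image and scikit-learn on synthetic patches that stand in for MR slices. The median filter, the particular GLCM properties, and the synthetic data are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older scikit-image
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

def glcm_features(img):
    """Texture features from a gray-level co-occurrence matrix of an 8-bit image."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)

def synthetic_patch(tumor):
    """Stand-in for an MR slice: noisy background, plus a bright blob when 'tumor' is set."""
    img = rng.normal(100, 20, size=(64, 64))
    if tumor:
        yy, xx = np.mgrid[:64, :64]
        img += 80 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 100)
    return np.clip(img, 0, 255).astype(np.uint8)

X, y = [], []
for label in (0, 1):                                          # 0 = non-tumor, 1 = tumor
    for _ in range(100):
        img = median_filter(synthetic_patch(label), size=3)   # preprocessing: denoise
        X.append(glcm_features(img))
        y.append(label)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```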
Nowadays, computer vision techniques are increasingly used for the analysis of traffic surveillance videos. This analysis is useful for public safety and traffic management, and in recent times the scope for analyzing traffic activity automatically has grown. Computer-based surveillance algorithms and systems, also called video analytics, are used to extract information from the videos. Detection of traffic violations such as illegal turns, and identification of pedestrians and vehicles in traffic videos, can be done using computer vision and pattern recognition techniques. Object detection is the process of identifying instances of real-world objects, including persons, faces, and vehicles, in images or videos, and it is an increasingly important problem with many applications. Vehicle detection underlies multiple functions such as adaptive cruise control and forward collision warning. The proposed system for automatic generation of a traffic signal based on traffic volume can be used for traffic control. Traffic surveillance videos of vehicles are taken as input from the MIT Traffic dataset. These videos are processed frame by frame: background subtraction is performed with a Gaussian Mixture Model (GMM), some of the noise in the background-subtracted result is removed with a morphological opening operation, and blob analysis is carried out to detect the vehicles. The vehicles are then counted by incrementing a counter whenever a bounding box appears for a detected vehicle. Finally, a signal is generated depending on the count in each frame.
Automatic Generation of Traffic Signal Based on Traffic Volume. T. Sridevi, K. Harinath, P. Swapna. DOI: 10.1109/IACC.2017.0094
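The frame-by-frame pipeline described above maps directly onto standard OpenCV building blocks; the sketch below shows GMM background subtraction, morphological opening, contour-based blob analysis, per-frame vehicle counting, and a toy count-based signal decision. The video path, area threshold, and signal rule are placeholders rather than values from the paper.

```python
import cv2

VIDEO_PATH = "traffic.avi"          # placeholder: any traffic surveillance clip
MIN_BLOB_AREA = 500                 # pixels; tune per camera view
COUNT_THRESHOLD = 10                # vehicles per frame above which the green phase is extended

cap = cv2.VideoCapture(VIDEO_PATH)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # GMM background subtraction, then morphological opening to remove small noise.
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels (value 127)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Blob analysis: each sufficiently large contour is treated as one vehicle.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    count = 0
    for c in contours:
        if cv2.contourArea(c) >= MIN_BLOB_AREA:
            count += 1
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # Toy signal decision based on the per-frame vehicle count.
    signal = "EXTEND GREEN" if count >= COUNT_THRESHOLD else "NORMAL CYCLE"
    print(f"vehicles: {count:3d} -> {signal}")

cap.release()
```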
The past decade has shown us the power of cyberspace, and we are becoming increasingly dependent on it. The exponential evolution of the domain has attracted attackers and defenders of technology in equal measure, and it has also raised average human awareness and knowledge. Even as attack sophistication grows, the protectors have generally stayed a step ahead in mitigating the attacks. A study of various threat detection, protection, and mitigation systems revealed a common trait: either users are ignored entirely, or the systems rely heavily on user inputs for their correct functioning. Building on this observation, we designed a study in which user inputs were taken in addition to independent detection and prevention systems to identify and mitigate risks. This approach led us to the conclusion that involving users exponentially enhances machine learning and segments the data sets faster for a more reliable output.
End Users Can Mitigate Zero Day Attacks Faster. Vivek Bardia, Crs Kumar. DOI: 10.1109/IACC.2017.0190
Growing industry and increasing demand on the consumer load/distribution side place new demands on mechanisms connected with electrical motors, which leads to problems in operation due to fast dynamics and instability. Stability is essential for the system to work at the desired set target, but non-linearity caused by the motor frequently reduces stability, which in turn reduces the controller's ability to maintain the speed/position at the set points. BLDC motors are widely used in industry because of their high efficiency, low cost, rugged construction, long operating life, and noiseless operation, but their problems include speed control with sensored and sensorless controllers and large torque ripple and torque oscillations. This paper presents an assessment and evaluation of the BLDC motor using a proper voltage control method (the back-EMF control method), with the analysis carried out in MATLAB/Simulink. The parameters of the BLDC motor are analyzed and compared with those of a BLDC motor drive without any controller.
Enhancement of Dynamic Performance of Brush-Less DC Motor Drive. S. K., P. P. Kumar. DOI: 10.1109/IACC.2017.0098
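As a rough illustration of speed control with back-EMF compensation, the sketch below simulates a simplified DC-equivalent motor model with a proportional speed controller whose output compensates the estimated back-EMF. A real BLDC drive adds three-phase commutation and an inverter (as modeled in the paper's MATLAB/Simulink work); all parameter values here are arbitrary.

```python
import numpy as np

# Simplified DC-equivalent motor model; parameters are arbitrary illustrative values.
R, L = 0.5, 1e-3        # winding resistance [ohm] and inductance [H]
Ke, Kt = 0.05, 0.05     # back-EMF constant [V*s/rad] and torque constant [N*m/A]
J, B = 1e-4, 1e-5       # rotor inertia [kg*m^2] and viscous friction [N*m*s/rad]
V_MAX = 48.0            # DC bus voltage limit [V]

dt, T = 1e-4, 0.5       # integration step and simulated duration [s]
w_ref = 300.0           # speed set point [rad/s]
Kp = 0.4                # proportional speed-controller gain (hand tuned for this toy model)

i = w = 0.0             # winding current [A] and rotor speed [rad/s]
t = np.arange(0.0, T, dt)
speed = np.empty_like(t)

for n in range(len(t)):
    # Speed controller: proportional term plus back-EMF compensation (feedforward).
    v = np.clip(Kp * (w_ref - w) + Ke * w, -V_MAX, V_MAX)

    # Electrical and mechanical dynamics, integrated with explicit Euler.
    di = (v - R * i - Ke * w) / L
    dw = (Kt * i - B * w) / J
    i += di * dt
    w += dw * dt
    speed[n] = w

print(f"final speed: {speed[-1]:.1f} rad/s (set point {w_ref:.0f} rad/s)")
```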
In general, opinion mining is used to learn what people think and feel about products and services on social media platforms. Millions of users share opinions on different aspects of life every day. Spurred by that growth, companies and media organizations are increasingly seeking ways to mine this information, which requires efficient techniques to collect large amounts of social media data and extract meaningful information from it. This paper aims to provide an interactive automatic system that predicts the sentiment of reviews/tweets posted on social media using Hadoop, which can process huge amounts of data. To date, a few problems predominate in this research community, namely sentiment classification, feature-based classification, and handling negation. A precise method is used for predicting sentiment polarity, which helps to improve marketing strategies. This paper addresses the challenges that appear in the process of sentiment analysis; real-time tweets are considered, as they are rich sources of data for opinion mining and sentiment analysis. The paper focuses on sentiment analysis, feature-based sentiment classification, and opinion summarization. The main objective is to perform real-time sentiment analysis on tweets extracted from Twitter and to provide time-based analytics to the user.
Sentiment Analysis on Twitter Using Streaming API. M. Trupthi, S. Pabboju, G. Narasimha. DOI: 10.1109/IACC.2017.0186
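The system described above ingests tweets through the Twitter Streaming API and processes them on Hadoop; the short Python sketch below only illustrates the core ideas of polarity scoring with negation handling and time-based aggregation, on hard-coded sample tweets with a toy lexicon. The word lists, scoring rule, and hourly bucketing are illustrative assumptions, not the paper's method.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Tiny illustrative lexicon; a real system would use a full sentiment lexicon or a trained model.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "sad"}
NEGATIONS = {"not", "never", "no"}

def polarity(text):
    """Score a tweet: +1 per positive word, -1 per negative word, flipped after a negation."""
    score, negate = 0, False
    for word in text.lower().split():
        word = word.strip(".,!?#@")
        if word in NEGATIONS:
            negate = True
            continue
        if word in POSITIVE:
            score += -1 if negate else 1
        elif word in NEGATIVE:
            score += 1 if negate else -1
        negate = False
    return score

def time_based_analytics(tweets):
    """Aggregate tweet sentiment per hour, as a stand-in for the time-based view."""
    buckets = defaultdict(Counter)
    for ts, text in tweets:
        p = polarity(text)
        label = "positive" if p > 0 else "negative" if p < 0 else "neutral"
        buckets[ts.strftime("%Y-%m-%d %H:00")][label] += 1
    return dict(buckets)

if __name__ == "__main__":
    sample = [
        (datetime(2017, 3, 1, 10, 5), "I love this phone, great battery"),
        (datetime(2017, 3, 1, 10, 40), "not good, terrible service"),
        (datetime(2017, 3, 1, 11, 15), "the update is not bad at all"),
    ]
    for hour, counts in time_based_analytics(sample).items():
        print(hour, dict(counts))
```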