Pub Date: 2017-05-01 | DOI: 10.1109/CCAA.2017.8230033
Prashant Kumar, N. Bhandari, Lokesh Bhargav, Rashmi Rathi, S. C. Yadav
The main objective of this paper is to design a combinational circuit with low power consumption and small area. Here, a half adder circuit is designed using three different logic styles: CMOS NAND gate logic, CMOS transmission gate logic, and NMOS pass transistor logic. All circuits are simulated and compared using Cadence Virtuoso IC 6.1.5 in 180 nm CMOS technology with a supply voltage of 5 V. The three logic styles are compared on various performance parameters such as power consumption, number of transistors, propagation delay, rise time, and fall time.
Title: "Design of low power and area efficient half adder using pass transistor and comparison of various performance parameters," 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 1477-1482.
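As a reference for all three transistor-level designs, here is a minimal Python sketch of the half adder's logic function (SUM = A XOR B, CARRY = A AND B); it only reproduces the truth table that each circuit style must realize, not any circuit-level detail from the paper.

```python
# Half adder logic, independent of the transistor-level implementation:
# SUM = A XOR B, CARRY = A AND B.

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for one-bit inputs a and b."""
    return a ^ b, a & b

# Print the truth table that all three logic styles must realize.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"A={a} B={b} -> SUM={s} CARRY={c}")
```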
Pub Date: 2017-05-01 | DOI: 10.1109/CCAA.2017.8229890
Reena Kasana, Sushil Kumar, Omprakash Kaiwartya
Geographic routing has received considerable attention from researchers worldwide due to the availability of low-cost Global Positioning System (GPS) devices. It is considered an efficient routing approach for large-scale networks and offers encouraging solutions for information dissemination in Vehicular Ad hoc Networks (VANETs). The efficacy and scalability of all geographic routing protocols depend on the accuracy of the location information obtained from positioning systems. The related literature has implicitly assumed perfect location information; however, this assumption is unrealistic in the real world. Measured location information is inherently inaccurate, which degrades the performance of geographic routing. In this paper, a novel location error tolerant geographic routing (LETGR) protocol for vehicular environments is proposed, which reduces the impact of location inaccuracy caused by instrument imprecision and obstacles in realistic, highly mobile scenarios. LETGR takes the statistical error characteristics into account in its next-forwarding-vehicle selection logic to maximize the probability of message delivery. To alleviate the effect of mobility, LETGR exploits future locations of vehicles instead of current locations. An extended Kalman filter is used in the proposed algorithm to predict and correct future vehicle locations. The performance of LETGR is evaluated via simulation, and the results are encouraging when the objective is to maximize the reception of data packets at the destination vehicle.
Title: "Towards location error resilient geographic routing for VANETs," 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 691-697.
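The abstract does not give the filter's state model, so the sketch below assumes a simple constant-velocity state [x, y, vx, vy] with GPS position measurements; with a linear model, the EKF's predict/correct steps reduce to the standard Kalman form. The time step and all noise covariances are illustrative assumptions, not values from the paper.

```python
import numpy as np

dt = 0.1                        # prediction step (s), assumed
F = np.array([[1, 0, dt, 0],    # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],     # GPS measures position only
              [0, 1, 0, 0]])
Q = np.eye(4) * 0.01            # process noise covariance (assumed)
R = np.eye(2) * 4.0             # GPS measurement noise covariance (assumed)

def predict(x, P):
    """Propagate the state to the future location used for forwarding."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a noisy GPS fix z."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x = np.array([0.0, 0.0, 10.0, 0.0])     # initial state: moving along x
P = np.eye(4)
x, P = predict(x, P)                    # predicted future location
x, P = update(x, P, np.array([1.02, 0.05]))  # correct with a GPS measurement
```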
Pub Date: 2017-05-01 | DOI: 10.1109/CCAA.2017.8229935
Nidhi Periwal, Keyur Rana
MOOCs (Massive Open Online Courses) are offered on the web and have become a focal point for students who prefer e-learning. Despite enormous enrollment, the number of students who drop out of these courses is very high; for MOOCs to succeed, their dropout rates must decrease. Because the proportion of continuing and dropout students varies considerably, a class imbalance problem is observed in nearly all MOOC datasets. Researchers have developed models to predict dropout students in MOOCs using different techniques. The features that drive these models can be obtained during registration and from students' interaction with the MOOC portal, and the models' predictions allow appropriate actions to be taken to retain students. In this paper, we create four models using various machine learning techniques over a publicly available dataset. After empirical analysis and evaluation of these models, we found that the model built with the Naïve Bayes technique performed well on the imbalanced class data of MOOCs.
Title: "An empirical comparison of models for dropout prophecy in MOOCs," 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 906-911.
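As a hedged illustration of the modelling step (the paper's dataset, features, and Naïve Bayes variant are not specified in the abstract), a Gaussian Naïve Bayes classifier on a synthetic, imbalanced stand-in dataset with scikit-learn might look like this:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical features, e.g. days active, videos watched, forum posts.
rng = np.random.default_rng(0)
X = rng.random((1000, 3))
y = (rng.random(1000) < 0.8).astype(int)   # 1 = dropout; ~80/20 imbalance
X[y == 1, 0] *= 0.5                        # give one feature some signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```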
Pub Date: 2017-05-01 | DOI: 10.1109/CCAA.2017.8230010
Gaurav Tripathi, Bhawna Sharma, S. Rajvanshi
The Internet of Things (IoT) has provided a technological platform for purposeful connectivity. IoT allows smart devices and sensors to sense, connect, and control devices, even remotely. Development in this field has been enormous, and these solutions scale to very large deployments. The reach of IoT is growing fast and is predicted to extend to every sector of the computing world; we are already converging towards smart homes, smart highways, and smart cities. The defense sector of any nation is also affected by these developments. Defense solutions are primarily based on sensors and their deployment, and the primary aim of sensor data is to derive information suitable for tactical decisions and analysis in the future battlefield environment. From a soldier's vital health parameters to weapon, ammunition, and location status, every piece of data has a purposeful meaning and is of particular importance to the tactical commander in the command center. We propose a novel mechanism that combines the Internet of Things with an emerging graph database to build a better decision support system, creating situational awareness of every parameter of the soldiers on the battlefield. We present a simulated use-case scenario of the future battlefield in which the graph database is queried for situational-awareness patterns that give a tactical advantage over opponents.
Title: "A combination of Internet of Things (IoT) and graph database for future battlefield systems," 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 1252-1257.
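The abstract names no specific graph database, so the following sketch uses networkx as a stand-in to show the kind of situational-awareness query described; the node schema, attribute names, and thresholds are invented for illustration.

```python
import networkx as nx

# Hypothetical schema: soldiers as nodes, tactical-network links as edges;
# node attributes carry the latest sensor readings.
g = nx.Graph()
g.add_node("soldier_1", role="rifleman", heart_rate=128, ammo=12, loc=(28.61, 77.21))
g.add_node("soldier_2", role="medic", heart_rate=85, ammo=40, loc=(28.62, 77.20))
g.add_edge("soldier_1", "soldier_2", link="squad_radio")

# Situational-awareness query: soldiers under stress and low on ammunition.
at_risk = [n for n, d in g.nodes(data=True)
           if d.get("heart_rate", 0) > 120 and d.get("ammo", 0) < 20]
print(at_risk)  # ['soldier_1']

# Who can reach them over the tactical network?
for n in at_risk:
    print(n, "->", list(g.neighbors(n)))
```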
Pub Date: 2017-05-01 | DOI: 10.1109/CCAA.2017.8229944
Meena Belwal, Sudarshan TSB
Recent developments in the software industry, such as Microsoft using FPGAs (Field Programmable Gate Arrays) to accelerate its Bing search engine and Intel's initiative to place its CPU alongside an Altera FPGA on the same chip, indicate FPGAs' potential and growing demand in the field of high-performance computing. FPGAs provide accelerated computation due to their flexible architecture. However, this creates challenges for the system designer, as designs that are efficient in latency, power, and energy demand hardware programming expertise, and hardware coding is a time-consuming and error-prone task. High-Level Synthesis (HLS) addresses these challenges by enabling programmers to code in high-level languages (HLLs) such as C, C++, SystemC, and CUDA, translating this code to a hardware description language such as Verilog or VHDL. Even though HLS tools provide several optimizations, their performance is limited by implementation constraints: software constructs widely used in HLLs, such as dynamic memory allocation, pointer-based data structures, and recursion, are very hard to implement well in hardware, restricting the performance of HLS. Source-to-source translation is a mechanism that optimizes HLL code so that the compiler can optimize it further. This article investigates whether source-to-source translation, widely used for HLLs, can also benefit high-level synthesis. For this study, the Bones source-to-source compiler was selected to translate C code into optimized C and OpenMP code. The three code versions (C, optimized C, and OpenMP) were synthesized with the LegUp HLS tool for three benchmarks; performance statistics were measured for all nine cases, and the analysis covered speedup, area reduction, power, and energy consumption. OpenMP code performed better than the original C code in execution time (speedup range 1.86–3.49), area (gain range 1–6.55), and energy (gain range 1.86–3.55). However, the optimized C code did not always outperform the original C code in execution time (speedup range 0.27–3.08), area (gain range 0.83–5.7), or energy (gain range 0.27–3.13). The observed power statistics were almost the same for all three input versions of the code.
Title: "Source-to-source translation: Impact on the performance of high level synthesis," 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 951-956.
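Bones itself recognizes algorithmic patterns in C code and emits optimized C or OpenMP; as a language-neutral illustration of the general source-to-source idea (parse, transform the AST, re-emit source), here is a toy constant-folding pass written with Python's ast module. It is not the Bones tool and performs a much simpler transformation.

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Fold arithmetic on constants, e.g. x * (3 * 8) -> x * 24."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first, bottom-up
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            try:
                value = eval(compile(ast.Expression(node), "<fold>", "eval"))
                return ast.copy_location(ast.Constant(value), node)
            except Exception:
                pass  # leave non-foldable operations untouched
        return node

src = "y = x * (3 * 8) + (2 + 2)"
tree = ConstantFolder().visit(ast.parse(src))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # y = x * 24 + 4
```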
Pub Date: 2017-05-01 | DOI: 10.1109/CCAA.2017.8229942
Bababe B. Adam, A. Jha, Rajiv Kumar
The comfort of homes and of society is enhanced by the “things” that surround them. These things are connected to each other, directly or indirectly, via the Internet of Things. Full remote control of these devices, with reasonable precision within the network whenever required, is a key element of home automation, and many aspects of home automation still need development. This research presents a solution for precise, direct control and automatic detection of the current state of devices using a microcontroller, driven through an Android application. It also gives a practical implementation of home automation over Wi-Fi and compares it with other technologies.
Title: "Touch-n-play: An intelligent home automation system," 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 940-944.
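A hedged sketch of the control path: the paper's microcontroller firmware is not shown in the abstract, so a small Python HTTP endpoint stands in for it below; the Android app would issue requests of the same shape over Wi-Fi. The URL scheme and device names are assumptions for illustration only.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory stand-in for device pins driven by the microcontroller.
devices = {"lamp": False, "fan": False}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /toggle/lamp flips the lamp; any GET reports current state,
        # which gives the app "automatic detection of current state".
        parts = self.path.strip("/").split("/")
        if parts[0] == "toggle" and len(parts) == 2 and parts[1] in devices:
            devices[parts[1]] = not devices[parts[1]]
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(str(devices).encode())

HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```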
Pub Date: 2017-05-01 | DOI: 10.1109/CCAA.2017.8229958
R. P. Tripathi, G. Mishra
Developments in technology are introducing various methods for tele-cardiology, which spans many applications and is one of the fastest-growing fields in telemedicine. Tele-cardiology procedures record extremely large amounts of real-time ECG data, so an efficient, lossless technique for compressing recorded ECG signals is required. In this paper we study and analyze various lossless data compression techniques used for ECG signals. Among time-domain techniques, we analyze two of the most widely used: AZTEC (Amplitude Zone Time Epoch Coding) and the Turning Point (TP) technique. Among transform-based techniques, we study the Discrete Cosine Transform (DCT) combined with Huffman coding, and Empirical Mode Decomposition (EMD). The overall performance of these techniques is analyzed on the basis of two main parameters: the compression ratio (CR) and the percent root-mean-square difference (PRD). We used databases from physionet.org to compute CR and PRD, calculating and comparing both values with all of the above techniques for 28 sets of recorded data.
Title: "Study of various data compression techniques used in lossless compression of ECG signals," 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 1093-1097.
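The two evaluation metrics are standard and can be stated precisely: CR is the ratio of original to compressed data size, and PRD measures reconstruction error relative to signal energy. A short Python sketch follows, with a synthetic stand-in signal and assumed bit widths, since the paper's data details are not reproduced here (note that PRD is sometimes computed with the signal mean subtracted; the plain form is shown).

```python
import numpy as np

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    """CR = size of original data / size of compressed data."""
    return original_bits / compressed_bits

def prd(x: np.ndarray, x_rec: np.ndarray) -> float:
    """Percent root-mean-square difference between the original and
    reconstructed ECG signals (lower is better)."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

# Toy example: a synthetic "ECG" and a slightly distorted reconstruction.
t = np.linspace(0, 1, 360)                 # 360 samples over one second
x = np.sin(2 * np.pi * 1.2 * t)            # stand-in for one heartbeat cycle
x_rec = x + np.random.default_rng(0).normal(0, 0.01, x.shape)
print(f"PRD = {prd(x, x_rec):.2f}%")
print(f"CR  = {compression_ratio(11 * x.size, 4 * x.size):.2f}")  # assumed bit widths
```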
Pub Date: 2017-05-01 | DOI: 10.1109/CCAA.2017.8229965
Ruchi Jayaswal, Jaimala Jha
Retrieval of images using visual features is an active research field in image processing, with applications in business, medical imaging, geographic imaging, and more. In this work, we propose and implement a fused approach to image retrieval using the HSV histogram color feature and the LBP and SFTA texture features of an image. Standardized Euclidean distance is used as the similarity measure. The Wang image repository, comprising 1000 images categorized into 10 classes, is used for experimental evaluation. Experimental outcomes show that the proposed system yields better precision than the other conventional methods, which are also presented in this paper.
Title: "A hybrid approach for image retrieval using visual descriptors," 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 1125-1130.
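The standardized Euclidean distance scales each dimension of the fused descriptor (HSV histogram + LBP + SFTA concatenated) by its variance over the database, so no single feature dominates. A minimal numpy sketch with placeholder feature vectors (feature extraction itself is omitted, and the descriptor length is an assumption):

```python
import numpy as np

def standardized_euclidean(q: np.ndarray, db: np.ndarray) -> np.ndarray:
    """Distance from query descriptor q to every row of db, with each
    feature dimension scaled by its variance across the database."""
    var = db.var(axis=0) + 1e-12          # avoid division by zero
    return np.sqrt(((db - q) ** 2 / var).sum(axis=1))

rng = np.random.default_rng(0)
db = rng.random((1000, 256))              # 1000 images, fused descriptors
q = rng.random(256)                       # query image descriptor
ranked = np.argsort(standardized_euclidean(q, db))
print(ranked[:10])                        # indices of the 10 best matches
```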
Pub Date: 2017-05-01 | DOI: 10.1109/CCAA.2017.8229793
Suvendu Kanungo, A. Shukla
In this digital world we face a flood of data but remain starved of knowledge, so mining is needed to extract the hidden patterns from the vast amounts of available data. Clustering is one such mining tool, addressing this situation through the crucial steps of cluster analysis: the process of grouping patterns into clusters based on similarity. Partition-based clustering algorithms are widely accepted in diverse applications such as pattern analysis, image segmentation, and identification systems. Among the variations of partition-based clustering, the K-means algorithm has attracted much research attention owing to its simplicity and ease of implementation. A severe problem with the algorithm is that it is highly sensitive to the selection of the initial centroids and may converge to a local optimum of the criterion function if the initial centroids are not chosen well. Additionally, it requires prior knowledge of the number of clusters to be formed, and its computation is expensive. K-means is a two-step process comprising an initialization step and an assignment step. This paper works on the initialization step and proposes an efficient enhanced K-means clustering algorithm that eliminates this deficiency of the existing one: a new initialization approach is introduced to draw the initial cluster centers for the K-means algorithm. The paper also compares the proposed technique with the standard K-means technique.
Title: "A novel clustering framework using farthest neighbour approach," 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 164-169.
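The abstract does not spell out the selection rule, so the sketch below shows the common farthest-point variant of such an initialization: seed one centre, then repeatedly add the point farthest from all centres chosen so far. The paper's exact rule may differ.

```python
import numpy as np

def farthest_neighbour_init(X: np.ndarray, k: int) -> np.ndarray:
    """Pick k well-spread initial centroids for K-means."""
    rng = np.random.default_rng(0)
    centres = [X[rng.integers(len(X))]]          # first centre: random point
    for _ in range(k - 1):
        # Distance from each point to its nearest chosen centre.
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centres], axis=0)
        centres.append(X[np.argmax(d)])          # farthest point becomes a centre
    return np.array(centres)

rng = np.random.default_rng(1)
X = rng.random((500, 2))
init = farthest_neighbour_init(X, 4)
print(init)  # deterministic, well-spread initial centroids for K-means
```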
Pub Date: 2017-05-01 | DOI: 10.1109/CCAA.2017.8229815
Nayan Chitransh, C. Mehrotra, A. Singh
With the many technological advancements of recent years, data generation has increased; large amounts of data are being generated, which is a major issue for organizations. Social media, for example, produces a flood of new data each day that is unmanageable with conventional tools. In this paper we discuss the various risks and issues associated with big data. Big Data describes large volumes of data, either structured or unstructured; today, data comes from so many sources that usual database systems cannot handle it, which is why Big Data platforms are needed. However, Big Data requires a huge commitment of hardware and processing resources, which makes it costly. To provide Big Data services to every user, we take help from cloud computing, which makes Big Data implementations cheaper. Cloud computing is a technology that offers shared computing resources, such as servers and devices, instead of personally owned ones, with services delivered via the Internet.
Title: "Risk for big data in the cloud," 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 277-282.