Pub Date: 2014-03-01; DOI: 10.1109/ICISCON.2014.6965228
P. Sethi, D. Juneja, N. Chauhan
Data fusion using software agents in WSNs is an emerging area of interest for researchers from both the sensor and software-agent communities. Data fusion is one of the most desired processes in information processing for wireless sensor networks, as the requirements of increased network lifetime and reduced, balanced energy consumption still predominate over all other factors affecting the performance of sensors deployed in nondeterministic areas. In fact, data fusion serves as a solution to a very common problem known as the sensor hole problem. Although various data fusion strategies exist to address the issue, all of them suffer from one drawback or another. Hence, this paper begins with a comparative overview of various data fusion strategies that exploit software agents, and then proposes a hybrid approach integrating the strengths of both parallel and serial fusion.
{"title":"Hybrid data fusion using software agents for an event driven application of wireless sensor networks","authors":"P. Sethi, D. Juneja, N. Chauhan","doi":"10.1109/ICISCON.2014.6965228","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965228","url":null,"abstract":"Data Fusion using software agents in WSNs is a naive area of interest for researchers belonging to the community of sensors as well as software agents. Data Fusion is one of the most desired processes while dealing with information processing in wireless sensor networks as the requirement of increased lifetime of network, reduced and balanced energy consumption still pre-dominates all other factors affecting the performance of sensors deployed in nondeterministic areas. In fact, data fusion serves as a solution to very common problem known as sensor hole problem. Although there exist various data fusion strategies to address the issue, but all of these suffer from one or the other drawback. Hence, this paper initially begins with a comparative overview of various data fusion strategies exploring the exploitation of software agents and later proposes a hybrid approach integrating the pros of both parallel and serial fusion approaches for achieving the same.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131226793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
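The parallel/serial distinction the abstract draws can be sketched in a few lines. The following is a hypothetical illustration, not the authors' algorithm: in parallel fusion every node reports to a fusion centre that combines all readings in one step, while in serial fusion a running estimate travels node to node along a chain; with a mean combiner both arrive at the same fused value.

```python
from statistics import mean

def parallel_fuse(readings):
    # Parallel fusion: all nodes report to one fusion centre, which
    # combines every reading in a single step (here: the mean).
    return mean(readings)

def serial_fuse(readings):
    # Serial fusion: a running estimate travels node to node along a
    # chain; each node folds its own reading in (incremental mean).
    estimate = readings[0]
    for i, r in enumerate(readings[1:], start=2):
        estimate += (r - estimate) / i
    return estimate
```

The trade-off a hybrid scheme balances is that the parallel form concentrates traffic at the fusion centre, while the serial form spreads it along the chain at the cost of latency.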
Pub Date: 2014-03-01; DOI: 10.1109/ICISCON.2014.6965219
R. Popli, N. Chauhan
The estimation of cost and effort is a critical task in an Agile environment because of its dynamic nature. It has been observed that current Agile methods mostly depend on historical project data for the estimation of cost, size, effort and duration, and these methods are not efficient in the absence of such data. There is therefore a need for an algorithmic method that can calculate the cost and effort of a project. In our previous work [2], some project-related and people-related factors were considered, on the basis of which the size and duration of the project were calculated. However, several other resistance factors may also affect estimation in the dynamic Agile environment. In this work an algorithmic estimation method is proposed that calculates a more accurate release date, cost, effort and duration for the project by considering these factors. The effectiveness and feasibility of the proposed algorithm are shown by considering two cases in which different levels of the factors are taken and compared.
{"title":"Estimation in agile environment using resistance factors","authors":"R. Popli, N. Chauhan","doi":"10.1109/ICISCON.2014.6965219","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965219","url":null,"abstract":"The estimation of cost and effort is a critical task in Agile environment because of its dynamic nature. It has been observed that the current Agile methods mostly depends on historical data of project for estimation of cost, size, effort and duration and these methods are not efficient in absence of historical data. So there is need of an algorithmic method, which can calculate cost and effort of the project. In our previous work [2] some project-related factors and people-related factors were considered on the basis of which size as well as duration of the project was calculated. However, several other resistance factors may also affects estimation in the Agile dynamic environment. In this work an Algorithmic estimation method is being proposed that calculates more accurate release date, cost, effort and duration for the project by considering various factors. The effectiveness and feasibility of the proposed algorithm has been shown by considering two cases in which different levels of factors are taken and compared.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"378 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124720341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
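A rough sketch of the kind of calculation such a method performs (the factor names, weights and cost model below are invented for illustration and are not taken from the paper): resistance factors can be folded into team velocity as multipliers, from which the sprint count, duration and cost follow.

```python
import math

def adjusted_velocity(base_velocity, resistance_factors):
    # Each resistance factor is a multiplier in (0, 1]; 1.0 means no drag.
    v = base_velocity
    for f in resistance_factors.values():
        v *= f
    return v

def release_estimate(story_points, base_velocity, resistance_factors,
                     sprint_days=10, cost_per_day=1000.0):
    # Hypothetical cost model: fixed-length sprints at a flat daily rate.
    v = adjusted_velocity(base_velocity, resistance_factors)
    sprints = math.ceil(story_points / v)
    return {"sprints": sprints,
            "duration_days": sprints * sprint_days,
            "cost": sprints * sprint_days * cost_per_day}
```

For example, 100 story points at base velocity 20 with two resistance factors of 0.9 and 0.8 yields an effective velocity of 14.4 points per sprint, i.e. 7 sprints.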
Pub Date: 2014-03-01; DOI: 10.1109/ICISCON.2014.6965223
Bidyut Jyoti Saha, C. Pradhan, K. Kabi, A. K. Bisoi
Copyright protection is a major concern in digital data transmission over public channels, and watermarking techniques are extensively used for this purpose. This paper proposes a robust watermarking scheme using Arnold's transformation and the RSA algorithm in the DWT domain. The combined encryption provides additional security to the watermark before the embedding phase. The PSNR values show that the difference between the original and watermarked cover images is minimal. Similarly, the NC values show the robustness of the proposed technique against common attacks such as JPEG compression and scaling. Thus, this combination of Arnold's transformation and the RSA algorithm can be used when higher security of the watermark signal is required.
{"title":"Robust watermarking technique using Arnold's transformation and RSA in discrete wavelets","authors":"Bidyut Jyoti Saha, C. Pradhan, K. Kabi, A. K. Bisoi","doi":"10.1109/ICISCON.2014.6965223","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965223","url":null,"abstract":"Copyright protection is a major concern in digital data transmission over public channel. For copyright protection, watermarking technique is extensively used. A robust watermarking scheme using Arnold's transformation and RSA algorithm in the DWT domain has been proposed in our paper. The combined encryption has been taken to provide more security to the watermark before the embedding phase. The PSNR value shows the difference between original cover and embedded cover is minimal. Similarly, NC values show the robustness and resistance capability of the proposed technique from the common attacks such as JPEG compression, scaling etc. Thus, this combined version of Arnold's transformation and RSA algorithm can be used in case of higher security requirement of the watermark signal.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127840252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
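Arnold's transformation itself is compact enough to show. A minimal sketch of the pixel-scrambling step on a square greyscale image (the RSA and DWT stages of the paper's pipeline are omitted):

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    # Arnold's cat map on an N x N image: (x, y) -> (x + y, x + 2y) mod N.
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_unscramble(img, iterations=1):
    # Inverse map, from the inverse matrix [[2, -1], [-1, 1]] (mod N).
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = nxt
    return out
```

The map is a bijection on the pixel grid, so applying the inverse the same number of times recovers the original image exactly; in the paper's setting the iteration count acts as part of the key.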
Pub Date: 2014-03-01; DOI: 10.1109/ICISCON.2014.6965218
Ashlesha Gupta, A. Dixit, A. Sharma
The WWW is a decentralized, distributed and heterogeneous information resource. With the increased availability of information through the WWW, it is very difficult to read all documents to retrieve the desired results; there is therefore a need for summarization methods that can present the contents of a given document in a precise manner. The keywords of a document can provide a compact representation of its content, and various algorithms and systems for automatic keyword extraction have been proposed in the recent past. However, existing solutions require either training models or domain-specific information. To address these shortcomings, an innovative hybrid approach for automatic keyword extraction using the statistical and linguistic features of a document is proposed. This technique works on an individual document without any prior parameter tuning and takes full advantage of all the features of the document to extract keywords. The extracted keywords can then assist in domain-specific indexing. The performance of the proposed method compared to existing keyword extraction tools, such as Dream web design, in terms of precision and recall is also presented in this paper.
{"title":"A novel statistical and linguistic features based technique for keyword extraction","authors":"Ashlesha Gupta, A. Dixit, A. Sharma","doi":"10.1109/ICISCON.2014.6965218","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965218","url":null,"abstract":"WWW is a decentralized, distributed and heterogeneous information resource. With increased availability of information through WWW, it is very difficult to read all documents to retrieve the desired results; therefore there is a need of summarization methods which can help in providing contents of a given document in a precise manner. Keywords of a document may provide a compact representation of a document's content. As a result various algorithms and systems intended to carry out automatic keywords extraction have been proposed in the recent past. However, the existing solutions require either training models or domain specific information for automatic keyword extraction. To cater to these shortcomings an innovative hybrid approach for automatic keyword extraction using statistical and linguistic features of a document has been proposed. This statistical and linguistic technique based keyword extraction works on an individual document without any prior parameter change and takes full advantage of all the features of the document to extract the keywords. The extracted keywords can than assist in domain specific indexing. The performance of the proposed method as compared to existing Keyword Extraction tools such as Dream web design etc. in terms of Precision and Recall are also presented in this paper.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116868634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
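A toy version of statistical keyword scoring follows. The feature set here (term frequency, a boost for early first occurrence, and a stop-word filter) is a stand-in for the paper's full statistical and linguistic feature set, not a reproduction of it:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on", "that"}

def extract_keywords(text, top_n=5):
    # Score = term frequency weighted by position of first occurrence
    # (terms appearing early in the document get a boost).
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOPWORDS]
    tf = Counter(words)
    first = {}
    for i, w in enumerate(words):
        first.setdefault(w, i)
    n = len(words)
    scores = {w: tf[w] * (1.0 + (n - first[w]) / n) for w in tf}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```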
Pub Date: 2014-03-01; DOI: 10.1109/ICISCON.2014.6965230
Arvind Kumar, Reetika Nagar, A. Baghel
The Agile software development methodology has gained importance in recent years. The Agile philosophy promotes incremental and iterative design and implementation, with each iteration delivering one or more product features. Release planning is a main activity in any Agile approach. The main factors that need to be considered are the technical precedence inherent in the requirements, the feature's business value as perceived by project stakeholders, team capacity, and the effort required to complete each requirement. There are multiple tools available in industry to manage projects, but they fail to provide planning that considers all of these factors. Genetic algorithms (GAs) have arisen from concepts introduced from the natural process of biological evolution; a GA uses selection, crossover and mutation to evolve a solution to a given problem. In this paper an attempt has been made to formalize release planning, and an approach is then proposed for release planning using genetic algorithms.
{"title":"A genetic algorithm approach to release planning in agile environment","authors":"Arvind Kumar, Reetika Nagar, A. Baghel","doi":"10.1109/ICISCON.2014.6965230","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965230","url":null,"abstract":"Agile software development methodology, got importance in recent years. The agile philosophy promotes incremental and iterative design and implementation. Each iterations, delivers one or more product features. Release planning is a main activity in any of Agile approach. Main factors that need to be considered are the technical precedence inherent in the requirements; the feature's business value perceived by project stake holders, team capacity and required effort to complete the requirement. There are multiple tools available in industry to manage project but they are lacking to provide planning while considering all these factors. Genetic algorithms (GA) have arisen from concepts, introduced from the natural process of biological evolution. GA uses selection, crossover and mutation to evolve a solution to the given problem. In this paper an attempt has been made to formalize the release planning. Then an approach is proposed to do Release planning using genetic algorithms.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134371118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
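The selection/crossover/mutation loop the abstract names can be sketched for a simplified version of the problem, where a release is a bit-vector of selected features and fitness is total business value subject to a team-capacity constraint. The encoding, operators and parameters below are generic GA choices, not the authors' formalization:

```python
import random

def ga_release_plan(values, efforts, capacity,
                    pop_size=40, generations=60, seed=1):
    # Chromosome: one bit per candidate feature (1 = include in release).
    # Fitness: total business value, zeroed if team capacity is exceeded.
    rng = random.Random(seed)
    n = len(values)

    def fitness(ch):
        effort = sum(e for e, bit in zip(efforts, ch) if bit)
        if effort > capacity:
            return 0
        return sum(v for v, bit in zip(values, ch) if bit)

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection, one-point crossover, bit-flip mutation.
            a = max(rng.sample(pop, 3), key=fitness)
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:
                i = rng.randrange(n)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

Technical precedence between requirements, which the paper also considers, could be added as a further penalty in the fitness function.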
Pub Date: 2014-03-01; DOI: 10.1109/ICISCON.2014.6965216
Ashutosh Rai, U. Shrawankar
Languages in India play an important role as a communication medium. As a person travels from one state to another, he or she faces difficulty communicating with other communities in another language. The Multilanguage Voice Dictionary is therefore proposed for developing an Indian-language machine translation system. The application comprises two algorithms: a word-based translation model combined with a rule-based model is used as the main technique. The word-based translation model is implemented for verbs and other types of words, while the rule-based method is used particularly for out-of-vocabulary (OOV) words, which must be used as-is because they cannot be translated. This is performed by extending a lexicon and writing a set of sample words. The translation is done through templates that associate entries in the lexicon with words in the other language. Speech processing, such as voice input and output, is implemented using a speech simulator. For the alphabets of a language, a language word library is used in this application.
{"title":"Multilanguage voice dictionary for ubiquitous environment","authors":"Ashutosh Rai, U. Shrawankar","doi":"10.1109/ICISCON.2014.6965216","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965216","url":null,"abstract":"Languages in India play an important role as a communication medium. As the person is traveling from one state to another s/he faces difficulty to communicate in other language with other community. So, the Multilanguage Voice Dictionary is applying for developing Indian language Machine Translation system. This application comprises of two algorithms. The word based translation model with the rule-based model is used as the main technique. The word based translation model is implementing for verb and other type of words. The rule based method is particularly used for Out Of Vocabulary (OOV) words which have to be used as it can't be translated. This is performing by extending a lexicon and writing a set of sample words. The translation is doing through templates associated with the lexicon with the word in other language. The speech processing such as input and output in voice form is to be implemented using speech simulator. For the alphabets of a language, the language word library is using in this application.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125941044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
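The word-based lookup with a rule-based OOV fallback can be sketched as follows. The lexicon entries in the example are invented, romanized illustrations, not taken from the paper:

```python
def translate(sentence, lexicon, oov_rule=lambda w: w):
    # Word-based model: direct lexicon lookup per word; the rule-based
    # fallback keeps out-of-vocabulary (OOV) words as-is, e.g. proper nouns.
    return " ".join(lexicon.get(w, oov_rule(w)) for w in sentence.split())
```

In the full system described by the abstract, the text side would be wrapped by speech recognition on input and speech synthesis on output.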
Pub Date: 2014-03-01; DOI: 10.1109/ICISCON.2014.6965233
A. Mukherjee, Anand Maheshwari, Satyabrata Maiti, A. Datta
Most of the radio frequency spectrum is not being utilized efficiently. Utilization can be improved by allowing unlicensed users to exploit the radio frequency spectrum without creating any interference to the primary users. For cognitive radio, the main issue is to sense and then identify all spectrum holes present in the environment. In this paper, we propose quantized data fusion sensing applied through a Hidden Markov Model (HMM). It does not need any synchronizing signals from the primary user or from the secondary transmitter during operation. Simulation results are presented, showing how error rates vary with the activity of the Primary User (PU).
{"title":"Spectrum sensing for cognitive radio using quantized data fusion and Hidden Markov model","authors":"A. Mukherjee, Anand Maheshwari, Satyabrata Maiti, A. Datta","doi":"10.1109/ICISCON.2014.6965233","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965233","url":null,"abstract":"Most of the radio frequency spectrum is not being utilized efficiently. The utilization can be improved by including unlicensed users to exploit the radio frequency spectrum by not creating any interference to the primary users. For Cognitive Radio, the main issue is to sense and then identify all spectrum holes present in the environment. In this paper, we are proposing the Quantized data fusion sensing which is applied through the Hidden Markov Model (HMM). It does not need any kind of synchronizing signals from the Primary user as well as with the secondary transmitter in a working condition. Simulation results with error rates are improved by the activity of Primary User (PU) and have been presented.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121573984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
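The HMM side of such a detector can be illustrated with the standard forward algorithm over quantized energy observations. The transition and emission probabilities below are invented for illustration (three quantized energy levels, two channel states); they are not the paper's parameters:

```python
def forward(obs, start, trans, emit):
    # Forward algorithm: posterior over the current channel state given
    # a sequence of quantized energy observations.
    # States: 0 = channel idle, 1 = primary user (PU) active.
    alpha = [start[s] * emit[s][obs[0]] for s in (0, 1)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[p] * trans[p][s] for p in (0, 1))
                 for s in (0, 1)]
    total = sum(alpha)
    return [a / total for a in alpha]

# Illustrative parameters: idle emits low energy levels, PU-active high ones.
START = [0.5, 0.5]
TRANS = [[0.9, 0.1], [0.2, 0.8]]           # P(next state | state)
EMIT = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]  # P(level 0..2 | state)
```

A run of high quantized energy levels drives the posterior toward "PU active", which is the decision a secondary user would act on.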
Pub Date: 2014-03-01; DOI: 10.1109/ICISCON.2014.6965226
T. Jain, D. Saini, S. Bhooshan
Clustering is a useful mechanism in wireless sensor networks that helps to cope with scalability and data transmission problems. The motivation of our research is to provide efficient clustering using hierarchical agglomerative clustering (HAC). If the distance between the sensing nodes is calculated from their locations, it is quantitative HAC. This paper compares the various agglomerative clustering techniques applied in a wireless sensor network using quantitative data. The simulations are done in MATLAB, and the different protocols are compared using dendrograms.
{"title":"Performance analysis of hierarchical agglomerative clustering in a wireless sensor network using quantitative data","authors":"T. Jain, D. Saini, S. Bhooshan","doi":"10.1109/ICISCON.2014.6965226","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965226","url":null,"abstract":"Clustering is a useful mechanism in wireless sensor networks which helps to cope with scalability and data transmission problems. The motivation of our research is to provide efficient clustering using Hierarchical agglomerative clustering (HAC). If the distance between the sensing nodes is calculated using their location then it's quantitative HAC. This paper compares the various agglomerative clustering techniques applied in a wireless sensor network using the quantitative data. The simulations are done in MATLAB and the comparisons are made between the different protocols using dendrograms.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116608522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
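The linkage variants being compared can be illustrated with a tiny pure-Python HAC over node coordinates. This is a didactic sketch only; the paper's MATLAB simulations and dendrogram comparisons are not reproduced here:

```python
import math

def hac(points, k, linkage="single"):
    # Quantitative HAC: inter-cluster distance is computed from node
    # coordinates; "single" uses the closest pair, "complete" the farthest.
    clusters = [[i] for i in range(len(points))]
    d = lambda a, b: math.dist(points[a], points[b])
    agg = min if linkage == "single" else max
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                dist = agg(d(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or dist < best[0]:
                    best = (dist, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters
```

In a WSN setting the resulting clusters would group nearby sensing nodes under a common cluster head, cutting long-range transmissions.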
Pub Date: 2014-03-01; DOI: 10.1109/ICISCON.2014.6965227
Kunal Kumar Kabi, C. Pradhan, Bidyut Jyoti Saha, Ajay Kumar Bisoi
Securing digital images during transmission is very important in the current era. For this purpose, cryptographic techniques as well as chaotic maps can be applied. In this paper, we give a detailed study of image encryption using different 2D chaotic maps, namely the Arnold 2D cat map, Baker map, Henon map, cross chaos map and 2D logistic map. The security analysis of these techniques has been done with the help of NPCR (Number of Pixels Change Rate) and UACI (Unified Average Changing Intensity) values. The experimental NPCR and UACI results show the effectiveness of the encryption processes of the different techniques.
{"title":"Comparative study of image encryption using 2D chaotic map","authors":"Kunal Kumar Kabi, C. Pradhan, Bidyut Jyoti Saha, Ajay Kumar Bisoi","doi":"10.1109/ICISCON.2014.6965227","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965227","url":null,"abstract":"Securing digital image during transmission is very much important in the current era. For this purpose, cryptographic techniques as well as chaotic maps can be applicable. In this paper, we have given a detailed study of the image encryption using different 2D chaotic maps such as Arnold 2D cat map, Baker map, Henon map, Cross chaos map and 2D logistic map. The security analysis of these techniques has been done by the help of NPCR (Number of Pixels Change Rate) and UACI (Unified Average Changing Intensity) values. The experimental results of NPCR and UACI show the effectiveness of the encryption processes of different techniques.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128866536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
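The two metrics used for the security analysis are standard and easy to state precisely. For two same-shaped greyscale images (e.g. ciphertexts of originals differing in one pixel), a minimal sketch:

```python
import numpy as np

def npcr(c1, c2):
    # NPCR: percentage of pixel positions whose values differ.
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2, levels=255):
    # UACI: mean absolute intensity difference, normalised by the grey range.
    return 100.0 * np.mean(np.abs(c1.astype(int) - c2.astype(int)) / levels)
```

High NPCR (near 100%) and UACI near the random-image benchmark indicate that a one-pixel change in the plaintext diffuses across the whole ciphertext.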
Pub Date: 2014-03-01; DOI: 10.1109/ICISCON.2014.6965221
S. Rathi, C. A. Dhote
Mining frequent itemsets is an important step in the association rule mining process. In this paper we apply a parallel approach in the pre-processing step itself to make the dataset favorable for mining frequent itemsets and hence improve speed and computational efficiency. Due to the data explosion, it is necessary to develop a system that can handle data at scale, and many efficient sequential and parallel algorithms have been proposed in recent years. We first explore some major algorithms proposed for mining frequent itemsets. Sorting the dataset in parallel in the pre-processing step and pruning the infrequent itemsets improves the efficiency of our algorithm. Owing to the drastic improvement in computer architectures and performance over the years, high-performance computing is gaining importance, and we use one such technique in our implementation: CUDA.
{"title":"Using parallel approach in pre-processing to improve frequent pattern growth algorithm","authors":"S. Rathi, C. A. Dhote","doi":"10.1109/ICISCON.2014.6965221","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965221","url":null,"abstract":"Mining frequent itemset is an important step in association rule mining process. In this paper we are applying a parallel approach in the pre-processing step itself to make the dataset favorable for mining frequent itemsets and hence improve the speed and computation power. Due to data explosion, it is necessary to develop a system that can handle scalable data. Many efficient sequential and parallel algorithms were proposed in the recent years. We first explore some major algorithms proposed for mining frequent itemsets. Sorting the dataset in the pre-processing step parallely and pruning the infrequent itemsets improves the efficiency of our algorithm. Due to the drastic improvement in computer architectures and computer performance over the years, high performance computing is gaining importance and we are using one such technique in our implementation: CUDA.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133335275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
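The pre-processing pass the abstract describes can be sketched in Python, with threads standing in for the paper's CUDA kernels. The two-pass structure shown (parallel frequency counting, then pruning infrequent items and frequency-ordering each transaction) is what FP-tree construction expects:

```python
from collections import Counter
from multiprocessing.pool import ThreadPool

def preprocess(transactions, min_support, workers=4):
    # Pass 1: global item frequencies, with chunks counted in parallel.
    chunk = max(1, len(transactions) // workers)
    chunks = [transactions[i:i + chunk]
              for i in range(0, len(transactions), chunk)]
    with ThreadPool(workers) as pool:
        counts = sum(pool.map(lambda c: Counter(x for t in c for x in t),
                              chunks), Counter())
    frequent = {item for item, n in counts.items() if n >= min_support}
    # Pass 2: prune infrequent items and order each transaction by
    # descending global frequency, as FP-tree construction expects.
    return [sorted((x for x in t if x in frequent),
                   key=lambda x: -counts[x])
            for t in transactions]
```

On a GPU the same structure maps naturally to a per-chunk counting kernel followed by a parallel sort, which is the flavor of speedup the paper targets with CUDA.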