
Latest publications from the 2014 International Conference on Contemporary Computing and Informatics (IC3I)

Modified differential evolution based 0/1 clustering for classification of data points: Using modified new point symmetry based distance and dynamically controlled parameters
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019722
Vikram Singh, S. Saha
Identification of clusters is a complex task because the clusters found in data sets have arbitrary shapes and sizes. The task becomes challenging because identifying all the clusters in a single data set requires different types of algorithms based on different distance measures. Symmetry is a common property of objects, and many of the clusters present in a data set can be identified using point symmetry based distances. Point symmetry based and Euclidean distance measures are each best at identifying clusters in particular cases, but not together. This article proposes a solution that analyzes and removes the shortcomings of both types of distance measures and then merges the improved versions into one to get the best of both. The introduction of a differential evolution based optimization technique with dynamic parameter selection further enhances the quality of the results. In this paper the existing point symmetry based distance is modified and is also enabled to correctly classify clusters based on Euclidean distance without a dynamic switch between the two methods, which speeds up the computation of the proposed clustering technique. The efficiency of the algorithm is established by analyzing the results obtained on two diversified test data sets. To highlight the improvements achieved by the proposed algorithm, its results are compared with those of algorithms based purely on Euclidean distance, on the new point symmetry based distance, and on the proposed modified new point symmetry based distance.
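The core of a point-symmetry-based distance can be sketched as follows. This is a minimal illustration of the general formulation (reflect a point about a candidate centre and check whether the reflection is supported by nearby data points, weighted by the Euclidean distance), not the authors' modified measure or their differential-evolution optimizer; the function name, the choice of k, and the toy data are assumptions.

```python
import numpy as np

def point_symmetry_distance(x, center, data, k=2):
    """Sketch of a point-symmetry-based distance (general form, not the
    authors' modified variant): reflect x about the candidate center and
    measure how well the reflected point is supported by the data."""
    reflected = 2.0 * center - x                      # mirror image of x w.r.t. the center
    d = np.linalg.norm(data - reflected, axis=1)      # distances from the reflection to all points
    symmetry_term = np.sort(d)[:k].mean()             # small if a symmetric partner exists
    return symmetry_term * np.linalg.norm(x - center) # weighted by the Euclidean distance

# toy usage: score one point against two candidate centers
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 2))
x = data[0]
for c in (np.array([0.0, 0.0]), np.array([3.0, 3.0])):
    print(c, point_symmetry_distance(x, c, data))
```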
Citations: 2
NFC technology: Current and future trends in India
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019680
Sudipta Dhar, A. Dasgupta
Near Field Communication (NFC) is an emerging short-range wireless communication technology built on existing standards of the Radio Frequency Identification (RFID) infrastructure. In this paper we give an overview of NFC technology and discuss its adoption worldwide. We then focus on current trends and applications of NFC technology in India. Both existing NFC applications and some conceivable future scenarios are analyzed in this connection. Furthermore, security concerns, difficulties, and present conflicts are also discussed.
Citations: 6
Computing in engineering education: The current scenario
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019770
A. B. Raju, Satish Annigeri
In all branches of engineering, computational work and simulation are now seen as the third vertex of a triangle, complementing observation and theory. This requires an engineering student to know computational concepts as well as a whole new language for expressing them. These are challenging tasks, and students may face difficulties in learning the finer details of the language. It is essential to make computing skill an integral part of engineering education rather than treating it as an add-on. This paper reviews the current approaches to teaching computation skills to students of the core engineering branches. It identifies the need for teaching this skill, its components, and the available choices of programming languages for teaching it. It suggests adopting Python as the preferred language for teaching computation by comparing its merits and demerits against the other available choices. A complete rethinking of how engineering education approaches computation skill is needed in order to arrive at a holistic and integrated approach.
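As a purely illustrative aside (not taken from the paper), the kind of engineering computation the authors have in mind can be expressed in a few lines of Python, which is part of the argument for adopting it as the teaching language; the example below, an ideal projectile range, and its parameter values are hypothetical.

```python
# Illustrative only: a short engineering computation written in Python,
# the language the paper recommends for teaching computation.
import math

def projectile_range(v0, angle_deg, g=9.81):
    """Horizontal range of an ideal projectile (no drag)."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

print(projectile_range(20.0, 45.0))  # roughly 40.8 m
```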
Citations: 5
Data mining algorithms for Web-services classification
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019644
A. Mustafa, Y. S. Kumaraswamy
Web services are software components that communicate using pervasive, standards-based Web technologies, including HTTP and XML-based messaging. They are designed to be accessed by other applications and vary in complexity from simple operations, such as checking a bank account balance online, to complex processes running Customer Relationship Management (CRM) or Enterprise Resource Planning (ERP) systems. Because they are based on open standards such as HTTP and XML-based protocols including SOAP and WSDL, Web services are independent of hardware, programming language, and operating system. In this paper, Naïve Bayes, C4.5, and Random Forest methods are used as classifiers and their efficiency for Web-services classification is compared.
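A minimal sketch of comparing the three classifiers named in the abstract is shown below, assuming scikit-learn as the toolchain (the paper does not state its implementation); C4.5 is approximated by an entropy-based decision tree, and the bundled Iris data set stands in for the Web-service data.

```python
from sklearn.datasets import load_iris          # stand-in dataset; the paper uses Web-service data
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
models = {
    "Naive Bayes": GaussianNB(),
    "C4.5 (approx.)": DecisionTreeClassifier(criterion="entropy"),  # entropy-based tree as a C4.5 stand-in
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```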
Citations: 7
On optimal power allocation for minimizing interference in relay assisted cognitive radio networks
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019823
S. Maity, Chinmoy Maji
This paper focuses on minimizing interference to the primary user (PU) through an optimal power allocation strategy for source and relay nodes in a multihop cognitive radio network (CRN), under constraints on outage probability (successful delivery) and on the data rate over the source-destination link. This objective is also studied in the framework of enhancing the lifetime of the CRN. Extensive simulations are carried out for both energy-aware (EA) and non-energy-aware (NEA) power allocation schemes. Simulation results show that NEA-based power allocation offers better capacity than the EA scheme at the cost of slightly increased interference to the PU. The results also show a three-dimensional (3D) relative trade-off among data transmission capacity, network lifetime, and total transmission power.
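A toy version of the optimization the abstract describes might look as follows: choose source and relay powers that minimize interference at the primary user subject to a per-hop SNR target standing in for the outage and rate constraints. This is not the paper's algorithm, and the channel gains, noise power, and SNR threshold are invented values.

```python
# Toy formulation (not the paper's algorithm): minimise interference at the
# primary user while each hop of the decode-and-forward link meets a target SNR.
from scipy.optimize import minimize

g_sr, g_rd = 0.8, 0.6        # source->relay and relay->destination gains (assumed)
g_sp, g_rp = 0.3, 0.5        # gains from source/relay towards the primary user (assumed)
noise, snr_min = 1e-3, 10.0  # noise power and per-hop SNR target (assumed)

def interference(p):
    """Total interference power seen by the primary user."""
    return g_sp * p[0] + g_rp * p[1]

constraints = [
    {"type": "ineq", "fun": lambda p: g_sr * p[0] / noise - snr_min},  # hop 1 SNR constraint
    {"type": "ineq", "fun": lambda p: g_rd * p[1] / noise - snr_min},  # hop 2 SNR constraint
]
res = minimize(interference, x0=[0.1, 0.1], bounds=[(0.0, 1.0), (0.0, 1.0)],
               constraints=constraints, method="SLSQP")
print("source power, relay power:", res.x, "interference:", res.fun)
```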
Citations: 0
Big data processing with harnessing hadoop - MapReduce for optimizing analytical workloads
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019818
K. V. Rama Satish, N. Kavya
Nowadays we live with social media data as constant as a heartbeat. The exponential growth of data first presented challenges to cutting-edge businesses such as Google, MSN, Flipkart, Microsoft, Facebook, Twitter, and LinkedIn. Nevertheless, existing big data analytical models for Hadoop comply with MapReduce analytical workloads that process only a small segment of the whole data set, and thus fail to assess the capabilities of the MapReduce model under heavy workloads that process exponentially accumulating data sizes [1]. In social, business, and technical research applications alike, there is a need to process big data efficiently. In this paper, we propose an efficient technique to classify big data from e-mail using the firefly algorithm and a naïve Bayes classifier. The proposed technique comprises two phases: (i) a MapReduce framework for training and (ii) a MapReduce framework for testing. Initially, the input Twitter data is passed to the process that selects suitable features for data classification. The traditional firefly algorithm is applied, and the optimized feature space is adopted for the best fitting results. Once the best feature space is identified through the firefly algorithm, data classification is done using the naïve Bayes classifier. These two processes are effectively distributed based on the MapReduce framework. The experimental results are validated using the evaluation metrics of computation time, accuracy, specificity, and sensitivity. For comparative analysis, the proposed big data classification is compared with existing naïve Bayes and neural network approaches.
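The shape of the proposed pipeline, feature selection followed by naïve Bayes classification, can be sketched as below. A simple random wrapper search stands in for the firefly optimizer, a bundled scikit-learn data set stands in for the e-mail/Twitter data, and the MapReduce distribution is omitted; none of this is the authors' implementation.

```python
# Rough sketch of the pipeline shape: pick a feature subset, then classify with
# naive Bayes. A random wrapper search stands in for the firefly optimizer and
# a bundled dataset stands in for the e-mail/Twitter data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(mask):
    """Cross-validated naive-Bayes accuracy on the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()

best_mask, best_score = None, -1.0
for _ in range(30):                          # random subset search (firefly stand-in)
    mask = rng.random(X.shape[1]) < 0.5
    score = fitness(mask)
    if score > best_score:
        best_mask, best_score = mask, score

print(f"{best_mask.sum()} features selected, accuracy ~{best_score:.3f}")
```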
Citations: 20
Refinement of data streams using Minimum Variance principle
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019638
Virendrakumar A. Dhotre, K. Karande
In this paper, we propose a refined scheme for active learning from data streams whose volumes grow continuously. The objective is to label a small portion of the stream data from which a model is derived to predict future instances as accurately as possible. We propose a classifier-ensemble-based active learning framework that selectively labels instances from data streams to build an ensemble classifier. A classifier ensemble's variance directly corresponds to its error rate, so reducing the variance is equivalent to improving its prediction accuracy. We introduce a Minimum-Variance (MV) principle to guide the instance labeling process for data streams. The MV principle and an optimal weighting module are combined to build an active learning framework for data streams. Results and implementation demonstrate that the accuracy of the minimum variance margin method compares favorably with other methods.
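One simple reading of the minimum-variance idea is to label the incoming instances on which the ensemble members disagree the most, i.e., those with the largest prediction variance. The sketch below follows that reading with a bagged ensemble; the synthetic data, chunk handling, and labelling budget are placeholders rather than the paper's framework.

```python
# Variance-guided labelling for one chunk of a stream: select the instances
# with the largest prediction variance across ensemble members.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X_lab, y_lab = make_classification(n_samples=200, random_state=0)   # already-labelled data
X_new, _ = make_classification(n_samples=500, random_state=1)       # incoming stream chunk

ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                             random_state=0).fit(X_lab, y_lab)

# probability of class 1 from each ensemble member, for every new instance
probs = np.stack([est.predict_proba(X_new)[:, 1] for est in ensemble.estimators_])
variance = probs.var(axis=0)                 # ensemble variance per instance

budget = 10                                  # labelling budget for this chunk
to_label = np.argsort(variance)[-budget:]    # the `budget` highest-variance instances
print("indices selected for labelling:", to_label)
```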
Citations: 0
Impact of interleaver and trace back length on performance of CODEC for burst errors
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019617
S. V. Viraktamath, Divya Sakaray, G. V. Attimarad
Interleaving is a technique used in conjunction with error-correcting codes to counteract the effect of burst errors. Convolutional codes are frequently used to correct errors in noisy channels, and the Viterbi algorithm is the most widely employed decoding algorithm for convolutional codes. In this paper we present our studies of the impact of an interleaver on the performance of a convolutional encoder and decoder (CODEC) for burst errors as well as for distributed errors. The performance of the Viterbi algorithm for different generator polynomials is also presented. Hard-decision decoding with a rate-1/2 code is considered in this paper.
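The interleaving idea itself is easy to demonstrate: a block interleaver writes data row-wise and reads it column-wise, so a burst of channel errors is dispersed into isolated errors by the de-interleaver. The sketch below shows only this step; the convolutional encoder, the Viterbi decoder, and the paper's specific row/column sizes are not modelled.

```python
# Minimal block interleaver/de-interleaver. The 4 x 6 block size is arbitrary.
import numpy as np

def interleave(bits, rows, cols):
    """Write row-wise into a rows x cols block, read out column-wise."""
    return np.asarray(bits).reshape(rows, cols).T.reshape(-1)

def deinterleave(bits, rows, cols):
    """Inverse operation: write column-wise, read out row-wise."""
    return np.asarray(bits).reshape(cols, rows).T.reshape(-1)

rows, cols = 4, 6
tx = interleave(np.zeros(rows * cols, dtype=int), rows, cols)
tx[5:9] = 1                                   # burst of four consecutive channel errors
rx = deinterleave(tx, rows, cols)
print("error positions after de-interleaving:", np.flatnonzero(rx))
# the four errors are now spread out, so no two of them are adjacent
```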
Citations: 0
Secure and energy efficient routing algorithm for wireless sensor networks
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019716
V. Menaria, D. Soni, A. Nagaraju, S. Jain
In a large-scale sensor network, a minimum spanning tree is computed to route data to a sink node hop by hop. Along this route, however, any node can be compromised, or a compromised node can be included, and such a node can inject false data or alter existing data. Therefore, to provide security we use the COmpromised nOde Locator protocol (COOL), by which a compromised node can be removed from the network. When a compromised node is detected, the protocol prevents further damage from the misbehaving node and yields a reliable and energy-saving sensor network. In our proposed algorithm, we build the routing path using a minimum spanning tree and maintain security with the COOL protocol in wireless sensor networks. By combining the two (MST and the COOL protocol), we create a secure and energy-conserving environment in which sensor nodes communicate through the sink node, the node to which all nodes route their data. Node consistency can also be checked using hash values.
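A rough sketch of the routing-tree part of the scheme: grow a minimum spanning tree rooted at the sink with Prim's algorithm, forward packets hop by hop along parent links, and use a hash digest as a stand-in for the consistency check. The topology, link costs, and the hash check are illustrative assumptions; the COOL protocol itself is not implemented here.

```python
# Minimum-spanning-tree routing towards the sink, plus a toy hash-based
# consistency check. The topology and link costs are invented.
import hashlib
import heapq

links = {  # node -> {neighbour: link cost}
    "sink": {"A": 2, "B": 4},
    "A": {"sink": 2, "B": 1, "C": 7},
    "B": {"sink": 4, "A": 1, "C": 3},
    "C": {"A": 7, "B": 3},
}

def mst_parents(root):
    """Prim's algorithm; returns each node's next hop towards the root."""
    parent, seen = {}, {root}
    heap = [(w, root, v) for v, w in links[root].items()]
    heapq.heapify(heap)
    while heap:
        w, u, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        parent[v] = u
        for nxt, w2 in links[v].items():
            if nxt not in seen:
                heapq.heappush(heap, (w2, v, nxt))
    return parent

parent = mst_parents("sink")
node, route = "C", ["C"]
while node != "sink":                  # follow parent pointers hop by hop
    node = parent[node]
    route.append(node)
print("route:", " -> ".join(route))

report = b"temp=23.5"
digest = hashlib.sha256(report).hexdigest()
print("consistent:", hashlib.sha256(report).hexdigest() == digest)  # toy consistency check
```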
Citations: 3
Performance assessment of different image sizes for printed Gujarati and English digits using template matching
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019796
S. Chaudhari, R. Gulati
This paper presents a system for the separation and recognition of offline printed Gujarati and English digits using template matching. Sample images were collected from papers of different quality and scanned at 200 dpi. Various preprocessing operations were performed on the digitized images, followed by segmentation. Segmented images of various sizes were normalized to a uniform size. The pixel density was then calculated as a binary pattern and a feature vector was created. These features were used in template matching for the classification of digits. The recognition rate was tested on images of three different sizes, viz. 24 × 24, 32 × 40, and 48 × 48, for offline printed Gujarati and English digits. We collected 200 image samples containing more than 4200 symbols of both Gujarati and English digits. The results were evaluated for the image sizes of 24 × 24, 32 × 40, and 48 × 48. The overall recognition rates were 97.43%, 98.30%, and 97.28% for Gujarati digits and 99.07%, 98.88%, and 99.34% for English digits, respectively.
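The feature extraction and matching steps described above can be sketched as follows: normalise a binary glyph image, compute zone-wise pixel densities as the feature vector, and assign the label of the nearest stored template. The 4 × 4 zone grid, the Euclidean matching criterion, and the random toy images are assumptions, not the paper's exact configuration.

```python
# Zone-wise pixel-density features and nearest-template classification.
import numpy as np

def density_features(img, grid=(4, 4)):
    """Split a binary image into grid zones and return the ink density of each zone."""
    img = np.asarray(img, dtype=float)
    gr, gc = grid
    h, w = img.shape
    zones = img[: h - h % gr, : w - w % gc].reshape(gr, h // gr, gc, w // gc)
    return zones.mean(axis=(1, 3)).ravel()

def classify(img, templates):
    """Return the label of the template whose feature vector is closest."""
    f = density_features(img)
    return min(templates, key=lambda lbl: np.linalg.norm(f - templates[lbl]))

# toy templates for two "digits" on a 32 x 32 canvas
rng = np.random.default_rng(0)
zero = (rng.random((32, 32)) < 0.2).astype(int)
one = (rng.random((32, 32)) < 0.6).astype(int)
templates = {"0": density_features(zero), "1": density_features(one)}
print(classify(one, templates))   # expected: "1"
```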
Citations: 0