
Latest publications: International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)

Dynamic spectrum sharing for heterogeneous wireless network via cognitive radio
R. Kaniezhil, C. Chandrasekar, S. Rekha
Current utilization of the spectrum is quite inefficient; consequently, if it is used properly, there is no shortage of the spectrum presently available. More flexible use of spectrum, and spectrum sharing between radio systems, are therefore expected to be key enablers for the successful implementation of future systems. Cognitive radio is widely regarded as the most intelligent and promising technique for solving the spectrum-sharing problem. In this paper, we consider a technique by which users of one service provider share the licensed spectrum of licensed service providers. It is shown that the proposed technique reduces the call blocking rate and improves spectrum utilization.
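The paper's own traffic model is not given in the abstract, but the call blocking rate it reports is conventionally estimated with the Erlang-B formula, which makes the claimed benefit concrete: sharing a licensed provider's idle channels raises the channel count and lowers blocking for the same offered load. A minimal sketch (the load and channel figures below are illustrative, not the paper's):

```python
def erlang_b(traffic_erlangs, channels):
    """Erlang-B blocking probability via the standard stable recurrence:
    B(0) = 1;  B(k) = A*B(k-1) / (k + A*B(k-1))."""
    b = 1.0
    for k in range(1, channels + 1):
        b = traffic_erlangs * b / (k + traffic_erlangs * b)
    return b

# With the same offered load, access to shared channels on top of a
# provider's own channels lowers the blocking probability.
load = 10.0  # offered traffic in Erlangs (illustrative)
print(erlang_b(load, 10))  # own licensed channels only
print(erlang_b(load, 15))  # plus shared channels -> strictly lower
```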
Citations: 8
Classifying the bugs using multi-class semi supervised support vector machine
Ayan Nigam, Bhawna Nigam, Chayan Bhaisare, N. Arya
It is always important in the software industry to know what types of bugs are being reported against the applications a team develops or maintains. Categorizing bugs by their characteristics helps the development team take appropriate action to reduce similar defects in future releases. Defects can be classified into many classes, which requires a training set known as the class-label data set. Performing this classification manually consumes time and effort, and labelling the data requires staff with expert testing skills and domain knowledge. Semi-supervised techniques reduce the labelling work by training the classifier on a small labeled set together with unlabeled data. In this paper, the self-training algorithm is used for semi-supervised learning, and a winner-takes-all strategy is applied to perform multi-class classification. The model provides classification accuracy of up to 93%.
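The self-training loop described above can be sketched without the paper's SVM: the essentials are (1) fit on the labeled pool, (2) pseudo-label the unlabeled points the model is confident about, (3) refit. This dependency-free sketch swaps in a nearest-centroid base learner (winner-takes-all over per-class distances); the class names, margin rule, and data are all hypothetical, not from the paper.

```python
import math

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def self_train(labeled, unlabeled, rounds=5, margin=0.5):
    """Self-training: fit a nearest-centroid classifier on the labeled pool,
    move confidently pseudo-labeled unlabeled points into the pool, refit."""
    pool = {c: list(pts) for c, pts in labeled.items()}
    unlabeled = list(unlabeled)
    for _ in range(rounds):
        cents = {c: centroid(pts) for c, pts in pool.items()}
        still = []
        for x in unlabeled:
            # winner-takes-all: the class whose centroid is nearest wins
            ranked = sorted(cents, key=lambda c: math.dist(x, cents[c]))
            best, second = ranked[0], ranked[1]
            # pseudo-label only when the winner clearly beats the runner-up
            if math.dist(x, cents[second]) - math.dist(x, cents[best]) > margin:
                pool[best].append(x)
            else:
                still.append(x)
        unlabeled = still
    cents = {c: centroid(pts) for c, pts in pool.items()}
    return lambda x: min(cents, key=lambda c: math.dist(x, cents[c]))

# hypothetical bug-report feature vectors, two classes, mostly unlabeled
labeled = {"crash": [(0.0, 0.0), (0.0, 1.0)], "ui": [(5.0, 5.0), (5.0, 6.0)]}
unlabeled = [(0.2, 0.5), (0.1, 0.8), (5.2, 5.5), (4.9, 5.8)]
classify = self_train(labeled, unlabeled)
print(classify((0.3, 0.6)))
```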
Citations: 10
Mono and Cross lingual speaker identification with the constraint of limited data
B. Nagaraja, H. S. Jayanna
Nowadays, speaker identification plays a very important role in fast-growing internet-based communication and transactions. In this paper, mono-lingual and cross-lingual speaker identification is demonstrated for Indian languages under the constraint of limited data. The languages considered are English, Hindi and Kannada. Since no standard multi-lingual database is available, experiments are carried out on our own database of 30 speakers, each of whom speaks the three languages. The experimental study found that mono-lingual speaker identification performs better with English as the training and testing language, even though English is not the native language of the speakers considered. Further, the cross-lingual study showed that using English in either training or testing gives better identification performance.
Citations: 10
A novel approach for nose tip detection using smoothing by weighted median filtering applied to 3D face images in variant poses
P. Bagchi, D. Bhattacharjee, M. Nasipuri, D. K. Basu
This paper applies smoothing to 3D face images followed by feature detection, namely detecting the nose tip. The method uses a weighted mesh median filtering technique for smoothing: for each point of the 3D face we build its surrounding neighborhood and replace the point with the weighted median of the neighboring points. After applying this smoothing, our experimental results show considerable improvement over the algorithm without smoothing. We use the maximum-intensity algorithm to detect the nose tip, and the method detects it correctly in any pose, i.e. under rotation about the X, Y and Z axes. The technique worked successfully on 535 out of 542 3D face images, compared with 521 out of 542 for the method without smoothing, a performance rate of 98.70% versus 96.12%. All experiments were performed on the FRAV3D database.
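The core operation above, a weighted median over a vertex's neighborhood, is easy to make concrete. A minimal sketch, with illustrative weights (the paper's exact mesh weighting is not given in the abstract): a spike in one coordinate is pulled back to the level of its flat one-ring neighbours, which is exactly why median smoothing preserves edges better than averaging.

```python
def weighted_median(values, weights):
    """Weighted median: sort by value and return the first value at which
    the cumulative weight reaches half of the total weight."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v
    return pairs[-1][0]

def smooth_vertex(z_center, z_neighbors, w_center=2.0, w_neighbor=1.0):
    """Replace one coordinate of a mesh vertex by the weighted median of
    itself and its one-ring neighbours (weights here are illustrative)."""
    values = [z_center] + list(z_neighbors)
    weights = [w_center] + [w_neighbor] * len(z_neighbors)
    return weighted_median(values, weights)

# a spike (z = 9.0) surrounded by a flat patch is pulled back to the patch
print(smooth_vertex(9.0, [1.0, 1.1, 0.9, 1.0]))  # -> 1.0
```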
Citations: 18
A novel approach for Kannada text extraction
S. Seeri, S. Giraddi, B. Prashant
The popularity of digital cameras is increasing rapidly because of their availability and advanced applications. Detecting and extracting text regions in an image is a well-known problem in computer vision; text in images carries useful semantic information for fully understanding them. The proposed method detects and extracts Kannada text from images of government-organization signboards acquired by a digital camera. Segmentation is performed using edge detection, and heuristic features are used to remove non-text regions. Kannada text identification uses the boundary length of the object strokes as a structural feature, and a rule-based method validates the objects as Kannada text. The method is effective and efficient, with encouraging results: a precision of 84.21%, a recall of 83.16% and a Kannada text-identification accuracy of 75.77%. The method is robust to font size and to small changes in text orientation and alignment.
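The abstract does not define the boundary-length feature precisely; a common proxy, sketched here under that assumption, counts foreground pixels of a binarized blob that touch the background through a 4-neighbourhood. Text strokes are thin, so their boundary length is large relative to their area, which is the kind of cue a rule-based validator can threshold on.

```python
def boundary_length(grid):
    """Count foreground pixels (1s) that touch background (0s or the image
    border) through a 4-neighbourhood -- a simple proxy for the stroke
    boundary-length feature used to separate text from non-text blobs."""
    rows, cols = len(grid), len(grid[0])
    length = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 1:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == 0:
                    length += 1
                    break
    return length

# a filled 4x4 block: only the 12 outer pixels lie on the boundary
block = [[1] * 4 for _ in range(4)]
print(boundary_length(block))  # -> 12
```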
Citations: 14
Emotion recognition — An approach to identify the terrorist
N. Raju, P. Preethi, T. L. Priya, S. Mathini
The influence of emotion on human behavior can be identified from speech, and emotion recognition plays a vital role in many fields. In this paper, we distinguish a normal person from a terrorist or victim by identifying their emotional state from speech; the states dealt with are neutral, sad, anger, fear, etc. Two different pitch-extraction algorithms are used to extract the pitch, and a support vector machine classifies the emotional state. The classifier's accuracy differentiates the emotional state of a normal person from that of a terrorist or victim. Over all emotions, the average accuracy for both male and female speakers is 80%.
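The abstract does not name its two pitch algorithms; short-time autocorrelation is one standard choice and serves as a self-contained sketch of the feature-extraction step: search the lags that correspond to a plausible voice range and return the frequency of the best-matching lag.

```python
import math

def estimate_pitch(samples, sample_rate, fmin=60.0, fmax=400.0):
    """Pitch by short-time autocorrelation: score every lag in the
    fmin..fmax band and return the frequency of the best-scoring lag."""
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_score = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        score = sum(samples[n] * samples[n + lag]
                    for n in range(len(samples) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sample_rate / best_lag

# synthetic 200 Hz tone sampled at 8 kHz -> estimate close to 200 Hz
sr = 8000
tone = [math.sin(2 * math.pi * 200 * n / sr) for n in range(1600)]
print(estimate_pitch(tone, sr))
```

In a full pipeline these pitch statistics (mean, range, contour) would become features for the SVM classifier the paper describes.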
Citations: 0
An efficient heuristic algorithm for fast clock mesh realization
P. Saranya, A. Sridevi
We implement multiple clocking domains with dedicated clock buffers. In this paper, an algorithm is proposed for determining the minimum number of clock domains to be used for multi-domain clock skew scheduling. Non-tree-based distributions provide high tolerance to process variations. The clock mesh constraints are overcome in two steps. First, simultaneous buffer placement and sizing satisfies the signal slew constraints while a heuristic algorithm minimizes the total buffer size. Second, post-processing reduces the mesh by deleting certain edges, trading off skew tolerance for lower power dissipation. Finally, wire length, power dissipation, nominal skew and variation skew are compared using H-SPICE for benchmark circuits of various sizes.
Citations: 0
Area compactness architecture for elliptic curve cryptography
M. Janagan, M. Devanathan
Elliptic curve cryptography (ECC) is an alternative to traditional public-key cryptosystems. Although RSA (Rivest-Shamir-Adleman) has been the most prominent scheme, it is being replaced by ECC in many systems because ECC gives higher security at shorter bit lengths. In elliptic-curve-based algorithms, elliptic curve point multiplication (ECPM) is the most computationally intensive operation, so implementing it in hardware makes ECC attractive for high-performance servers and small devices alike. This paper examines the Montgomery ladder, which computes ECPM efficiently compared with the elliptic curve digital signature algorithm (ECDSA). Compactness is achieved by reducing data paths using multipliers and carry-chain logic; a multiplier performs effectively in area/time terms when its word size is large. A countermeasure against simple power analysis (SPA) attacks is also provided. In Montgomery modular inversion, a 33% saving in Montgomery multiplications is achieved, along with a 50% saving in the number of gates required for implementation.
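The Montgomery ladder itself is worth seeing in full: every scalar bit triggers exactly one point addition and one doubling regardless of its value, which is the property that resists SPA. A software sketch over a small toy curve (the curve parameters below are illustrative, chosen only so the arithmetic is easy to check; a hardware realization like the paper's would use projective coordinates and a standardized curve):

```python
def inv_mod(a, p):
    return pow(a, p - 2, p)  # Fermat inverse, valid since p is prime

def ec_add(P, Q, a, p):
    """Affine point addition on y^2 = x^3 + a*x + b over GF(p);
    None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        s = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p
    else:
        s = (y2 - y1) * inv_mod(x2 - x1, p) % p
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def montgomery_ladder(k, P, a, p):
    """k*P with the Montgomery ladder: both branches do one add and one
    double per scalar bit, giving a uniform operation sequence (SPA-safe)."""
    R0, R1 = None, P
    for bit in bin(k)[2:]:
        if bit == "0":
            R1 = ec_add(R0, R1, a, p)
            R0 = ec_add(R0, R0, a, p)
        else:
            R0 = ec_add(R0, R1, a, p)
            R1 = ec_add(R1, R1, a, p)
    return R0

# toy curve y^2 = x^3 + 2x + 3 over GF(97), base point (3, 6)
a_coef, p, P = 2, 97, (3, 6)
acc = None
for k in range(1, 20):
    acc = ec_add(acc, P, a_coef, p)  # naive repeated addition
    assert montgomery_ladder(k, P, a_coef, p) == acc
print("ladder matches repeated addition for k = 1..19")
```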
Citations: 2
A wavelet based method for denoising of biomedical signal
P. Patil, M. Chavan
Noise removal from the electrocardiogram (ECG) has long been a subject of wide research. ECG signals change their statistical properties over time, and the wavelet transform is a powerful tool for analyzing such non-stationary signals. This paper shows how it can be used to denoise non-stationary signals such as the ECG. We consider two kinds of ECG signal, one without additional noise and one corrupted by powerline interference, and denoise them using wavelet filtering. The ECG data are taken from the standard MIT-BIH Arrhythmia database, while the noise is generated and added to the original signal in the MATLAB environment. We present a Daubechies wavelet analysis with a level-5 decomposition tree for noisy ECG signals; the implementation covers signal decomposition and reconstruction with hard and soft thresholding. A quantitative evaluation based on signal-to-noise ratio (SNR) shows that, in contrast with traditional methods, the wavelet method achieves optimal denoising of the ECG signal.
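The decompose/threshold/reconstruct pipeline can be sketched in a few lines. The paper uses Daubechies wavelets to decomposition level 5; to stay dependency-free this sketch uses the simplest member of that family, the orthonormal Haar wavelet (db1), and one decomposition level, which is enough to show where soft thresholding acts.

```python
import math

def haar_forward(x):
    """One level of the orthonormal Haar DWT (x must have even length)."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) / s, (a - d) / s])
    return x

def soft(v, t):
    """Soft thresholding: shrink a coefficient toward zero by t."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def denoise(x, threshold):
    """Transform, soft-threshold the detail coefficients (where wideband
    noise concentrates), then reconstruct."""
    approx, detail = haar_forward(x)
    detail = [soft(d, threshold) for d in detail]
    return haar_inverse(approx, detail)

print(denoise([1.0, 2.0, 3.0, 4.0], 0.5))
```

With threshold 0 the transform reconstructs the input exactly; a large threshold zeroes every detail coefficient and leaves each sample pair at its mean, which is the low-pass limit of the method.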
Citations: 48
Global clustering based geo cache for Vehicular Ad hoc Networks
S. Poongodi, P. Tamilselvi
Several medium access control (MAC) protocols have recently been proposed for vehicles to access radio channels and distribute timely active-safety messages for inter-vehicle communication in Vehicular Ad hoc Networks (VANETs). Because the contention period for channel access is high, MAC alone cannot distribute safety messages in time. To reduce the contention period, a Region-based Clustering Mechanism (RCM) is applied with the MAC protocol: limiting the number of vehicles per cluster reduces contention, and the mechanism resolves the competition among vehicles for radio channels. The Ad hoc On-Demand Distance Vector (AODV) routing protocol provides the shortest path between source and destination, which increases packet reception and cluster formation. A geo cache is included in the VANET to retain a directory of nodes that have left a particular cluster, maintained through neighboring nodes.
Citations: 1