
2009 International Conference on Emerging Technologies: Latest Publications

Iris feature extraction using gabor filter
Pub Date : 2009-12-11 DOI: 10.1109/ICET.2009.5353166
Saad Minhas, M. Javed
Biometric technology uses human characteristics for reliable identification. Iris recognition is a biometric technology that uses the iris for human identification. The human iris contains highly discriminating features and therefore enables accurate authentication of persons. Different methods have been used in the past to extract these discriminating features. In this work, a Gabor filter is applied to iris images in two different ways. First, it is applied to the entire image at once and unique features are extracted from the whole image. Second, it is used to capture local information from the image, which is then combined into global features. Results are compared for filter banks containing 15, 20, 25, 30 and 35 filters. A number of experiments are performed on the CASIA version 1 iris database. By comparing the output feature vectors using the Hamming distance, it is found that the best accuracy of 99.16% is achieved when capturing local information from the iris images.
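The abstract describes extracting iris features with a Gabor filter bank and comparing the binarised feature vectors with the Hamming distance. A minimal numpy sketch of that idea, not the authors' implementation: the kernel parameters, the number of orientations, and the sign-based binarisation rule below are all assumptions.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0):
    """Real part of a 2-D Gabor filter: a sinusoid modulated by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    return np.exp(-(xr**2 + (-x * np.sin(theta) + y * np.cos(theta))**2)
                  / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam + psi)

def convolve2d(img, k):
    """Naive 'valid'-mode 2-D correlation, sufficient for a small demo."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def iris_code(img, n_orientations=4):
    """Binarise the sign of the Gabor responses into a single bit vector."""
    bits = []
    for t in np.linspace(0, np.pi, n_orientations, endpoint=False):
        resp = convolve2d(img, gabor_kernel(theta=t))
        bits.append((resp > 0).ravel())
    return np.concatenate(bits)

def hamming_distance(a, b):
    """Fraction of disagreeing bits; small values indicate a likely match."""
    return np.mean(a != b)
```

In a real system the codes would be computed on segmented, unwrapped iris regions; here the matching criterion is simply a small Hamming distance between two codes.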
Citations: 11
An application of VAR and Almon Polynomial Distributed Lag models to insurance stocks: Evidence from KSE
Pub Date : 2009-12-11 DOI: 10.1109/ICET.2009.5353173
M. A. Siddiqui
In time series data, a regressand may respond to regressors with a time lag. This study applies the dynamic Almon Polynomial Distributed-Lag (PDL) methodology to the stocks of 13 selected insurance companies, using daily data for the period from 1996 to 2008. Recognizing the importance of causality in economics and finance, the study focuses on the causal relationship between investment, growth in returns and market uncertainty. The study also employs VAR, a non-structural approach among the a-theoretic models. The coefficients of the distributed lag are constrained to lie on a third-degree polynomial, with satisfactory test results at the near and far end points of the lag distribution. Generating the risk-variable series through GARCH(p, q) is a further contribution of this study. The Almon PDL model may also be considered an alternative to lagged regression models, since the PDL avoids the estimation problems associated with autoregressive models. This study is in part an invitation to researchers and practitioners to make wider use of these important dynamic models in economics, business and finance. The results reveal mixed causality among the three variables. The Almon Polynomial Distributed Lag results support the theory of adaptive expectations.
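The Almon transformation behind the PDL model can be sketched briefly: constraining the lag coefficients to a degree-3 polynomial in the lag index reduces the regression to only four transformed regressors. A minimal numpy sketch of that transformation, illustrative only; the study's data, lag length and GARCH step are not reproduced.

```python
import numpy as np

def almon_pdl_fit(y, x, n_lags=8, degree=3):
    """
    Fit y_t = sum_i beta_i * x_{t-i} + e_t, with beta_i constrained to lie on
    a degree-`degree` polynomial in the lag index i (Almon's transformation).
    Returns the implied lag coefficients beta_0 .. beta_{n_lags}.
    """
    T = len(y)
    # Transformed regressors: z_j[t] = sum_i i**j * x[t-i]
    Z = np.zeros((T - n_lags, degree + 1))
    for t in range(n_lags, T):
        lags = x[t - n_lags:t + 1][::-1]      # x_t, x_{t-1}, ..., x_{t-n_lags}
        for j in range(degree + 1):
            Z[t - n_lags, j] = np.sum(np.arange(n_lags + 1)**j * lags)
    # OLS on the (degree+1) polynomial coefficients a_j
    a, *_ = np.linalg.lstsq(Z, y[n_lags:], rcond=None)
    i = np.arange(n_lags + 1)
    # Map back to the lag coefficients: beta_i = sum_j a_j * i**j
    return sum(a[j] * i**j for j in range(degree + 1))
```

Because only the polynomial coefficients are estimated, the method sidesteps the multicollinearity of estimating each lag coefficient freely.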
Citations: 1
Application of Niblack's method on images
Pub Date : 2009-12-11 DOI: 10.1109/ICET.2009.5353159
S. Farid, F. Ahmed
Image segmentation is a major step in image analysis and processing, and is performed by several methods. In this work Niblack's method of segmentation, one of the local thresholding techniques, is studied further. Niblack's method gives significant output and the most acceptable results of all thresholding techniques for segmenting text documents. Here, the same method is applied to images, keeping one variable, the weight k of Niblack's method, constant while varying the other (the window size) from image to image. The output image is well segmented but the background is noisy. The resulting images are improved by applying the morphological operations of opening and closing, which are combinations of the two fundamental morphological operations, dilation and erosion. Dilation thickens objects in a binary image by adding pixels to their boundaries, while erosion shrinks them.
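Niblack's threshold itself is T(x, y) = m(x, y) + k * s(x, y), where m and s are the local mean and standard deviation in a window centred on each pixel. A minimal numpy sketch, with the caveat that the window size and k below are illustrative defaults, while the paper varies the window from image to image.

```python
import numpy as np

def niblack_threshold(img, window=15, k=-0.2):
    """
    Niblack's local thresholding: T = m + k*s with m and s computed over a
    `window` x `window` neighbourhood of each pixel.
    Returns the binary segmentation img > T.
    """
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + window, j:j + window]
            t = win.mean() + k * win.std()
            out[i, j] = img[i, j] > t
    return out
```

Negative k (around -0.2) is the usual choice for text-like images; the residual background noise the abstract mentions is what the subsequent opening and closing operations clean up.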
Citations: 23
A realistic view of 3D models of artifacts using photogrammetric approach
Pub Date : 2009-12-11 DOI: 10.1109/ICET.2009.5353206
S. Mussadiq, F. Ahmad
Active research in both computer vision and computer graphics aims at more accurate and more realistic 3D models of objects. Efforts are being made to obtain a realistic view of an object by texturing it with real images, so that it feels like the real object. This paper proposes a method for accurate and realistic 3D modeling that uses real ortho-images of the object for texturing. In the proposed method, the model of an object is first created in CAD software from manual measurements of its geometry. A sufficient number of ortho-photographs are then taken to cover the whole object. Once the 3D model is reconstructed, it is texture-mapped with the real images/photos of the object. The model can then be animated to view the object from any desired viewpoint. The proposed technique is suitable for modeling an object and its environment.
Citations: 2
FPGA implementation of a low power, processor-independent and reusable System-on-Chip platform
Pub Date : 2009-12-11 DOI: 10.1109/ICET.2009.5353150
Ehsan ul Haq, Muhammad Kazim Hafeez, Muhammad Salman Khan, Shoaib Sial, Arshad Riazuddin
To achieve low cost and reduced time to market, ASIC and embedded-system designers have always sought a basic platform which, once built and verified, can easily be reconfigured and reused. They are also challenged by compatibility issues between their designs and different processors. In this paper, we present a System-on-Chip (SoC) platform architecture which, once built, can be adapted to different processors with minimal effort. Using a bus architecture that allows easy addition and removal of modules, the proposed SoC can be reconfigured and reused as a platform for various projects. We have also included in our chip those modules which are the building blocks of almost all ASIC and embedded applications. Finally, implementation parameters of this SoC on a Xilinx FPGA are reported.
Citations: 6
Automated detection of infected area in lungs
Pub Date : 2009-12-11 DOI: 10.1109/ICET.2009.5353169
Shabana Habib, Owais Adnan, Nafees-ur-Rahman
Interpretation of medical images is often difficult and time consuming, even for experienced physicians. Medical imaging plays an important role in detecting infected areas in the lungs. Significant efforts in radiation oncology have recently centered on automatically detecting lung infection in a breathing patient in real time during radiation treatment, using digital X-ray technology and image processing. The motivation is to improve radiation treatments, possibly leading to increased survival rates. X-ray diagnosis can serve as an initial step in detecting infected areas in the lungs. This paper describes an automated system for the pulmonary region in X-ray images. The computer scans and marks suspicious-looking areas in the image; radiologists can then focus on those areas and decide whether further evaluation is needed. To improve the diagnostic accuracy of the system, we introduce a pre-processing stage that adjusts the intensity of the X-ray image and converts it to grey scale. A Gaussian filter is used to remove false structures. We then process the digitized image, extract the lungs from the X-ray image, and apply a flood-fill algorithm to the lung region to localize the suspected infected area. As a result, the infected area of the lung is detected.
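The localization step, growing a connected region inside the segmented lung field, can be sketched as a simple 4-connected flood fill. This is an illustrative stand-in: the paper's exact fill algorithm and its seeding strategy are not specified in the abstract.

```python
from collections import deque
import numpy as np

def flood_fill_region(mask, seed):
    """
    Grow a 4-connected region from `seed` over the True pixels of a binary
    mask (e.g. the thresholded lung field). Returns the connected region
    containing the seed as a boolean array.
    """
    h, w = mask.shape
    region = np.zeros_like(mask, dtype=bool)
    if not mask[seed]:
        return region          # seed is outside the foreground
    q = deque([seed])
    region[seed] = True
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not region[ni, nj]:
                region[ni, nj] = True
                q.append((ni, nj))
    return region
```

Run on the thresholded image, this isolates one connected candidate area (e.g. a suspected infected region) from other disjoint structures.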
Citations: 0
Integrated Mobility Model (IMM) for VANETs simulation and its impact
Pub Date : 2009-12-11 DOI: 10.1109/ICET.2009.5353127
M. Alam, M. Sher, S.A. Husain
Mobility models represent real-world scenarios for vehicular ad hoc networks (VANETs) and play a vital role in the performance evaluation of routing protocols. Research now focuses on developing realistic mobility models for VANETs. A number of mobility models have been presented and their impact on the performance of routing protocols has been tested. In this paper we introduce a new mobility model for VANETs, the Integrated Mobility Model (IMM), which integrates the Manhattan Mobility Model, the Freeway Mobility Model, the Stop Sign Model, the Traffic Sign Model and some further characteristics. In addition, we evaluate the routing protocols AODV, DSR and OLSR using the Integrated Mobility Model as well as the Manhattan and Freeway mobility models, and compare the results.
Citations: 39
Efficient implementation of Gaussian elimination method to recover generator polynomials of convolutional codes
Pub Date : 2009-12-11 DOI: 10.1109/ICET.2009.5353183
M. Atif, A. Rauf
One of the most important objectives in wireless communication is to transmit information free of errors and to detect the data correctly. To avoid errors in the communication channel, error-correction techniques, also called channel coding, are used; convolutional encoding is the forerunner among them. In wireless communication systems, signal strength decreases logarithmically, resulting in fading. Fading causes random errors or, in the case of deep fades, burst errors. Burst errors are converted to random errors by interleaving, and channel coding is then used to combat the random errors. In convolutional codes, information bits are encoded using primitive polynomials implemented in the form of shift registers. In this paper a method is proposed to detect the generator polynomial and the code rate of convolutionally encoded data once it is received. The information is encoded by convolutional (n, k, m) codes and the generator polynomial is then detected using the Gaussian elimination method; here n denotes the number of data bits (parity and information), k the number of information bits, and m the length of the registers. In Gaussian elimination the variables are removed step by step; this elimination differs from the usual case in that it is carried out over GF(2). The detection algorithm can be used efficiently to match a convolutionally encoded reference stream to one generated by the above-mentioned convolutional encoder. It can also be used to verify the generator polynomial of an encoded output stream before feeding it to the complex decoder, avoiding time-consuming and exhaustive debugging.
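Gaussian elimination over GF(2) differs from the real-valued case only in that addition is XOR and the sole non-zero scalar is 1, so eliminating a column reduces to XOR-ing the pivot row into other rows. A minimal numpy sketch of row reduction over GF(2); the paper's mapping from received code bits to the matrix whose null space yields the generator polynomials is not reproduced here.

```python
import numpy as np

def gf2_row_reduce(A):
    """
    Gaussian elimination over GF(2). Addition is XOR and the only non-zero
    scalar is 1, so no division is ever needed.
    Returns the reduced matrix and its rank over GF(2).
    """
    A = A.copy() % 2                      # work on a binary copy
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):
        # find a row with a 1 in this column, at or below pivot_row
        candidates = np.nonzero(A[pivot_row:, col])[0]
        if len(candidates) == 0:
            continue
        swap = pivot_row + candidates[0]
        A[[pivot_row, swap]] = A[[swap, pivot_row]]
        # XOR the pivot row into every other row that has a 1 in this column
        for r in range(rows):
            if r != pivot_row and A[r, col]:
                A[r] ^= A[pivot_row]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A, pivot_row
```

Linear dependencies among rows over GF(2) show up as zero rows after reduction, which is exactly what a recovery procedure exploits to find candidate generator polynomials.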
Citations: 5
Identification and analysis of performance metrics for real time operating system
Pub Date : 2009-12-11 DOI: 10.1109/ICET.2009.5353177
Z. Khan, Khalid Hussain, Zafrullah Khan, Sana Ahmed Mir
RTOS performance analysis is critical during the design and integration of embedded software, to guarantee that application time constraints will be met at run time. To select an appropriate operating system for an embedded system for a specific application, the OS services need to be analyzed. These OS services are characterized by parameters that form performance metrics. The performance metrics selected include context-switching time, preemption time, interrupt latency and semaphore shuffling time. In this research these performance metrics are analyzed in order to select the right OS for an embedded system for a specific application.
Citations: 0
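The abstract names semaphore shuffling time among the metrics but gives no measurement procedure. A minimal sketch of one common technique is to ping-pong two threads on a pair of semaphores and divide the elapsed time by the number of hand-offs; this is an illustrative benchmark written in Python with the standard `threading` module, not the paper's own method, and the function name `measure_pingpong` is an assumption.

```python
import threading
import time

def measure_pingpong(iterations=10000):
    """Estimate the average thread hand-off latency (semaphore
    signal + context switch) by bouncing control between two
    threads with a pair of counting semaphores."""
    ping = threading.Semaphore(0)
    pong = threading.Semaphore(0)

    def responder():
        for _ in range(iterations):
            ping.acquire()   # wait for the main thread's signal
            pong.release()   # hand control straight back

    t = threading.Thread(target=responder)
    t.start()

    start = time.perf_counter()
    for _ in range(iterations):
        ping.release()       # wake the responder thread
        pong.acquire()       # block until it answers
    elapsed = time.perf_counter() - start
    t.join()

    # each loop iteration involves two hand-offs (ping and pong)
    return elapsed / (2 * iterations)

if __name__ == "__main__":
    print(f"avg hand-off latency: {measure_pingpong():.2e} s")
```

On a general-purpose OS this number also absorbs scheduler noise; on an RTOS the same ping-pong structure is what gives the semaphore-shuffling figures the abstract refers to.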
A fingerprint verification system using minutiae and wavelet based features
Pub Date : 2009-12-11 DOI: 10.1109/ICET.2009.5353157
Umair Mateen Khan, S. Khan, N. Ejaz, Riaz ur Rehman
The minutiae-based approach is the most widely used for fingerprint matching. Minutiae can be extracted either directly from the gray-scale image or from a thinned image. During matching, finding an exact match depends on identifying the best-matched minutiae pairs in both images. For the matching stage, different kinds of features are derived from the extracted minutiae, and the structure of some of these features provides rotation and translation invariance. The minutiae-based approach also has drawbacks: it requires lengthy preprocessing operations for minutiae extraction and can still produce false minutiae. Previously, post-processing was used to overcome this problem, but it can eliminate valid minutiae along with false ones. Ultimately, the strength of a matching algorithm depends on the strength of the features extracted from the fingerprint. In this research, we present a new approach in which wavelet-based features are fused with minutiae-based features for matching. In particular, we find that among the algorithms we studied, the proposed method has a significant effect on overall performance. Experimental results show that using these features makes the matching process much more accurate, even in the presence of false minutiae.
Citations: 15
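The abstract does not specify which wavelet features are fused with the minutiae. A common global texture descriptor of this kind is the energy of each detail subband across a few levels of a 2-D Haar decomposition; the sketch below implements that descriptor from scratch with NumPy as an illustration, under the assumption (not stated in the paper) that Haar subband energies are a reasonable stand-in for the wavelet-based features. The names `haar2d` and `wavelet_features` are ours.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar decomposition: returns the
    approximation (LL) and detail (LH, HL, HH) subbands."""
    # crop to even dimensions so pixels pair up cleanly
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    # horizontal pass: average / difference of column pairs
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # vertical pass: average / difference of row pairs
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_features(img, levels=3):
    """Mean energy of each detail subband over `levels` of
    decomposition: a global texture descriptor that can be
    concatenated with minutiae-based features for matching."""
    feats = []
    band = img
    for _ in range(levels):
        band, lh, hl, hh = haar2d(band)
        feats += [np.mean(d ** 2) for d in (lh, hl, hh)]
    return np.asarray(feats)
```

Two such vectors can then be compared with a Euclidean distance and that score combined with the minutiae-pair score, which is one straightforward way to realize the fusion the abstract describes.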
Journal: 2009 International Conference on Emerging Technologies