
Latest publications from the International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)

Binary data clustering based on Wiener transformation
D. A. Kumar, M. C. Loraine Charlet Annie
Clustering is the process of grouping similar items. It becomes very tedious as data dimensionality and sparsity increase. Binary data are the simplest form of data used in information systems for very large databases, and they are efficient in terms of computational cost and memory when representing categorical data. Usually, binary data are clustered by treating 0 and 1 as numerical values. In this paper, binary data clustering is performed by first preprocessing the binary data into real values with the Wiener transformation. The Wiener transformation is a linear, statistics-based transformation that is optimal in terms of mean square error. Computational results show that clustering based on the Wiener transformation is very efficient in terms of both objective and subjective measures.
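The pipeline the abstract describes can be sketched in a few lines: filter the 0/1 matrix into real values, then run an ordinary clustering algorithm on the result. This is only an illustrative reading of the method — SciPy's adaptive Wiener filter stands in for the paper's "Wiener transformation", the data matrix is synthetic, and the 2-means loop is a minimal hand-rolled clusterer.

```python
import numpy as np
from scipy.signal import wiener

# Hypothetical 0/1 data matrix: rows are items, columns are binary attributes.
rng = np.random.default_rng(0)
X = (rng.random((40, 16)) > 0.5).astype(float)

# Step 1: map the binary matrix to real values with scipy's adaptive Wiener
# filter (one plausible realisation of the paper's "Wiener transformation").
X_real = wiener(X, mysize=(3, 3))

# Step 2: cluster the transformed rows with a plain 2-means loop.
centers = X_real[:2].copy()
for _ in range(20):
    # assign each row to its nearest center, then recompute the centers
    labels = np.argmin(
        ((X_real[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    for k in range(2):
        if (labels == k).any():
            centers[k] = X_real[labels == k].mean(axis=0)

print(labels.shape)  # one cluster label per row
```

The point of the preprocessing step is that the filtered values carry neighbourhood statistics, so distances between rows become more informative than raw Hamming distances on the 0/1 matrix.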
DOI: 10.1109/ICPRIME.2012.6208287 · Published: 2012-03-21
Citations: 2
An approach of cryptographic for estimating the impact of fingerprint for biometric
K. Sudheesh, C. Patil
A very important step in an automatic fingerprint recognition system is to automatically and reliably extract minutiae from fingerprint images. Minutia-based fingerprint recognition algorithms are widely accepted as the standard for single-fingerprint recognition applications. However, the performance of a minutia extraction algorithm relies heavily on the quality of the input fingerprint images and on the image processing techniques used to enhance the ridges and valleys of the fingerprint image and make the minutiae distinguishable from the rest of the image. This paper focuses on how minutia points are extracted from fingerprint images; the proposed method is to design and implement current fingerprint recognition techniques, specifically public-key cryptography (the RSA algorithm), to improve the performance of the system. The design decomposes into image preprocessing, feature extraction, application of the RSA algorithm to the extracted features, and finally matching.
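The "RSA over extracted features" step can be illustrated with textbook RSA applied to a list of minutia coordinates. Everything here is hypothetical — the primes are tiny demo values, the (x, y) points are made up, and real systems would use padded RSA from a vetted cryptography library rather than raw modular exponentiation.

```python
# Toy textbook RSA over a minutiae feature vector (illustrative only).
p, q = 1009, 1013              # small demo primes
n, phi = p * q, (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent via modular inverse

minutiae = [(12, 34), (56, 78)]   # hypothetical (x, y) minutia points
# Pack each point into one integer < n, encrypt with the public key,
# then decrypt with the private key and unpack.
cipher = [pow(x * 256 + y, e, n) for x, y in minutiae]
plain = [divmod(pow(c, d, n), 256) for c in cipher]
print(plain)
```

The matching stage would then compare decrypted minutia sets, so only key holders can recover the biometric template.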
DOI: 10.1109/ICPRIME.2012.6208337 · Published: 2012-03-21
Citations: 3
Comparative study on the channel estimation for OFDM system using LMS, NLMS and RLS algorithms
K. Elangovan
Orthogonal Frequency Division Multiplexing (OFDM) has recently been applied in wireless communication systems due to its high-data-rate transmission capability, high bandwidth efficiency, and robustness to multi-path delay. Fading is one of the major effects that must be handled at the receiver. To cancel the effect of fading, channel estimation and equalization must be performed at the receiver before data demodulation. This paper compares various channel estimation algorithms for OFDM systems in terms of complexity, advantages, and capacity enhancement. Three prediction algorithms are used in the equalizer to estimate the channel response: the Least Mean Square (LMS), Normalized Least Mean Square (NLMS), and Recursive Least Square (RLS) algorithms. All three are considered in this work, and their performances are statistically compared using MATLAB.
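Of the three adaptive estimators named above, LMS is the simplest to sketch: slide a regressor of recent pilot samples past the channel output and nudge the tap estimates against the instantaneous error. The 3-tap channel, step size, and noise-free setting below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
h_true = np.array([0.8, -0.4, 0.2])   # hypothetical 3-tap channel
mu, w = 0.05, np.zeros(3)             # LMS step size and tap estimates

x = rng.standard_normal(2000)         # known pilot sequence
d = np.convolve(x, h_true)[:2000]     # channel output (noise-free demo)

for n in range(3, 2000):
    u = x[n:n - 3:-1]                 # regressor: current + two past samples
    e = d[n] - w @ u                  # a-priori estimation error
    w += mu * e * u                   # LMS tap update

print(np.round(w, 2))                 # converges toward h_true
```

NLMS differs only in dividing the update by `u @ u` (normalising the step), and RLS replaces the scalar step with a recursively updated inverse correlation matrix, trading complexity for faster convergence.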
DOI: 10.1109/ICPRIME.2012.6208372 · Published: 2012-03-21
Citations: 30
Mining coregulated biclusters from gene expression data
K. I. Lakshmi, C. P. Chandran
The objective of this paper is to mine coregulated biclusters from gene expression data. Gene expression is the process by which a functional product is produced from gene information. Data mining is used to find relevant and useful information in databases. Clustering groups genes according to given conditions. Biclustering algorithms belong to a distinct class of clustering algorithms that cluster the rows and columns of the gene expression matrix simultaneously. In this paper a new algorithm, the Enhanced Bimax algorithm, is proposed based on the Bimax algorithm [7]. A normalization technique is included that displays coregulated biclusters from the gene expression data and groups the genes in a particular order. In this work, a synthetic dataset is used to display the coregulated genes.
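The object Bimax enumerates — an inclusion-maximal all-ones submatrix of a discretised expression matrix — is easy to show at toy scale by brute force over column subsets. The matrix below is invented for illustration; Bimax itself finds these blocks far more efficiently by divide and conquer.

```python
import numpy as np
from itertools import combinations

# Toy expression matrix discretised to 0/1 (1 = gene responds under condition).
M = np.array([[1, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1]])

# Brute-force search for the largest-area all-ones bicluster: for every
# column subset, collect the rows that are 1 in all of those columns.
best = (0, (), ())
for k in range(1, M.shape[1] + 1):
    for cols in combinations(range(M.shape[1]), k):
        rows = tuple(np.where(M[:, list(cols)].all(axis=1))[0])
        if len(rows) * k > best[0]:
            best = (len(rows) * k, rows, cols)

area, rows, cols = best
print(area, rows, cols)   # densest block of coregulated genes x conditions
```

Here genes 0, 1, and 3 are jointly "on" under conditions 0 and 1, so that 3x2 block is the bicluster a miner would report.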
DOI: 10.1109/ICPRIME.2012.6208292 · Published: 2012-03-21
Citations: 0
Medical image retrieval system using GGRE framework
J. Yogapriya, I. Vennila
This paper focuses on medical image retrieval based on feature extraction, classification, and similarity measurement, which can aid computer-assisted diagnosis. The selected features are shape (Generic Fourier Descriptor, GFD) and texture (Gabor Filter, GF); they are extracted and classified into positive and negative features using the Relevance Vector Machine (RVM), a classification technique that provides a natural way to classify multiple image features. The similarity model measures the relevance between the query image and the target images using Euclidean Distance (ED). This medical image retrieval framework is called GGRE. The retrieval algorithm's performance is evaluated in terms of precision and recall. The results show that the multiple-feature classifier system yields better retrieval performance than retrieval systems based on individual features.
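The Euclidean-distance similarity step reduces to ranking database feature vectors by their distance to the query vector. The three-dimensional vectors below are hypothetical placeholders for concatenated GFD shape and Gabor texture descriptors.

```python
import numpy as np

# Hypothetical feature vectors: one query and a small target database.
query = np.array([0.2, 0.8, 0.5])
db = np.array([[0.1, 0.9, 0.4],
               [0.9, 0.1, 0.7],
               [0.2, 0.7, 0.5]])

# Rank targets by Euclidean distance to the query (smaller = more relevant).
dist = np.linalg.norm(db - query, axis=1)
ranking = np.argsort(dist)
print(ranking)   # database indices, most similar first
```

Precision and recall are then computed over the top of this ranking against ground-truth relevance labels.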
DOI: 10.1109/ICPRIME.2012.6208352 · Published: 2012-03-21
Citations: 6
A level set based deformable model for segmenting tumors in medical images
S. Somaskandan, S. Mahesan
Tumor segmentation from medical image data is a challenging task due to the high diversity in the appearance of tumor tissue across cases. In this paper we propose a new level-set-based deformable model to segment the tumor region. We use gradient information as well as regional data analysis to deform the level set. At every iteration of the deformation, we estimate new velocity forces from statistical measures of the identified tumor voxels and from information about the healthy tissue. This method can segment objects even in the presence of weak edges and gaps. Moreover, the deforming contours expand or shrink as necessary so as not to miss weak edges. Experiments are carried out on real datasets with different tumor shapes, sizes, locations, and internal textures. Our results indicate that the proposed method gives promising results on both high-resolution and low-resolution medical images, to the high satisfaction of the oncologist at the Cancer Treatment Unit at Jaffna Teaching Hospital.
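The idea of driving a level set with region statistics rather than edges alone can be sketched on a synthetic image. Here a Chan-Vese-style region speed stands in for the paper's statistics-based velocity forces, and the "tumor" is a bright disc on a dark background — a deliberately simplified setting, not the paper's method.

```python
import numpy as np

# Synthetic image: bright "tumor" disc on a dark background.
y, x = np.mgrid[0:64, 0:64]
img = (((x - 32) ** 2 + (y - 32) ** 2) < 100).astype(float)

# Level set initialised as a circle around the image centre (phi > 0 inside).
phi = 15.0 - np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2)

# Region-driven evolution: at each iteration, recompute the mean intensity
# inside and outside the contour, then push each pixel toward the region
# whose mean it matches better (a simple statistics-based velocity).
for _ in range(50):
    c_in = img[phi > 0].mean()
    c_out = img[phi <= 0].mean()
    force = (img - c_out) ** 2 - (img - c_in) ** 2
    phi += 0.5 * force

seg = phi > 0
print(int(seg.sum()), int(img.sum()))   # segmented vs. true tumor pixels
```

Because the speed depends on region means instead of gradients, the contour locks onto the blob even where an edge-based force would leak through weak boundaries.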
DOI: 10.1109/ICPRIME.2012.6208356 · Published: 2012-03-21
Citations: 1
Channel estimation techniques for OFDM systems
K. Vidhya, R. Kumar
In this work we compare different channel estimation algorithms for Orthogonal Frequency Division Multiplexing (OFDM) systems. The results of the Minimum Mean Square Error (MMSE) algorithm are compared with those of the Least Squares (LS) algorithm.
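The LS/MMSE comparison can be reproduced per subcarrier with a simplified diagonal model: LS divides the received pilots by the known transmitted pilots, while MMSE additionally shrinks that estimate by a Wiener factor built from the channel and noise statistics. The BPSK pilots, unit-variance channel, and SNR below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024                               # pilot subcarriers
X = rng.choice([-1.0, 1.0], N)         # known BPSK pilot symbols
H = rng.standard_normal(N)             # hypothetical channel, var(H) = 1
snr = 10.0                             # linear pilot SNR
Y = H * X + rng.standard_normal(N) / np.sqrt(snr)

# LS estimate: simply divide out the known pilots.
H_ls = Y / X
# Per-tone MMSE: shrink the LS estimate toward the prior mean (0) by the
# Wiener factor var(H) / (var(H) + noise variance).
H_mmse = H_ls * (1.0 / (1.0 + 1.0 / snr))

mse_ls = np.mean((H_ls - H) ** 2)
mse_mmse = np.mean((H_mmse - H) ** 2)
print(mse_ls, mse_mmse)
```

The shrinkage trades a little bias for a larger variance reduction, which is why MMSE beats LS at moderate SNR and the two converge as SNR grows.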
DOI: 10.1109/ICPRIME.2012.6208301 · Published: 2012-03-21
Citations: 11
Text extraction from digital English comic image using two blobs extraction method
M. Sundaresan, S. Ranjini
Text extraction from images is one of the complicated areas of digital image processing. Detecting and recognizing text in comic images is a complex process due to varying text sizes, gray-scale values, complex backgrounds, and different font styles. Text extraction from comic images helps preserve the text and its formatting during conversion and yields high-quality text from the printed document. This paper discusses English text extraction from blobs in comic images using various methods.
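Blob-based text extraction starts from connected-component labelling of the binarised page: each component is a candidate character or word region to pass to later filtering and OCR stages. The tiny binary "panel" below is invented for illustration.

```python
import numpy as np
from scipy import ndimage

# Toy binarised comic panel: 1 = dark pixel; two separate "text" blobs.
page = np.zeros((8, 12), int)
page[1:3, 1:4] = 1     # first blob
page[5:7, 6:11] = 1    # second blob

# Connected-component labelling: each blob gets a distinct integer label,
# and find_objects returns the bounding box of each labelled region.
labels, count = ndimage.label(page)
boxes = ndimage.find_objects(labels)
print(count, [(s[0].start, s[1].start) for s in boxes])
```

Real pipelines then filter the boxes by size, aspect ratio, and position to separate text blobs from drawing strokes before recognition.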
DOI: 10.1109/ICPRIME.2012.6208388 · Published: 2012-03-21
Citations: 14
An analysis on Qualitative Bankruptcy Prediction using Fuzzy ID3 and Ant Colony Optimization Algorithm
A. Martin, V. Aswathy, S. Balaji, T. Lakshmi, V. Prasanna Venkatesan
Many qualitative bankruptcy prediction models are available. These models use non-financial information as qualitative factors to predict bankruptcy. In prior research, a Genetic Algorithm was applied to generate qualitative bankruptcy prediction rules; however, that model uses only a small number of qualitative factors, and the generated rules are redundant and overlapping. To improve prediction accuracy, we propose a model that applies a larger number of qualitative factors, categorized using the Fuzzy ID3 algorithm, with prediction rules generated by the Ant Colony Optimization (ACO) algorithm. In Fuzzy ID3, the concepts of entropy and information gain help rank the qualitative parameters, and this ranking can be used to generate prediction rules for qualitative bankruptcy prediction. The pheromone depositing and updating of the Ant Colony algorithm reduce false-negative rules in the bankruptcy prediction, and its heuristic and probabilistic features increase the prediction accuracy. By using these two algorithms we provide more accurate prediction.
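The entropy/information-gain ranking at the heart of ID3 is a short computation: gain is the label entropy before a split minus the attribute-weighted entropy after it. The qualitative factor and bankruptcy labels below are made-up toy data, not the paper's dataset.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

# Hypothetical qualitative factor (e.g. "management risk") vs. bankruptcy label.
factor = ['high', 'high', 'low', 'low', 'mid', 'mid']
bankrupt = ['yes', 'yes', 'no', 'no', 'yes', 'no']

# Information gain = entropy before the split - weighted entropy after it;
# ID3 ranks qualitative parameters by exactly this quantity.
base = entropy(bankrupt)
split = sum(
    factor.count(v) / len(factor)
    * entropy([b for f, b in zip(factor, bankrupt) if f == v])
    for v in set(factor)
)
gain = base - split
print(round(gain, 3))
```

In the full model this score would be computed on fuzzified attribute memberships, and ACO would then search the ranked attributes for high-quality rule antecedents.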
DOI: 10.1109/ICPRIME.2012.6208382 · Published: 2012-03-21
Citations: 9
Fuzzy-ant based dynamic routing on large road networks
M. Geetha, G. Nawaz
Route selection is essential in everyday life, and several algorithms exist for finding efficient routes on large road networks. This paper introduces a hierarchical community structure that splits large road networks into a hierarchy. It presents a multi-parameter route selection system that employs Fuzzy Logic (FL), and the natural behavior of ants is applied for dynamic routing. The relative importance of parameters such as path length and traffic is adjustable by the user. The new hierarchical routing algorithm significantly reduces the search space. We develop a community-based hierarchical graph model that supports dynamic, efficient route computation on large road networks.
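The user-weighted, multi-parameter route selection can be sketched by blending path length and traffic into a single edge cost and running a shortest-path search over it. The graph, weights, and crisp linear blend below are illustrative stand-ins for the paper's fuzzy scoring and hierarchical search.

```python
import heapq

# Tiny road graph: edge -> (length_km, traffic_level in [0, 1]).
graph = {
    'A': {'B': (2.0, 0.9), 'C': (3.0, 0.1)},
    'B': {'D': (2.0, 0.8)},
    'C': {'D': (3.0, 0.1)},
    'D': {},
}

def route(src, dst, w_len=0.5, w_traffic=0.5):
    # User-adjustable weights blend the two criteria into one edge cost,
    # a crisp stand-in for the paper's fuzzy multi-parameter scoring.
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, (length, traffic) in graph[u].items():
            cost = w_len * length + w_traffic * traffic * length
            nd = d + cost
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:          # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1]

print(route('A', 'D'))
```

With these weights the longer but uncongested route via C wins; raising `w_len` relative to `w_traffic` flips the preference back to the shorter road. The hierarchical community structure would restrict this search to border nodes between communities instead of the full graph.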
DOI: 10.1109/ICPRIME.2012.6208338 · Published: 2012-03-21
Citations: 1