
Latest publications: International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)

Mining coregulated biclusters from gene expression data
K. I. Lakshmi, C. P. Chandran
The objective of this paper is to mine coregulated biclusters from gene expression data. Gene expression is the process by which a functional product is produced from gene information. Data mining is used to find relevant and useful information in databases. Clustering groups genes according to given conditions. Biclustering algorithms form a distinct class of clustering algorithms that cluster the rows and columns of the gene expression matrix simultaneously. In this paper a new algorithm, the Enhanced Bimax algorithm, is proposed based on the Bimax algorithm [7]. A normalization technique is included, which is used to display coregulated biclusters from gene expression data and to group the genes in a particular order. In this work, a synthetic dataset is used to display the coregulated genes.
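The abstract does not give the Enhanced Bimax procedure itself, but the core Bimax idea it builds on can be illustrated compactly: normalize and binarize the expression matrix, then look for inclusion-maximal all-ones submatrices. The sketch below is a brute-force stand-in, practical only for small synthetic matrices (in line with the paper's use of a synthetic dataset); the threshold and minimum bicluster sizes are assumptions, not the authors' settings.

```python
# Minimal Bimax-style sketch: normalize, binarize, enumerate maximal
# all-ones submatrices by brute force over column subsets.
from itertools import combinations
import numpy as np

def binarize(expr, threshold=0.5):
    """Min-max normalize each gene (row), then binarize: 1 = expressed."""
    expr = np.asarray(expr, dtype=float)
    lo = expr.min(axis=1, keepdims=True)
    span = expr.max(axis=1, keepdims=True) - lo
    span[span == 0] = 1.0                    # guard against constant rows
    return ((expr - lo) / span >= threshold).astype(int)

def bimax_bruteforce(binary, min_rows=2, min_cols=2):
    """Enumerate inclusion-maximal all-ones submatrices of a small 0/1 matrix."""
    n_rows, n_cols = binary.shape
    found = []
    for k in range(min_cols, n_cols + 1):
        for cols in combinations(range(n_cols), k):
            rows = [r for r in range(n_rows) if all(binary[r, c] for c in cols)]
            if len(rows) >= min_rows:
                found.append((tuple(rows), cols))
    # keep only maximal biclusters (not contained in a strictly larger one)
    return [b for b in found
            if not any(set(b[0]) <= set(o[0]) and set(b[1]) <= set(o[1])
                       and b != o for o in found)]

expr = [[8.1, 7.9, 0.2, 0.1],   # genes x conditions, made-up values
        [7.5, 8.3, 0.3, 0.2],
        [0.1, 0.2, 6.9, 7.2]]
for rows, cols in bimax_bruteforce(binarize(expr)):
    print("genes", rows, "conditions", cols)
```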
{"title":"Mining coregulated biclusters from gene expression data","authors":"K. I. Lakshmi, C. P. Chandran","doi":"10.1109/ICPRIME.2012.6208292","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208292","url":null,"abstract":"The objective of this paper is mining coregulated biclusters from gene expression data. Gene expression is the process which produces functional product from the gene information. Data mining is used to find relevant and useful information from databases. Clustering groups the genes according to the given conditions. Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix. In this paper a new algorithm, Enhanced Bimax algorithm is proposed based on the Bimax algorithm [7]. The normalization technique is included which is used to display a coregulated biclusters from gene expression data and grouping the genes in the particular order. In this work, Synthetic dataset is used to display the coregulated genes.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123185894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Comparative study on the channel estimation for OFDM system using LMS, NLMS and RLS algorithms
K. Elangovan
Orthogonal Frequency Division Multiplexing (OFDM) has recently been applied in wireless communication systems due to its high data-rate transmission capability, high bandwidth efficiency, and robustness to multi-path delay. Fading is one of the major impairments considered at the receiver. To cancel the effect of fading, channel estimation and equalization must be performed at the receiver before data demodulation. This paper compares various algorithms for OFDM channel estimation in terms of complexity, advantages, and capacity enhancement. Three prediction algorithms are used in the equalizer to estimate the channel response: the Least Mean Square (LMS), Normalized Least Mean Square (NLMS), and Recursive Least Square (RLS) algorithms. These three algorithms are considered in this work and their performances are compared statistically using MATLAB.
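The three estimators named in the abstract follow standard adaptive-filter update rules and can be compared on a toy setup. The sketch below (in Python rather than the paper's MATLAB) assumes a static 3-tap FIR channel and BPSK pilots; the step sizes and forgetting factor are illustrative, not the paper's settings.

```python
# Compare LMS, NLMS, and RLS on identifying a known FIR channel from pilots.
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.8, -0.4, 0.2])          # "unknown" channel taps
N, taps = 500, 3
pilots = rng.choice([-1.0, 1.0], size=N)     # BPSK pilot symbols
y = np.convolve(pilots, h_true)[:N] + 0.01 * rng.standard_normal(N)

def regressor(x, n, taps):
    """Last `taps` inputs at time n, zero-padded at the start."""
    u = np.zeros(taps)
    for k in range(taps):
        if n - k >= 0:
            u[k] = x[n - k]
    return u

def lms(x, d, taps, mu=0.05):
    w = np.zeros(taps)
    for n in range(len(x)):
        u = regressor(x, n, taps)
        w += mu * (d[n] - w @ u) * u         # gradient-descent update
    return w

def nlms(x, d, taps, mu=0.5, eps=1e-6):
    w = np.zeros(taps)
    for n in range(len(x)):
        u = regressor(x, n, taps)
        w += mu * (d[n] - w @ u) * u / (eps + u @ u)  # step normalized by power
    return w

def rls(x, d, taps, lam=0.99, delta=100.0):
    w = np.zeros(taps)
    P = delta * np.eye(taps)                 # inverse-correlation estimate
    for n in range(len(x)):
        u = regressor(x, n, taps)
        k = P @ u / (lam + u @ P @ u)        # gain vector
        w += k * (d[n] - w @ u)
        P = (P - np.outer(k, u @ P)) / lam
    return w

for name, est in [("LMS", lms), ("NLMS", nlms), ("RLS", rls)]:
    w = est(pilots, y, taps)
    print(f"{name}: taps={np.round(w, 3)}, MSE={np.mean((w - h_true)**2):.2e}")
```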
{"title":"Comparative study on the channel estimation for OFDM system using LMS, NLMS and RLS algorithms","authors":"K. Elangovan","doi":"10.1109/ICPRIME.2012.6208372","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208372","url":null,"abstract":"Orthogonal Frequency Division Multiplexing (OFDM) has recently been applied in wireless communication systems due to its high data rate transmission capability with high bandwidth efficiency and its robustness to multi-path delay. Fading is the one of the major aspect which is considered in the receiver. To cancel the effect of fading, channel estimation and equalization procedure must be done at the receiver before data demodulation. In this paper dealt the comparisons of various algorithms, complexity and advantages, on the capacity enhancement for OFDM systems channel estimation techniques. Mainly three prediction algorithms are used in the equalizer to estimate the channel responses namely, Least Mean Square (LMS), Normalized Least Mean Square (NLMS) and Recursive Least Square (RLS) algorithms. These three algorithms are considered in this work and performances are statically compared by using MATLAB Software.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127181261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 30
Binary data clustering based on Wiener transformation
D. A. Kumar, M. C. Loraine Charlet Annie
Clustering is the process of grouping similar items. Clustering becomes very tedious as data dimensionality and sparsity increase. Binary data are the simplest form of data used in information systems for very large databases, and they are efficient in terms of computation and memory for representing categorical data. Usually, binary data clustering is performed using 0 and 1 as numerical values. In this paper, binary data clustering is performed by preprocessing the binary data into real values with the Wiener transformation. The Wiener transformation is a linear transformation based on statistics, and it is optimal in terms of mean square error. Computational results show that clustering based on the Wiener transformation is very efficient in terms of objectivity and subjectivity.
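As a rough illustration of the pipeline described, the sketch below smooths 0/1 records into real values with SciPy's Wiener filter and then clusters them with k-means. The window size and the use of k-means as the downstream clusterer are assumptions; the abstract does not specify either.

```python
# Wiener-smooth a binary matrix, then cluster the real-valued rows.
import numpy as np
from scipy.signal import wiener
from scipy.cluster.vq import kmeans2

binary = np.array([[1, 1, 1, 0, 0, 0],
                   [1, 1, 0, 0, 0, 0],
                   [0, 0, 0, 1, 1, 1],
                   [0, 0, 1, 1, 1, 1]], dtype=float)

real_valued = wiener(binary, mysize=(1, 3))   # adaptive local-mean smoothing per row
centroids, labels = kmeans2(real_valued, k=2, seed=0, minit="points")
print(labels)                                 # e.g. [0 0 1 1]: two row groups
```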
{"title":"Binary data clustering based on Wiener transformation","authors":"D. A. Kumar, M. C. Loraine Charlet Annie","doi":"10.1109/ICPRIME.2012.6208287","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208287","url":null,"abstract":"Clustering is the process of grouping similar items. Clustering becomes very tedious when data dimensionality and sparsity increases. Binary data are the simplest form of data used in information systems for very large database and it is very efficient based on computational efficiency, memory capacity to represent categorical type data. Usually the binary data clustering is done by using 0 and 1 as numerical value. In this paper, the binary data clustering is performed by preprocessing the binary data to real by wiener transformation. Wiener is a linear Transformation based upon statistics and it is optimal in terms of Mean square error. Computational results show that the clustering based on Wiener transformation is very efficient in terms of objectivity and subjectivity.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117128709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
A level set based deformable model for segmenting tumors in medical images
S. Somaskandan, S. Mahesan
Tumor segmentation from medical image data is a challenging task due to the high diversity in the appearance of tumor tissue among different cases. In this paper we propose a new level set based deformable model to segment the tumor region. We use gradient information as well as regional data analysis to deform the level set. At every iteration of the deformation, we estimate new velocity forces according to statistical measures of the identified tumor voxels and information about the healthy tissues. This method provides a way to segment objects even when there are weak edges and gaps. Moreover, the deforming contours expand or shrink as necessary so as not to miss weak edges. Experiments are carried out on real datasets with different tumor shapes, sizes, locations, and internal textures. Our results indicate that the proposed method gives promising results on both high-resolution medical data and low-resolution images, to the high satisfaction of the oncologists at the Cancer Treatment Unit at Jaffna Teaching Hospital.
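A full implementation of the proposed model is beyond an abstract, but the basic mechanics of a gradient-driven level set can be sketched: the front expands under a balloon force and is slowed by an edge-stopping function computed from the image gradient. Everything below (grid size, speed terms, the synthetic disk standing in for a tumor) is a crude illustration, not the authors' formulation, which also uses regional statistics.

```python
# Crude level-set front propagation: outward balloon force, slowed at edges.
import numpy as np

def grad_mag(a):
    gy, gx = np.gradient(a)
    return np.sqrt(gx**2 + gy**2)

def evolve(image, phi, steps=150, dt=0.4, balloon=1.0):
    g = 1.0 / (1.0 + (8.0 * grad_mag(image))**2)     # edge-stopping function
    for _ in range(steps):
        phi += dt * balloon * g * grad_mag(phi)       # expand, slow near edges
    return phi

# synthetic "tumor": bright disk on a dark background
yy, xx = np.mgrid[0:64, 0:64]
image = ((xx - 32)**2 + (yy - 32)**2 < 15**2).astype(float)
phi = -np.ones((64, 64))
phi[30:34, 30:34] = 1.0                               # small seed inside the disk
seg = evolve(image, phi) > 0
print("segmented pixels:", int(seg.sum()))
```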
{"title":"A level set based deformable model for segmenting tumors in medical images","authors":"S. Somaskandan, S. Mahesan","doi":"10.1109/ICPRIME.2012.6208356","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208356","url":null,"abstract":"Tumor segmentation from medical image data is a challenging task due to the high diversity in appearance of tumor tissue among different cases. In this paper we propose a new level set based deformable model to segment the tumor region. We use the gradient information as well as the regional data analysis to deform the level set. At every iteration step of the deformation, we estimate new velocity forces according to the identified tumor voxels statistical measures, and the healthy tissues information. This method provides a way to segment the objects even when there are weak edges and gaps. Moreover, the deforming contours expand or shrink as necessary so as not to miss the weak edges. Experiments are carried out on real datasets with different tumor shapes, sizes, locations, and internal texture. Our results indicate that the proposed method give promising results over high resolution medical data as well as low resolution images for the high satisfaction of the oncologist at the Cancer Treatment Unit at Jaffna Teaching Hospital.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"79 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128113504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
Medical image retrieval system using GGRE framework
J. Yogapriya, I. Vennila
This paper focuses on medical image retrieval based on feature extraction, classification, and similarity measurement, which can aid computer-assisted diagnosis. The selected features are shape (Generic Fourier Descriptor, GFD) and texture (Gabor Filter, GF); they are extracted and classified into positive and negative features using the Relevance Vector Machine (RVM), a classification technique that provides a natural way to classify multiple image features. The similarity model measures the relevance between the query image and the target images using Euclidean Distance (ED). This type of medical image retrieval framework is called GGRE. The retrieval performance is evaluated in terms of precision and recall. The results show that the multiple-feature classifier system yields better retrieval performance than retrieval systems based on the individual features.
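The retrieval step reduces each image to a feature vector and ranks candidates by Euclidean distance to the query. The sketch below uses a small Gabor filter bank as a stand-in for the paper's combined GFD-plus-Gabor features and omits the RVM classification stage; the filter parameters are assumptions.

```python
# Gabor-bank features plus Euclidean-distance ranking for image retrieval.
import numpy as np

def gabor_kernel(theta, lam=4.0, sigma=2.0, size=9):
    """Real part of a Gabor filter at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, n_orient=4):
    feats = []
    for k in range(n_orient):
        kern = gabor_kernel(np.pi * k / n_orient)
        # circular convolution via FFT keeps the sketch short
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern, img.shape)))
        feats += [resp.mean(), resp.std()]   # mean/std per orientation
    return np.array(feats)

def retrieve(query_img, database_imgs, top_k=3):
    q = gabor_features(query_img)
    dists = [np.linalg.norm(q - gabor_features(im)) for im in database_imgs]
    return np.argsort(dists)[:top_k]         # indices of the closest images

rng = np.random.default_rng(1)
db = [rng.random((32, 32)) for _ in range(5)]
print(retrieve(db[2], db))                   # db[2] itself should rank first
```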
{"title":"Medical image retrieval system using GGRE framework","authors":"J. Yogapriya, I. Vennila","doi":"10.1109/ICPRIME.2012.6208352","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208352","url":null,"abstract":"This paper seeks to focus on Medical Image Retrieval based on Feature extraction, Classification and Similarity Measurements which will aid for computer assisted diagnosis. The selected features are Shape(Generic Fourier Descriptor (GFD)and Texture(Gabor Filter(GF)) that are extracted and classified as positive and negative features using a classification technique called Relevance Vector Machine (RVM) that provides a natural way to classify multiple features of images. The similarity model is used to measure the relevance between the query image and the target images based on Euclidean Distance(ED). This type of Medical Image Retrieval System framework is called GGRE. The retrieval algorithm performances are evaluated in terms of precision and recall. The results show that the multiple feature classifier system yields good retrieval performance than the retrieval systems based on the individual features.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129351443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 6
An approach of cryptographic for estimating the impact of fingerprint for biometric
K. Sudheesh, C. Patil
A very important step in an automatic fingerprint recognition system is to automatically and reliably extract minutiae from fingerprint images. Minutiae-based fingerprint recognition algorithms have been widely accepted as the standard for single-fingerprint recognition applications. However, the performance of a minutiae extraction algorithm relies heavily on the quality of the input fingerprint images and on the image processing techniques used to enhance the ridges and valleys of the fingerprint image, making the minutiae distinguishable from the other parts of the image. This paper focuses on how minutiae points are extracted from fingerprint images; the proposed method is to design and implement current fingerprint recognition techniques, specifically public-key cryptography (the RSA algorithm), to improve the performance of the system. The design is decomposed into image preprocessing, feature extraction, application of the RSA algorithm to the extracted features, and finally matching.
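The cryptographic step can be illustrated with textbook RSA applied to a serialized minutiae template. The sketch below uses toy primes and no padding, so it is insecure and purely illustrative; the minutiae coordinates are made up, and minutiae extraction itself is not shown.

```python
# Textbook RSA protecting a serialized minutiae template (toy-sized, insecure).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                                       # public exponent, coprime to phi
d = pow(e, -1, phi)                          # private exponent (Python 3.8+)

minutiae = [(12, 40), (7, 55), (33, 21)]     # (x, y) of ridge endings/bifurcations
template = [v for pt in minutiae for v in pt]

cipher = [pow(m, e, n) for m in template]    # encrypt with the public key
recovered = [pow(c, d, n) for c in cipher]   # decrypt with the private key
assert recovered == template
print("encrypted template:", cipher)
```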
{"title":"An approach of cryptographic for estimating the impact of fingerprint for biometric","authors":"K. Sudheesh, C. Patil","doi":"10.1109/ICPRIME.2012.6208337","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208337","url":null,"abstract":"A very important step in automatic fingerprint recognition system is to automatically and reliably extract minutia from the fingerprint images. Minutia based fingerprint recognition algorithms have been widely accepted as a standard for single fingerprint recognition applications. But the performance of a minutia extraction algorithm relies heavily on the quality of the input fingerprint images and the image processing techniques used for enhancing the ridges and valleys of the fingerprint image and make the minutia distinguishable from the other parts of the fingerprint image. This paper basically focuses on how the minutia points are extracted from fingerprint images and the proposed method is to design and implement the current techniques for fingerprint recognition specifically public key cryptography (RSA Algorithm) to improve the performance of the system. The design is mainly decomposed into image preprocessing, feature extraction, implementation of RSA algorithm for extracted feature and finally matches.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"318 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124505855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
A new approach to the design of knowledge base using XCLS clustering
J. Beevi, N. Deivasigamani
A knowledge base is a special kind of database used for the storage and retrieval of knowledge. From the perspective of knowledge creators, the creation and maintenance of a knowledge base is a crucial activity in the life cycle of knowledge management. This paper presents a novel approach to the creation of a knowledge base. The main focus of our approach is to extract knowledge from unstructured web documents and build a knowledge base from it. Preprocessing techniques such as tokenizing and stemming are performed on the unstructured input web documents. Similarity and redundancy computations are then performed to remove duplicate knowledge. The extracted knowledge is organized and converted to XML documents, on which XCLS clustering is performed. Finally, a knowledge base is designed to store the extracted XML documents, and a query interface has been developed to retrieve the stored knowledge. To test the usefulness and ease of use of our prototype, we evaluated the system with the Technology Acceptance Model (TAM). Results are promising.
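The preprocessing chain (tokenize, stem, drop redundant text, emit XML) can be sketched with the standard library. The crude suffix-stripping stemmer and the Jaccard redundancy threshold below are stand-ins for whatever the authors actually used; the XCLS clustering stage itself is not reproduced.

```python
# Tokenize, stem, deduplicate by Jaccard similarity, serialize to XML.
import xml.etree.ElementTree as ET

def tokenize_and_stem(text):
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    # crude suffix stripping in place of a real stemmer
    return [w[:-3] if w.endswith("ing") else w[:-1] if w.endswith("s") else w
            for w in words if w]

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

docs = [
    "Knowledge bases store and retrieve knowledge.",
    "A knowledge base stores and retrieves knowledge.",   # near-duplicate
    "XCLS clusters XML documents by structure.",
]

kept, token_sets = [], []
for d in docs:
    toks = tokenize_and_stem(d)
    if all(jaccard(toks, seen) < 0.8 for seen in token_sets):  # redundancy check
        kept.append(d)
        token_sets.append(toks)

root = ET.Element("knowledgebase")
for i, d in enumerate(kept):
    item = ET.SubElement(root, "document", id=str(i))
    item.text = d
print(ET.tostring(root, encoding="unicode"))   # near-duplicate is gone
```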
{"title":"A new approach to the design of knowledge base using XCLS clustering","authors":"J. Beevi, N. Deivasigamani","doi":"10.1109/ICPRIME.2012.6208280","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208280","url":null,"abstract":"A Knowledge Base is a special kind of data base used for storage and retrieval of knowledge. From the perspective of knowledge creators, maintenance and creation of knowledge base is a crucial activity in the life cycle of knowledge management. This paper presents a novel approach to the creation of knowledge base. The main focus of our approach is to extract the knowledge from unstructured web documents and create a knowledge base. Preprocessing techniques such as tokenizing, stemming are performed on the unstructured input web documents. Meanwhile, Similarity and redundancy computation is performed for duplicate knowledge removal. The extracted knowledge is organized and converted to XML documents. XCLS clustering is made on XML documents. Finally, Knowledge base is designed for storing extracted XML documents. A query interface has been developed to retrieve the search knowledge. To test the usefulness and ease of use of our prototype, we used the Technology Acceptance Model (TAM) to evaluate the system. Results are promising.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124868487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
Fuzzy-ant based dynamic routing on large road networks
M. Geetha, G. Nawaz
Route selection is essential in everyday life, and several algorithms exist for finding efficient routes on large road networks. This paper introduces a hierarchical community structure that splits large road networks into a hierarchy, together with a multi-parameter route selection system that employs Fuzzy Logic (FL) and the natural behavior of ants for dynamic routing. The relative importance of parameters such as path length and traffic is adjustable by the user. The new hierarchical routing algorithm significantly reduces the search space. We develop a community-based hierarchical graph model that supports dynamic, efficient route computation on large road networks.
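The routing idea, ants biased by pheromone plus a blended cost of path length and traffic, can be sketched on a toy network. The graph, weights, blending rule, and pheromone update below are illustrative assumptions, not the paper's algorithm.

```python
# Tiny ant-colony routing sketch with a user-weighted length/traffic cost.
import random

# edge -> (length, traffic in [0, 1]); a toy road network from A to D
edges = {("A", "B"): (2.0, 0.9), ("A", "C"): (3.0, 0.1),
         ("B", "D"): (2.0, 0.8), ("C", "D"): (2.5, 0.2)}
graph = {}
for (u, v), _ in edges.items():
    graph.setdefault(u, []).append(v)

def cost(u, v, w_len=0.5, w_traffic=0.5):
    length, traffic = edges[(u, v)]
    return w_len * length + w_traffic * traffic * 10   # blend on a common scale

pheromone = {e: 1.0 for e in edges}

def run_ant(src, dst, alpha=1.0, beta=2.0):
    path, node = [], src
    while node != dst:
        nbrs = graph[node]
        weights = [pheromone[(node, n)]**alpha * (1.0 / cost(node, n))**beta
                   for n in nbrs]
        nxt = random.choices(nbrs, weights=weights)[0]  # biased random step
        path.append((node, nxt))
        node = nxt
    return path

random.seed(0)
best, best_cost = None, float("inf")
for _ in range(50):                          # release 50 ants
    p = run_ant("A", "D")
    c = sum(cost(u, v) for u, v in p)
    for e in p:                              # cheaper paths deposit more pheromone
        pheromone[e] += 1.0 / c
    if c < best_cost:
        best, best_cost = p, c
print(best, round(best_cost, 2))             # expected: the low-traffic A-C-D route
```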
{"title":"Fuzzy-ant based dynamic routing on large road networks","authors":"M. Geetha, G. Nawaz","doi":"10.1109/ICPRIME.2012.6208338","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208338","url":null,"abstract":"Route selection is essential in everyday life. We have several algorithms for detecting efficient route on Large Road Networks. This paper introduce the hierarchical community, is presented. It splits large road networks into hierarchical structure. It introduces a multi parameter route selection system which employs Fuzzy Logic (FL) and ant's behavior in nature is applied for the dynamic routing. The important rates of parameters such as path length and traffic are adjustable by the user. The purposes of new hierarchical routing algorithm significantly reduce the search space. We develop a community-based hierarchical graph model that supports Dynamic efficient route computation on large road networks.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126148643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
Text extraction from digital English comic image using two blobs extraction method
M. Sundaresan, S. Ranjini
Text extraction from images is one of the complicated areas of digital image processing. Detecting and recognizing text in comic images is a complex process due to the varying sizes, gray-scale values, complex backgrounds, and font styles involved. Extracting text from comic images helps preserve the text and its formatting during the conversion process and provides high-quality text from the printed document. This paper discusses English text extraction from blobs in comic images using various methods.
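Blob-based text candidate extraction typically binarizes the page, labels connected components, and filters them by size and shape. The sketch below follows that generic recipe with SciPy; the thresholds are assumptions, and the paper's specific two-blobs method is not reproduced here.

```python
# Binarize, label connected components, keep character-shaped blobs.
import numpy as np
from scipy import ndimage

def text_blobs(gray, thresh=128, min_px=4, max_px=400, max_aspect=4.0):
    binary = gray < thresh                        # dark ink on a light page
    labels, _ = ndimage.label(binary)
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        area = h * w
        # character-like: small area, moderate aspect ratio
        if min_px <= area <= max_px and max(h, w) / max(1, min(h, w)) <= max_aspect:
            boxes.append((sl[0].start, sl[1].start, h, w))
    return boxes

page = np.full((20, 20), 255, dtype=np.uint8)
page[2:8, 3:6] = 0                                # a letter-sized dark blob
page[2:8, 8:11] = 0                               # a second one
print(text_blobs(page))                           # two candidate boxes
```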
{"title":"Text extraction from digital English comic image using two blobs extraction method","authors":"M. Sundaresan, S. Ranjini","doi":"10.1109/ICPRIME.2012.6208388","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208388","url":null,"abstract":"Text extraction from image is one of the complicated areas in digital image processing. It is a complex process to detect and recognize the text from comic image due to their various size, gray scale values, complex backgrounds and different styles of font. Text extraction process from comic image helps to preserve the text and formatting during conversion process and provide high quality of text from the printed document. This paper talks about English text extraction from blob in comic image using various methods.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114434898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 14
Hybrid spamicity score approach to web spam detection
S. P. Algur, N. T. Pendari
Web spamming refers to actions intended to mislead search engines and give some pages a higher ranking than they deserve. Fundamentally, web spam is designed to pollute search engines and corrupt the user experience by driving traffic to particular spammed web pages, regardless of the merits of those pages. Recently there has been a dramatic increase in the amount of web spam, leading to a degradation of search results. Most existing web spam detection methods are supervised and require a large set of training web pages. The proposed system studies the problem of unsupervised web spam detection. It introduces the notion of spamicity to measure how likely a page is to be spam; spamicity is a more flexible measure than traditional supervised classification. In the proposed system, link-spam and content-spam techniques are used to determine the spamicity score of a web page, and a threshold set by empirical analysis classifies the page as spam or non-spam.
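A hybrid spamicity score can be sketched as a weighted blend of one content feature and one link feature compared against a threshold. The features, weights, and threshold below are illustrative assumptions; the abstract does not enumerate the paper's actual feature set.

```python
# Blend a content feature and a link feature into one spamicity score.
def content_spamicity(words, spam_words=frozenset({"cheap", "casino", "winner", "bonus"})):
    """Fraction of page words drawn from a spammy keyword list."""
    return sum(w.lower() in spam_words for w in words) / max(1, len(words))

def link_spamicity(outlinks, own_domain):
    """Share of outlinks pointing off-site."""
    external = [u for u in outlinks if own_domain not in u]
    return len(external) / max(1, len(outlinks))

def spamicity(words, outlinks, own_domain, w_content=0.6, w_link=0.4):
    return (w_content * content_spamicity(words)
            + w_link * link_spamicity(outlinks, own_domain))

page_words = "cheap casino bonus winner click here".split()
page_links = ["http://spam1.example", "http://spam2.example",
              "http://mysite.example/about"]
score = spamicity(page_words, page_links, "mysite.example")
print(f"spamicity={score:.2f} ->", "spam" if score > 0.5 else "non-spam")
```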
{"title":"Hybrid spamicity score approach to web spam detection","authors":"S. P. Algur, N. T. Pendari","doi":"10.1109/ICPRIME.2012.6208284","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208284","url":null,"abstract":"Web spamming refers to actions intended to mislead search engines and give some pages higher ranking than they deserve. Fundamentally, Web spam is designed to pollute search engines and corrupt the user experience by driving traffic to particular spammed Web pages, regardless of the merits of those pages. Recently, there is dramatic increase in amount of web spam, leading to a degradation of search results. Most of the existing web spam detection methods are supervised that require a large set of training web pages. The proposed system studies the problem of unsupervised web spam detection. It introduces the notion of spamicity to measure how likely a page is spam. Spamicity is a more flexible measure than the traditional supervised classification methods. In the proposed system link and content spam techniques are used to determine the spamicity score of web page. A threshold is set by empirical analysis which classifies the web page into spam or non spam.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124271359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 14