
Latest publications: Proceedings of Sixth International Conference on Document Analysis and Recognition

How conditional independence assumption affects handwritten character segmentation
Pub Date : 2001-09-10 DOI: 10.1109/ICDAR.2001.953792
M. Maragoudakis, E. Kavallieratou, N. Fakotakis, G. Kokkinakis
This paper investigates the use of Bayesian belief networks to improve the accuracy and training time of character segmentation for unconstrained handwritten text. Comparative experiments evaluate the approach against Naive Bayes classification, which assumes independence of the parameters, and against two other commonly used methods. The results show that capturing the inferential dependencies of the training data can reduce the required training time and data size by 55%. Moreover, the accuracy in detecting segment boundaries exceeds 86%, and even limited training data yield very satisfactory results.
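To make the "conditional independence assumption" concrete, the following toy sketch implements a plain Naive Bayes classifier over binary features, the baseline the paper compares against. The features, labels, and data are invented for illustration; the paper's Bayesian belief networks relax exactly the independence assumption encoded in the `score` accumulation below.

```python
import math
from collections import defaultdict

def train_naive_bayes(samples):
    """samples: list of (feature_tuple, label) with binary features.
    Returns a predict function using Laplace-smoothed estimates."""
    priors = defaultdict(int)
    counts = defaultdict(lambda: defaultdict(int))  # counts[label][(i, value)]
    for features, label in samples:
        priors[label] += 1
        for i, value in enumerate(features):
            counts[label][(i, value)] += 1
    total = len(samples)

    def predict(features):
        best, best_score = None, -math.inf
        for label, prior in priors.items():
            # log P(c) + sum_i log P(f_i | c): the independence assumption
            score = math.log(prior / total)
            for i, value in enumerate(features):
                score += math.log((counts[label][(i, value)] + 1)
                                  / (prior + 2))  # Laplace smoothing, binary values
            if score > best_score:
                best, best_score = label, score
        return best
    return predict

# Invented data: label 1 = "segmentation point", 0 = "not a segmentation point"
data = [((1, 1), 1), ((1, 0), 1), ((0, 0), 0), ((0, 1), 0), ((0, 0), 0)]
predict = train_naive_bayes(data)
print(predict((1, 1)))  # → 1
print(predict((0, 0)))  # → 0
```

A belief network would instead model dependencies between the features, at the cost of learning the network structure, which is where the paper's training-time and data-size savings come in.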
Citations: 3
Applying the T-Recs table recognition system to the business letter domain
Pub Date : 2001-09-10 DOI: 10.1109/ICDAR.2001.953843
T. Kieninger, A. Dengel
This paper summarizes the core idea of the T-Recs table recognition system, an integrated system covering block segmentation, table location and a model-free structural analysis of tables. T-Recs works on the output of commercial OCR systems that provide the word bounding-box geometry together with the text itself (e.g. Xerox ScanWorX). While T-Recs performs well on a number of document categories, business letters remain a challenging domain because the T-Recs location heuristics are misled by their headers and footers, resulting in low recognition precision. Business letters such as invoices are a very interesting domain for industrial applications due to the large number of documents to be analyzed and the importance of the data carried within their tables. Hence, we developed a more restrictive approach, implemented in the T-Recs++ prototype. This paper describes the ideas behind T-Recs++ location and also proposes a quality evaluation measure that reflects the bottom-up strategy of both T-Recs and T-Recs++. Finally, some results comparing the two systems on a collection of business letters are given.
Citations: 80
Substroke approach to HMM-based on-line Kanji handwriting recognition
Pub Date : 2001-09-10 DOI: 10.1109/ICDAR.2001.953838
M. Nakai, N. Akira, H. Shimodaira, S. Sagayama
A new method is proposed for online handwriting recognition of Kanji characters. The method employs substroke HMMs as the minimum units constituting Japanese Kanji characters and utilizes the direction of pen motion. The main motivation is to fully exploit continuous speech recognition algorithms by relating sentences to Kanji characters, phonemes to substrokes, and grammar to Kanji structure. The proposed system consists of input feature analysis, substroke HMMs, a character-structure dictionary and a decoder. The present approach has the following advantages over conventional methods that employ whole-character HMMs. 1) Much smaller memory requirements for the dictionary and models. 2) Fast recognition through an efficient substroke network search. 3) The ability to recognize characters not included in the training data, provided they are defined as substroke sequences in the dictionary. 4) The ability to recognize characters written with various stroke orders, using multiple definitions per character in the dictionary. 5) Easy HMM adaptation to a user from a few sample characters.
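The substroke-dictionary idea in points 3) and 4) can be sketched without the HMM machinery: define each character in a dictionary as one or more sequences of pen-direction substroke codes, so an unseen character becomes recognizable by merely adding a definition. The direction quantisation and the two-entry dictionary below are invented for this sketch; the real system scores candidates with substroke HMMs rather than exact matching.

```python
import math

def direction_codes(points):
    """Quantise a pen trajectory into 8-direction substroke codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)
        codes.append(int(round(angle / (math.pi / 4))) % 8)
    return codes

def recognize(points, dictionary):
    """Return the character whose substroke definition matches the input
    (exact match here; a real system would score candidates with HMMs)."""
    observed = direction_codes(points)
    for char, variants in dictionary.items():
        if observed in variants:  # multiple stroke orders per character
            return char
    return None

# Invented two-character dictionary: code 0 = rightward, 6 = downward
dictionary = {
    "一": [[0]],             # single horizontal stroke
    "十": [[0, 6], [6, 0]],  # two permissible stroke orders
}
horizontal = [(0, 0), (1, 0)]
print(recognize(horizontal, dictionary))  # → 一
```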
Citations: 105
Measuring HMM similarity with the Bayes probability of error and its application to online handwriting recognition
Pub Date : 2001-09-10 DOI: 10.1109/ICDAR.2001.953822
Claus Bahlmann, H. Burkhardt
We propose a novel similarity measure for hidden Markov models (HMMs). The measure calculates the Bayes probability of error for HMM state correspondences and propagates it along the Viterbi path, in a manner similar to HMM Viterbi scoring. It can be applied as a tool to interpret misclassifications, as a stopping criterion in iterative HMM training, or as a distance measure for HMM clustering. The similarity measure is evaluated in the context of online handwriting recognition, on lower-case character models trained on the UNIPEN database. We compare the similarities with experimental classifications; the results show that similar and misclassified class pairs are highly correlated. The measure is not limited to handwriting recognition and can be used in other applications based on HMM methods.
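As a reminder of the "Viterbi path" along which the paper propagates its error measure, here is a minimal Viterbi decoder. The two-state HMM, its state names, and all probabilities are invented for the sketch; only the dynamic-programming recursion is standard.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (log-probability, best state path) for an observation sequence."""
    # Initialise with log P(s) + log P(obs[0] | s)
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][obs[0]]), [s])
          for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            # Best predecessor: max over previous states of score + transition + emission
            score, path = max(
                (V[-1][p][0] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]),
                 V[-1][p][1]) for p in states)
            layer[s] = (score, path + [s])
        V.append(layer)
    return max(V[-1].values())

states = ("up", "down")  # invented pen-direction states
start_p = {"up": 0.6, "down": 0.4}
trans_p = {"up": {"up": 0.7, "down": 0.3}, "down": {"up": 0.4, "down": 0.6}}
emit_p = {"up": {"a": 0.8, "b": 0.2}, "down": {"a": 0.1, "b": 0.9}}
score, path = viterbi(["a", "a", "b"], states, start_p, trans_p, emit_p)
print(path)  # → ['up', 'up', 'down']
```

The paper's measure attaches a Bayes error term to each state correspondence visited along such a path, instead of (or alongside) the emission log-likelihoods accumulated here.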
Citations: 69
Character pre-classification based on fuzzy typographical analysis
Pub Date : 2001-09-10 DOI: 10.1109/ICDAR.2001.953758
Lu Da, Pu Wei, B. McCane
This paper presents a new fuzzy-logic approach to character pre-classification, providing a precise formulation of a baseline detection algorithm with tolerance analysis based on the typographical structure of textual blocks. The remaining virtual reference lines are extracted using clustering techniques. To ensure correct character pre-classification, a fuzzy-logic approach assigns each ambiguous class a membership in every typographical category. The results show that typographical categorization improves the character recognition rate. The fuzzy typographical analysis correctly pre-classifies characters and can process more than 10000 characters per second.
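The graded membership assignment can be illustrated with a toy sketch. The categories, the triangular membership functions, and the reference-line values below are invented, not the paper's: the point is only that a character whose top edge falls between the x-height and the cap height receives partial membership in both the "core" and "ascender" categories instead of a hard assignment.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def typographic_memberships(top_y, baseline=0.0, x_height=1.0, cap_height=1.6):
    """Membership of a character's top edge in each typographical category
    (invented categories and reference-line positions)."""
    return {
        "core":     triangular(top_y, baseline, x_height, cap_height),
        "ascender": triangular(top_y, x_height, cap_height, 2 * cap_height),
    }

# A top edge at 1.3 lies between the x-height and cap height,
# so both categories get equal support
m = typographic_memberships(top_y=1.3)
print(round(m["core"], 2), round(m["ascender"], 2))  # → 0.5 0.5
```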
Citations: 1
An improved learning scheme for the moving window classifier
Pub Date : 2001-09-10 DOI: 10.1109/ICDAR.2001.953861
Sanaul Hoque, M. Fairhurst
The moving window classifier (MWC) is a simple and efficient classifier structure which, although it has shown promising performance in a variety of tasks such as face recognition, is most commonly applied as a tool in text recognition. Various measures have been proposed to improve MWC classification speed and to reduce its memory requirements. This paper introduces techniques for improving MWC classification accuracy without losing any of the gains previously achieved. These performance-enhancement schemes are readily applicable to a range of related classifiers and hence provide a generalized method for enhancement across a variety of tasks.
Citations: 3
AIDAS: incremental logical structure discovery in PDF documents
Pub Date : 2001-09-10 DOI: 10.1109/ICDAR.2001.953816
A. Anjewierden
AIDAS is part of a research project in which the aim is to turn technical manuals into a database of indexed training material. We describe the approach AIDAS uses to extract the logical document structure from PDF documents. The approach is based on the idea that the layout structure contains cues about the logical structure and that the logical structure can be discovered incrementally.
Citations: 55
Handwritten country name identification using vector quantisation and hidden Markov model
Pub Date : 2001-09-10 DOI: 10.1109/ICDAR.2001.953877
G. Leedham, W. Tan, Weng Lee Yap
This paper is a study of keyword recognition using vector quantisation and a hidden Markov model. The purpose is to identify a word holistically. The study considers the problem of identifying a handwritten country name from among the 189 country names registered with the Universal Postal Union. The method divides the words in the last line of the address image into 16×16 pixel blocks, which are fed into a vector quantiser. The VQ outputs are classified using an HMM. Some presorting is carried out based on the letter length of the word. Results on a set of 415 handwritten country names show the method is 85.3% correct, with the majority of errors due to misestimated word letter lengths and VQ output distorted by sloping and slanted words/letters.
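The vector-quantisation front end described above can be sketched as nearest-centroid coding: each pixel block is mapped to the index of its closest codebook vector, producing the discrete symbol sequence the HMM consumes. The codebook and blocks below are invented low-dimensional stand-ins (real blocks would be 16×16 = 256-dimensional).

```python
def quantise(block, codebook):
    """Return the index of the codebook vector nearest the block (Euclidean)."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(range(len(codebook)), key=lambda i: dist2(block, codebook[i]))

def blocks_to_symbols(blocks, codebook):
    """Map a sequence of pixel blocks to a VQ symbol sequence for the HMM."""
    return [quantise(b, codebook) for b in blocks]

# Tiny invented codebook of 4-dimensional "blocks"
codebook = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 0, 1, 0)]
blocks = [(0.9, 0.1, 0.8, 0.2), (0.1, 0.1, 0.0, 0.2), (1.0, 1.0, 0.9, 1.0)]
print(blocks_to_symbols(blocks, codebook))  # → [2, 0, 1]
```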
Citations: 3
Web sites thematic classification using hidden Markov models
Pub Date : 2001-09-10 DOI: 10.1109/ICDAR.2001.953955
Lyonel Serradura, M. Slimane, N. Vincent
There is more and more information available on the Internet, and we need tools to help extract the right pieces of it. We have developed a classification algorithm tackling this issue for French. It distinguishes web pages by classifying their text content into themes. We use hidden Markov models (HMMs) to build this method, named STCoL (Supervised Thematic Corpus Learning). Once themes are modeled with HMMs, STCoL is able to classify documents from different sources. The method is not only efficient but also robust.
Citations: 1
Newspaper page decomposition using a split and merge approach
Pub Date : 2001-09-10 DOI: 10.1109/ICDAR.2001.953972
K. Hadjar, O. Hitz, R. Ingold
Indexing large newspaper archives requires automatic page decomposition algorithms with high accuracy. In this paper, we present our approach to an automatic page decomposition algorithm developed for the First International Newspaper Segmentation Contest. Our approach decomposes the newspaper image into image regions, horizontal and vertical lines, text regions and title areas. Experimental results are obtained from the data set of the contest.
Citations: 41