
Proceedings of 3rd International Conference on Document Analysis and Recognition: Latest Publications

Detection of courtesy amount block on bank checks
Pub Date : 1996-04-01 DOI: 10.1109/ICDAR.1995.602011
A. Agarwal, Karim Hussein, Amar Gupta, P. Wang
This paper discusses a technique for locating the courtesy amount block on bank checks. In the analysis and recognition process, connected components in the image are identified first. Then, strings are constructed on the basis of proximity and horizontal alignment of characters. Next, a set of rules and heuristics are applied to these strings to choose the correct one. The chosen string is only accepted if it passes a verification test, which includes an attempt to recognize the currency sign. A deterministic finite automaton system is then used for segmenting the handprinted courtesy amount. Finally, the separated components are passed on to a neural network based recognition system.
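The string-construction step described above can be illustrated with a small grouping routine. The sketch below (Python, assuming connected components have already been reduced to bounding boxes) clusters components into candidate strings by horizontal proximity and vertical overlap; the `gap_ratio` and `overlap_ratio` thresholds are hypothetical tuning parameters, not values from the paper, and the subsequent rule-based selection and currency-sign verification are not reproduced.

```python
# Illustrative sketch (not the authors' exact rules): group connected-component
# bounding boxes into candidate strings by horizontal proximity and vertical
# overlap. Thresholds gap_ratio and overlap_ratio are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Box:
    x: int  # left edge
    y: int  # top edge
    w: int  # width
    h: int  # height

def group_into_strings(boxes: List[Box], gap_ratio: float = 1.5,
                       overlap_ratio: float = 0.5) -> List[List[Box]]:
    """Cluster component boxes into left-to-right strings."""
    boxes = sorted(boxes, key=lambda b: b.x)
    strings: List[List[Box]] = []
    for b in boxes:
        placed = False
        for s in strings:
            last = s[-1]
            # horizontal proximity: gap smaller than gap_ratio * component height
            gap = b.x - (last.x + last.w)
            # vertical overlap between the two boxes, relative to the smaller one
            top = max(b.y, last.y)
            bottom = min(b.y + b.h, last.y + last.h)
            overlap = max(0, bottom - top) / min(b.h, last.h)
            if gap < gap_ratio * max(b.h, last.h) and overlap > overlap_ratio:
                s.append(b)
                placed = True
                break
        if not placed:
            strings.append([b])
    return strings
```

A full system would then pass each candidate string through the rule set and accept it only after the verification test (including the attempt to recognize the currency sign) mentioned in the abstract.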
Citations: 30
Realization of a high-performance bilingual Chinese-English OCR system
Pub Date : 1995-08-14 DOI: 10.1109/ICDAR.1995.602065
Hong Guo, Xiaoqing Ding, Zhong Zhang, F. Guo, Youshou Wu
This paper focuses on the realization of a bilingual Chinese-English OCR system. First, the Twice-Segment Algorithm is used for segmentation of documents with Chinese and English characters mixed. Then the comprehensive recognition method is employed to improve the robustness of Chinese character recognition. A new measurement of robustness of OCR recognition performance is also put forward here. Finally, exciting experimental results are given.
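The paper does not spell out the Twice-Segment Algorithm, so the following is only a generic sketch of the second-pass merging commonly used for mixed Chinese/English lines: after an initial cut at connected-component boundaries, narrow adjacent pieces are merged when the combined width still fits a roughly square full-width cell. The `full_width_ratio` threshold is hypothetical, and a real system would confirm each merge with recognition confidence rather than geometry alone.

```python
# Generic illustration (not the paper's Twice-Segment Algorithm): merge
# adjacent narrow segments of a text line into full-width character cells,
# using the line height as the target width. full_width_ratio is hypothetical.
from typing import List, Tuple

Seg = Tuple[int, int]  # (left, right) x-extent of a segment

def second_pass_merge(segments: List[Seg], line_height: int,
                      full_width_ratio: float = 0.85) -> List[Seg]:
    """Merge adjacent pieces while the merged width still fits one square cell."""
    segments = sorted(segments)
    merged: List[Seg] = []
    for left, right in segments:
        if merged:
            prev_left, _prev_right = merged[-1]
            if (right - prev_left) <= full_width_ratio * line_height:
                merged[-1] = (prev_left, right)  # still fits a full-width character
                continue
        merged.append((left, right))
    return merged
```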
Citations: 11
Incremental character recognition with feature attribution
Pub Date : 1995-08-14 DOI: 10.1109/ICDAR.1995.602031
Ré Audouin, L. Shastri
The neural network learning algorithm presented in the paper splits the problem of handwritten digit recognition into easy steps by learning character classes incrementally: at each step, the neurons most relevant to the considered class are fixed so that subsequent classes will not disrupt the knowledge already acquired, but will be able to use it. A new relevance measure is also defined, for which a cheap approximation can be computed. The advantage of the attribution scheme starts to show even for small experiments, but should become more obvious as the number of classes increases. Picking only a few relevant features for each class, and sharing them between classes, constrains learning and improves generalization. Experiments were limited to pre-segmented digits, but our use of a spatio-temporal network architecture makes their extension to unsegmented strings straightforward.
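As a rough sketch of the freezing idea described above (not the authors' exact algorithm or relevance measure), the functions below score hidden units for one class by a simple |weight x mean activation| proxy, freeze the top-k scoring units, and mask their input-weight gradients so later classes can reuse but not overwrite them. The array shapes and the proxy relevance are assumptions for illustration.

```python
# Minimal sketch of incremental learning with frozen relevant units.
# Shapes assumed: out_weights (n_classes, n_hidden), hidden_acts (n_samples,
# n_hidden), w_in and grad_in (n_hidden, n_inputs), frozen (n_hidden,) bool.
import numpy as np

def relevance(out_weights: np.ndarray, hidden_acts: np.ndarray,
              class_idx: int) -> np.ndarray:
    """Score hidden units by their contribution to one output class."""
    mean_act = hidden_acts.mean(axis=0)
    return np.abs(out_weights[class_idx] * mean_act)

def update_frozen_mask(frozen: np.ndarray, scores: np.ndarray,
                       k: int) -> np.ndarray:
    """Mark the k most relevant, not-yet-frozen units as frozen."""
    order = np.argsort(-scores)
    newly = [u for u in order if not frozen[u]][:k]
    frozen = frozen.copy()
    frozen[newly] = True
    return frozen

def masked_gradient_step(w_in: np.ndarray, grad_in: np.ndarray,
                         frozen: np.ndarray, lr: float) -> np.ndarray:
    """Update only the input weights of unfrozen units."""
    mask = (~frozen)[:, None]
    return w_in - lr * grad_in * mask
```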
Citations: 0
A set-based benchmarking method for address bloc location on arbitrarily complex grey level images
Pub Date : 1995-08-14 DOI: 10.1109/ICDAR.1995.601972
S. Randriamasy
A set-based benchmarking method has been previously presented for page segmentation. The author proposes to adapt it to address bloc location on grey level images of various layouts and visual quality. For each mailpiece, a ground truth file describing the address bloc lines is compared with the automatically located zones meant to contain them. The pixels forming the ground truth lines are extracted by an adaptive binarization on the manually located zones. The algorithm allows ABL to locate lines partially, to provide several solutions, and the contents-related importance of a text line is reflected numerically. It provides an explicit error diagnosis together with a numerical evaluation.
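A minimal sketch of the set-based scoring idea, assuming ground-truth address-block lines are given as pixel sets and located zones as rectangles: each line's coverage by the union of zones is computed separately, so partially located lines still contribute. The per-line importance weights and the error-diagnosis rules of the method are not reproduced here.

```python
# Hedged sketch: per-line pixel coverage of ground-truth address-block lines
# by automatically located rectangular zones.
from typing import Dict, List, Set, Tuple

Pixel = Tuple[int, int]

def zone_pixels(zones: List[Tuple[int, int, int, int]]) -> Set[Pixel]:
    """Expand rectangular zones (x0, y0, x1, y1), inclusive, into a pixel set."""
    pts: Set[Pixel] = set()
    for x0, y0, x1, y1 in zones:
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                pts.add((x, y))
    return pts

def line_coverage(gt_lines: Dict[str, Set[Pixel]],
                  located: List[Tuple[int, int, int, int]]) -> Dict[str, float]:
    """Fraction of each ground-truth line's pixels covered by located zones."""
    zp = zone_pixels(located)
    return {name: (len(px & zp) / len(px) if px else 0.0)
            for name, px in gt_lines.items()}
```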
Citations: 3
Hierarchical flexible matching for recognition of Chinese characters
Pub Date : 1995-08-14 DOI: 10.1109/ICDAR.1995.602028
F. Chang, Yung-Ping Cheng, Yao-Sheng Huang
Although there are thousands of commonly used Chinese characters, they are actually composed of a much smaller number of stroke sub-patterns. Thus, in the task of recognizing optical Chinese characters, it is worthwhile to first identify the sub-patterns and then the whole characters composed of them. In such a hierarchical approach, however, decision mistakes at lower levels can easily propagate into upper levels and cause a high mis-recognition rate. To remedy this problem we devise a method called hierarchical flexible matching (HFM). The idea is to minimize the decision burden at lower levels by allowing possibly conflicting sub-patterns to be identified from the same pool of primitives. The collection of these sub-patterns is then matched against some pre-specified models. In doing so, a metric is used to measure how well a candidate model is mapped into the given collection and how many primitives are covered by this mapping. We apply the HFM method to the font-independent recognition of printed Chinese characters and have acquired very promising results.
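In the spirit of the metric described above (not the authors' exact formula), the sketch below scores a candidate character model by the weighted fraction of its sub-patterns found in the detected collection and by how many of the image's stroke primitives that mapping covers. The per-sub-pattern weights and the equal 0.5/0.5 combination are hypothetical.

```python
# Illustrative HFM-style scoring: combine sub-pattern match quality with
# coverage of the image's stroke primitives.
from typing import Dict, Set

def hfm_score(model_subpatterns: Dict[str, float],
              detected: Dict[str, Set[int]],
              n_primitives: int,
              w_match: float = 0.5, w_cover: float = 0.5) -> float:
    """model_subpatterns: sub-pattern label -> weight required by the model.
    detected: sub-pattern label -> set of primitive indices it explains."""
    matched, covered = 0.0, set()
    for sp, weight in model_subpatterns.items():
        if sp in detected:
            matched += weight          # this required sub-pattern was found
            covered |= detected[sp]    # primitives explained by it
    match_quality = matched / sum(model_subpatterns.values())
    coverage = len(covered) / n_primitives if n_primitives else 0.0
    return w_match * match_quality + w_cover * coverage
```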
Citations: 2
TheSys-a comprehensive thesaurus system for intelligent document analysis and text retrieval
Pub Date : 1995-08-14 DOI: 10.1109/ICDAR.1995.602130
Chin Lu, K. H. Lee, H. Chen
Well designed thesauri can represent semantic/conceptual knowledge so as to reveal relationships among different elements in documents, thus serving as a critical tool in intelligent text retrieval systems and document analysis systems. In this paper, we present a thesaurus system, referred to as TheSys, which can be used as a tool for users to build thesauri according to their own requirements. Our goal is to design a comprehensive thesaurus building tool that can be used in any field of specialty rather than targeting a particular specialty field. People can use our system to build an electronic thesaurus in any specialty field required for a specific application. We propose a thesaurus model, referred to as the thesaurus frame, which uses weighted links to represent semantic relationships among concepts and terms. Our approach is to use a set of controlled terms, referred to as semantemes, to build the thesaurus frame. This approach can effectively reduce the size of the thesaurus without compromising its intelligence.
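A minimal data-structure sketch of the thesaurus-frame idea: terms and concepts as nodes connected by weighted semantic links. The relation labels, weights, and method names below are illustrative, not TheSys's actual schema.

```python
# Hedged sketch of a weighted-link thesaurus frame.
from collections import defaultdict
from typing import Dict, List, Tuple

class ThesaurusFrame:
    def __init__(self) -> None:
        # node -> list of (related node, relation label, weight)
        self.links: Dict[str, List[Tuple[str, str, float]]] = defaultdict(list)

    def add_link(self, a: str, b: str, relation: str, weight: float) -> None:
        """Add a symmetric weighted semantic link between two terms/concepts."""
        self.links[a].append((b, relation, weight))
        self.links[b].append((a, relation, weight))

    def related(self, term: str, min_weight: float = 0.0) -> List[Tuple[str, str, float]]:
        """Return links from a term whose weight reaches a threshold."""
        return [link for link in self.links[term] if link[2] >= min_weight]

# Usage (illustrative): frame = ThesaurusFrame()
# frame.add_link("cheque", "bank draft", "synonym", 0.9)
# frame.related("cheque", min_weight=0.5)
```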
Citations: 3
Intelligent document assistant processor for pen-based computing systems
Pub Date : 1995-08-14 DOI: 10.1109/ICDAR.1995.602093
P. Rhee, T. Fujisaki
In this paper, we present the framework design of IDAP (Intelligent Document Assistant Processor), which provides an intelligent user interface for pen-based computer systems. To date, no conventional computer user interface has matched pen and paper in convenience and ease of use for generating documents. The system takes roughly drawn documents from users and generates neat documents that satisfy them. Rough documents include handwritten text, hand-drawn tables, hand-drawn diagrams, and handwritten mathematical expressions. In a pen-based system, user satisfaction is at least as important as functionality and performance, so a sophisticated user interface is critical for a successful design. Intelligent user interface methodology is employed to make the system more intelligent, natural, and easy to use. A human-computer interaction model called H-COS is proposed for pen-based applications, and the model is applied to the design of IDAP.
Citations: 3
Recognition of handprinted Chinese characters using Gabor features
Pub Date : 1995-08-14 DOI: 10.1109/ICDAR.1995.602027
Y. Hamamoto, S. Uchimura, K. Masamizu, S. Tomita
A method for handprinted Chinese character recognition based on Gabor filters is proposed. The Gabor approach to character recognition is intuitively appealing because it is inspired by a multi-channel filtering theory for processing visual information in the early stages of the human visual system. The performance of a character recognition system using Gabor features is demonstrated on the ETL-8 character set. Experimental results show that the Gabor features yielded an error rate of 2.4%, versus an error rate of 4.4% obtained with a popular feature extraction method.
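A generic sketch of Gabor feature extraction for a character image (pure NumPy; the paper's filter bank parameters are not given here, so the orientations, wavelengths, and sigma below are illustrative). Each filter response over the image is reduced to one mean-magnitude value, giving a small feature vector; the image is assumed to be at least as large as the kernel.

```python
# Hedged sketch: Gabor filter bank responses as character features.
import numpy as np

def gabor_kernel(size: int, sigma: float, theta: float, lambd: float,
                 gamma: float = 0.5, psi: float = 0.0) -> np.ndarray:
    """Real part of a Gabor kernel of odd side length `size`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
           * np.cos(2 * np.pi * xr / lambd + psi)

def gabor_features(img: np.ndarray, size: int = 11) -> np.ndarray:
    """Mean absolute filter response over 4 orientations x 2 wavelengths."""
    feats = []
    h, w = img.shape
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        for lambd in (4.0, 8.0):
            k = gabor_kernel(size, sigma=0.56 * lambd, theta=theta, lambd=lambd)
            # valid-mode 2-D correlation via sliding windows
            resp = np.array([[np.sum(img[i:i + size, j:j + size] * k)
                              for j in range(w - size + 1)]
                             for i in range(h - size + 1)])
            feats.append(np.abs(resp).mean())
    return np.array(feats)
```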
Citations: 25
Symbolic/neural recognition of cursive amounts on bank cheques
Pub Date : 1995-08-14 DOI: 10.1109/ICDAR.1995.598934
Jean-Pierre Dodel, R. Shinghal
A symbolic/neural approach to recognizing unconstrained handwritten cursive amounts on bank cheques is proposed. Features such as ascenders and descenders are extracted from the binary image of the amount. Depending on the features extracted, some words are recognized entirely symbolically, some entirely neurally, and the remainder both symbolically and neurally. Results of preliminary experiments are provided.
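The ascender/descender cue can be sketched as follows, under the simplifying assumptions that the word image is binary and non-empty and that the main writing band is estimated from the vertical ink profile; the 0.5 density cut and the 0.2 margin are hypothetical tolerances, not values from the paper.

```python
# Hedged sketch: detect ascenders/descenders relative to an estimated
# main writing band in a binary word image (1 = ink).
import numpy as np

def ascender_descender_profile(binary_word: np.ndarray) -> dict:
    """Return whether ink extends clearly above / below the main body band."""
    rows = binary_word.sum(axis=1)
    dense = np.where(rows > 0.5 * rows.max())[0]   # rows of the main body
    top_band, bottom_band = dense.min(), dense.max()
    margin = int(0.2 * (bottom_band - top_band + 1))
    ink_rows = np.where(binary_word.any(axis=1))[0]
    return {
        "has_ascender": bool(ink_rows.min() < top_band - margin),
        "has_descender": bool(ink_rows.max() > bottom_band + margin),
    }
```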
Citations: 17
An extended-shadow-code based approach for off-line signature verification. II. Evaluation of several multi-classifier combination strategies
Pub Date : 1995-08-14 DOI: 10.1109/ICDAR.1995.598975
R. Sabourin, Ginette Genest
For pt. I see Proc. 12th ICPR, p. 450-3. In a real situation, the choice of the best representation R(γ) for the implementation of a signature verification system able to cope with all types of handwriting is a very difficult task. This study is original in that the design of the integrated classifiers E(x) is based on a large number of individual classifiers e_k(x) (or signature representations R(γ)) in an attempt to overcome in some way the need for feature selection. In this paper, the authors present a first systematic evaluation of a multi-classifier-based approach for off-line signature verification. Two types of integrated classifiers based on kNN or minimum distance classifiers, and 15 types of representation related to the ESC used as a shape factor, have been evaluated using a signature database of 800 images (20 writers × 40 signatures per writer) in the context of random forgeries.
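As one example of the combination strategies evaluated in this kind of study, the sketch below builds minimum-distance individual classifiers over some feature representation and combines their genuine/forgery decisions by majority vote. The prototype-based classifier and its threshold are stand-ins; the paper's ESC representations and its full set of combination rules are not reimplemented.

```python
# Hedged sketch: minimum-distance base classifiers combined by majority vote.
# genuine_protos holds reference feature vectors for a writer; threshold is a
# hypothetical acceptance distance, not a value from the paper.
from typing import Callable, List, Sequence
import numpy as np

def min_distance_classifier(genuine_protos: np.ndarray,
                            threshold: float) -> Callable[[Sequence[float]], int]:
    """Accept (return 1) if the nearest genuine prototype is closer than threshold."""
    def clf(x: Sequence[float]) -> int:
        d = np.linalg.norm(genuine_protos - np.asarray(x, dtype=float), axis=1).min()
        return int(d < threshold)
    return clf

def majority_vote(classifiers: List[Callable[[Sequence[float]], int]],
                  x: Sequence[float]) -> int:
    """Ensemble decision: genuine (1) only if more than half of the votes say so."""
    votes = sum(clf(x) for clf in classifiers)
    return int(2 * votes > len(classifiers))
```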
Citations: 17