Latest publications from the 2008 Eighth IAPR International Workshop on Document Analysis Systems

Performance Evaluation of Symbol Recognition and Spotting Systems: An Overview
Pub Date : 2008-09-16 DOI: 10.1109/DAS.2008.63
Mathieu Delalandre, Ernest Valveny, J. Lladós
This paper deals with the topic of performance evaluation of symbol recognition and spotting systems. It presents an overview resulting from the work and discussions undertaken by a working group on this subject. The paper starts by giving a general view of symbol recognition and spotting and of performance evaluation. Next, the two main issues of performance evaluation are discussed: groundtruthing and performance characterization. Different problems related to both issues are addressed: groundtruthing of real documents, generation of synthetic documents, degradation models, the use of a priori knowledge, mapping of the groundtruth to the system results, and so on. Open problems arising from this overview are discussed at the end of the paper.
Citations: 18
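One of the evaluation issues this overview raises, mapping the groundtruth to the system results, can be sketched as a greedy intersection-over-union matching between groundtruth and detected symbol boxes. The function names, the box format, and the 0.5 threshold below are illustrative assumptions, not taken from the paper.

```python
def iou(a, b):
    # a, b: axis-aligned boxes (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_and_score(groundtruth, detections, thr=0.5):
    """Greedily map each detection to an unused groundtruth box;
    count a true positive when the overlap clears the threshold,
    and return (precision, recall)."""
    used, tp = set(), 0
    for d in detections:
        best, best_i = 0.0, None
        for i, g in enumerate(groundtruth):
            if i in used:
                continue
            v = iou(d, g)
            if v > best:
                best, best_i = v, i
        if best >= thr:
            used.add(best_i)
            tp += 1
    prec = tp / len(detections) if detections else 0.0
    rec = tp / len(groundtruth) if groundtruth else 0.0
    return prec, rec
```

Marking each groundtruth box as used at most once keeps one detection from being credited for several symbols.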
CCD: Connected Component Descriptor for Robust Mosaicing of Camera-Captured Document Images
Pub Date : 2008-09-16 DOI: 10.1109/DAS.2008.31
T. Kasar, A. Ramakrishnan
We propose a robust method for mosaicing of document images using features derived from connected components. Each connected component is described using the angular radial transform (ART). To ensure geometric consistency during feature matching, the ART coefficients of a connected component are augmented with those of its two nearest neighbors. The proposed method addresses two critical issues often encountered in correspondence matching: (i) the stability of features and (ii) robustness against false matches due to the multiple instances of characters in a document image. The use of connected components guarantees a stable localization across images. The augmented features ensure a successful correspondence matching even in the presence of multiple similar regions within the page. We illustrate the effectiveness of the proposed method on camera captured document images exhibiting large variations in viewpoint, illumination and scale.
Citations: 10
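The neighbor augmentation described in the abstract can be sketched as follows: each component's feature vector is concatenated with those of its nearest neighbors by centroid distance. The plain lists standing in for ART coefficient vectors and the helper names are assumptions for illustration.

```python
import math

def augment_descriptors(centroids, features, k=2):
    """Concatenate each component's feature vector with those of its
    k nearest neighbors (by centroid distance), so matching enforces
    local geometric consistency across images."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    augmented = []
    for i, (c, f) in enumerate(zip(centroids, features)):
        order = sorted((j for j in range(len(centroids)) if j != i),
                       key=lambda j: dist(c, centroids[j]))
        desc = list(f)
        for j in order[:k]:
            desc.extend(features[j])
        augmented.append(desc)
    return augmented
```

Because the augmented descriptor only matches when a component *and* its neighborhood agree, repeated characters elsewhere on the page stop producing false correspondences.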
Contrast Enhancement in Multispectral Images by Emphasizing Text Regions
Pub Date : 2008-09-16 DOI: 10.1109/DAS.2008.68
M. Lettner, Florian Kleber, Robert Sablatnig, Heinz Miklas
This paper deals with enhancing the readability of historic texts written on parchment. Due to mold, air, humidity, water, etc., the parchment and text are partially damaged and consequently hard to read. In order to enhance the readability of the text, the manuscript pages are imaged in different spectral bands ranging from 360 to 1000 nm. The readability enhancement is based on a spectral and spatial analysis of the multivariate image data by multivariate spatial correlation. The main advantage of the method is that specifically the text regions are enhanced, which is achieved by generating a mask image. This mask is based on the automatic reconstruction of the ruling scheme of the text pages. The method is tested on two medieval Slavonic manuscripts written on parchment.
Citations: 3
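The idea of combining many spectral bands under a text mask can be sketched with a simple principal-component projection: estimate the dominant spectral direction from the masked (text) pixels only, then project every pixel onto it. This is a simplified stand-in for the paper's multivariate spatial correlation analysis, and all names below are assumptions.

```python
import numpy as np

def enhance_multispectral(stack, mask=None):
    """Project each pixel's spectral vector onto the first principal
    component, estimated only from the pixels selected by `mask`
    (e.g. a text mask from the reconstructed ruling scheme).
    stack: (bands, H, W) array; mask: boolean (H, W) or None."""
    bands, h, w = stack.shape
    X = stack.reshape(bands, -1).T.astype(float)   # pixels x bands
    sel = X if mask is None else X[mask.ravel()]
    mu = sel.mean(axis=0)
    # first right-singular vector = dominant spectral direction
    _, _, vt = np.linalg.svd(sel - mu, full_matrices=False)
    proj = (X - mu) @ vt[0]
    out = proj.reshape(h, w)
    out -= out.min()                               # rescale to [0, 1]
    rng = out.max()
    return out / rng if rng else out
```

Estimating the projection from masked pixels only is what biases the contrast stretch toward the text rather than the parchment background.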
Attention-Based Document Classifier Learning
Pub Date : 2008-09-16 DOI: 10.1109/DAS.2008.36
Georg Buscher, A. Dengel
We describe an approach for creating precise personalized document classifiers based on the user's attention. The general idea is to observe which parts of a document the user was interested in just before he or she comes to a classification decision. Having information about this manual classification decision and the document parts the decision was based on, we can learn precise classifiers. For observing the user's focus of attention we use an unobtrusive eye tracking device and apply an algorithm for reading behavior detection. On this basis, we can extract terms characterizing the text parts interesting to the user and employ them for describing the class the document was assigned to by the user. Having learned classifiers in that way, new documents can be classified automatically using techniques of passage-based retrieval. We demonstrate the substantial improvement gained by incorporating the user's visual attention through a case study that evaluates an attention-based term extraction method.
Citations: 5
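The attention-based term extraction step can be sketched by weighting each term's count with the gaze duration of the passage it appears in, so heavily read passages dominate the class description. The `(text, seconds_read)` input shape and whitespace tokenization are simplifying assumptions, not the paper's actual eye-tracking pipeline.

```python
from collections import Counter

def attended_terms(passages, top_n=3):
    """passages: list of (text, seconds_read) pairs, where the reading
    time comes from an eye tracker's reading-behavior detection.
    Returns the terms with the highest attention-weighted counts."""
    weights = Counter()
    for text, seconds in passages:
        for term in text.lower().split():
            weights[term] += seconds
    return [t for t, _ in weights.most_common(top_n)]
```

A passage skimmed for half a second contributes almost nothing, which is exactly how attention filters out document parts irrelevant to the classification decision.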
Difference of Boxes Filters Revisited: Shadow Suppression and Efficient Character Segmentation
Pub Date : 2008-09-16 DOI: 10.1109/DAS.2008.12
E. Rodner, H. Süße, W. Ortmann, Joachim Denzler
A robust segmentation is the most important part of an automatic character recognition system (e.g. document processing, license plate recognition etc.). In our contribution we present an efficient segmentation framework using a preprocessing step for shadow suppression combined with a local thresholding technique. The method is based on a combination of difference of boxes filters and a new ternary segmentation, which are both simple low-level image operations. We also draw parallels to a recently published work on a ganglion cell model and show that our approach is theoretically more substantiated as well as more robust and more efficient in practice. Systematic evaluation of noisy input data as well as results on a large dataset of license plate images show the robustness and efficiency of our proposed method. Our results can be applied easily to any optical character recognition system resulting in an impressive gain of robustness against nonlinear illumination.
Citations: 12
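The two low-level operations named in the abstract can be sketched directly: a difference of boxes is the difference of two box-filter means at different radii (cheap via an integral image), and the ternary segmentation maps the response to three labels instead of two. The radii and the threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def box_mean(img, r):
    """Mean filter over a (2r+1)^2 box via an integral image,
    edge-padded so every window is full-size."""
    p = np.pad(img.astype(float), r, mode='edge')
    ii = np.cumsum(np.cumsum(p, 0), 1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # zero row/col for indexing
    h, w = img.shape
    s = 2 * r + 1
    return (ii[s:s+h, s:s+w] - ii[:h, s:s+w]
            - ii[s:s+h, :w] + ii[:h, :w]) / (s * s)

def ternary_segment(img, r_small=1, r_large=3, t=3.0):
    """Difference-of-boxes response mapped to {-1, 0, +1}:
    +1 = locally darker than its surround (candidate text on a light
    background, shadows largely cancel in the difference),
    -1 = locally lighter, 0 = uncertain."""
    dob = box_mean(img, r_large) - box_mean(img, r_small)
    out = np.zeros(img.shape, dtype=int)
    out[dob > t] = 1
    out[dob < -t] = -1
    return out
```

Because both box means see the same slowly varying shadow, the difference depends only on local contrast, which is why the scheme is robust to nonlinear illumination.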
Categorization of On-Line Handwritten Documents
Pub Date : 2008-09-16 DOI: 10.1109/DAS.2008.45
Sebastián Peña Saldarriaga, E. Morin, C. Viard-Gaudin
With the growth of on-line handwriting technologies, facilities for managing handwritten documents, such as retrieval of documents by topic, are required. These documents can contain graphics, equations or text, for instance. This work reports experiments on categorization of on-line handwritten documents based on their textual contents. We assume that handwritten text blocks have been extracted from the documents, and as a first step of the proposed system, we process them with an existing handwriting recognition engine. We analyse the effect of the word recognition rate on the categorization performance, and we compare the results with those obtained on the same texts available as ground truth. Two categorization algorithms (kNN and SVM) are compared in this work. The handwritten texts are a subset of the Reuters-21578 corpus collected from more than 1500 writers. Results show that there is no significant categorization performance loss when the word error rate stands below 22%.
Citations: 8
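One of the two compared categorizers, kNN over recognized text, can be sketched with cosine similarity on raw term counts; majority vote among the k most similar training texts assigns the label. The tiny vocabulary and function names are illustrative assumptions (the paper works on Reuters-21578 text).

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-count dictionaries."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def knn_categorize(train, text, k=3):
    """train: list of (text, label) pairs; return the majority label
    among the k training texts most similar to `text`."""
    q = Counter(text.lower().split())
    scored = sorted(train,
                    key=lambda d: cosine(Counter(d[0].lower().split()), q),
                    reverse=True)
    top = [label for _, label in scored[:k]]
    return Counter(top).most_common(1)[0][0]
```

Bag-of-words voting like this degrades gracefully when some words are misrecognized, which is consistent with the paper's finding that word error rates below 22% barely hurt categorization.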
Pre-Printed and Hand-Filled Table-Form Analysis Aiming Cell Extraction
Pub Date : 2008-09-16 DOI: 10.1109/DAS.2008.46
Rafaela Dandolini Felipe, L. A. P. Neves
This paper presents an approach to extract the structure of pre-printed and hand-filled table-forms. The first module performs the cell identification based on Watershed transform. A second module detects the wrong cells produced by handwritten and/or pre-printed data. In this module, wrong cells and other cells are filtered by a compactness, perimeter and area analysis. In a third module, the wrong cells are merged with other cells to determine the exact structure. A miscellaneous database composed of 300 pre-printed and hand-filled table-form images was used to evaluate the efficiency of our methodology. Experiments showed significant and promising results.
Citations: 2
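The second module's filtering by compactness, perimeter and area can be sketched as a simple shape test: compactness 4πA/P² is about 0.785 for a square cell and drops toward 0 for thin, ragged handwriting strokes. The thresholds and the `(id, area, perimeter)` region format below are illustrative assumptions.

```python
import math

def is_cell(area, perimeter, min_area=100.0, min_compactness=0.4):
    """Accept a region as a table cell if it is large enough and
    blob-like; handwritten strokes have long perimeters relative to
    their area and fail the compactness test."""
    if area < min_area or perimeter <= 0:
        return False
    compactness = 4.0 * math.pi * area / (perimeter * perimeter)
    return compactness >= min_compactness

def filter_cells(regions, **kw):
    """regions: list of (id, area, perimeter) tuples. Split into
    accepted cells and wrong regions to be merged in a later step."""
    cells = [r for r in regions if is_cell(r[1], r[2], **kw)]
    wrong = [r for r in regions if not is_cell(r[1], r[2], **kw)]
    return cells, wrong
```

The rejected regions are not discarded: as in the paper's third module, they are merged with neighboring cells to recover the exact table structure.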
Text String Extraction from Scene Image Based on Edge Feature and Morphology
Pub Date : 2008-09-16 DOI: 10.1109/DAS.2008.51
Yuming Wang, Naoki Tanaka
Extraction of text from a scene image is much more difficult than extraction from a simple document image. Many studies have succeeded in extracting a single text string from an image, but cannot deal with images containing many text strings; meanwhile, the result may be mixed with noise similar to text. This paper describes an algorithm that uses mathematical morphology to extract text effectively, and the edge border ratio is used to differentiate text regions from noise regions, exploiting the edge contrast of text regions in real scenes. The paper also describes a method that connects characters into text strings and distributes the text strings to different subimages according to their stroke widths. The algorithm is applied to scene images such as signs, indicators and magazine covers, and its robustness is demonstrated.
Citations: 17
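The character-connection step can be sketched on bounding boxes instead of pixels: two character boxes join into one string when their horizontal gap is small relative to character height and they overlap vertically, a stand-in for the paper's morphological dilation. The box format and the gap rule are illustrative assumptions.

```python
def group_characters(boxes, gap_factor=1.0):
    """Chain character boxes (x1, y1, x2, y2) left to right into text
    strings: a box joins the current string when its horizontal gap to
    the previous box is at most gap_factor times the smaller of the
    two box heights and the boxes overlap vertically."""
    boxes = sorted(boxes)
    strings, current = [], [boxes[0]]
    for b in boxes[1:]:
        prev = current[-1]
        gap = b[0] - prev[2]
        limit = gap_factor * min(prev[3] - prev[1], b[3] - b[1])
        v_overlap = not (b[1] > prev[3] or b[3] < prev[1])
        if gap <= limit and v_overlap:
            current.append(b)
        else:
            strings.append(current)
            current = [b]
    strings.append(current)
    return strings
```

Each resulting string could then be routed to a subimage keyed by its stroke width, mirroring the distribution step described in the abstract.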
Exploring Evolutionary Technical Trends from Academic Research Papers
Pub Date : 2008-09-16 DOI: 10.1109/DAS.2008.25
Teng-Kai Fan, Chia-Hui Chang
Automatic Term Recognition (ATR) is concerned with discovering terminology in large volumes of text corpora. Technical terms are vital elements for understanding the techniques used in academic research papers, and in this paper, we use focused technical terms to explore technical trends in the research literature. The major purpose of this work is to understand the relationship between techniques and research topics in order to better explore technical trends. We define this new text mining issue and apply machine learning algorithms to solve it by (1) recognizing focused technical terms from research papers; (2) classifying these terms into predefined technology categories; and (3) analyzing the evolution of technical trends. The dataset consists of 656 papers collected from well-known ACM conferences. The experimental results indicate that our proposed methods can effectively explore interesting evolutionary technical trends in various research topics.
Citations: 14
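Steps (2) and (3) above can be sketched together: once recognized terms are mapped to technology categories, a trend is just the per-year count of category mentions. The term-to-category lexicon and the `(year, text)` input shape are illustrative assumptions; the paper learns these mappings with machine learning rather than a fixed dictionary.

```python
from collections import defaultdict

def technical_trends(papers, lexicon):
    """papers: list of (year, text) pairs; lexicon: term -> technology
    category. Count category mentions per year so the evolution of
    each technology can be plotted over time."""
    trend = defaultdict(lambda: defaultdict(int))
    for year, text in papers:
        for term in text.lower().split():
            if term in lexicon:
                trend[year][lexicon[term]] += 1
    return {y: dict(c) for y, c in trend.items()}
```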
PaperDiff: A Script Independent Automatic Method for Finding the Text Differences Between Two Document Images
Pub Date : 2008-09-16 DOI: 10.1109/DAS.2008.69
R. Sitaram, Gopal Datt Joshi, S. Noushath, Pulkit Parikh, Vishal Gupta
In this paper, we introduce a novel concept called PaperDiff and propose an algorithm to implement it. The aim of PaperDiff is to compare two printed (paper) documents using their images and determine the differences between them in terms of text inserted, deleted and substituted. This lets an end-user compare two documents that are both already printed, or where only one is printed (the other could be in electronic form, such as an MS Word *.doc file). The algorithm we propose for realizing PaperDiff is based on word image comparison and is suitable even for symbol strings and for any script or language (including multiple scripts) in the documents, where even mature optical character recognition (OCR) technology has had very little success. PaperDiff enables end-users such as lawyers and novelists to compare new document versions with older ones. The method works even when the formatting of the content differs between the two input documents, i.e., when the structures of the document images differ (e.g., differing page widths, page structure, etc.).
An experiment of PaperDiff on single column text documents yielded 99.2% accuracy while detecting 135 induced differences in 10 pairs of documents.
Citations: 6
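Once word images have been matched across the two documents, classifying differences as inserted, deleted or substituted reduces to a sequence diff. The sketch below assumes each word image has already been mapped to an identifier (the word-image comparison itself is the paper's contribution and is not reproduced here) and uses the standard library's `difflib.SequenceMatcher`.

```python
from difflib import SequenceMatcher

def paper_diff(words_a, words_b):
    """words_a, words_b: per-document sequences of word identifiers,
    stand-ins for matched word images. Return the inserted, deleted
    and substituted spans between the two documents."""
    diffs = []
    sm = SequenceMatcher(a=words_a, b=words_b, autojunk=False)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == 'delete':
            diffs.append(('deleted', words_a[i1:i2]))
        elif op == 'insert':
            diffs.append(('inserted', words_b[j1:j2]))
        elif op == 'replace':
            diffs.append(('substituted', words_a[i1:i2], words_b[j1:j2]))
    return diffs
```

Because the diff operates on identifiers rather than recognized characters, the same machinery works for any script, consistent with the script independence claimed in the title.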