
Latest publications: Proceedings of the ACM Symposium on Document Engineering

DH-CASE II: collaborative annotations in shared environments: metadata, tools and techniques in the digital humanities
P. Schmitz, L. Pearce, Quinn Dombrowski
The DH-CASE II Workshop, held in conjunction with ACM Document Engineering 2014, focused on the tools and environments that support annotation, broadly defined, including modeling, authoring, analysis, publication and sharing. Participants explored shared challenges and differing approaches, seeking to identify emerging best practices, as well as those approaches that may have potential for wider application or influence.
{"title":"DH-CASE II: collaborative annotations in shared environments: metadata, tools and techniques in the digital humanities","authors":"P. Schmitz, L. Pearce, Quinn Dombrowski","doi":"10.1145/2644866.2644898","DOIUrl":"https://doi.org/10.1145/2644866.2644898","url":null,"abstract":"The DH-CASE II Workshop, held in conjunction with ACM Document Engineering 2014, focused on the tools and environments that support annotation, broadly defined, including modeling, authoring, analysis, publication and sharing. Participants explored shared challenges and differing approaches, seeking to identify emerging best practices, as well as those approaches that may have potential for wider application or influence.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"7 1","pages":"211-212"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85405133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A new sentence similarity assessment measure based on a three-layer sentence representation
Rafael Ferreira, R. Lins, F. Freitas, S. Simske, M. Riss
Sentence similarity is used to measure the degree of likeness between sentences. It is used in many natural language applications, such as text summarization, information retrieval, text categorization, and machine translation. Current methods for assessing sentence similarity represent sentences as bag-of-words vectors or as the syntactic information of the words in the sentence. The degree of likeness between phrases is calculated by composing the similarity between the words in the sentences. However, two important concerns in the area, word meaning and word order, are not handled. This paper proposes a new sentence similarity assessment measure that largely improves and refines a recently published method taking into account the lexical, syntactic and semantic components of sentences. The new method proposed here was benchmarked using a publicly available standard dataset. The results obtained show that the proposed similarity assessment measure outperforms state-of-the-art systems and achieves results comparable to evaluation made by humans.
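As a rough illustration of the core idea of composing word-level similarities into a sentence-level score (a minimal sketch, not the paper's three-layer method; the word measure here is a crude character-overlap stand-in for a lexical/semantic similarity):

```python
# Minimal sketch: sentence similarity composed from word-level similarities.
# difflib's character ratio stands in for a proper lexical/semantic word measure.
from difflib import SequenceMatcher

def word_sim(w1: str, w2: str) -> float:
    """Crude lexical word similarity in [0, 1]."""
    return SequenceMatcher(None, w1, w2).ratio()

def directed_sim(words_a, words_b) -> float:
    """Average best-match similarity of each word in A against the words of B."""
    if not words_a or not words_b:
        return 0.0
    return sum(max(word_sim(a, b) for b in words_b) for a in words_a) / len(words_a)

def sentence_sim(s1: str, s2: str) -> float:
    """Symmetric sentence similarity composed from word similarities."""
    a, b = s1.lower().split(), s2.lower().split()
    return (directed_sim(a, b) + directed_sim(b, a)) / 2

print(sentence_sim("the cat sat on the mat", "a cat is sitting on a mat"))
```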
{"title":"A new sentence similarity assessment measure based on a three-layer sentence representation","authors":"Rafael Ferreira, R. Lins, F. Freitas, S. Simske, M. Riss","doi":"10.1145/2644866.2644881","DOIUrl":"https://doi.org/10.1145/2644866.2644881","url":null,"abstract":"Sentence similarity is used to measure the degree of likelihood between sentences. It is used in many natural language applications, such as text summarization, information retrieval, text categorization, and machine translation. The current methods for assessing sentence similarity represent sentences as vectors of bag of words or the syntactic information of the words in the sentence. The degree of likelihood between phrases is calculated by composing the similarity between the words in the sentences. Two important concerns in the area, the meaning problem and the word order, are not handled, however. This paper proposes a new sentence similarity assessment measure that largely improves and refines a recently published method that takes into account the lexical, syntactic and semantic components of sentences. The new method proposed here was benchmarked using a publically available standard dataset. The results obtained show that the new similarity assessment measure proposed outperforms the state of the art systems and achieve results comparable to the evaluation made by humans.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"13 1","pages":"25-34"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77535971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
Building digital project rooms for web meetings
Laurent Denoue, S. Carter, Andreas Girgensohn, Matthew L. Cooper
Distributed teams must coordinate a variety of tasks. To do so, they need to be able to create, share, and annotate documents, as well as discuss plans and goals. Many workflow tools support document sharing, while other tools support videoconferencing; however, there is little support for connecting the two. In this work, we describe a system that allows users to share and mark up content during web meetings. This shared content can provide important conversational props within the context of a meeting; it can also help users review archived meetings. Users can also extract content from meetings directly into their personal notes or other workflow tools.
{"title":"Building digital project rooms for web meetings","authors":"Laurent Denoue, S. Carter, Andreas Girgensohn, Matthew L. Cooper","doi":"10.1145/2644866.2644889","DOIUrl":"https://doi.org/10.1145/2644866.2644889","url":null,"abstract":"Distributed teams must co-ordinate a variety of tasks. To do so they need to be able to create, share, and annotate documents as well as discuss plans and goals. Many workflow tools support document sharing, while other tools support videoconferencing. However, there exists little support for connecting the two. In this work, we describe a system that allows users to share and markup content during web meetings. This shared content can provide important conversational props within the context of a meeting; it can also help users review archived meetings. Users can also extract content from meetings directly into their personal notes or other workflow tools.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"54 1","pages":"135-138"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84978811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
The evolving scholarly record: new uses and new forms
C. Lynch
This presentation will take a very broad view of the emergence of literary corpora as objects of computation, with a particular focus on the various literatures and genres that form the scholarly record. The developments and implications I will explore include: the evolution of the scholarly literature into a semi-structured network of information used by both human readers and computational agents through the introduction of markup technologies; the interpenetration and interweaving of data and evidence with the literature; and the creation of an invisible infrastructure of names, taxonomies and ontologies, and the challenges this presents.

Primary forms of computation on this corpus include both comprehensive text mining and stream analysis (focused on what's new and what's changing as the base of literature and related factual databases expands with reports of new discoveries). I'll explore some of the developments in this area, including some practical considerations about platforms, licensing, and access.

As the use of the literature evolves, so do the individual genres that comprise it. Today's typical digital journal article looks almost identical to one from half a century ago, except that it is viewed on screen and printed on demand. Yet there is a great deal of activity, driven by the move to data- and computationally intensive scholarship, demands for greater precision and replicability in scientific communication, and related efforts to move journal articles "beyond the PDF," reconsidering the relationships among traditional texts, software, workflows, data, and the broad cultural record in its role as evidence. I'll look briefly at some of these developments, with particular focus on what this may mean for the management of the scholarly record as a whole, and also briefly discuss some parallel challenges emerging in scholarly monographs.

Finally, I will close with a very brief discussion of what might be called corpus-scale thinking about the scholarly record at the disciplinary level. I'll briefly discuss the findings of a 2014 National Research Council study that I co-chaired on the future of the mathematics literature and the possibility of creating a global digital mathematics library, as well as offer some comments on developments in the life sciences. I will also consider the emergence of new corpus-wide tools and standards, such as Web-scale annotation, and some of their implications.
{"title":"The evolving scholarly record: new uses and new forms","authors":"C. Lynch","doi":"10.1145/2644866.2644900","DOIUrl":"https://doi.org/10.1145/2644866.2644900","url":null,"abstract":"This presentation will take a very broad view of the emergence of literary corpora as objects of computation, with a particular focus on the various literatures and genres that form the scholarly record. The developments and implications here that I will explore include: the evolution of the scholarly literature into a semi-structured network of information used by both human readers and computational agents through the introduction of markup technologies; the interpenetration and interweaving of data and evidence with the literature; and the creation of an invisible infrastructure of names, taxonomies and ontologies, and the challenges this presents.\u0000 Primary forms of computation on this corpus include both comprehensive text mining and stream analysis (focused on what's new and what's changing as the base of literature and related factual databases expand with reports of new discoveries). I'll explore some of the developments in this area, including some practical considerations about platforms, licensing, and access.\u0000 As the use of the literature evolves, so do the individual genres that comprise it. Today's typical digital journal article looks almost identical to one half a century old, except that it is viewed on screen and printed on demand. Yet there is a great deal of activity driven by the move to data and computationally intensive scholarship, demands for greater precision and replicability in scientific communication, and related sources to move journal articles \"beyond the PDF,\" reconsidering relationships among traditional texts, software, workflows, data and the broad cultural record in its role as evidence. I'll look briefly at some of these developments, with particular focus on what this may mean for the management of the scholarly record as a whole, and also briefly discuss some parallel challenges emerging in scholarly monographs.\u0000 Finally, I will close with a very brief discussion of what might be called corpus-scale thinking with regard to the scholarly record at the disciplinary level. I'll briefly discuss the findings of a 2014 National Research Council study that I co-chaired dealing with the future of the mathematics literature and the possibility of creating a global digital mathematics library, as well as offering some comments on developments in the life sciences. I will also consider the emergence of new corpus-wide tools and standards, such as Web-scale annotation, and some of their implications.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"52 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84914236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An ensemble approach for text document clustering using Wikipedia concepts
Seyednaser Nourashrafeddin, E. Milios, D. Arnold
Most text clustering algorithms represent a corpus as a document-term matrix in the bag-of-words model. The feature values are computed from term frequencies in documents, and no semantic relatedness between terms is considered. Therefore, two semantically similar documents may sit in different clusters if they do not share any terms. One solution to this problem is to enrich the document representation using an external resource such as Wikipedia. In this work, we propose a new way to integrate Wikipedia concepts into partitional text document clustering. A text corpus is first represented as a document-term matrix and a document-concept matrix. Terms that exist in the corpus are then clustered based on the document-term representation. Given the term clusters, we propose two methods, one based on the document-term representation and the other based on the document-concept representation, to find two sets of seed documents. The two sets are then used in our text clustering algorithm in an ensemble approach to cluster documents. The experimental results show that even though the document-concept representations do not result in good document clusters per se, integrating them in our ensemble approach improves the quality of document clusters significantly.
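A rough sketch of one ingredient of such a pipeline, under assumed details: cluster the corpus terms on the document-term matrix, then pick, per term cluster, the document with the highest aggregate weight as a "seed" document. The weighting, cluster count, the Wikipedia document-concept matrix, and the ensemble step are not shown and are assumptions here, not the paper's exact procedure.

```python
# Sketch: term clustering on a TF-IDF document-term matrix, then seed-document
# selection per term cluster by aggregate weight.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "graph based text clustering with wikipedia concepts",
    "semantic relatedness between terms improves clustering",
    "document engineering and pdf metadata extraction",
    "extracting metadata from scientific papers in pdf",
]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)          # document-term matrix (docs x terms)
term_vectors = X.T.toarray()         # each row: one term's weights over documents

k = 2
term_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(term_vectors)

seeds = []
for c in range(k):
    term_idx = np.where(term_labels == c)[0]
    # Score each document by its total weight on this cluster's terms.
    doc_scores = np.asarray(X[:, term_idx].sum(axis=1)).ravel()
    seeds.append(int(doc_scores.argmax()))
print("seed documents per term cluster:", seeds)
```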
{"title":"An ensemble approach for text document clustering using Wikipedia concepts","authors":"Seyednaser Nourashrafeddin, E. Milios, D. Arnold","doi":"10.1145/2644866.2644868","DOIUrl":"https://doi.org/10.1145/2644866.2644868","url":null,"abstract":"Most text clustering algorithms represent a corpus as a document-term matrix in the bag of words model. The feature values are computed based on term frequencies in documents and no semantic relatedness between terms is considered. Therefore, two semantically similar documents may sit in different clusters if they do not share any terms. One solution to this problem is to enrich the document representation using an external resource like Wikipedia. We propose a new way to integrate Wikipedia concepts in partitional text document clustering in this work. A text corpus is first represented as a document-term matrix and a document-concept matrix. Terms that exist in the corpus are then clustered based on the document-term representation. Given the term clusters, we propose two methods, one based on the document-term representation and the other one based on the document-concept representation, to find two sets of seed documents. The two sets are then used in our text clustering algorithm in an ensemble approach to cluster documents. The experimental results show that even though the document-concept representations do not result in good document clusters per se, integrating them in our ensemble approach improves the quality of document clusters significantly.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"32 1","pages":"107-116"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73149588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
FlexiFont: a flexible system to generate personal font libraries
W. Pan, Z. Lian, Rongju Sun, Yingmin Tang, Jianguo Xiao
This paper proposes FlexiFont, a system designed to generate personal font libraries from camera-captured character images. Compared with existing methods, our system can process most languages, and the generated font libraries can be extended by adding new characters according to the user's requirements. Moreover, digital cameras rather than scanners are chosen as the input devices, making the system more convenient for ordinary users. Users first choose a default template or define their own templates, and then write the characters on the printed templates according to the given instructions. After the users upload photos of the templates with the written characters, the system automatically corrects the perspective and splits the whole photo into a set of individual character images. As the final step, FlexiFont denoises, vectorizes, and normalizes each character image before storing it into a TrueType file. Experimental results demonstrate the robustness and efficiency of our system.
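A minimal OpenCV sketch of two of the steps described above: rectifying the perspective of a photographed template and splitting it into a grid of character cells. The file name, corner coordinates, grid dimensions, and cell sizes are made-up placeholders, and corner detection itself is not shown.

```python
# Sketch: perspective correction of a photographed template, then grid splitting.
import cv2
import numpy as np

img = cv2.imread("template_photo.jpg")   # placeholder path

# Four template corners in the photo (assumed to come from a marker/contour
# detection step not shown), ordered TL, TR, BR, BL.
src = np.float32([[120, 80], [980, 95], [990, 1350], [105, 1340]])
W, H = 900, 1260                          # target size of the rectified template
dst = np.float32([[0, 0], [W, 0], [W, H], [0, H]])

M = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(img, M, (W, H))

# Split the rectified template into a rows x cols grid of character images.
rows, cols = 7, 5
ch, cw = H // rows, W // cols
cells = [rectified[r*ch:(r+1)*ch, c*cw:(c+1)*cw]
         for r in range(rows) for c in range(cols)]
print(len(cells), "character images extracted")
```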
{"title":"FlexiFont: a flexible system to generate personal font libraries","authors":"W. Pan, Z. Lian, Rongju Sun, Yingmin Tang, Jianguo Xiao","doi":"10.1145/2644866.2644886","DOIUrl":"https://doi.org/10.1145/2644866.2644886","url":null,"abstract":"This paper proposes FlexiFont, a system designed to generate personal font libraries from the camera-captured character images. Compared with existing methods, our system is able to process most kinds of languages and the generated font libraries can be extended by adding new characters based on the user's requirement. Moreover, digital cameras instead of scanners are chosen as the input devices, so that it is more convenient for common people to use the system. First of all, the users should choose a default template or define their own templates, then write the characters on the printed templates according to the certain instructions. After the users upload the photos of the templates with written characters, the system will automatically correct the perspective and split the whole photo into a set of individual character images. As the final step, FlexiFont will denoise, vectorize, and normalize each character image before storing it into a TrueType file. Experimental results demonstrate the robustness and efficiency of our system.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"1 1","pages":"17-20"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74473544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
P-GTM: privacy-preserving Google tri-gram method for semantic text similarity
O. Davison, A. Mohammad, E. Milios
This paper presents P-GTM, a privacy-preserving text similarity algorithm that extends the Google Tri-gram Method (GTM). The Google Tri-gram Method is a high-performance unsupervised semantic text similarity method based on the use of context from the Google Web 1T n-gram dataset. P-GTM computes the semantic similarity between two input bag-of-words documents on public cloud hardware without disclosing the documents' contents. Like the GTM, P-GTM requires the uni-gram and tri-gram lists from the Google Web 1T n-gram dataset as additional inputs. The need for these additional lists makes private computation of GTM text similarities a challenging problem. P-GTM uses a combination of pre-computation, encryption, and randomized preprocessing to enable private computation of text similarities using the GTM. We discuss the security of the algorithm and quantify its privacy using standard and real-life corpora.
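A toy illustration of one general ingredient of such schemes: replacing tokens with keyed pseudonyms so an untrusted host can still match equal words across documents without seeing the words themselves. This is a sketch of the broad idea only, not the P-GTM protocol, which additionally handles the uni-gram and tri-gram side tables; the key handling and the Jaccard-style score are assumptions.

```python
# Sketch: keyed pseudonymization of a bag of words, enabling overlap computation
# on an untrusted host without revealing the underlying tokens.
import hmac
import hashlib

SECRET_KEY = b"client-held key, never sent to the cloud"  # assumed shared by the document owners

def pseudonymize(tokens):
    """Replace each token with a keyed HMAC-SHA256 pseudonym."""
    return {hmac.new(SECRET_KEY, t.encode(), hashlib.sha256).hexdigest()
            for t in tokens}

doc_a = pseudonymize("the quick brown fox".split())
doc_b = pseudonymize("a quick red fox".split())

# The cloud can compute a set-overlap score on pseudonyms only.
jaccard = len(doc_a & doc_b) / len(doc_a | doc_b)
print(round(jaccard, 3))
```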
{"title":"P-GTM: privacy-preserving google tri-gram method for semantic text similarity","authors":"O. Davison, A. Mohammad, E. Milios","doi":"10.1145/2644866.2644882","DOIUrl":"https://doi.org/10.1145/2644866.2644882","url":null,"abstract":"This paper presents P-GTM, a privacy-preserving text similarity algorithm that extends the Google Tri-gram Method (GTM). The Google Tri-gram Method is a high-performance unsupervised semantic text similarity method based on the use of context from the Google Web 1T n-gram dataset. P-GTM computes the semantic similarity between two input bag-of-words documents on public cloud hardware, without disclosing the documents' contents. Like the GTM, P-GTM requires the uni-gram and tri-gram lists from the Google Web 1T n-gram dataset as additional inputs. The need for these additional lists makes private computation of GTM text similarities a challenging problem. P-GTM uses a combination of pre-computation, encryption, and randomized preprocessing to enable private computation of text similarities using the GTM. We discuss the security of the algorithm and quantify its privacy using standard and real life corpora.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"33 1","pages":"81-84"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76241857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DOCENG 2014: PDF tutorial
S. Bagley, Matthew R. B. Hardy
Many billions of documents are stored in the Portable Document Format (PDF). These documents contain a wealth of information and yet PDF is often seen as an inaccessible format and, for that reason, often gets a very bad press. In this tutorial, we get under the hood of PDF and analyze the poor practices that cause PDF files to be inaccessible. We discuss how to access the text and graphics within a PDF and we identify those features of PDF that can be used to make the information much more accessible. We also discuss some of the new ISO standards that provide profiles for producing Accessible PDF files.
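In the spirit of the tutorial's topic, here is a small example of accessing a PDF's text and embedded metadata programmatically, using the pypdf library (`pip install pypdf`); the file name is a placeholder, and this is not the tutorial's own material.

```python
# Sketch: reading document metadata and per-page text from a PDF with pypdf.
from pypdf import PdfReader

reader = PdfReader("paper.pdf")            # placeholder path
print("pages:", len(reader.pages))
print("metadata:", reader.metadata)        # often None or incomplete, as noted above

for i, page in enumerate(reader.pages):
    text = page.extract_text() or ""       # extraction can fail on poorly built PDFs
    print(f"--- page {i + 1} ---")
    print(text[:200])
```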
{"title":"DOCENG 2014: PDF tutorial","authors":"S. Bagley, Matthew R. B. Hardy","doi":"10.1145/2644866.2644899","DOIUrl":"https://doi.org/10.1145/2644866.2644899","url":null,"abstract":"Many billions of documents are stored in the Portable Document Format (PDF). These documents contain a wealth of information and yet PDF is often seen as an inaccessible format and, for that reason, often gets a very bad press. In this tutorial, we get under the hood of PDF and analyze the poor practices that cause PDF files to be inaccessible. We discuss how to access the text and graphics within a PDF and we identify those features of PDF that can be used to make the information much more accessible. We also discuss some of the new ISO standards that provide profiles for producing Accessible PDF files.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"1 1","pages":"213-214"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78454388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
ARCTIC: metadata extraction from scientific papers in PDF using two-layer CRF
Alan Souza, V. Moreira, C. Heuser
Most scientific articles are available in PDF format. The PDF standard allows the generation of metadata that is included within the document. However, many authors do not define this information, making this feature unreliable or incomplete. This has motivated research aiming to extract metadata automatically, which has been identified as one of the most challenging tasks in document engineering. This work proposes Artic, a method for metadata extraction from scientific papers which employs a two-layer probabilistic framework based on Conditional Random Fields. The first layer identifies the main sections containing metadata, and the second layer finds, for each section, the corresponding metadata. Given a PDF file containing a scientific paper, Artic extracts the title, author names, emails, affiliations, and venue information. We report on experiments using 100 real papers from a variety of publishers. Our results outperformed the state-of-the-art system used as the baseline, achieving a precision of over 99%.
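A hedged sketch of how a first-layer CRF over the text lines of a paper's first page might be set up with sklearn-crfsuite (`pip install sklearn-crfsuite`). The features, labels, and example lines are invented for illustration; Artic's actual feature set and the coupling between the two layers are described in the paper.

```python
# Sketch: sequence labeling of text lines (TITLE / AUTHORS / EMAIL / ...) with a CRF.
import sklearn_crfsuite

def line_features(line: str) -> dict:
    """Toy features for one text line (invented for illustration)."""
    return {
        "lower_prefix": line.lower()[:20],
        "has_at": "@" in line,                    # hints at an email line
        "has_digit": any(c.isdigit() for c in line),
        "n_tokens": len(line.split()),
        "is_upper": line.isupper(),
    }

# One training "sequence" = the lines of one paper, labeled with their sections.
X_train = [[line_features(l) for l in ["ARCTIC: metadata extraction",
                                       "Alan Souza, V. Moreira",
                                       "author@example.org"]]]
y_train = [["TITLE", "AUTHORS", "EMAIL"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train)[0])
```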
{"title":"ARCTIC: metadata extraction from scientific papers in pdf using two-layer CRF","authors":"Alan Souza, V. Moreira, C. Heuser","doi":"10.1145/2644866.2644872","DOIUrl":"https://doi.org/10.1145/2644866.2644872","url":null,"abstract":"Most scientific articles are available in PDF format. The PDF standard allows the generation of metadata that is included within the document. However, many authors do not define this information, making this feature unreliable or incomplete. This fact has been motivating research which aims to extract metadata automatically. Automatic metadata extraction has been identified as one of the most challenging tasks in document engineering. This work proposes Artic, a method for metadata extraction from scientific papers which employs a two-layer probabilistic framework based on Conditional Random Fields. The first layer aims at identifying the main sections with metadata information, and the second layer finds, for each section, the corresponding metadata. Given a PDF file containing a scientific paper, Artic extracts the title, author names, emails, affiliations, and venue information. We report on experiments using 100 real papers from a variety of publishers. Our results outperformed the state-of-the-art system used as the baseline, achieving a precision of over 99%.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"90 1","pages":"121-130"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83278093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Transforming graph-based sentence representations to alleviate overfitting in relation extraction
Rinaldo Lima, Jamilson Batista, Rafael Ferreira, F. Freitas, R. Lins, S. Simske, M. Riss
Relation extraction (RE) aims at finding how entities, such as persons, locations, organizations, dates, etc., depend upon each other in a text document. Ontology population, automatic summarization, and question answering are fields in which relation extraction offers valuable solutions. In previous work, the authors proposed a relation extraction method based on inductive logic programming that induces extraction rules suitable for identifying semantic relations between entities. This paper proposes a method to simplify graph-based representations of sentences, replacing the dependency graphs of sentences with simpler ones that retain the target entities. The goal is to speed up the learning phase in an RE framework by applying several graph-simplification rules that constrain the hypothesis space for generating extraction rules. Moreover, the direct impact on extraction performance is also investigated. The proposed techniques outperformed other state-of-the-art systems when assessed on two standard datasets for relation extraction in the biomedical domain.
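For flavor, here is one widely used graph-simplification heuristic in relation extraction, sketched with networkx: reduce a sentence's dependency graph to the shortest path between the two target entities. This is named as an illustration of the general technique, not necessarily one of the paper's exact rules, and the toy graph is invented.

```python
# Sketch: shortest-dependency-path simplification between two target entities.
import networkx as nx

# Toy dependency graph for: "The drug aspirin inhibits the enzyme COX-1."
edges = [("inhibits", "aspirin"), ("inhibits", "COX-1"),
         ("aspirin", "drug"), ("drug", "The"),
         ("COX-1", "enzyme"), ("enzyme", "the")]
g = nx.Graph(edges)

entity_1, entity_2 = "aspirin", "COX-1"
path = nx.shortest_path(g, entity_1, entity_2)
print(path)   # ['aspirin', 'inhibits', 'COX-1'] -- a much smaller graph to learn from
```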
{"title":"Transforming graph-based sentence representations to alleviate overfitting in relation extraction","authors":"Rinaldo Lima, Jamilson Batista, Rafael Ferreira, F. Freitas, R. Lins, S. Simske, M. Riss","doi":"10.1145/2644866.2644875","DOIUrl":"https://doi.org/10.1145/2644866.2644875","url":null,"abstract":"Relation extraction (RE) aims at finding the way entities, such as person, location, organization, date, etc., depend upon each other in a text document. Ontology Population, Automatic Summarization, and Question Answering are fields in which relation extraction offers valuable solutions. A relation extraction method based on inductive logic programming that induces extraction rules suitable to identify semantic relations between entities was proposed by the authors in a previous work. This paper proposes a method to simplify graph-based representations of sentences that replaces dependency graphs of sentences by simpler ones, keeping the target entities in it. The goal is to speed up the learning phase in a RE framework, by applying several rules for graph simplification that constrain the hypothesis space for generating extraction rules. Moreover, the direct impact on the extraction performance results is also investigated. The proposed techniques outperformed some other state-of-the-art systems when assessed on two standard datasets for relation extraction in the biomedical domain.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"46 1","pages":"53-62"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79106941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7