Mathematical Entities: Corpora and Benchmarks

Jacob Collard, Valeria de Paiva, Eswaran Subrahmanian
{"title":"Mathematical Entities: Corpora and Benchmarks","authors":"Jacob Collard, Valeria de Paiva, Eswaran Subrahmanian","doi":"arxiv-2406.11577","DOIUrl":null,"url":null,"abstract":"Mathematics is a highly specialized domain with its own unique set of\nchallenges. Despite this, there has been relatively little research on natural\nlanguage processing for mathematical texts, and there are few mathematical\nlanguage resources aimed at NLP. In this paper, we aim to provide annotated\ncorpora that can be used to study the language of mathematics in different\ncontexts, ranging from fundamental concepts found in textbooks to advanced\nresearch mathematics. We preprocess the corpora with a neural parsing model and\nsome manual intervention to provide part-of-speech tags, lemmas, and dependency\ntrees. In total, we provide 182397 sentences across three corpora. We then aim\nto test and evaluate several noteworthy natural language processing models\nusing these corpora, to show how well they can adapt to the domain of\nmathematics and provide useful tools for exploring mathematical language. We\nevaluate several neural and symbolic models against benchmarks that we extract\nfrom the corpus metadata to show that terminology extraction and definition\nextraction do not easily generalize to mathematics, and that additional work is\nneeded to achieve good performance on these metrics. Finally, we provide a\nlearning assistant that grants access to the content of these corpora in a\ncontext-sensitive manner, utilizing text search and entity linking. Though our\ncorpora and benchmarks provide useful metrics for evaluating mathematical\nlanguage processing, further work is necessary to adapt models to mathematics\nin order to provide more effective learning assistants and apply NLP methods to\ndifferent mathematical domains.","PeriodicalId":501462,"journal":{"name":"arXiv - MATH - History and Overview","volume":"139 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - MATH - History and Overview","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2406.11577","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Mathematics is a highly specialized domain with its own unique set of challenges. Despite this, there has been relatively little research on natural language processing for mathematical texts, and there are few mathematical language resources aimed at NLP. In this paper, we aim to provide annotated corpora that can be used to study the language of mathematics in different contexts, ranging from fundamental concepts found in textbooks to advanced research mathematics. We preprocess the corpora with a neural parsing model and some manual intervention to provide part-of-speech tags, lemmas, and dependency trees. In total, we provide 182397 sentences across three corpora. We then aim to test and evaluate several noteworthy natural language processing models using these corpora, to show how well they can adapt to the domain of mathematics and provide useful tools for exploring mathematical language. We evaluate several neural and symbolic models against benchmarks that we extract from the corpus metadata to show that terminology extraction and definition extraction do not easily generalize to mathematics, and that additional work is needed to achieve good performance on these metrics. Finally, we provide a learning assistant that grants access to the content of these corpora in a context-sensitive manner, utilizing text search and entity linking. Though our corpora and benchmarks provide useful metrics for evaluating mathematical language processing, further work is necessary to adapt models to mathematics in order to provide more effective learning assistants and apply NLP methods to different mathematical domains.
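The abstract describes preprocessing the corpora with a neural parsing model to obtain part-of-speech tags, lemmas, and dependency trees. The paper does not name the toolkit it used; the sketch below uses spaCy purely as an illustrative assumption, to show what that annotation layer looks like for a mathematical sentence.

# Minimal sketch of the preprocessing layer described in the abstract:
# POS tags, lemmas, and dependency arcs for one mathematical sentence.
# spaCy is an assumption here, not the paper's stated parser.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

sentence = "A group is a set equipped with an associative binary operation."
doc = nlp(sentence)

for token in doc:
    # surface form, lemma, universal POS tag, dependency relation, and head word
    print(f"{token.text:12} {token.lemma_:12} {token.pos_:6} {token.dep_:10} head={token.head.text}")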
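The abstract also mentions a learning assistant that exposes the corpora through text search and entity linking. As a rough illustration only, the following sketch shows a context-sensitive term lookup; the glossary entries and longest-match strategy are invented for the example and are not taken from the paper.

# Illustrative entity-linking sketch: find glossary terms in a sentence,
# preferring longer matches so "abelian group" is linked rather than "group".
glossary = {
    "abelian group": "a group whose binary operation is commutative",
    "group": "a set with an associative binary operation, an identity element, and inverses",
    "binary operation": "a function that combines two elements of a set into another element of the set",
}

def link_entities(sentence: str) -> list[tuple[str, str]]:
    """Return (term, definition) pairs for glossary terms found in the sentence."""
    text = sentence.lower()
    links = []
    for term in sorted(glossary, key=len, reverse=True):
        if term in text:
            links.append((term, glossary[term]))
            text = text.replace(term, " " * len(term))  # blank out the span to avoid re-matching subterms
    return links

print(link_entities("Every abelian group admits a natural binary operation."))
# -> [('binary operation', ...), ('abelian group', ...)]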