Mathematical Entities: Corpora and Benchmarks
Jacob Collard, Valeria de Paiva, Eswaran Subrahmanian
arXiv:2406.11577, arXiv - MATH - History and Overview, published 2024-06-17
Abstract
Mathematics is a highly specialized domain with its own unique set of
challenges. Despite this, there has been relatively little research on natural
language processing for mathematical texts, and there are few mathematical
language resources aimed at NLP. In this paper, we aim to provide annotated
corpora that can be used to study the language of mathematics in different
contexts, ranging from fundamental concepts found in textbooks to advanced
research mathematics. We preprocess the corpora with a neural parsing model and
some manual intervention to provide part-of-speech tags, lemmas, and dependency
trees. In total, we provide 182,397 sentences across three corpora. We then test and
evaluate several noteworthy natural language processing models using these corpora,
to show how well they can adapt to the domain of mathematics and provide useful
tools for exploring mathematical language. We
evaluate several neural and symbolic models against benchmarks that we extract
from the corpus metadata to show that terminology extraction and definition
extraction do not easily generalize to mathematics, and that additional work is
needed to achieve good performance on these metrics. Finally, we provide a
learning assistant that grants access to the content of these corpora in a
context-sensitive manner, utilizing text search and entity linking. Though our
corpora and benchmarks provide useful metrics for evaluating mathematical
language processing, further work is necessary to adapt models to mathematics
in order to provide more effective learning assistants and apply NLP methods to
different mathematical domains.
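
The abstract describes preprocessing the corpora with a neural parsing model to produce part-of-speech tags, lemmas, and dependency trees. The snippet below is a minimal illustrative sketch of that kind of annotation step, not the authors' actual pipeline: it assumes spaCy with its en_core_web_sm English model is installed, and the example sentences are placeholders.

```python
# Illustrative sketch (not the paper's pipeline): run an off-the-shelf neural
# parser over mathematical sentences to obtain lemmas, POS tags, and
# dependency arcs, i.e. the annotation layers described in the abstract.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # English pipeline with tagger, lemmatizer, parser

sentences = [
    "A group is a set equipped with an associative binary operation.",
    "Every compact Hausdorff space is normal.",
]

for sent in sentences:
    doc = nlp(sent)
    for token in doc:
        # surface form, lemma, universal POS tag, dependency label, and head word
        print(f"{token.text}\t{token.lemma_}\t{token.pos_}\t{token.dep_}\t{token.head.text}")
    print()
```

A pipeline for mathematical text would additionally need to handle LaTeX formulas and domain-specific tokenization, which general-purpose parsers do not provide out of the box; the paper notes that some manual intervention was used alongside the parser.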
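The abstract also mentions evaluating terminology extraction and definition extraction against benchmarks derived from corpus metadata. The sketch below shows one common way such a comparison is scored (exact-match precision, recall, and F1 over normalized term strings); the gold and predicted term sets are hypothetical, and the scoring details are an assumption rather than the paper's exact protocol.

```python
# Illustrative scoring sketch (assumed protocol, hypothetical data):
# compare a system's extracted terms with gold terms from corpus metadata
# using exact-match precision, recall, and F1. Requires Python 3.9+.

def prf1(predicted: set[str], gold: set[str]) -> tuple[float, float, float]:
    """Exact-match precision, recall, and F1 over normalized term strings."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical placeholder term lists, lowercased before comparison.
gold_terms = {"abelian group", "topological space", "natural transformation"}
predicted_terms = {"abelian group", "vector space", "natural transformation"}

p, r, f = prf1({t.lower() for t in predicted_terms}, {t.lower() for t in gold_terms})
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

Exact-match scoring is deliberately strict; partial-match or normalized variants are also common for terminology and definition extraction, and the paper's benchmarks may differ in these details.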