{"title":"Multi-hop Question Answering","authors":"Vaibhav Mavi, Anubhav Jangra, Jatowt Adam","doi":"10.1561/1500000102","DOIUrl":null,"url":null,"abstract":"<p>The task of Question Answering (QA) has attracted significant\nresearch interest for a long time. Its relevance to\nlanguage understanding and knowledge retrieval tasks, along\nwith the simple setting, makes the task of QA crucial for\nstrong AI systems. Recent success on simple QA tasks has\nshifted the focus to more complex settings. Among these,\nMulti-Hop QA (MHQA) is one of the most researched tasks\nover recent years. In broad terms, MHQA is the task of answering\nnatural language questions that involve extracting\nand combining multiple pieces of information and doing multiple\nsteps of reasoning. An example of a multi-hop question\nwould be “The Argentine PGA Championship record holder\nhas won how many tournaments worldwide?”. Answering\nthe question would need two pieces of information: “Who is\nthe record holder for Argentine PGA Championship tournaments?”\nand “How many tournaments did [Answer of Sub\nQ1] win?”. The ability to answer multi-hop questions and\nperform multi step reasoning can significantly improve the\nutility of NLP systems. Consequently, the field has seen a\nsurge of high quality datasets, models and evaluation strategies.\nThe notion of ‘multiple hops’ is somewhat abstract\nwhich results in a large variety of tasks that require multihop\nreasoning. This leads to different datasets and models\nthat differ significantly from each other and make the field\nchallenging to generalize and survey. We aim to provide a\ngeneral and formal definition of the MHQA task, and organize\nand summarize existing MHQA frameworks. We also\noutline some best practices for building MHQA datasets.\nThis monograph provides a systematic and thorough introduction\nas well as the structuring of the existing attempts\nto this highly interesting, yet quite challenging task.</p>","PeriodicalId":48829,"journal":{"name":"Foundations and Trends in Information Retrieval","volume":null,"pages":null},"PeriodicalIF":8.3000,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Foundations and Trends in Information Retrieval","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1561/1500000102","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
The task of Question Answering (QA) has attracted significant research interest for a long time. Its relevance to language understanding and knowledge retrieval tasks, along with its simple setting, makes QA crucial for strong AI systems. Recent success on simple QA tasks has shifted the focus to more complex settings. Among these, Multi-Hop QA (MHQA) has been one of the most researched tasks in recent years. In broad terms, MHQA is the task of answering natural language questions that involve extracting and combining multiple pieces of information and performing multiple steps of reasoning. An example of a multi-hop question is “The Argentine PGA Championship record holder has won how many tournaments worldwide?”. Answering it requires two pieces of information: “Who is the record holder for Argentine PGA Championship tournaments?” and “How many tournaments did [Answer of Sub Q1] win?”. The ability to answer multi-hop questions and perform multi-step reasoning can significantly improve the utility of NLP systems. Consequently, the field has seen a surge of high-quality datasets, models, and evaluation strategies.

The notion of ‘multiple hops’ is somewhat abstract, which gives rise to a large variety of tasks that require multi-hop reasoning. The resulting datasets and models differ significantly from each other, making the field challenging to generalize over and survey. We aim to provide a general and formal definition of the MHQA task, and to organize and summarize existing MHQA frameworks. We also outline some best practices for building MHQA datasets. This monograph provides a systematic and thorough introduction to, as well as a structured overview of existing approaches to, this highly interesting yet quite challenging task.
Journal description:
The surge in research across all domains in the past decade has resulted in a plethora of new publications, causing an exponential growth in published research. Navigating through this extensive literature and staying current has become a time-consuming challenge. While electronic publishing provides instant access to more articles than ever, discerning the essential ones for a comprehensive understanding of any topic remains an issue. To tackle this, Foundations and Trends® in Information Retrieval - FnTIR - addresses the problem by publishing high-quality survey and tutorial monographs in the field.
Each issue of Foundations and Trends® in Information Retrieval (FnTIR) features a 50-100 page monograph authored by research leaders, covering tutorial subjects, research retrospectives, and survey papers that provide state-of-the-art reviews within the scope of the journal.