Pre-trained language models: What do they know?

Nuno Guimarães, Ricardo Campos, Alípio Jorge
Journal: WIREs Data Mining and Knowledge Discovery
DOI: 10.1002/widm.1518
Publication date: 2023-09-21
Publication type: Journal Article
Citations: 0

Abstract

Large language models (LLMs) have substantially pushed artificial intelligence (AI) research and applications in the last few years. They are currently able to achieve high effectiveness in different natural language processing (NLP) tasks, such as machine translation, named entity recognition, text classification, question answering, or text summarization. Recently, significant attention has been drawn to the capabilities and highly accessible interface of OpenAI's GPT models. LLMs are nowadays routinely used and studied for downstream tasks and specific applications with great success, pushing forward the state of the art in almost all of them. However, they also exhibit impressive inference capabilities when used off the shelf without further training. In this paper, we aim to study the behavior of pre-trained language models (PLMs) in some inference tasks they were not initially trained for. Therefore, we focus our attention on very recent research works related to the inference capabilities of PLMs in some selected tasks such as factual probing and common-sense reasoning. We highlight relevant achievements made by these models, as well as some of their current limitations that open opportunities for further research.

This article is categorized under: Fundamental Concepts of Data and Knowledge > Key Design Issues in Data Mining Technologies > Artificial Intelligence
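The factual probing mentioned in the abstract is typically done with cloze-style prompts: a factual statement with one token masked out, which a pre-trained masked language model fills in with no task-specific fine-tuning. The following is a minimal sketch of that idea, assuming the Hugging Face `transformers` library and `bert-base-cased` as an arbitrary example PLM; it is an illustration of the general technique, not the specific experimental setup of the paper.

```python
# Cloze-style factual probing of an off-the-shelf masked language model:
# the model fills in the blank of a factual statement without any
# further training, exposing what it "knows" from pre-training.
from transformers import pipeline

# bert-base-cased is an arbitrary choice of PLM for illustration.
fill = pipeline("fill-mask", model="bert-base-cased")

prompts = [
    "The capital of France is [MASK].",
    "The largest planet in the solar system is [MASK].",
]

for prompt in prompts:
    # Each candidate comes with the model's probability for the filled token.
    candidates = fill(prompt, top_k=3)
    top = candidates[0]
    print(f"{prompt} -> {top['token_str']} (p={top['score']:.2f})")
```

The probability attached to the top candidate is what probing studies typically inspect: a high-confidence correct completion suggests the fact was captured during pre-training, while a wrong or low-confidence one points to the limitations the abstract mentions.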