Guiding transformer to generate graph structure for AMR parsing

Runliang Niu, Qi Wang
{"title":"Guiding transformer to generate graph structure for AMR parsing","authors":"Runliang Niu, Qi Wang","doi":"10.1117/12.2639102","DOIUrl":null,"url":null,"abstract":"Abstract Meaning Representation (AMR) is a kind of semantic representation of natural language, which aims to represent the semantics of a sentence by a rooted, directed, and acyclic graph (DAG). Most existing AMR parsing works are designed under specific dictionary. However, these works make the content length of each node limited, and they mainly need to go through a very complicated post-processing process. In this paper, we propose a novel encoder-decoder framework for AMR parsing to address these issues, which generates a graph structure and predicts node relationships simultaneously. Specifically, we represent each node as a five-tuple form, containing token sequence of variable length and the connection relationship with other nodes. BERT model is employed as the encoder module. Our decoder module first generates a linearization representation of the graph structure, then predicts multiple elements of each node by four different attention based classifiers. We also found an effective way to improve the generalization performance of Transformer model for graph generation. By assigning different index number to nodes in each training step and remove positional encoding used in most generative models, the model can learn the relationship between nodes better. Experiments against two AMR datasets demonstrate the competitive performance of our proposed method compared with baseline methods.","PeriodicalId":336892,"journal":{"name":"Neural Networks, Information and Communication Engineering","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks, Information and Communication Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2639102","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Abstract Meaning Representation (AMR) is a semantic representation of natural language that expresses the meaning of a sentence as a rooted, directed acyclic graph (DAG). Most existing AMR parsing approaches are designed around a specific dictionary. However, these approaches limit the content length of each node and typically require a complicated post-processing stage. In this paper, we propose a novel encoder-decoder framework for AMR parsing that addresses these issues by generating the graph structure and predicting node relationships simultaneously. Specifically, we represent each node as a five-tuple containing a variable-length token sequence and the node's connections to other nodes. A BERT model is employed as the encoder module. Our decoder module first generates a linearized representation of the graph structure, then predicts the elements of each node with four different attention-based classifiers. We also found an effective way to improve the generalization of the Transformer model for graph generation: by assigning different index numbers to the nodes at each training step and removing the positional encoding used in most generative models, the model learns the relationships between nodes better. Experiments on two AMR datasets demonstrate the competitive performance of our proposed method compared with baseline methods.
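Two of the mechanisms the abstract describes, the five-tuple node form and the per-step index randomization, can be made concrete with a short sketch. The Python below is a minimal illustration under assumptions, not the authors' implementation: the abstract does not spell out the five tuple elements, so every `AMRNode` field other than the token sequence and the connection information is a hypothetical guess, and the `randomize_indices` helper is likewise invented for illustration.

```python
import random
from dataclasses import dataclass


@dataclass
class AMRNode:
    """Hypothetical five-tuple node representation.

    The abstract only states that each node carries a variable-length
    token sequence plus its connections to other nodes; the remaining
    fields are illustrative guesses, not the paper's actual schema.
    """
    index: int          # node index, re-randomized at every training step
    tokens: list        # variable-length concept token sequence
    head_index: int     # index of the node this one attaches to
    relation: str       # edge label toward the head node (e.g. ":ARG0")
    is_root: bool = False


def randomize_indices(nodes, max_index):
    """Give every node a distinct random index drawn from [0, max_index).

    With positional encoding removed, the decoder sees only these index
    embeddings, so it cannot memorize absolute positions and is pushed to
    learn the head/relation structure between nodes instead.
    """
    new_ids = random.sample(range(max_index), len(nodes))
    remap = {node.index: new_id for node, new_id in zip(nodes, new_ids)}
    for node in nodes:
        node.head_index = remap.get(node.head_index, node.head_index)
        node.index = remap[node.index]
    return nodes


# Fresh indices would be drawn on every training step, e.g. in the data loader:
nodes = [
    AMRNode(index=0, tokens=["want-01"], head_index=0, relation=":root",
            is_root=True),
    AMRNode(index=1, tokens=["boy"], head_index=0, relation=":ARG0"),
]
randomize_indices(nodes, max_index=128)
```

Calling the remapping once per training step means the model never sees a node at a fixed position, which is the stated substitute for the positional encoding used in most generative models.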