{"title":"Enhancing pre-trained language models with Chinese character morphological knowledge","authors":"Zhenzhong Zheng , Xiaoming Wu , Xiangzhi Liu","doi":"10.1016/j.ipm.2024.103945","DOIUrl":null,"url":null,"abstract":"<div><div>Pre-trained language models (PLMs) have demonstrated success in Chinese natural language processing (NLP) tasks by acquiring high-quality representations through contextual learning. However, these models tend to neglect the glyph features of Chinese characters, which contain valuable semantic knowledge. To address this issue, this paper introduces a self-supervised learning strategy, named SGBERT, aiming to learn high-quality semantic knowledge from Chinese Character morphology to enhance PLMs’ understanding of natural language. Specifically, the learning process of SGBERT can be divided into two stages. In the first stage, we preheat the glyph encoder by constructing contrastive learning between glyphs, enabling it to obtain preliminary glyph coding capabilities. In the second stage, we transform the glyph features captured by the glyph encoder into context-sensitive representations through a glyph-aware window. These representations are then contrasted with the character representations generated by the PLMs, leveraging the powerful representation capabilities of the PLMs to guide glyph learning. Finally, the glyph knowledge is fused with the pre-trained model representations to obtain semantically richer representations. We conduct experiments on ten datasets covering six Chinese NLP tasks, and the results demonstrate that SGBERT significantly enhances commonly used Chinese PLMs. On average, the introduction of SGBERT resulted in a performance improvement of 1.36% for BERT and 1.09% for RoBERTa.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 1","pages":"Article 103945"},"PeriodicalIF":7.4000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457324003042","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Pre-trained language models (PLMs) have demonstrated success in Chinese natural language processing (NLP) tasks by acquiring high-quality representations through contextual learning. However, these models tend to neglect the glyph features of Chinese characters, which carry valuable semantic knowledge. To address this issue, this paper introduces a self-supervised learning strategy, named SGBERT, that learns high-quality semantic knowledge from Chinese character morphology to enhance PLMs’ understanding of natural language. The learning process of SGBERT is divided into two stages. In the first stage, we warm up the glyph encoder through contrastive learning between glyphs, giving it a preliminary glyph-encoding capability. In the second stage, we transform the glyph features captured by the glyph encoder into context-sensitive representations through a glyph-aware window. These representations are then contrasted with the character representations generated by the PLM, leveraging the PLM’s powerful representation capabilities to guide glyph learning. Finally, the glyph knowledge is fused with the pre-trained model’s representations to obtain semantically richer representations. We conduct experiments on ten datasets covering six Chinese NLP tasks, and the results demonstrate that SGBERT significantly enhances commonly used Chinese PLMs, yielding an average performance improvement of 1.36% for BERT and 1.09% for RoBERTa.
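To make the two-stage scheme concrete, here is a minimal PyTorch sketch under stated assumptions: the function and parameter names (info_nce, glyph_encoder, glyph_window) are hypothetical stand-ins, an InfoNCE-style objective is assumed for both contrastive stages, and fusion is illustrated as simple addition; the paper's actual losses, architecture, and fusion mechanism may differ.

```python
# Illustrative sketch of a two-stage glyph contrastive scheme (assumed
# details, not the paper's exact method).
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.07):
    """InfoNCE loss: the positive for row i of `anchor` is row i of `positive`."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature              # (B, B) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

# Stage 1: warm up the glyph encoder by contrasting two views
# (e.g., two augmented renderings) of the same character glyph.
def stage1_step(glyph_encoder, view_a, view_b):
    z_a = glyph_encoder(view_a)                               # (B, D)
    z_b = glyph_encoder(view_b)                               # (B, D)
    return info_nce(z_a, z_b)

# Stage 2: turn glyph features into context-sensitive representations via
# a glyph-aware window, contrast them with the PLM's character
# representations (frozen here, so the PLM guides glyph learning),
# then fuse the two for a semantically richer representation.
def stage2_step(glyph_encoder, glyph_window, plm_hidden, char_images):
    g = glyph_window(glyph_encoder(char_images))              # (B, D)
    loss = info_nce(g, plm_hidden.detach())                   # PLM-guided contrast
    fused = plm_hidden + g                                    # assumed fusion: addition
    return loss, fused
```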
About the journal
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.