Validating pretrained language models for content quality classification with semantic-preserving metamorphic relations

Pak Yuen Patrick Chan, Jacky Keung
Natural Language Processing Journal, Volume 9, Article 100114. Published 2024-10-16. DOI: 10.1016/j.nlp.2024.100114

Abstract

Context:

Utilizing pretrained language models (PLMs) has become common practice for maintaining the content quality of question-answering (Q&A) websites. However, evaluating the effectiveness of PLMs is challenging because they tend to converge to local rather than global optima.

Objective:

In this study, we propose using semantic-preserving Metamorphic Relations (MRs), derived from Metamorphic Testing (MT), to address this challenge and validate PLMs.

Methods:

To validate four selected PLMs, we conducted an empirical experiment on a publicly available dataset of 60,000 data points. We defined three Metamorphic Relation Groups (MRGs), comprising thirteen semantic-preserving MRs, and used them to generate “Follow-up” testing datasets from the original “Source” testing datasets. The PLMs were trained on a separate training dataset. We then compared each trained PLM’s predictions on the “Source” and “Follow-up” testing datasets to identify violations, i.e., inconsistent predictions between the two datasets. If no violation was found, the PLM was insensitive to the associated MR, so that MR can be used for validation. Where no violation occurred across an entire MRG, non-violation regions were identified, which supports simulation metamorphic testing.
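The comparison step above can be sketched in a few lines. The sketch below is illustrative only: `classify` is a hypothetical stand-in for a trained PLM, and the two MRs mirror the “Uppercase conversion” and “Duplication” relations named in the Results; the paper's actual MRs, models, and dataset are not reproduced here.

```python
# Minimal sketch of metamorphic testing for a text classifier.
# classify() is a hypothetical placeholder for a PLM; because it
# lower-cases its input, it is insensitive to case-changing MRs.

def classify(text: str) -> str:
    """Toy content-quality classifier (stand-in for a trained PLM)."""
    return "low" if "thanks" in text.lower() else "high"

# Semantic-preserving MRs: each maps a "Source" input to a "Follow-up"
# input with the same meaning, so the predicted label should not change.
MRS = {
    "uppercase_conversion": str.upper,
    "duplication": lambda t: t + " " + t,
}

def count_violations(source_texts):
    """Compare predictions on Source vs Follow-up inputs for each MR.
    A violation is an inconsistent prediction between the two datasets."""
    violations = {name: 0 for name in MRS}
    for text in source_texts:
        src_label = classify(text)
        for name, mr in MRS.items():
            if classify(mr(text)) != src_label:
                violations[name] += 1
    return violations

source = ["How do I parse JSON in Python?", "thanks for the help!!"]
print(count_violations(source))  # zero violations -> MRs usable for validation
```

An MR whose violation count stays at zero over the whole Source set is one the model is insensitive to, which is the condition the paper uses to admit an MR as a validation tool.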

Results:

The results demonstrated that the proposed MRs can serve as an effective validation tool for content quality classification of Stack Overflow Q&A using PLMs. One PLM violated neither the “Uppercase conversion” MRG nor the “Duplication” MRG. Furthermore, the absence of violations in these MRGs allowed non-violation regions to be identified, confirming that the proposed MRs can support simulation metamorphic testing.

Conclusion:

The experimental findings indicate that the proposed MRs can validate PLMs effectively and support simulation metamorphic testing for PLMs. However, further investigation is required to enhance the semantic comprehension and common-sense knowledge of PLMs and to explore highly informative statistical patterns of PLMs, in order to improve their overall performance.