An empirical study of pre-trained language models in simple knowledge graph question answering

Nan Hu, Yike Wu, Guilin Qi, Dehai Min, Jiaoyan Chen, Jeff Z Pan, Zafar Ali

World Wide Web: Internet and Web Information Systems, published 2023-05-17. DOI: 10.1007/s11280-023-01166-y
Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP). Adopting PLMs as the backbone for downstream tasks is now the consensus of the NLP community. In recent work on knowledge graph question answering (KGQA), BERT or its variants have become indispensable components of KGQA models. However, there is still a lack of comprehensive research comparing the performance of different PLMs on KGQA. To this end, we summarize two basic PLM-based KGQA frameworks that require no additional neural network modules and use them to compare nine PLMs in terms of accuracy and efficiency. In addition, we derive three benchmarks for larger-scale KGs from the popular SimpleQuestions benchmark to investigate the scalability of PLMs. We carefully analyze the results of all PLM-based KGQA basic frameworks on these benchmarks and on two other popular datasets, WebQuestionsSP and FreebaseQA, and find that knowledge distillation techniques and knowledge enhancement methods in PLMs are promising for KGQA. Furthermore, we test ChatGPT ( https://chat.openai.com/ ), which has drawn a great deal of attention in the NLP community, and demonstrate both its impressive capabilities and its limitations in zero-shot KGQA. We have released the code and benchmarks to promote the use of PLMs for KGQA ( https://github.com/aannonymouuss/PLMs-in-Practical-KBQA ).
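The abstract does not spell out the two frameworks, but for orientation the following is a minimal, hypothetical sketch of the kind of PLM-based pipeline typically used for SimpleQuestions-style single-fact KGQA: one PLM head tags the topic-entity mention, a second classifies the relation, and the answer is read off the KG. The checkpoint name, relation vocabulary, and toy KG below are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
# Minimal sketch (not the paper's code) of a PLM-based single-fact KGQA pipeline:
# (1) detect the topic-entity mention with a token classifier,
# (2) predict the relation with a sequence classifier,
# (3) look the answer up in the KG.
# Both models are assumed to be fine-tuned on SimpleQuestions; the untuned
# checkpoints loaded here produce meaningless predictions.
import torch
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          AutoModelForSequenceClassification)

# Toy single-hop KG: (head entity, relation) -> answer. A real system queries Freebase.
KG = {("neil armstrong", "people.person.place_of_birth"): "Wapakoneta"}
RELATIONS = ["people.person.place_of_birth", "film.film.directed_by"]

ENCODER = "bert-base-uncased"  # any comparable PLM checkpoint could be swapped in here
tok = AutoTokenizer.from_pretrained(ENCODER)
mention_model = AutoModelForTokenClassification.from_pretrained(ENCODER, num_labels=2)  # in-span / out-of-span
relation_model = AutoModelForSequenceClassification.from_pretrained(ENCODER, num_labels=len(RELATIONS))

def answer(question):
    inputs = tok(question, return_tensors="pt")
    with torch.no_grad():
        # 1) Mention detection: keep the tokens tagged as part of the entity span.
        tags = mention_model(**inputs).logits.argmax(-1)[0]
        tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
        mention = tok.convert_tokens_to_string(
            [t for t, y in zip(tokens, tags) if y == 1]).strip()
        # 2) Relation prediction over a fixed relation vocabulary.
        relation = RELATIONS[relation_model(**inputs).logits.argmax(-1).item()]
    # 3) Entity linking by string match on the mention, then a single-hop lookup.
    for (head, rel), tail in KG.items():
        if mention and mention in head and rel == relation:
            return tail
    return None

print(answer("where was neil armstrong born?"))
```

The same wiring accommodates different encoders by swapping the checkpoint name, which is how a comparison across PLMs in accuracy and inference efficiency could be run without adding task-specific neural modules.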
Journal overview:
World Wide Web: Internet and Web Information Systems (WWW) is an international, archival, peer-reviewed journal which covers all aspects of the World Wide Web, including issues related to architectures, applications, Internet and Web information systems, and communities. The purpose of this journal is to provide an international forum for researchers, professionals, and industrial practitioners to share their rapidly developing knowledge and report on new advances in Internet and web-based systems. The journal also focuses on all database- and information-system topics that relate to the Internet and the Web, particularly on ways to model, design, develop, integrate, and manage these systems.
Appearing quarterly, the journal publishes (1) papers describing original ideas and new results, (2) vision papers, (3) reviews of important techniques in related areas, (4) innovative application papers, and (5) progress reports on major international research projects. Papers published in the WWW journal deal with subjects directly or indirectly related to the World Wide Web. The WWW journal provides timely, in-depth coverage of the most recent developments in the World Wide Web discipline to enable anyone involved to keep up-to-date with this dynamically changing technology.