Title: Intent detection for task-oriented conversational agents: A comparative study of recurrent neural networks and transformer models
Authors: Mourad Jbene, Abdellah Chehri, Rachid Saadane, Smail Tigani, Gwanggil Jeon
Journal: Expert Systems (Q2, Computer Science, Artificial Intelligence; Impact Factor 3.0)
DOI: https://doi.org/10.1111/exsy.13712
Publication date: 2024-08-26
Publication type: Journal Article
Citations: 0
Abstract
Conversational assistants (CAs), and task-oriented ones in particular, are designed to interact with users in natural language, assisting them in completing specific tasks or providing relevant information. These systems employ advanced natural language understanding (NLU) and dialogue management techniques to comprehend user inputs, infer their intentions, and generate appropriate responses or actions. Over time, CAs have gradually diversified and today touch various fields such as e-commerce, healthcare, tourism, fashion, travel, and many other sectors. NLU is a fundamental task in the natural language processing (NLP) field. Identifying user intents from natural language utterances is a sub-task of NLU that is crucial for conversational systems, and the diversity of user utterances makes intent detection (ID) a particularly challenging problem. Recently, with the emergence of deep neural networks, new state-of-the-art (SOTA) results have been achieved on a range of NLP tasks. Recurrent neural networks (RNNs) and Transformer architectures are two major contributors to those improvements. RNNs have significantly advanced sequence modelling across various application areas, while Transformer models represent a newer architecture leveraging attention mechanisms, extensive training data sets, and computational power. This review paper begins with a detailed exploration of RNN and Transformer models. Subsequently, it conducts a comparative analysis of their performance in intent recognition for task-oriented CAs. Finally, it concludes by addressing the main challenges and outlining future research directions.
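The intent detection task the abstract describes is conventionally framed as utterance classification: map a natural-language utterance to one label from a fixed intent inventory. As a minimal, library-free sketch of that framing (the utterances, intent labels, and the nearest-centroid scoring below are illustrative assumptions, not the paper's models or data sets):

```python
from collections import Counter

# Toy labeled utterances (illustrative only; real benchmarks such as
# ATIS or SNIPS contain thousands of utterances per intent).
TRAIN = [
    ("book a flight to paris", "book_flight"),
    ("i need a plane ticket tomorrow", "book_flight"),
    ("what is the weather like today", "get_weather"),
    ("will it rain this afternoon", "get_weather"),
]

def bow(text):
    """Bag-of-words term-frequency vector, represented as a Counter."""
    return Counter(text.lower().split())

def score(query, cent):
    """Unnormalised term overlap between a query and an intent centroid."""
    return sum(query[w] * cent[w] for w in query)

# Build one additive centroid vector per intent label.
CENTROIDS = {}
for text, label in TRAIN:
    CENTROIDS.setdefault(label, Counter()).update(bow(text))

def detect_intent(utterance):
    """Return the intent whose centroid best overlaps the utterance."""
    q = bow(utterance)
    return max(CENTROIDS, key=lambda label: score(q, CENTROIDS[label]))

print(detect_intent("will it rain in paris tomorrow"))  # -> get_weather
```

The RNN and Transformer models the paper surveys replace the bag-of-words step with a learned utterance encoding (a final RNN hidden state, or a pooled attention representation), but the classification framing is the same.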
Journal description:
Expert Systems: The Journal of Knowledge Engineering publishes papers dealing with all aspects of knowledge engineering, including individual methods and techniques in knowledge acquisition and representation, and their application in the construction of systems – including expert systems – based thereon. Detailed scientific evaluation is an essential part of any paper.
As well as traditional application areas, such as Software and Requirements Engineering, Human-Computer Interaction, and Artificial Intelligence, we are aiming at the new and growing markets for these technologies, such as Business, Economy, Market Research, and Medical and Health Care. The shift towards this new focus will be marked by a series of special issues covering hot and emergent topics.