{"title":"How we can use text classification in the Back-Office environment of a bank as ‘business as usual’ solution","authors":"Zsolt Krutilla, Attila Kovari","doi":"10.1109/SACI58269.2023.10158670","DOIUrl":null,"url":null,"abstract":"Natural Language Processing nowadays provides scientists with many research areas and opportunities, but as with most applied sciences, our goal in Natural Language Processing is to refine the underlying science and technology until it can be used reliably for business purposes. Transformer models, such as GPT or BERT, are currently showing outstanding results in the field of natural-language processing, but they require huge computational power and data to teach, but these conditions can only be met by larger research centers and large companies, and their inaccuracy makes them unsuitable for use as a ‘business as usual’ (BAU) solution. In this paper, we present a solution that is able to overcome these problems by focusing on accuracy and usability, and that can also bring a new perspective to the process of teaching deep learning models.","PeriodicalId":339156,"journal":{"name":"2023 IEEE 17th International Symposium on Applied Computational Intelligence and Informatics (SACI)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 17th International Symposium on Applied Computational Intelligence and Informatics (SACI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SACI58269.2023.10158670","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Natural Language Processing today offers scientists many research areas and opportunities, but as with most applied sciences, the goal is to refine the underlying science and technology until it can be used reliably for business purposes. Transformer models such as GPT or BERT currently show outstanding results in natural language processing, but they require enormous computational power and data to train, conditions that can only be met by larger research centers and large companies, and their inaccuracy makes them unsuitable for use as a 'business as usual' (BAU) solution. In this paper, we present a solution that overcomes these problems by focusing on accuracy and usability, and that also brings a new perspective to the process of training deep learning models.