Representational-Interactive Feature Fusion Method for Text Intent Matching
Jinlei Zhu, Taihao Wang, Chuanfeng Zhang, Kun Zhang, Kun Jing, Houjin Chen
2021 IEEE 4th International Conference on Electronics Technology (ICET), published 2021-05-07
DOI: 10.1109/ICET51757.2021.9450903 (https://doi.org/10.1109/ICET51757.2021.9450903)
Citations: 0
Abstract
This paper introduces a new architecture for text intent matching that fuses deep representational and deep interactive features; the combined features are then trained by a single reasoning network model. The architecture mainly comprises two typical fusion sub-models built on the fusion of the separated features. We further define two novel loss functions for the models, which achieve excellent performance in our experiments. Most notably, the F1-score of our model improves on state-of-the-art methods by 2.6% on well-known datasets.
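To make the fusion idea in the abstract concrete, the sketch below is a minimal toy illustration (not the authors' actual model): a representational branch encodes each sentence independently, an interactive branch derives cross-sentence alignment features, and the concatenated features are scored by a single reasoning layer. All function names, the mean-pooling encoder, and the max-similarity interaction are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def represent(tokens):
    # Representational branch: encode a sentence on its own
    # (a stand-in for a siamese encoder; here, mean over token vectors).
    return tokens.mean(axis=0)

def interact(a, b):
    # Interactive branch: cross-sentence alignment features
    # (a stand-in for attention; here, per-token best-match similarities).
    sim = a @ b.T                                   # token-by-token similarity
    return np.concatenate([sim.max(axis=1), sim.max(axis=0)])

def fuse_and_score(a, b, w):
    # Fuse both feature types, then score with one reasoning layer.
    rep = np.concatenate([represent(a), represent(b)])   # 2 * dim features
    inter = interact(a, b)                               # len(a) + len(b) features
    feats = np.concatenate([rep, inter])
    return 1.0 / (1.0 + np.exp(-(feats @ w)))            # matching probability

# Toy "sentences": 3 and 4 tokens, 8-dim embeddings; random scoring weights.
s1 = rng.standard_normal((3, 8))
s2 = rng.standard_normal((4, 8))
w = rng.standard_normal(8 + 8 + 3 + 4)
p = fuse_and_score(s1, s2, w)
print(float(p))
```

In a real system, both branches would be learned networks and the fused vector would feed a trained classifier; the point here is only the shape of the pipeline, i.e. that representational and interactive features are computed separately and combined before a single matching decision.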