{"title":"基于综合解释和神经网络的关系提取","authors":"Rozan Chahardoli, Denilson Barbosa, Davood Rafiei","doi":"10.1145/3459104.3459147","DOIUrl":null,"url":null,"abstract":"The state-of-the-art for Relation Extraction, defined as the detection of existing relations between a pair of entities in a sentence, relies on neural networks that require a large number of training examples to perform well. To address that cost, Distant Supervision has become the preferred choice for collecting labeled sentences. However, Distant Supervision has many limitations and often introduces noise into the training set. Recent work has shown an alternative way of training neural methods for relation extraction, namely to provide a small number of annotated sentences and explanations for why those sentences express the relation. Training classifiers with this approach results in accuracy comparable to Distant Supervision, but requires humans to annotate the sentences and provide the explanations. In this paper, we show a way to generate synthetic explanations from a small number of relational trigger words, for each relation, whose resulting explanations achieve comparable accuracy to human produced ones. We validate the method on five relation extraction tasks with different entity types (person-person, person-location, etc.). Furthermore, experiments on two public datasets demonstrate the effectiveness of our generated synthetic explanations, with 6% improvement in accuracy on relation extraction and 19% improvement in F1-score on generating labeled training sentences compared to the next best methods.","PeriodicalId":142284,"journal":{"name":"2021 International Symposium on Electrical, Electronics and Information Engineering","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Relation Extraction with Synthetic Explanations and Neural Network\",\"authors\":\"Rozan Chahardoli, Denilson Barbosa, Davood Rafiei\",\"doi\":\"10.1145/3459104.3459147\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The state-of-the-art for Relation Extraction, defined as the detection of existing relations between a pair of entities in a sentence, relies on neural networks that require a large number of training examples to perform well. To address that cost, Distant Supervision has become the preferred choice for collecting labeled sentences. However, Distant Supervision has many limitations and often introduces noise into the training set. Recent work has shown an alternative way of training neural methods for relation extraction, namely to provide a small number of annotated sentences and explanations for why those sentences express the relation. Training classifiers with this approach results in accuracy comparable to Distant Supervision, but requires humans to annotate the sentences and provide the explanations. In this paper, we show a way to generate synthetic explanations from a small number of relational trigger words, for each relation, whose resulting explanations achieve comparable accuracy to human produced ones. We validate the method on five relation extraction tasks with different entity types (person-person, person-location, etc.). 
Furthermore, experiments on two public datasets demonstrate the effectiveness of our generated synthetic explanations, with 6% improvement in accuracy on relation extraction and 19% improvement in F1-score on generating labeled training sentences compared to the next best methods.\",\"PeriodicalId\":142284,\"journal\":{\"name\":\"2021 International Symposium on Electrical, Electronics and Information Engineering\",\"volume\":\"34 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-02-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Symposium on Electrical, Electronics and Information Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3459104.3459147\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Symposium on Electrical, Electronics and Information Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3459104.3459147","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The state of the art in Relation Extraction, defined as detecting the relations that hold between a pair of entities in a sentence, relies on neural networks that require a large number of training examples to perform well. To address that cost, Distant Supervision has become the preferred choice for collecting labeled sentences. However, Distant Supervision has many limitations and often introduces noise into the training set. Recent work has shown an alternative way of training neural methods for relation extraction: provide a small number of annotated sentences together with explanations of why those sentences express the relation. Training classifiers with this approach yields accuracy comparable to Distant Supervision, but it requires humans to annotate the sentences and write the explanations. In this paper, we show how to generate synthetic explanations for each relation from a small number of relational trigger words; the resulting explanations achieve accuracy comparable to human-produced ones. We validate the method on five relation extraction tasks with different entity types (person-person, person-location, etc.). Furthermore, experiments on two public datasets demonstrate the effectiveness of our synthetic explanations, with a 6% improvement in accuracy on relation extraction and a 19% improvement in F1-score on generating labeled training sentences over the next best methods.
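As a rough illustration of the idea only (the abstract does not give the authors' actual generation procedure), the Python sketch below expands a few hypothetical per-relation trigger words into natural-language explanation sentences using fixed templates. The relation names, trigger words, and template wording are all assumptions made for the example.

# Illustrative sketch only -- not the paper's implementation.
# Relation names, trigger words, and templates below are hypothetical.

TRIGGER_WORDS = {
    "per:spouse": ["married", "wife", "husband"],
    "per:city_of_birth": ["born in", "native of"],
}

# Templates imitate the style of human-written labeling explanations,
# e.g. "the phrase 'born in' appears between SUBJ and OBJ".
TEMPLATES = [
    "the phrase '{w}' appears between SUBJ and OBJ",
    "the phrase '{w}' appears at most three words before OBJ",
]

def synthetic_explanations(relation):
    """Expand each trigger word of a relation into explanation sentences."""
    return [t.format(w=w)
            for w in TRIGGER_WORDS[relation]
            for t in TEMPLATES]

if __name__ == "__main__":
    for explanation in synthetic_explanations("per:spouse"):
        print(explanation)

In an explanation-based training pipeline, sentences generated this way would play the role that the human-written explanations play in the prior work cited in the abstract.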