Feature Extraction Technique Based on Conv1D and Conv2D Network for Thai Speech Emotion Recognition
Naris Prombut, S. Waijanya, Nuttachot Promrit
doi: 10.1145/3508230.3508238

Speech Emotion Recognition is one of the challenging tasks in the Natural Language Processing (NLP) area. Many factors are used to identify emotions in speech, such as pitch, intensity, frequency, duration, and the speaker's nationality. This paper implements a speech emotion recognition model specifically for the Thai language, classifying speech into five emotions: Angry, Frustrated, Neutral, Sad, and Happy. The research uses a dataset from the VISTEC-depa AI Research Institute of Thailand containing 21,562 utterances (scripts), split into 70% training data and 30% test data. We use the Mel spectrogram and Mel-frequency Cepstral Coefficients (MFCC) techniques for feature extraction, together with a 1D Convolutional Neural Network (Conv1D) and a 2D Convolutional Neural Network (Conv2D), to classify emotions. MFCC with Conv2D provides the highest accuracy at 80.59%, exceeding the baseline study's 71.35%.
Automated Intention Mining with Comparatively Fine-tuning BERT
Xuan Sun, Luqun Li, F. Mercaldo, Yichen Yang, A. Santone, F. Martinelli
doi: 10.1145/3508230.3508254
In the field of software engineering, intention mining is an interesting but challenging task, where the goal is to understand user-generated texts well enough to capture requirements that are useful for software maintenance and evolution. Recently, BERT and its variants have achieved state-of-the-art performance on various natural language processing tasks such as machine translation, machine reading comprehension, and natural language inference. However, few studies have investigated the efficacy of pre-trained language models on this task. In this paper, we present a new baseline with a fine-tuned BERT model. Our method achieves state-of-the-art results on three benchmark data sets, outperforming baselines by a substantial margin. We also investigate the efficacy of the pre-trained BERT model with shallower network depths through a simple strategy for layer selection.
{"title":"Automated Intention Mining with Comparatively Fine-tuning BERT","authors":"Xuan Sun, Luqun Li, F. Mercaldo, Yichen Yang, A. Santone, F. Martinelli","doi":"10.1145/3508230.3508254","DOIUrl":"https://doi.org/10.1145/3508230.3508254","url":null,"abstract":"In the field of software engineering, intention mining is an interesting but challenging task, where the goal is to have a good understanding of user generated texts so as to capture their requirements that are useful for software maintenance and evolution. Recently, BERT and its variants have achieved state-of-the-art performance among various natural language processing tasks such as machine translation, machine reading comprehension and natural language inference. However, few studies try to investigate the efficacy of pre-trained language models in the task. In this paper, we present a new baseline with fine-tuned BERT model. Our method achieves state-of-the-art results on three benchmark data sets, outscoring baselines by a substantial margin. We also further investigate the efficacy of the pre-trained BERT model with shallower network depths through a simple strategy for layer selection.","PeriodicalId":252146,"journal":{"name":"Proceedings of the 2021 5th International Conference on Natural Language Processing and Information Retrieval","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134325488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CBCP: A Method of Causality Extraction from Unstructured Financial Text
Lang Cao, Shihuangzhai Zhang, Juxing Chen
doi: 10.1145/3508230.3508250

Extracting causality information from unstructured natural language text is a challenging problem in natural language processing, and no mature, dedicated causality extraction systems exist. Most work applies basic sequence labeling methods, such as the BERT-CRF model, to extract causal elements from unstructured text, and the results are usually unsatisfactory. At the same time, the finance domain contains a large number of causal event relations. If we can extract financial causality at scale, this information will help us better understand the relationships between financial events and build related event evolutionary graphs in the future. In this paper, we propose a causality extraction method named CBCP (Center word-based BERT-CRF with Pattern extraction), which can directly extract cause elements and effect elements from unstructured text. Compared to the BERT-CRF model, our model incorporates center-word information as a prior condition and performs better at entity extraction. Moreover, combining our method with pattern-based extraction further improves causality extraction. We evaluate our method against the basic sequence labeling approach and show that it outperforms other basic extraction methods on causality extraction tasks in the finance domain. Finally, we summarize our work and outline future directions.
{"title":"CBCP: A Method of Causality Extraction from Unstructured Financial Text","authors":"Lang Cao, Shihuangzhai Zhang, Juxing Chen","doi":"10.1145/3508230.3508250","DOIUrl":"https://doi.org/10.1145/3508230.3508250","url":null,"abstract":"Extracting causality information from unstructured natural language text is a challenging problem in natural language processing. However, there are no mature special causality extraction systems. Most people use basic sequence labeling methods, such as BERT-CRF model, to extract causal elements from unstructured text and the results are usually not well. At the same time, there is a large number of causal event relations in the field of finance. If we can extract enormous financial causality, this information will help us better understand the relationships between financial events and build related event evolutionary graphs in the future. In this paper, we propose a causality extraction method for this question, named CBCP (Center word-based BERT-CRF with Pattern extraction), which can directly extract cause elements and effect elements from unstructured text. Compared to BERT-CRF model, our model incorporates the information of center words as prior conditions and performs better in the performance of entity extraction. Moreover, our method combined with pattern can further improve the effect of extracting causality. Then we evaluate our method and compare it to the basic sequence labeling method. We prove that our method performs better than other basic extraction methods on causality extraction tasks in the finance field. At last, we summarize our work and prospect some future work.","PeriodicalId":252146,"journal":{"name":"Proceedings of the 2021 5th International Conference on Natural Language Processing and Information Retrieval","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128896098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved Bi-GRU Model for Imbalanced English Toxic Comments Dataset
Zhongguo Wang, Bao Zhang
doi: 10.1145/3508230.3508234

Deep learning is widely used in the study of English toxic comment classification, but most existing studies fail to consider data imbalance. For the imbalanced English Toxic Comments Dataset, we propose an improved Bi-directional Gated Recurrent Unit (Bi-GRU) model that combines oversampling with a cost-sensitive method. We use random oversampling in the improved model to reduce the data imbalance, introduce a cost-sensitive method, and propose a new loss function for the Bi-GRU model. Experimental results show that the improved Bi-GRU model achieves significantly better classification performance on the imbalanced English Toxic Comments Dataset.
{"title":"Improved Bi-GRU Model for Imbalanced English Toxic Comments Dataset","authors":"Zhongguo Wang, Bao Zhang","doi":"10.1145/3508230.3508234","DOIUrl":"https://doi.org/10.1145/3508230.3508234","url":null,"abstract":"Deep learning is widely used in the study of English toxic comment classification. However, most existing studies failed to consider data imbalance. Aiming at an imbalanced English Toxic Comments Dataset, we propose an improved Bi-gated recurrent unit (GRU) model that combines an oversampling and cost-sensitive method. We use random oversampling in the improved model to reduce the data imbalance, introduce a cost-sensitive method, and propose a new loss function for the Bi-GRU model. Experimental results show that the improved Bi-GRU model demonstrates a significantly improved classification performance in the imbalanced English Toxic Comments Dataset.","PeriodicalId":252146,"journal":{"name":"Proceedings of the 2021 5th International Conference on Natural Language Processing and Information Retrieval","volume":"142 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129298806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scored and Error-annotated Essay Dataset of Chinese EFL/ESL Learners
Kai Jin, Wuying Liu
doi: 10.1145/3508230.3508245

A finely annotated essay dataset of EFL/ESL (English as a foreign or second language) learners at a reasonable scale is not only an important language resource for language research and teaching but also contributing material for language-related computing science. Unfortunately, such data openly available on the Internet are small in quantity and uneven in quality, especially for Chinese learners. We collected 147 essays of Chinese EFL/ESL learners, had four teachers score them under the same criteria and one teacher annotate major errors, and also had them scored by the Pigai scoring system. We then structured the score file, error-annotated files, and essay files together with context information, and built the Scored and Error-annotated Essay Dataset of Chinese EFL/ESL Learners (SeedCel), which is openly available on the Internet and will be incrementally updated. This paper explains how SeedCel is constructed, what the details of SeedCel are, and where SeedCel will be used.
{"title":"Scored and Error-annotated Essay Dataset of Chinese EFL/ESL Learners","authors":"Kai Jin, Wuying Liu","doi":"10.1145/3508230.3508245","DOIUrl":"https://doi.org/10.1145/3508230.3508245","url":null,"abstract":"A certain scale of finely annotated essay dataset of EFL/ESL (English as a foreign language or the second language) learners is not only an important language resource for language research and teaching, but also contributing materials for language-related computing science. Unfortunately, this type of data open on the Internet are not only of small quantity but also of uneven quality, especially such data of Chinese learners. We collected 147 essays of Chinese EFL/ESL learners and had four teachers score them under the same criteria and one teacher annotate major errors, and have them scored in Pigai scoring system. We then structured the score file, error-annotated files, essay files together with context information, and built the Scored and Error-annotated Essay Dataset of Chinese EFL/ESL Learners (SeedCel) which is open on the Internet and will be incrementally updated. This paper explains how SeedCel is constructed, what the details of SeedCel are, and where SeedCel will be used.","PeriodicalId":252146,"journal":{"name":"Proceedings of the 2021 5th International Conference on Natural Language Processing and Information Retrieval","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128600485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Topic Segmentation for Interview Dialogue System
Taiga Kirihara, Kazuyuki Matsumoto, M. Sasayama, Minoru Yoshida, K. Kita
doi: 10.1145/3508230.3508237
In this study, topic segmentation was performed on an interview dialogue corpus. Utterance intention tags were added to the existing interview dialogue corpus, and uttered sentences were vectorized using BERT, Sentence-BERT, and DistilBERT. Topic classification was then performed using the utterance intention tags together with the features of the preceding and following uttered sentences. The greatest accuracy was achieved when the utterance intention tag was used with DistilBERT.
{"title":"Topic Segmentation for Interview Dialogue System","authors":"Taiga Kirihara, Kazuyuki Matsumoto, M. Sasayama, Minoru Yoshida, K. Kita","doi":"10.1145/3508230.3508237","DOIUrl":"https://doi.org/10.1145/3508230.3508237","url":null,"abstract":"In this study, topic segmentation was performed by referring to the interview dialogue corpus. Utterance intention tags were added to the existing interview dialogue corpus, and uttered sentences were vectorized using BERT, Sentence BERT, and Distil BERT. In addition, topic classification was performed using the utterance intention tags and the features of the preceding and following uttered sentences. Consequently, the greatest accuracy was achieved when the utterance intention tag was used with DistilBERT.","PeriodicalId":252146,"journal":{"name":"Proceedings of the 2021 5th International Conference on Natural Language Processing and Information Retrieval","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114645610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on judgment reasoning using natural language inference in Chinese medical texts
Xin Li, Wenping Kong
doi: 10.1145/3508230.3508248

Machine reading comprehension (MRC) is a task used to test the degree to which a machine understands natural language by asking the machine to answer questions about a given context. Judgment reasoning is an MRC task in which, given a context and questions, the machine answers true or false; for some real-world data there is a third option, unknown. Considering the current research status, this paper uses natural language inference (NLI) models, whose main purpose is to judge the semantic relationship between two sentences, to further study this judgment reasoning task. We first explain how the NLI task can be used to train universal sentence encoding models for judgment reasoning, and then describe the architectures used for NLI, covering a suitable range of sentence encoders currently in use; we take the bi-directional long short-term memory (BiLSTM) model with max-pooling over the hidden representations as the example explained in this paper. Comparative experiments verify that our NLI models are effective at improving the performance of judgment reasoning in Chinese medical texts, yielding clear gains in accuracy.
{"title":"Research on judgment reasoning using natural language inference in Chinese medical texts","authors":"Xin Li, Wenping Kong","doi":"10.1145/3508230.3508248","DOIUrl":"https://doi.org/10.1145/3508230.3508248","url":null,"abstract":"Machine reading comprehension (MRC) is a task used to test the degree to which a machine understands natural language by asking the machine to answer questions according to a given context. Judgment reasoning is one of MRC tasks which means that given a context and questions, let machine gives the true and false answers, for some real-world data, there will be another option of unknown. Considering the current research status, this paper uses natural language inference (NLI) models to further study this judgment reasoning task, which is mainly to judge the semantic relationship between two sentences. In our paper, we first explain how the NLI task can be used to train universal sentence encoding models in the judgment reasoning process and subsequently describe the architectures used in NLI task, which covers a suitable range of sentence encoders currently in use and take the bi-directional long short-term memory (BI-LSTM) model with max-pooling over the hidden representations as an example explained in this paper. After some comparative experiments, we have verified that our NLI models are effective strategies to improve the performance of judgment reasoning in Chinese medical texts, which can effectively improve the accuracy values.","PeriodicalId":252146,"journal":{"name":"Proceedings of the 2021 5th International Conference on Natural Language Processing and Information Retrieval","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124430715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-Resource NMT: A Case Study on the Written and Spoken Languages in Hong Kong
Hei Yi Mak, Tan Lee
doi: 10.1145/3508230.3508242

The majority of inhabitants in Hong Kong are able to read and write standard Chinese but use Cantonese as the primary spoken language in daily life. Spoken Cantonese can be transcribed into Chinese characters, which constitute so-called written Cantonese. Written Cantonese exhibits significant lexical and grammatical differences from standard written Chinese. The rise of written Cantonese is increasingly evident in the cyber world, and the growing interaction between Mandarin speakers and Cantonese speakers is leading to a clear demand for automatic translation between Chinese and Cantonese. This paper describes a transformer-based neural machine translation (NMT) system for written-Chinese-to-written-Cantonese translation. Given that parallel text data of Chinese and Cantonese are extremely scarce, a major focus of this study is preparing a sufficient amount of training data for NMT. In addition to collecting 28K parallel sentences from previous linguistic studies and scattered internet resources, we devise an effective approach to obtaining 72K parallel sentences by automatically extracting pairs of semantically similar sentences from parallel articles on Chinese Wikipedia and Cantonese Wikipedia. We show that leveraging highly similar sentence pairs mined from Wikipedia improves translation performance on all test sets. Our system outperforms Baidu Fanyi's Chinese-to-Cantonese translation on 6 out of 8 test sets in BLEU scores. Translation examples reveal that our system is able to capture important linguistic transformations between standard Chinese and spoken Cantonese.
{"title":"Low-Resource NMT: A Case Study on the Written and Spoken Languages in Hong Kong","authors":"Hei Yi Mak, Tan Lee","doi":"10.1145/3508230.3508242","DOIUrl":"https://doi.org/10.1145/3508230.3508242","url":null,"abstract":"The majority of inhabitants in Hong Kong are able to read and write in standard Chinese but use Cantonese as the primary spoken language in daily life. Spoken Cantonese can be transcribed into Chinese characters, which constitute the so-called written Cantonese. Written Cantonese exhibits significant lexical and grammatical differences from standard written Chinese. The rise of written Cantonese is increasingly evident in the cyber world. The growing interaction between Mandarin speakers and Cantonese speakers is leading to a clear demand for automatic translation between Chinese and Cantonese. This paper describes a transformer-based neural machine translation (NMT) system for written-Chinese-to-written-Cantonese translation. Given that parallel text data of Chinese and Cantonese are extremely scarce, a major focus of this study is on the effort of preparing good amount of training data for NMT. In addition to collecting 28K parallel sentences from previous linguistic studies and scattered internet resources, we devise an effective approach to obtaining 72K parallel sentences by automatically extracting pairs of semantically similar sentences from parallel articles on Chinese Wikipedia and Cantonese Wikipedia. We show that leveraging highly similar sentence pairs mined from Wikipedia improves translation performance in all test sets. Our system outperforms Baidu Fanyi's Chinese-to-Cantonese translation on 6 out of 8 test sets in BLEU scores. Translation examples reveal that our system is able to capture important linguistic transformations between standard Chinese and spoken Cantonese.","PeriodicalId":252146,"journal":{"name":"Proceedings of the 2021 5th International Conference on Natural Language Processing and Information Retrieval","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129902136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Natural Language Processing Applied on Large Scale Data Extraction from Scientific Papers in Fuel Cells
Feifan Yang
doi: 10.1145/3508230.3508256

Natural language processing (NLP) has great potential to help scientists automatically extract information from large-scale text datasets. In this paper, we focus on an NLP pipeline, including text acquisition, text preprocessing, word embedding training, and named entity recognition, applied to 106,181 abstracts of fuel cell papers. We then evaluate the trained model on its ability to solve analogies, use the model to analyze research trends in fuel cell materials, and predict new materials. To the best of our knowledge, this is the first time NLP has been applied in the field of fuel cells. This data-driven technique is shown to have the potential to promote the discovery of new fuel cell materials.
{"title":"Natural Language Processing Applied on Large Scale Data Extraction from Scientific Papers in Fuel Cells","authors":"Feifan Yang","doi":"10.1145/3508230.3508256","DOIUrl":"https://doi.org/10.1145/3508230.3508256","url":null,"abstract":"Natural language processing (NLP) has a great potential to help scientists automatically extract information from large-scale text datasets. In this paper, we focus on the process of NLP — including text acquisition, text preprocessing, word embedding training, and named entity recognition — applied on 106,181 abstracts of fuel cell papers. Then we evaluate our trained model on its ability of analogy, use the model to analyze the research trend in fuel cell materials and predict new materials. To the best of our knowledge, it is the first time that NLP has been applied in the field of fuel cells. This data-driven technique is demonstrated to have the potential to promote the discoveries of new fuel cell materials.","PeriodicalId":252146,"journal":{"name":"Proceedings of the 2021 5th International Conference on Natural Language Processing and Information Retrieval","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134455202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examination of the quality of Conceptnet relations for PubMed abstracts
Rajeswaran Viswanathan, S. Priya
doi: 10.1145/3508230.3508243

ConceptNet is a crowd-sourced knowledge graph used to find relationships between words and concepts, and PubMed is the largest source of documents in the bio-medical domain. Stop words are removed from the PubMed abstracts, and the remaining words are used as seed words. For these seed words, "nearest neighbor" words are identified as candidate words using three popular word vector (WV) models: Word2Vec, GloVe, and FastText. Similarity is calculated for these words within each stratum of relationship. A bootstrap estimator in a Random Effects Model (REM) is then used to study the relationships based on the similarity scores. The analysis shows that there is heterogeneity among the relationships, independent of the WV model used as the base.
{"title":"Examination of the quality of Conceptnet relations for PubMed abstracts","authors":"Rajeswaran Viswanathan, S. Priya","doi":"10.1145/3508230.3508243","DOIUrl":"https://doi.org/10.1145/3508230.3508243","url":null,"abstract":"Conceptnet is a crowd sourced knowledge graph used to find relationship between words and concepts. PubMed is the largest source of documents for the bio-medical domain. From the PubMed abstracts stop words are removed and remaining words are used as seed words. For these seed words “Nearest neighbor” words are identified as candidate words using 3 popular Word Vectors (WV) - Word2Vec, Glove and FastText. Similarity is calculated for these words for each strata of relationship. Bootstrap estimator in Random Effects Model (REM) is used to study this relationship using the similarity scores. Analysis shows that there is heterogeneity among the relationships independent of the WV used as base.","PeriodicalId":252146,"journal":{"name":"Proceedings of the 2021 5th International Conference on Natural Language Processing and Information Retrieval","volume":"29 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133670306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}