{"title":"基于对抗训练的无监督跨语言词嵌入学习","authors":"Yuling Li, Yuhong Zhang, Peipei Li, Xuegang Hu","doi":"10.1109/ICBK.2019.00029","DOIUrl":null,"url":null,"abstract":"Recent works have managed to learn cross-lingual word embeddings (CLWEs) in an unsupervised manner. As a prominent unsupervised model, generative adversarial networks (GANs) have been heavily studied for unsupervised CLWEs learning by aligning the embedding spaces of different languages. Due to disturbing the embedding distribution, the embeddings of low-frequency words (LFEs) are usually treated as noises in the alignment process. To alleviate the impact of LFEs, existing GANs based models utilized a heuristic rule to aggressively sample the embeddings of high-frequency words (HFEs). However, such sampling rule lacks of theoretical support. In this paper, we propose a novel GANs based model to learn cross-lingual word embeddings without any parallel resource. To address the noise problem caused by the LFEs, some perturbations are injected into the LFEs for offsetting the distribution disturbance. In addition, a modified framework based on Cramér GAN is designed to train the perturbed LFEs and the HFEs jointly. Empirical evaluation on bilingual lexicon induction demonstrates that the proposed model outperforms the state-of-the-art GANs based model in several language pairs.","PeriodicalId":383917,"journal":{"name":"2019 IEEE International Conference on Big Knowledge (ICBK)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Unsupervised Cross-Lingual Word Embeddings Learning with Adversarial Training\",\"authors\":\"Yuling Li, Yuhong Zhang, Peipei Li, Xuegang Hu\",\"doi\":\"10.1109/ICBK.2019.00029\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent works have managed to learn cross-lingual word embeddings (CLWEs) in an unsupervised manner. As a prominent unsupervised model, generative adversarial networks (GANs) have been heavily studied for unsupervised CLWEs learning by aligning the embedding spaces of different languages. Due to disturbing the embedding distribution, the embeddings of low-frequency words (LFEs) are usually treated as noises in the alignment process. To alleviate the impact of LFEs, existing GANs based models utilized a heuristic rule to aggressively sample the embeddings of high-frequency words (HFEs). However, such sampling rule lacks of theoretical support. In this paper, we propose a novel GANs based model to learn cross-lingual word embeddings without any parallel resource. To address the noise problem caused by the LFEs, some perturbations are injected into the LFEs for offsetting the distribution disturbance. In addition, a modified framework based on Cramér GAN is designed to train the perturbed LFEs and the HFEs jointly. 
Empirical evaluation on bilingual lexicon induction demonstrates that the proposed model outperforms the state-of-the-art GANs based model in several language pairs.\",\"PeriodicalId\":383917,\"journal\":{\"name\":\"2019 IEEE International Conference on Big Knowledge (ICBK)\",\"volume\":\"91 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE International Conference on Big Knowledge (ICBK)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICBK.2019.00029\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Big Knowledge (ICBK)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICBK.2019.00029","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Unsupervised Cross-Lingual Word Embeddings Learning with Adversarial Training
Abstract: Recent work has managed to learn cross-lingual word embeddings (CLWEs) in an unsupervised manner. As a prominent unsupervised model, generative adversarial networks (GANs) have been studied extensively for unsupervised CLWE learning, where they align the embedding spaces of different languages. Because they disturb the embedding distribution, the embeddings of low-frequency words (LFEs) are usually treated as noise in the alignment process. To alleviate the impact of LFEs, existing GAN-based models use a heuristic rule that aggressively samples the embeddings of high-frequency words (HFEs). However, this sampling rule lacks theoretical support. In this paper, we propose a novel GAN-based model that learns cross-lingual word embeddings without any parallel resources. To address the noise problem caused by LFEs, perturbations are injected into the LFEs to offset the distribution disturbance. In addition, a modified framework based on Cramér GAN is designed to train the perturbed LFEs and the HFEs jointly. Empirical evaluation on bilingual lexicon induction demonstrates that the proposed model outperforms the state-of-the-art GAN-based model on several language pairs.
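The abstract names three ingredients: adversarial alignment of two monolingual embedding spaces, perturbation of low-frequency word embeddings, and a modified Cramér GAN for joint training. The sketch below illustrates only the first two in their simplest form; the linear mapper, the discriminator architecture, the Gaussian noise scale, and the plain binary-cross-entropy adversarial loss are all illustrative assumptions and not the authors' actual Cramér-GAN formulation.

```python
# Minimal sketch of GAN-based cross-lingual embedding alignment with noise
# injected into low-frequency word embeddings. All names, dimensions, and the
# Gaussian-perturbation form are assumptions made for illustration; the paper
# replaces the plain adversarial loss shown here with a modified Cramér GAN.
import torch
import torch.nn as nn

EMB_DIM = 300      # embedding dimensionality (assumed)
NOISE_STD = 0.05   # std of the perturbation added to low-frequency embeddings (assumed)

class Mapper(nn.Module):
    """Linear map W that sends source embeddings into the target space."""
    def __init__(self, dim=EMB_DIM):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        return self.W(x)

class Discriminator(nn.Module):
    """Classifies whether an embedding is a mapped source vector or a target vector."""
    def __init__(self, dim=EMB_DIM, hidden=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def perturb_low_frequency(emb, is_low_freq, std=NOISE_STD):
    """Add Gaussian noise to low-frequency embeddings to offset their
    disturbance of the overall distribution (illustrative form)."""
    noise = torch.randn_like(emb) * std
    return torch.where(is_low_freq.unsqueeze(-1), emb + noise, emb)

def adversarial_step(src_batch, tgt_batch, src_is_lf, tgt_is_lf,
                     mapper, disc, opt_m, opt_d):
    """One alternating update: train the discriminator, then the mapper."""
    bce = nn.BCELoss()
    src = perturb_low_frequency(src_batch, src_is_lf)
    tgt = perturb_low_frequency(tgt_batch, tgt_is_lf)

    # Discriminator: mapped source -> label 0, target -> label 1.
    opt_d.zero_grad()
    mapped = mapper(src).detach()
    d_loss = bce(disc(mapped), torch.zeros(len(mapped))) + \
             bce(disc(tgt), torch.ones(len(tgt)))
    d_loss.backward()
    opt_d.step()

    # Mapper: fool the discriminator into labelling mapped source as target.
    opt_m.zero_grad()
    m_loss = bce(disc(mapper(src)), torch.ones(len(src)))
    m_loss.backward()
    opt_m.step()
    return d_loss.item(), m_loss.item()
```

The paper itself trains the perturbed LFEs and the HFEs jointly under a Cramér-GAN-style objective rather than the binary-cross-entropy loss above; the sketch is only meant to make the overall alignment loop and the LFE-perturbation idea concrete.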