Ruri: Japanese General Text Embeddings
Hayato Tsukagoshi, Ryohei Sasano
arXiv - CS - Computation and Language, published 2024-09-12
DOI: https://doi.org/arxiv-2409.07737
Abstract
We report the development of Ruri, a series of Japanese general-purpose text embedding models. While the development of general-purpose text embedding models for English and multilingual settings has been active in recent years, model development for Japanese remains insufficient, primarily due to a lack of datasets and of the necessary expertise. In this report, we provide a detailed account of the development process of Ruri. Specifically, we discuss the training of embedding models on synthetic datasets generated by LLMs, the construction of a reranker for dataset filtering and knowledge distillation, and the performance evaluation of the resulting general-purpose text embedding models.