Title: Applying Web Crawler Technologies for Compiling Parallel Corpora as One Stage of Natural Language Processing
Authors: Nilufar Abdurakhmonova, Ismailov Alisher, Guli Toirova
Published in: 2022 7th International Conference on Computer Science and Engineering (UBMK)
Publication date: 2022-09-14
DOI: 10.1109/UBMK55850.2022.9919521 (https://doi.org/10.1109/UBMK55850.2022.9919521)
Citation count: 0
Abstract
Over the past decade, the amount of information on the internet has grown sharply, producing a large volume of unstructured data often referred to as big data on the web. Finding and extracting data on the internet is known as information retrieval. Among information-retrieval tools are web crawlers: programs that automatically scan information on the internet and download web documents. Crawler applications are used in various fields, such as news, finance, and medicine. In this article, we discuss the basic principles and characteristics of search engines, taking the construction of parallel corpora as an example, as well as a classification of modern popular crawlers, crawling strategies, and current crawler applications. Finally, we conclude with a discussion of future directions for research on crawlers.
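The abstract's description of a crawler — a program that scans pages and downloads documents automatically — can be illustrated with a minimal sketch. This is not the authors' implementation, only a hedged toy example: a breadth-first crawl loop over a `fetch` callback (here a plain dict lookup standing in for HTTP requests), with link extraction via Python's standard-library `html.parser`. A real crawler would add HTTP fetching, URL normalization, politeness delays, and robots.txt handling.

```python
from collections import deque
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags in an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed, fetch, max_pages=10):
    """Breadth-first crawl: fetch a page, extract links, enqueue unseen ones.

    `fetch(url)` returns the page's HTML, or None if unavailable.
    Returns the list of successfully visited URLs in crawl order.
    """
    frontier = deque([seed])   # URLs waiting to be fetched
    visited = []               # URLs already downloaded
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        html = fetch(url)
        if html is None:
            continue
        visited.append(url)
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            if link not in visited:
                frontier.append(link)
    return visited


# Usage with a stand-in "web": a dict mapping URLs to HTML pages.
pages = {
    "a": '<a href="b">B</a><a href="c">C</a>',
    "b": '<a href="a">A</a>',
    "c": "no outgoing links here",
}
print(crawl("a", pages.get))  # breadth-first order: ['a', 'b', 'c']
```

Swapping `pages.get` for a function that performs real HTTP requests (and capping `max_pages`) turns the same loop into a basic document-downloading crawler of the kind the abstract describes.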