E. Akulinina, A. Karmanov, N. A. Teplykh, V. Vlasov, V. Baluta, S. S. Varykhanov, A. Karandeev, V. Osipov, Y. Rykov, B. Chetverushkin
Extracting Factual Information about the Pandemic from Open Internet Sources
Mathematical Biology and Bioinformatics, published 2022-12-04. DOI: 10.17537/2022.17.423
Abstract
Multi-agent models of the spread of infectious diseases require a large amount of heterogeneous source data, most of which is not directly accessible. A key problem in designing such models is therefore the development of tools for obtaining data from various sources. This article presents approaches for extracting, from text messages published on the Internet, both the parameter values that describe the functioning of the simulated society and statistical data on the development of the pandemic. The proposed method and its software implementation provide intelligent search of open-source information on the Internet and processing of unstructured data. The data collected in this way are used to set the parameters of a mathematical model, which makes it possible to study various scenarios and predict the progress of the epidemic in specific regions. The proposed approach emphasizes two main technologies: regular expressions and analysis with machine-learning methods. Regular expressions allow high-speed text processing, but their applicability is limited by a strong dependence on context. Machine learning can adapt to the informational context of a message, but at the cost of relatively long analysis times. To improve the accuracy of the analysis and to offset the shortcomings of each approach, ways of combining the two technologies are proposed. The article presents the obtained results of optimizing the algorithms for obtaining the necessary data.
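The regular-expression side of such a pipeline can be illustrated with a minimal sketch. This is not the authors' implementation: the pattern, function name, and the assumption of English-language messages are all illustrative, showing only how a fast, context-dependent regex pass might pull reported case counts out of free text.

```python
import re

# Hypothetical pattern for phrases like "1,234 new confirmed cases".
# Its context dependence is the limitation noted in the abstract: a
# differently worded report would need a different pattern.
CASE_PATTERN = re.compile(
    r"(\d[\d\s,]*)\s+(?:new\s+)?(?:confirmed\s+)?cases",
    re.IGNORECASE,
)

def extract_case_counts(text: str) -> list[int]:
    """Return every case count matched in a text message."""
    counts = []
    for match in CASE_PATTERN.finditer(text):
        # Strip thousands separators and stray whitespace before parsing.
        digits = re.sub(r"[\s,]", "", match.group(1))
        counts.append(int(digits))
    return counts

print(extract_case_counts(
    "The region reported 1,234 new cases on Monday; 56 cases were severe."
))
# → [1234, 56]
```

In a combined pipeline of the kind the abstract describes, a cheap pass like this could triage messages, with machine-learning analysis reserved for texts the patterns fail to match.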