Automation of Open Sources Data Processing for the Security Assessment
Pub Date: 2019-09-01 | DOI: 10.1109/IVMEM.2019.00016
M. Poltavtseva, D.A. Bazarnova
This work is devoted to automating security assessment on the basis of open-source data. The authors analyze prior work in this area and consider the problems of using popular search engines for automatic data collection and for determining attacker awareness. Data analysis is performed using named entity recognition (NER). The article presents the developed parameters of the recognition system and the training sample, and it describes a data search and processing method together with an analysis of its effectiveness in detecting, in open sources, the names of software and hardware belonging to an organization or object.
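The paper does not include its recognizer's code; as a rough illustration of the detection step, the sketch below pulls candidate software/hardware names out of free text with a pretrained NER pipeline. spaCy, the model name, and the PRODUCT/ORG labels are all assumptions standing in for the authors' custom-trained system.

```python
# Illustrative sketch only: extract product/organization mentions with a
# pretrained NER model. The paper trains its own recognizer with custom
# parameters; spaCy and the labels used here are assumptions.
import spacy

nlp = spacy.load("en_core_web_sm")  # small pretrained English pipeline

def find_tech_mentions(text: str) -> list[tuple[str, str]]:
    """Return (entity text, label) pairs likely naming software/hardware."""
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents
            if ent.label_ in {"PRODUCT", "ORG"}]

print(find_tech_mentions("The plant runs Siemens SIMATIC S7 controllers."))
```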
{"title":"Automation of Open Sources Data Processing for the Security Assessment","authors":"M. Poltavtseva, D.A. Bazarnova","doi":"10.1109/IVMEM.2019.00016","DOIUrl":"https://doi.org/10.1109/IVMEM.2019.00016","url":null,"abstract":"The work is devoted to the automation of security assessment on the basis of open source data. The authors analyzed the work in this area and considered problems of the popular search engines use for automatic data collection and determining the attacker awareness. The authors use data analysis using Named Entity recognition The article presents the developed parameters of the recognition system and the training sample. It presents a data search and processing method and analysis of its effectiveness in detecting in open sources the software and hardware names, where the software or hardware belongs to the organization or object.","PeriodicalId":166102,"journal":{"name":"2019 Ivannikov Memorial Workshop (IVMEM)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127369438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of Anomaly Detection Methods in the Housing and Utility Infrastructure Data
Pub Date: 2019-09-01 | DOI: 10.1109/IVMEM.2019.00023
I. Shanin, S. Stupnikov, V. Zakharov
Efficient and timely fault detection is a significant problem due to the intensifying use of modern technological solutions in machine condition monitoring. This work is carried out as part of a project aimed at developing software solutions for a housing and utility condition monitoring system. An experimental setup was designed and assembled to study the operating modes of basic housing infrastructure elements. The setup includes electric pumps, power transformers, ventilation and air conditioning (HVAC) systems, heaters and electric boilers, and every element is equipped with various sensors. Sensor readings were gathered, processed and analyzed, and the resulting dataset was used to fit statistical and probabilistic models such as linear regression and hidden Markov models in order to classify regular and faulty operating modes of the equipment. Nine classes of equipment malfunction were modeled; these models are intended to serve as a theoretical basis for the design of industrial housing and utility condition monitoring systems.
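As a minimal sketch of the modeling step, the snippet below fits a hidden Markov model to sensor readings from normal operation and flags windows whose likelihood falls well below the baseline. hmmlearn, the synthetic data, and the margin threshold are assumptions; the authors' models and nine malfunction classes are not reproduced here.

```python
# Minimal sketch, assuming hmmlearn: fit an HMM to "normal" sensor data,
# then flag windows whose average log-likelihood is far below baseline.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 3))       # stand-in sensor readings
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
model.fit(normal)

baseline = model.score(normal) / len(normal)        # per-sample log-likelihood

def is_faulty(window: np.ndarray, margin: float = 5.0) -> bool:
    """Flag a window whose average log-likelihood drops below baseline."""
    return model.score(window) / len(window) < baseline - margin

print(is_faulty(rng.normal(4.0, 2.0, size=(50, 3))))  # drifted sensors -> True
```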
{"title":"Application of Anomaly Detection Methods in the Housing and Utility Infrastructure Data","authors":"I. Shanin, S. Stupnikov, V. Zakharov","doi":"10.1109/IVMEM.2019.00023","DOIUrl":"https://doi.org/10.1109/IVMEM.2019.00023","url":null,"abstract":"Efficient and timely fault detection is a significant problem due to the intensifying use of modern technological solutions in machine condition monitoring. This work is carried out as part of a project that is aimed at development of software solutions for a housing and utility condition monitoring system. An experimental setup was designed and assembled for the study of basic housing infrastructure elements operating modes. The setup includes electric pumps, power transformers, ventilation and air conditioning systems (HVAC), heaters and electric boilers. Every element is equipped with various sensors. Sensor readings were gathered, processed and analyzed. This dataset was used to fit statistical and probabilistic models such as linear regression and Hidden Markov model in order to classify regular and faulty operating modes of equipment. Nine classes of equipment malfunction were modeled, these models are intended to be used as a theoretical basis for the design of industrial housing and utility condition monitoring systems.","PeriodicalId":166102,"journal":{"name":"2019 Ivannikov Memorial Workshop (IVMEM)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127770869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine Code Caching in PostgreSQL Query JIT-Compiler
Pub Date: 2019-09-01 | DOI: 10.1109/IVMEM.2019.00009
M. Pantilimonov, R. Buchatskiy, R. Zhuykov, E. Sharygin, D. Melnik
As the efficiency of main and external memory grows and hardware costs decrease, the performance of database management systems (DBMS) on certain kinds of queries is increasingly determined by CPU characteristics and how the CPU is utilized. Relational DBMS use diverse execution models to run SQL queries. These models have different properties, but all of them suffer from substantial overhead during query plan interpretation. The overhead comes from indirect calls to handler functions, runtime checks and a large number of branch instructions. One way to address this problem is dynamic query compilation, which pays off only when query interpretation time exceeds the combined time of compilation and optimized machine code execution. This requirement is satisfied only when the amount of data to be processed is large enough: if query interpretation takes milliseconds, the cost of dynamic compilation can be hundreds of times greater than the execution time of the generated machine code. To amortize the cost of dynamic compilation, the generated machine code has to be reused in subsequent executions, saving the expense of compilation and optimization. In this paper, we examine a method of machine code caching in our query JIT-compiler for the DBMS PostgreSQL. The proposed method allows us to eliminate the compilation overhead. The results show that dynamic compilation of queries with machine code caching gives a significant speedup on OLTP queries.
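The actual implementation lives in PostgreSQL's C/LLVM JIT infrastructure; the toy sketch below only illustrates the caching idea itself: key generated code by a fingerprint of the normalized query plan so that structurally identical queries skip recompilation. Python's built-in compile() stands in for LLVM code generation, and the plan strings are hypothetical.

```python
# Toy sketch of the caching idea: machine code is keyed by a fingerprint of
# the query plan, so repeated structurally identical queries reuse it.
# compile() is a stand-in for LLVM codegen; the real work is in PostgreSQL.
import hashlib

_code_cache: dict[str, object] = {}

def plan_fingerprint(plan: str) -> str:
    """Hash a normalized plan (constants assumed already stripped)."""
    return hashlib.sha256(plan.encode()).hexdigest()

def get_compiled(plan: str, source: str):
    """Return cached code for the plan, compiling only on a cache miss."""
    key = plan_fingerprint(plan)
    if key not in _code_cache:                  # pay compilation cost once
        _code_cache[key] = compile(source, "<query>", "exec")
    return _code_cache[key]

code = get_compiled("SeqScan(t) -> Agg", "result = sum(range(10))")
ns = {}; exec(code, ns); print(ns["result"])    # repeat calls hit the cache
```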
{"title":"Machine Code Caching in PostgreSQL Query JIT-Compiler","authors":"M. Pantilimonov, R. Buchatskiy, R. Zhuykov, E. Sharygin, D. Melnik","doi":"10.1109/IVMEM.2019.00009","DOIUrl":"https://doi.org/10.1109/IVMEM.2019.00009","url":null,"abstract":"As the efficiency of main and external memory grows, alongside with decreasing hardware costs, the performance of database management systems (DBMS) on certain kinds of queries is more determined by CPU characteristics and the way it is utilized. Relational DBMS utilize diverse execution models to run SQL queries. Those models have different properties, but in either way suffer from substantial overhead during query plan interpretation. The overhead comes from indirect calls to handler functions, runtime checks and large number of branch instructions. One way to solve this problem is dynamic query compilation that is reasonable only in those cases when query interpretation time is larger than the time of compilation and optimized machine code execution. This requirement can be satisfied only when the amount of data to be processed is large enough. If query interpretation takes milliseconds to finish, then the cost of dynamic compilation can be hundreds of times more than the execution time of generated machine code. To pay off the cost of dynamic compilation, the generated machine code has to be reused in subsequent executions, thus saving the cost of code compilation and optimization. In this paper, we examine the method of machine code caching in our query JIT-compiler for DBMS PostgreSQL. The proposed method allows us to eliminate compilation overhead. The results show that dynamic compilation of queries with machine code caching feature gives a significant speedup on OLTP queries.","PeriodicalId":166102,"journal":{"name":"2019 Ivannikov Memorial Workshop (IVMEM)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114184201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recovery of High-Level Intermediate Representations of Algorithms from Binary Code
Pub Date: 2019-09-01 | DOI: 10.1109/IVMEM.2019.00015
A. Bugerya, I. Kulagin, V. Padaryan, M. A. Solovev, A. Tikhonov
One of the tasks of binary code security analysis is the detection of undocumented features in software. This task is hard to automate and requires the participation of a cybersecurity expert. The way the algorithm under analysis is represented strongly determines the analysis effort and the quality of its results. Existing intermediate representations and languages are intended for software that either carries out optimizing transformations or analyzes binary code; such representations are unsuitable for manual data flow analysis. This paper proposes a high-level, hierarchical, flowchart-based representation of a program algorithm, together with an algorithm for its construction. The proposed representation is based on a hypergraph and supports both automatic and manual data flow analysis at different levels of detail. The hypergraph nodes represent functions. Every node contains a set of other nodes, the fragments; a fragment is a linear sequence of instructions that contains no call or ret instructions. Edges represent data flows between nodes and correspond to memory buffers and registers. In the future this representation can be used to implement automatic analysis algorithms. An approach is also proposed for increasing the quality of the developed representation by grouping single data flows into one flow connecting logical algorithm modules.
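A minimal data-structure sketch of the described representation is given below: function nodes contain fragment nodes, and edges carry data flows through memory buffers or registers. All class and field names are illustrative assumptions, not the paper's actual types.

```python
# Sketch of the hierarchical representation: function nodes hold fragments
# (linear instruction runs without call/ret); edges are data flows that
# travel through a memory buffer or register. Names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Fragment:
    instructions: list[str]            # linear run, no call/ret inside

@dataclass
class FunctionNode:
    name: str
    fragments: list[Fragment] = field(default_factory=list)

@dataclass
class DataFlowEdge:
    src: str                           # producing function
    dst: str                           # consuming function
    carrier: str                       # e.g. a buffer address or "rax"

nodes = {"parse": FunctionNode("parse", [Fragment(["mov", "add"])])}
edges = [DataFlowEdge("parse", "decode", "rax")]
print(nodes["parse"].fragments[0].instructions, edges[0].carrier)
```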
{"title":"Recovery of High-Level Intermediate Representations of Algorithms from Binary Code","authors":"A. Bugerya, I. Kulagin, V. Padaryan, M. A. Solovev, A. Tikhonov","doi":"10.1109/IVMEM.2019.00015","DOIUrl":"https://doi.org/10.1109/IVMEM.2019.00015","url":null,"abstract":"One of the tasks of binary code security analysis is detection of undocumented features in software. This task is hard to automate, and it requires participation of a cybersecurity expert. The way of representation of the algorithm under analysis strongly determines the analysis effort and quality of its results. Existing intermediate representations and languages are intended for use in software that either carries out optimizing transformations or analyzes binary code. Such representations and intermediate languages are unsuitable for manual data flow analysis. This paper proposes a high-level hierarchical flowchart-based representation of a program algorithm as well as an algorithm for its construction. The proposed representation is based on a hypergraph and it allows both automatic and manual data flow analysis on different detail levels. The hypergraph nodes represent functions. Every node contains a set of other nodes which are fragments. The fragment is a linear sequence of instructions that does not contain call and ret instructions. Edges represent data flows between nodes and correspond to memory buffers and registers. In the future this representation can be used to implement automatic analysis algorithms. An approach is proposed to increasing quality of the developed algorithm representation using grouping of single data flows into one flow connecting logical algorithm modules.","PeriodicalId":166102,"journal":{"name":"2019 Ivannikov Memorial Workshop (IVMEM)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131275043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of Program Patches Nature and Searching for Unpatched Code Fragments
Pub Date: 2019-09-01 | DOI: 10.1109/IVMEM.2019.00014
Mariam Arutunian, H. Aslanyan, V. Vardanyan, V. Sirunyan, S. Kurmangaleev, S. Gaissaryan
Software developers often copy and paste code within a project. Because the initial code fragment may contain defects, this can propagate defects across the project. Software changes in a new version (patches) usually contain bug fixes, which can be used to detect similar defects elsewhere in a project. The purpose of this work is to develop a method for analyzing the nature of patches between versions of executables and finding unpatched code fragments. First, two versions of an executable are compared to find common and changed parts of the code. The method then determines which patches are likely bug fixes. The final step is the detection of unpatched code fragments: it finds all clones of the buggy code fragments identified in the previous step that remain unpatched in the new version of the program. These fragments possibly still contain the defects. The developed tool can analyze programs for several architectures (x86, x86-64, ARM, MIPS, PowerPC). The experimental results show an average true positive rate of 73% on the CoreBench test suite.
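As a toy illustration of that final step, the sketch below scans the fragments of a new version for near-clones of a pre-patch buggy fragment. difflib similarity over instruction mnemonics is a stand-in for the paper's binary clone detection, and the 0.85 threshold is an assumption.

```python
# Sketch of the search step: after a patch fixes `buggy`, look for fragments
# in the new version that still resemble the pre-patch buggy code.
# difflib stands in for real binary clone detection; threshold is assumed.
from difflib import SequenceMatcher

def similarity(a: list[str], b: list[str]) -> float:
    return SequenceMatcher(None, a, b).ratio()

def find_unpatched(buggy: list[str], fragments: dict[str, list[str]],
                   threshold: float = 0.85) -> list[str]:
    """Return names of fragments still resembling the buggy original."""
    return [name for name, insns in fragments.items()
            if similarity(buggy, insns) >= threshold]

buggy = ["load r1", "add r1 r2", "store r1"]          # fixed elsewhere
new_version = {"f1": ["load r1", "add r1 r2", "store r1"],   # missed clone
               "f2": ["load r1", "cmp r1 r2", "jmp out"]}
print(find_unpatched(buggy, new_version))             # -> ['f1']
```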
{"title":"Analysis of Program Patches Nature and Searching for Unpatched Code Fragments","authors":"Mariam Arutunian, H. Aslanyan, V. Vardanyan, V. Sirunyan, S. Kurmangaleev, S. Gaissaryan","doi":"10.1109/IVMEM.2019.00014","DOIUrl":"https://doi.org/10.1109/IVMEM.2019.00014","url":null,"abstract":"Software developers often copy and paste code within a project. Due to the possible existence of defects in the initial code fragment, this can lead to defects propagation across the project. Software changes in new version (patches) usually contain bug fixes, which can be used for detecting similar defects in a project. The purpose of this work is to develop method for analyzing the nature of patches between versions of executables and finding unpatched code fragments. At first, two versions of executables are compared for finding common and changed parts of code. Then, the method determines patches that can possibly be fixes of bugs. The final step is detection of unpatched code fragments. It is based on finding all clones of the buggy code fragments found in previous step which are not patched in the new version of the program. These fragments possibly contain defects. Developed tool allows to analyze programs of several architectures (x86, x86-64, arm, mips, powerpc). The experimental results show that the average percentage of true positive rate on the CoreBench test suite is 73%.","PeriodicalId":166102,"journal":{"name":"2019 Ivannikov Memorial Workshop (IVMEM)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127768201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Text Recognition on Images from Social Media
Pub Date: 2019-09-01 | DOI: 10.1109/IVMEM.2019.00006
M. Akopyan, O.V. Belyaeva, T.P. Plechov, D. Turdakov
The text recognition problem has been studied for many years, and a few OCR engines exist that successfully solve it for many languages. However, these engines work well only with high-quality scanned images. Social networks today contain a large number of images whose embedded text needs to be analyzed and recognized, and their quality varies widely: text mixed with graphics, poor-quality photos taken with smartphone cameras, etc. This paper presents a text extraction pipeline that addresses text extraction from images of varying quality collected from social media. Input images are categorized into classes, and class-specific preprocessing (illumination improvement, text localization, etc.) is applied to them. An OCR engine is then used to recognize the text. We present the results of our experiments on a dataset collected from social media.
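A compressed sketch of such a pipeline is shown below: crude preprocessing followed by a stock OCR engine. Pillow, pytesseract, the autocontrast step as a stand-in for illumination improvement, and the file name are all assumptions; the authors' class-specific preprocessing is richer.

```python
# Illustrative pipeline in the spirit of the paper: preprocess, then OCR.
# Autocontrast is a crude stand-in for "illumination improvement" on poor
# camera shots; the authors' classifier and engine may differ.
from PIL import Image, ImageOps
import pytesseract

def preprocess(img: Image.Image) -> Image.Image:
    gray = ImageOps.grayscale(img)
    return ImageOps.autocontrast(gray)

def extract_text(path: str) -> str:
    img = preprocess(Image.open(path))
    return pytesseract.image_to_string(img)

print(extract_text("post.jpg"))   # hypothetical social-media image
```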
{"title":"Text Recognition on Images from Social Media","authors":"M. Akopyan, O.V. Belyaeva, T.P. Plechov, D. Turdakov","doi":"10.1109/IVMEM.2019.00006","DOIUrl":"https://doi.org/10.1109/IVMEM.2019.00006","url":null,"abstract":"Text recognition problem has been studied many years. A few OCR engines exist, which successfully solve the problem for many languages. But these engines work well only with high quality scanned images. Social networks nowadays contain large number of images that need to analyze and recognize the text contained in them, but they have different quality: mixed text with images, poor quality images taken from camera of smartphone, etc. In this paper a text extraction pipeline is provided to address text extraction from various quality images collected form social media. Input images are categorized into different classes and then class specific preprocessing is applied to them (illumination improvement, text localization etc.). Then OCR engine used to recognize text. In the paper we present results of our experiments on dataset collected from social media.","PeriodicalId":166102,"journal":{"name":"2019 Ivannikov Memorial Workshop (IVMEM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126856089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Domain-Specific Language for Infrastructure as Code
Pub Date: 2019-09-01 | DOI: 10.1109/IVMEM.2019.00012
Valeriya Shvetcova, O. Borisenko, M. Polischuk
With the increasing number of cloud providers and offered cloud services, effective deployment and portability of software application infrastructures in cloud environments is becoming more essential. The paper provides a method for the unified description and creation of infrastructures, including hardware and software requirements. It describes the developed Ansible module, which deploys the required infrastructure in the specified cloud environment according to such a description. The module uses the model and language of the TOSCA standard (Topology and Orchestration Specification for Cloud Applications) to describe the node requirements of the infrastructure, and the orchestration tool Ansible to deploy the infrastructure in the cloud environment. The paper also provides instructions for adding support for a new provider to the developed module and describes the mappings between TOSCA elements and the resources provided by OpenStack and Amazon.
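As a toy sketch of the mapping the paper describes, the snippet below translates TOSCA node types into provider-specific resources (here, names of classic Ansible cloud modules) before deployment. The table entries and the resolve() helper are illustrative assumptions, not the module's actual mapping.

```python
# Toy sketch: map TOSCA node types to provider resources before handing
# them to Ansible. Entries are assumptions, not the module's real table.
TOSCA_TO_PROVIDER = {
    "openstack": {"tosca.nodes.Compute": "os_server",
                  "tosca.nodes.network.Network": "os_network"},
    "amazon":    {"tosca.nodes.Compute": "ec2_instance",
                  "tosca.nodes.network.Network": "ec2_vpc_net"},
}

def resolve(node_type: str, provider: str) -> str:
    """Pick the provider resource implementing a TOSCA node type."""
    try:
        return TOSCA_TO_PROVIDER[provider][node_type]
    except KeyError as exc:
        raise ValueError(f"no mapping for {node_type} on {provider}") from exc

print(resolve("tosca.nodes.Compute", "openstack"))   # -> os_server
```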
{"title":"Domain-Specific Language for Infrastructure as Code","authors":"Valeriya Shvetcova, O. Borisenko, M. Polischuk","doi":"10.1109/IVMEM.2019.00012","DOIUrl":"https://doi.org/10.1109/IVMEM.2019.00012","url":null,"abstract":"With increasing number of cloud providers and offered cloud services, the need of effective deployment and portability of software application infrastructures in the cloud environments is becoming more essential. The paper provides a method of unified description and creation of infrastructures, including hardware and software requirements. It describes the developed Ansible module which deploys the required infrastructure in specified cloud environment depending on specific description. The module uses model and language of the TOSCA standard (Topology and Orchestration Specification for Cloud Applications) to describe the relevant node requirements of infrastructure and orchestration tool Ansible to deploy infrastructure in cloud environment. The paper also provides instructions to add new provider support in developed module and describes the maps between TOSCA elements and resources provided by OpenStack and Amazon. With increasing number of cloud providers and offered cloud services, the need of effective deployment and portability of software application infrastructures in the cloud environments is becoming more essential. The paper provides a method of unified description and creation of infrastructures, including hardware and software requirements. It describes the developed Ansible module which deploys the required infrastructure in specified cloud environment depending on specific description. The module uses model and language of the TOSCA standard (Topology and Orchestration Specification for Cloud Applications) to describe the relevant node requirements of infrastructure and orchestration tool Ansible to deploy infrastructure in cloud environment. The paper also provides instructions to add new provider support in developed module and describes the maps between TOSCA elements and resources provided by OpenStack and Amazon.","PeriodicalId":166102,"journal":{"name":"2019 Ivannikov Memorial Workshop (IVMEM)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124587532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constructing Hypothesis Lattices for Virtual Experiments in Data Intensive Research
Pub Date: 2019-09-01 | DOI: 10.1109/IVMEM.2019.00008
D. Kovalev, S. Stupnikov
Data intensive research is increasingly dependent on the explicit use of hypotheses, simulations and computational models. This paper is devoted to the development of an infrastructure for the explicit management of virtual experiments and research hypotheses; in particular, the construction of hypothesis lattices is considered. Basic concepts for working with research hypotheses are provided, such as hypothesis structure, its basic properties, and the causal correspondence of equations and variables over the defined structures. The hypothesis lattice is presented as a graph whose vertices are hypotheses and whose edges are the derived-by relationships between hypotheses. An algorithm for constructing hypothesis lattices in virtual experiments is presented, along with a proof of a proposition on the algorithm's complexity. The developed method for constructing hypothesis lattices is implemented as a program component in Python 3.
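As a small illustration of lattice construction, the sketch below treats a hypothesis as a set of equations and draws an edge from H1 to H2 when H2 strictly extends H1. The subset test is an illustrative stand-in for the paper's causal-correspondence check between equations and variables.

```python
# Toy lattice construction: vertices are hypotheses (sets of equations);
# a directed edge a -> b means hypothesis b strictly extends hypothesis a.
# The subset test stands in for the paper's causal-correspondence check.
hypotheses = {
    "H1": frozenset({"eq_mass"}),
    "H2": frozenset({"eq_mass", "eq_momentum"}),
    "H3": frozenset({"eq_mass", "eq_energy"}),
}

def build_lattice(hs: dict[str, frozenset]) -> list[tuple[str, str]]:
    """Return directed derived-by edges between hypotheses."""
    return [(a, b) for a in hs for b in hs if hs[a] < hs[b]]

print(build_lattice(hypotheses))  # -> [('H1', 'H2'), ('H1', 'H3')]
```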
{"title":"Constructing Hypothesis Lattices for Virtual Experiments in Data Intensive Research","authors":"D. Kovalev, S. Stupnikov","doi":"10.1109/IVMEM.2019.00008","DOIUrl":"https://doi.org/10.1109/IVMEM.2019.00008","url":null,"abstract":"Data intensive research is increasingly dependent on the explicit use of hypotheses, simulations and computational models. This paper is devoted to the development of infrastructure for explicit management of virtual experiments and research hypotheses. In particular, hypothesis lattices construction issues are considered. Basic concepts for working with research hypotheses such as hypotheses structure, its basic properties, causal correspondence of equations and variables over the defined structures are provided. The notion of hypotheses lattice is presented as a graph whose vertices are hypotheses, edges are the derived by relationship between hypotheses. An algorithm for constructing hypothesis lattices in virtual experiments is presented. A proof of the proposition on the complexity of the algorithm for constructing a lattice of hypotheses is provided. The developed method for constructing hypothesis lattices is implemented as a program component in the Python3 language.","PeriodicalId":166102,"journal":{"name":"2019 Ivannikov Memorial Workshop (IVMEM)","volume":"603 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114860202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OpenFOAM Solver Based on Regularized Hydrodynamic Equations for High Performance Computing
Pub Date: 2019-09-01 | DOI: 10.1109/IVMEM.2019.00022
Maxim V. Shatskiy, D. Ryazanov, K. Vatutin, Michael D. Kalugin, I. Sibgatullin
In this paper we investigate the scaling of parallel performance for an implementation of the quasi-hydrodynamic (QHD) approach as an OpenFOAM solver. The time-dependent partial differential equations are discretized using the finite volume method (FVM). As a test hydrodynamic problem we take internal wave generation in a bounded tank of trapezoidal shape, with a set of parameters that allows internal wave attractors to evolve. Proper orthogonal decomposition (POD) was applied to analyze and compare the 2D and full 3D simulations.
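Proper orthogonal decomposition reduces, in essence, to an SVD of the mean-subtracted snapshot matrix; the sketch below shows that computation on stand-in data. The random snapshots replace exported OpenFOAM fields and are an assumption for illustration.

```python
# Minimal POD sketch: stack flow snapshots as columns, subtract the mean
# field, and take the SVD; left singular vectors are the POD modes used to
# compare 2D and 3D runs. Random data stands in for OpenFOAM fields.
import numpy as np

rng = np.random.default_rng(1)
snapshots = rng.normal(size=(1000, 40))   # 1000 grid values x 40 time steps
snapshots -= snapshots.mean(axis=1, keepdims=True)   # subtract mean field

modes, sigma, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = sigma**2 / np.sum(sigma**2)      # relative energy per POD mode
print(f"first 3 modes capture {energy[:3].sum():.1%} of the variance")
```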
{"title":"OpenFOAM Solver Based on Regularized Hydrodynamic Equations for High Performance Computing","authors":"Maxim V. Shatskiy, D. Ryazanov, K. Vatutin, Michael D. Kalugin, I. Sibgatullin","doi":"10.1109/IVMEM.2019.00022","DOIUrl":"https://doi.org/10.1109/IVMEM.2019.00022","url":null,"abstract":"In the paper we investigate scaling of parallel performance for an implementation of quasi-hydrodynamic (QHD) approach as an OpenFOAM solver. Time-dependent partial differential equations are discretized using Finite volume method (FVM). As a test hydrodynamical problem we take internal wave generation in a bounded tank of trapezoidal shape, and set of parameters, which allows internal wave attractors to evolve. Proper Orthogonal Decomposition was applied to analyze and compare 2D and full 3D simulations.","PeriodicalId":166102,"journal":{"name":"2019 Ivannikov Memorial Workshop (IVMEM)","volume":"135 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124260298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}