
Latest publications: 2019 Ivannikov Memorial Workshop (IVMEM)

Automation of Open Sources Data Processing for the Security Assessment
Pub Date : 2019-09-01 DOI: 10.1109/IVMEM.2019.00016
M. Poltavtseva, D.A. Bazarnova
This work is devoted to automating security assessment on the basis of open-source data. The authors analyze prior work in this area and consider the problems of using popular search engines for automatic data collection and for determining attacker awareness. Data analysis is performed using named entity recognition (NER). The article presents the developed parameters of the recognition system and the training sample, along with a data search and processing method and an analysis of its effectiveness in detecting, in open sources, the names of software and hardware belonging to the organization or object under assessment.
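As an illustrative aside (not from the paper), the extraction step can be sketched with a simple gazetteer matcher standing in for the trained NER model; the product names and labels below are hypothetical:

```python
import re

# Hypothetical gazetteer of software/hardware names tied to the organization
# under assessment (a stand-in for the paper's trained recognizer).
GAZETTEER = {
    "apache httpd": "SOFTWARE",
    "postgresql": "SOFTWARE",
    "cisco asa": "HARDWARE",
}

def extract_entities(text):
    """Return (entity, label) pairs found in open-source text."""
    found = []
    lowered = text.lower()
    for name, label in GAZETTEER.items():
        if re.search(r"\b" + re.escape(name) + r"\b", lowered):
            found.append((name, label))
    return found

hits = extract_entities(
    "The branch office exposes PostgreSQL 12 behind a Cisco ASA firewall."
)
```

A real system would replace the dictionary lookup with a statistical model trained on the labeled sample described in the paper.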
Citations: 0
Application of Anomaly Detection Methods in the Housing and Utility Infrastructure Data
Pub Date : 2019-09-01 DOI: 10.1109/IVMEM.2019.00023
I. Shanin, S. Stupnikov, V. Zakharov
Efficient and timely fault detection is a significant problem due to the intensifying use of modern technological solutions in machine condition monitoring. This work is carried out as part of a project aimed at developing software solutions for a housing and utility condition monitoring system. An experimental setup was designed and assembled to study the operating modes of basic housing infrastructure elements. The setup includes electric pumps, power transformers, heating, ventilation and air conditioning (HVAC) systems, heaters, and electric boilers, each equipped with various sensors. Sensor readings were gathered, processed, and analyzed, and the resulting dataset was used to fit statistical and probabilistic models such as linear regression and hidden Markov models in order to classify regular and faulty operating modes of equipment. Nine classes of equipment malfunction were modeled; these models are intended to serve as a theoretical basis for the design of industrial housing and utility condition monitoring systems.
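As an illustrative aside (not from the paper), the linear-regression branch of the approach can be sketched as a residual test: fit on normal-mode sensor readings, then flag readings whose residual is far outside the noise band. The pump data below is synthetic:

```python
import random
import statistics

def fit_linear_model(xs, ys):
    """Least-squares fit ys ~ a*xs + b on (mostly) normal-operation data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return a, my - a * mx

def flag_anomalies(xs, ys, a, b, k=3.0):
    """Flag readings whose residual exceeds k standard deviations."""
    residuals = [y - (a * x + b) for x, y in zip(xs, ys)]
    sigma = statistics.pstdev(residuals)
    return [abs(r) > k * sigma for r in residuals]

# Synthetic pump: power grows roughly linearly with flow rate.
random.seed(0)
flow = [1.0 + i * 0.05 for i in range(200)]
power = [2.5 * f + 1.0 + random.gauss(0.0, 0.05) for f in flow]
power[120] += 5.0  # injected fault reading
a, b = fit_linear_model(flow, power)
mask = flag_anomalies(flow, power, a, b)
```

The paper's hidden Markov models additionally capture the sequence of operating modes, which a pointwise residual test cannot.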
Citations: 2
Machine Code Caching in PostgreSQL Query JIT-Compiler
Pub Date : 2019-09-01 DOI: 10.1109/IVMEM.2019.00009
M. Pantilimonov, R. Buchatskiy, R. Zhuykov, E. Sharygin, D. Melnik
As the efficiency of main and external memory grows and hardware costs decrease, the performance of database management systems (DBMS) on certain kinds of queries is increasingly determined by CPU characteristics and how the CPU is utilized. Relational DBMSs use diverse execution models to run SQL queries. These models have different properties, but either way they suffer from substantial overhead during query plan interpretation. The overhead comes from indirect calls to handler functions, runtime checks, and a large number of branch instructions. One way to solve this problem is dynamic query compilation, which is reasonable only when query interpretation time exceeds the combined time of compilation and optimized machine code execution. This requirement is satisfied only when the amount of data to be processed is large enough: if query interpretation finishes in milliseconds, the cost of dynamic compilation can be hundreds of times the execution time of the generated machine code. To pay off the cost of dynamic compilation, the generated machine code has to be reused in subsequent executions, saving the cost of code compilation and optimization. In this paper, we examine a method of machine code caching in our query JIT-compiler for the PostgreSQL DBMS. The proposed method eliminates compilation overhead, and the results show that dynamic compilation of queries with machine code caching gives a significant speedup on OLTP queries.
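As an illustrative aside (not the paper's implementation, which generates real machine code via LLVM), the cache-and-reuse idea can be sketched as: fingerprint the query plan, compile on first sight, and hand back the same compiled executor on every subsequent execution. Names and the fake "codegen" below are hypothetical:

```python
import hashlib

class JITCache:
    """Cache of 'compiled' query executors keyed by a plan fingerprint.

    Stand-in for machine-code caching: expensive codegen happens once per
    distinct plan; subsequent executions reuse the cached executor.
    """

    def __init__(self):
        self._cache = {}
        self.compilations = 0

    def _fingerprint(self, plan):
        return hashlib.sha256(plan.encode()).hexdigest()

    def get_executor(self, plan):
        key = self._fingerprint(plan)
        if key not in self._cache:
            self.compilations += 1  # the expensive step, paid only once
            # Fake codegen: build a filter function for this plan.
            self._cache[key] = eval("lambda rows: [r for r in rows if r > 10]")
        return self._cache[key]

cache = JITCache()
ex1 = cache.get_executor("SeqScan(t) -> Filter(x > 10)")
ex2 = cache.get_executor("SeqScan(t) -> Filter(x > 10)")  # cache hit
```

For OLTP workloads, where the same short queries repeat constantly, this amortization is exactly what makes JIT compilation pay off.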
Citations: 3
Recovery of High-Level Intermediate Representations of Algorithms from Binary Code
Pub Date : 2019-09-01 DOI: 10.1109/IVMEM.2019.00015
A. Bugerya, I. Kulagin, V. Padaryan, M. A. Solovev, A. Tikhonov
One of the tasks of binary code security analysis is detecting undocumented features in software. This task is hard to automate and requires the participation of a cybersecurity expert. The representation of the algorithm under analysis strongly determines the analysis effort and the quality of its results. Existing intermediate representations and languages are intended for software that either carries out optimizing transformations or analyzes binary code; such representations and intermediate languages are unsuitable for manual data flow analysis. This paper proposes a high-level, hierarchical, flowchart-based representation of a program algorithm as well as an algorithm for its construction. The proposed representation is based on a hypergraph and allows both automatic and manual data flow analysis at different levels of detail. The hypergraph nodes represent functions. Every node contains a set of fragments, where a fragment is a linear sequence of instructions containing no call or ret instructions. Edges represent data flows between nodes and correspond to memory buffers and registers. In the future this representation can be used to implement automatic analysis algorithms. An approach is also proposed for improving the quality of the constructed representation by grouping single data flows into one flow connecting logical algorithm modules.
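As an illustrative aside (not the paper's actual data model), the structure described above, function nodes holding call-free fragments, with edges carrying register or buffer data flows, might be modeled like this:

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    """Linear sequence of instructions with no call/ret inside."""
    instructions: list

@dataclass
class FunctionNode:
    """Hypergraph node: a function, containing its fragments."""
    name: str
    fragments: list = field(default_factory=list)

@dataclass
class DataFlowEdge:
    """Edge between nodes; the carrier is a register or memory buffer."""
    src: str
    dst: str
    carrier: str

class AlgorithmGraph:
    def __init__(self):
        self.nodes = {}
        self.edges = []

    def add_function(self, name):
        self.nodes[name] = FunctionNode(name)
        return self.nodes[name]

    def add_flow(self, src, dst, carrier):
        self.edges.append(DataFlowEdge(src, dst, carrier))

    def flows_into(self, name):
        """Incoming data flows, the starting point of manual analysis."""
        return [e for e in self.edges if e.dst == name]

g = AlgorithmGraph()
g.add_function("parse_input").fragments.append(Fragment(["mov", "add"]))
g.add_function("checksum")
g.add_flow("parse_input", "checksum", "rdi")
```

Grouping several such edges into one logical flow, as the paper proposes, would amount to merging edges that connect the same pair of logical modules.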
Citations: 0
Analysis of Program Patches Nature and Searching for Unpatched Code Fragments
Pub Date : 2019-09-01 DOI: 10.1109/IVMEM.2019.00014
Mariam Arutunian, H. Aslanyan, V. Vardanyan, V. Sirunyan, S. Kurmangaleev, S. Gaissaryan
Software developers often copy and paste code within a project. Because the initial code fragment may contain defects, this can propagate defects across the project. Software changes in a new version (patches) usually contain bug fixes, which can be used to detect similar defects elsewhere in a project. The purpose of this work is to develop a method for analyzing the nature of patches between versions of executables and finding unpatched code fragments. First, two versions of an executable are compared to find common and changed parts of the code. Then the method identifies patches that are likely bug fixes. The final step detects unpatched code fragments by finding all clones of the buggy code fragments, found in the previous step, that are not patched in the new version of the program; these fragments possibly still contain the defects. The developed tool can analyze programs for several architectures (x86, x86-64, ARM, MIPS, PowerPC). Experimental results show an average true positive rate of 73% on the CoreBench test suite.
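As an illustrative aside (a heavily simplified stand-in for the paper's binary clone detection), the final step can be sketched as: normalize a known-buggy fragment down to its mnemonic sequence, then search the new version for fragments that still match it. The assembly snippets are invented:

```python
def normalize(instrs):
    """Keep only mnemonics so clones with renamed operands still match."""
    return tuple(i.split()[0] for i in instrs)

def find_unpatched_clones(buggy_fragment, fragments):
    """Return fragments of the new binary that still match the buggy pattern."""
    pattern = normalize(buggy_fragment)
    return [f for f in fragments if normalize(f) == pattern]

# A fragment fixed by the patch in one place (division without a zero check)...
buggy = ["mov eax, [rsi]", "div ebx"]
# ...but a copy-pasted clone elsewhere in the new version was left unpatched.
new_version = [
    ["test ebx, ebx", "jz safe", "div ebx"],  # the patched call site
    ["push rbp", "ret"],
    ["mov ecx, [rdi]", "div edx"],            # unpatched clone
]
clones = find_unpatched_clones(buggy, new_version)
```

A production tool would use far more robust matching (e.g. structural comparison of control and data flow) rather than exact mnemonic sequences.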
Citations: 3
Text Recognition on Images from Social Media
Pub Date : 2019-09-01 DOI: 10.1109/IVMEM.2019.00006
M. Akopyan, O.V. Belyaeva, T.P. Plechov, D. Turdakov
The text recognition problem has been studied for many years, and several OCR engines exist that successfully solve it for many languages. However, these engines work well only with high-quality scanned images. Social networks today contain a large number of images whose text needs to be analyzed and recognized, and these images vary in quality: text mixed with imagery, poor-quality photos taken with smartphone cameras, and so on. This paper provides a text extraction pipeline that addresses text extraction from images of varying quality collected from social media. Input images are categorized into different classes, class-specific preprocessing is applied to them (illumination improvement, text localization, etc.), and then an OCR engine is used to recognize the text. We present the results of our experiments on a dataset collected from social media.
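As an illustrative aside (not the paper's pipeline; real classification, enhancement, and OCR stages are replaced with trivial stand-ins), the classify-then-preprocess-then-recognize structure looks like this:

```python
def classify_image(img):
    """Route an image to a quality class (stand-in for the paper's classifier)."""
    return "low_contrast" if img["contrast"] < 0.5 else "clean"

# Class-specific preprocessing, e.g. illumination improvement for dark photos.
PREPROCESSORS = {
    "low_contrast": lambda img: {**img, "contrast": 1.0},
    "clean": lambda img: img,
}

def run_ocr(img):
    """Stand-in for a real OCR engine call (e.g. Tesseract): the fake engine
    only 'reads' text from images of sufficient contrast."""
    return img["text"] if img["contrast"] >= 0.5 else ""

def extract_text(img):
    cls = classify_image(img)
    return run_ocr(PREPROCESSORS[cls](img))

photo = {"contrast": 0.2, "text": "SALE 50%"}
```

The point of the design is that OCR alone fails on the raw photo, while the routed pipeline recovers the text.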
Citations: 10
Domain-Specific Language for Infrastructure as Code
Pub Date : 2019-09-01 DOI: 10.1109/IVMEM.2019.00012
Valeriya Shvetcova, O. Borisenko, M. Polischuk
With an increasing number of cloud providers and offered cloud services, effective deployment and portability of software application infrastructures in cloud environments is becoming more essential. The paper provides a method for the unified description and creation of infrastructures, including hardware and software requirements. It describes the developed Ansible module, which deploys the required infrastructure in a specified cloud environment according to a given description. The module uses the model and language of the TOSCA standard (Topology and Orchestration Specification for Cloud Applications) to describe the node requirements of the infrastructure, and the orchestration tool Ansible to deploy the infrastructure in the cloud environment. The paper also provides instructions for adding support for new providers to the developed module and describes the maps between TOSCA elements and the resources provided by OpenStack and Amazon.
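As an illustrative aside (the provider maps and task shapes below are hypothetical, not the module's actual tables), the core translation, one provider-agnostic TOSCA-style node template mapped to per-provider resources, can be sketched as:

```python
# Hypothetical mapping from TOSCA-style node types to provider resources.
PROVIDER_MAPS = {
    "openstack": {"tosca.nodes.Compute": "os_server"},
    "amazon": {"tosca.nodes.Compute": "ec2_instance"},
}

def translate(template, provider):
    """Turn a provider-agnostic node template into provider-specific tasks."""
    tasks = []
    for name, node in template["node_templates"].items():
        resource = PROVIDER_MAPS[provider][node["type"]]
        tasks.append({"name": name, "module": resource,
                      "params": node.get("properties", {})})
    return tasks

# One template, deployable to either cloud without modification.
template = {"node_templates": {
    "web_host": {"type": "tosca.nodes.Compute",
                 "properties": {"num_cpus": 2, "mem_size": "4 GB"}}}}
openstack_tasks = translate(template, "openstack")
amazon_tasks = translate(template, "amazon")
```

Adding a new provider then amounts to adding one more entry to the map, which mirrors the extension instructions the paper describes.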
Citations: 3
Constructing Hypothesis Lattices for Virtual Experiments in Data Intensive Research
Pub Date : 2019-09-01 DOI: 10.1109/IVMEM.2019.00008
D. Kovalev, S. Stupnikov
Data intensive research is increasingly dependent on the explicit use of hypotheses, simulations, and computational models. This paper is devoted to the development of infrastructure for the explicit management of virtual experiments and research hypotheses; in particular, the construction of hypothesis lattices is considered. Basic concepts for working with research hypotheses are provided, such as hypothesis structure, its basic properties, and the causal correspondence of equations and variables over the defined structures. A hypothesis lattice is presented as a graph whose vertices are hypotheses and whose edges are derived-by relationships between hypotheses. An algorithm for constructing hypothesis lattices in virtual experiments is presented, together with a proof of a proposition on the complexity of the algorithm. The developed method for constructing hypothesis lattices is implemented as a program component in Python 3.
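As an illustrative aside (not the paper's component; class and method names are invented), a hypothesis graph with derived-by edges and transitive ancestry queries might be sketched as:

```python
class HypothesisLattice:
    """Graph whose vertices are hypotheses and whose edges record that one
    hypothesis is derived from another (illustrative sketch)."""

    def __init__(self):
        self.derived_from = {}  # hypothesis name -> set of parent hypotheses

    def add(self, name, parents=()):
        self.derived_from[name] = set(parents)

    def ancestors(self, name):
        """All hypotheses the given one transitively derives from."""
        seen = set()
        stack = list(self.derived_from.get(name, ()))
        while stack:
            h = stack.pop()
            if h not in seen:
                seen.add(h)
                stack.extend(self.derived_from.get(h, ()))
        return seen

lat = HypothesisLattice()
lat.add("H0")                       # base hypothesis
lat.add("H1", parents=["H0"])      # refinement of H0
lat.add("H2", parents=["H0"])
lat.add("H3", parents=["H1", "H2"])  # joins two lines of derivation
```

The paper's lattice additionally ties each hypothesis to structured equations and variables; here the vertices are bare labels.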
Citations: 1
OpenFOAM Solver Based on Regularized Hydrodynamic Equations for High Performance Computing
Pub Date : 2019-09-01 DOI: 10.1109/IVMEM.2019.00022
Maxim V. Shatskiy, D. Ryazanov, K. Vatutin, Michael D. Kalugin, I. Sibgatullin
In this paper we investigate the scaling of parallel performance for an implementation of the quasi-hydrodynamic (QHD) approach as an OpenFOAM solver. Time-dependent partial differential equations are discretized using the finite volume method (FVM). As a test hydrodynamic problem we take internal wave generation in a bounded tank of trapezoidal shape, with a set of parameters that allows internal wave attractors to evolve. Proper Orthogonal Decomposition was applied to analyze and compare 2D and full 3D simulations.
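As an illustrative aside (not the paper's analysis pipeline; the flow field below is synthetic), Proper Orthogonal Decomposition of a snapshot matrix reduces to an SVD, with singular values giving each mode's energy fraction:

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Proper Orthogonal Decomposition via SVD of the snapshot matrix.

    snapshots: (n_points, n_times) array; each column is a flow-field snapshot.
    Returns the leading spatial modes and their energy fractions.
    """
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    return u[:, :n_modes], energy[:n_modes]

# Synthetic field: one dominant standing-wave mode plus weak noise.
x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 1.0, 32)
rng = np.random.default_rng(1)
field = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * t))
field += 0.01 * rng.standard_normal(field.shape)
modes, energy = pod_modes(field, n_modes=2)
```

For wave-attractor simulations, comparing how the leading modes' energies differ between 2D and 3D runs is one concrete way to quantify the comparison the abstract mentions.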
Citations: 1