Technological Characteristic of Futures Based on Virtual Assets
Pub Date: 2021-12-10. DOI: 10.18523/2617-3808.2021.4.113-116
E. Nevmerzhytsky, Mykola Yeshchenko
A virtual asset is a type of asset that has no material representation, although its value is reflected in a real currency. Due to their nature, the prices of digital assets are usually highly volatile, especially those of futures, which are derivative financial contracts. This is the most important factor contributing to the low usability of digital-based contracts in enterprise operations.

Previously existing virtual assets included photography, logos, illustrations, animations, audiovisual media, etc. However, virtually all such assets required a third-party platform for exchange into currency. The necessity of a mediator trusted by both sides greatly limited the ease of use and ultimately restricted the number of such transactions. Still, the popularity of digital assets only grew, as evidenced by the explosive growth of software applications in the 2000s and of the blockchain-based asset space in the 2010s.

The newest and most promising solution developed is based on cryptoassets. The underlying use of blockchain technology for checking and storing transactions ensures clarity in the value history of virtual assets. Smart contracts written for the Ethereum platform, as an example, provide a highly trustworthy way of expressing the predefined conditions of a certain transaction. This allows safe and calculated enterprise usage and eliminates the need for a mutually trusted third party. The transactions are fully automated and execute as soon as the predefined external conditions are met.

Ethereum was chosen as an exemplary platform due to its high flexibility and the amount of existing development. Even now, further advancements are being explored by its founder and community. Besides Ether, it is also used for non-fungible tokens, decentralized finance, and enterprise blockchain solutions. Another important point is how much more environmentally friendly it is compared to its main competitors, due to the energy efficiency of the mining process enforced by the platform itself. This makes it ideal for responsible usage as well as further research.

This article explores the usage of digital assets and explains the technological background of cryptoassets in order to highlight recent developments in the area of futures based on virtual assets, using as an example a particular Ethereum implementation that offers perpetual futures.
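To make the perpetual-futures mechanism mentioned above concrete, the sketch below illustrates the funding-payment rule that keeps a perpetual contract's price anchored to the spot index. It is a minimal illustration only, not the implementation discussed in the article; the function name, the prices, and the 8-hour funding interval are all assumptions.

```python
# Illustrative sketch of a perpetual-futures funding payment (hypothetical
# numbers; not the article's implementation). Longs pay shorts when the
# perpetual trades above the spot index, and vice versa, which pulls the
# contract price back toward the index.

def funding_payment(position_size: float, perp_price: float,
                    index_price: float, interval_fraction: float) -> float:
    premium_rate = (perp_price - index_price) / index_price
    return position_size * premium_rate * interval_fraction

# A trader long 2 ETH while the perpetual trades 1% above the index pays,
# for one 8-hour interval (1/3 of a day):
print(funding_payment(2.0, 1010.0, 1000.0, 1 / 3))  # ≈ 0.0067 ETH to shorts
```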
{"title":"Technological Characteristic of Futures Based on Virtual Assets","authors":"E. Nevmerzhytsky, Mykola Yeshchenko","doi":"10.18523/2617-3808.2021.4.113-116","DOIUrl":"https://doi.org/10.18523/2617-3808.2021.4.113-116","url":null,"abstract":"A virtual asset is a type of asset which does not have a material representation, although its value is reflected in a real currency. Due to their nature, the price of digital assets is usually highly volatile, especially with futures, which are derivative financial contracts. This is the most important contributing factor to the problem of the low usability of digital-based contracts in enterprise operations.Previously existing virtual assets included photography, logos, illustrations, animations, audiovisual media, etc. However, virtually all of such assets required a third-party platform for exchange to currency. The necessity of having a trusted by both sides mediator greatly limited the ease of use, and ultimately restricted the number of such transactions. Still, popularity of digital assets only grew, as evidenced by an explosive growth of software applications in the 2000s, as well as blockchain-based asset space in the 2010s.The newest and most promising solution developed is based on cryptoassets. Underlying usage of block- chain technology for the transactions checking and storage ensures clarity in virtual assets’ value history. Smart contracts written for the Ethereum platform, as an example, provide a highly trustful way of express- ing predefined conditions of a certain transaction. This allows safe and calculated enterprise usage, and also eliminates the need of having a mutually trusted third-party. The transactions are fully automated and happen at the same time as the pre-defined external conditions are met.Ethereum was chosen as an exemplary platform due to its high flexibility and amount of existing development. Even now, further advancements are being explored by its founder and community. Besides Ether, it is also used nоn-fungible tokens, decentralized finance, and enterprise blockchain solutions. Another important point is how much more nature friendly it is compared to main competitors, due to energy-efficiency of the mining process, enforced by the platform itself. This makes it ideal for responsible usage as well as further research.This article explores the digital assets usage, as well as explains cryptoassets technological background, in order to highlight the recent developments in the area of futures based on virtual assets, using certain Ether implementation as an example, which offers perpetual futures.","PeriodicalId":433538,"journal":{"name":"NaUKMA Research Papers. Computer Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130647489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using of Rectangular Stochastic Matrices for the Problem of Evaluating and Ranking Alternatives
Pub Date: 2021-12-10. DOI: 10.18523/2617-3808.2021.4.4-9
Oleksii Oletsky
The paper investigates a possible generalization of the “state-probability of choice” model so that the generalized model can be applied to the problem of ranking alternatives, either individually or by a group of agents. It is shown that the results obtained earlier for the problem of multi-agent choice and decision making by majority of votes can be easily transferred to the problem of multi-agent ranking of alternatives. On the basis of distributions of importance values for the ranking problem, we can move on to similar models for choice and voting with the help of the well-known exponential normalization of rows.

We thus regard two types of matrices, both of which belong to the class of matrices named balanced rectangular stochastic matrices. For such matrices, the sum of elements in each row equals 1, and all columns have equal sums of elements. Both types are involved in the two-level procedure regarded in this paper. First, a matrix representing all possible distributions of importance among the alternatives is formed; second, a “state-probability of choice” matrix is obtained on its basis. For forming the matrix of states, the rows of which correspond to possible distributions of importance, applying pairwise comparisons and the Analytic Hierarchy Process is suggested. Parameterized transitive scales, with a parameter affecting the spread of importance between the best and the worst alternatives, are regarded. For further obtaining the matrices of choice probabilities, another parameter, which reflects the degree of the agent’s decisiveness, is also introduced. The role of both parameters is discussed and illustrated with examples in the paper.

Results of numerical experiments are reported which illustrate obtaining distributions of importance on the basis of the Analytic Hierarchy Process and which are connected with reaching the situation of dynamic equilibrium of alternatives, i.e. the situation when the alternatives are considered to be of equal value.
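As a rough illustration of the two-level procedure described above, the sketch below builds a small hypothetical matrix of importance distributions and applies exponential (softmax-style) normalization of rows with a decisiveness parameter. The specific numbers and the parameter value are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical matrix of states: each row is one possible distribution of
# importance among 3 alternatives (each row sums to 1, and the columns have
# equal sums, so the matrix is balanced rectangular stochastic).
W = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.3, 0.2, 0.5],
])

def choice_probabilities(importance, beta):
    """Exponential normalization of rows; beta models the agent's decisiveness:
    beta -> 0 gives near-uniform choice, while a large beta concentrates
    probability on the most important alternative in each state."""
    e = np.exp(beta * importance)
    return e / e.sum(axis=1, keepdims=True)

P = choice_probabilities(W, beta=5.0)
print(P.round(3))
print(P.sum(axis=1))  # rows still sum to 1
print(P.sum(axis=0))  # equal column sums: the balance survives normalization here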
{"title":"Using of Rectangular Stochastic Matrices for the Problem of Evaluating and Ranking Alternatives","authors":"Oleksii Oletsky","doi":"10.18523/2617-3808.2021.4.4-9","DOIUrl":"https://doi.org/10.18523/2617-3808.2021.4.4-9","url":null,"abstract":"The paper investigates the issue related to a possible generalization of the “state-probability of choice” model so that the generalized model could be applied to the problem of ranking alternatives, either individual or by a group of agents. It is shown that the results obtained before for the problem of multi-agent choice and decision making by majority of votes can be easily transferred to the problem of multi-agent alternatives ranking. On the basis of distributions of importance values for the problem of ranking alternatives, we can move on to similar models for the choice and voting with the help of well-known exponential normalization of rows.So we regard two types of matrices, both of which belonging to the sort of matrices named balanced rectangular stochastic matrices. For such matrices, sums of elements in each row equal 1, and all columns have equal sums of elements. Both types are involved in a two-level procedure regarded in this paper. Firstly a matrix representing all possible distributions of importance among alternatives should be formed, and secondly a “state-probability of choice” matrix should be obtained on its base. For forming a matrix of states, which belongs and the rows of which correspond to possible distributions of importance, applying pairwise comparisons and the Analytic Hierarchy Method is suggested. Parameterized transitive scales with the parameter affecting the spread of importance between the best and the worst alternatives are regarded. For further getting the matrices of choice probabilities, another parameter which reflects the degree of the agent’s decisiveness is also introduced. The role of both parameters is discussed and illustrated with examples in the paper.The results are reported regarding some numerical experiments which illustrate getting distributions of importance on the basis of the Analytic Hierarchy Process and which are connected to gaining the situation of dynamic equilibrium of alternatives, i.e. the situation when alternatives are considered as those of equal value.","PeriodicalId":433538,"journal":{"name":"NaUKMA Research Papers. Computer Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130857956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of the Architecture of the System of High-Load Testing
Pub Date: 2021-12-10. DOI: 10.18523/2617-3808.2021.4.88-92
Lada Beniukh, A. Hlybovets
It is difficult to overestimate or underestimate the importance of performance testing; it would be much more correct to talk about the timeliness of this activity. Virtually any digital system built on modern approaches and technologies can work without critical performance problems. At the same time, for any system, especially as it becomes popular, it is very likely that there will come a time when it cannot cope with the ever-increasing load and becomes unstable. However, most companies that develop and maintain their own digital solutions – from websites to any other digital systems – often focus primarily on the functionality of the system and its compliance with requirements rather than on the performance of the system as a whole. Such intentions are quite natural, because the system must properly perform the functions expected of it. When companies start to face performance problems, they tend not to optimize the software first but to add more capacity instead – vertical and horizontal scaling. This strategy works, but it has limitations: additional resources cannot be added endlessly, and sooner or later the approach runs up against the architecture of the system, the capabilities of the company itself, and so on.

It is therefore recommended to carry out stress testing in advance, planning time and resources in order to have enough time to correct errors and to understand the boundaries of the system in general. At the same time, organizing full-fledged stress testing requires trained specialists, tools, and infrastructure, especially when a heavy workload is involved.

As part of this work, various tools for stress testing and performance testing, the scaling of such tests, and centralized reporting of metrics were analyzed. As a result, approaches and principles were proposed for the construction of a modern architecture for implementing a load-testing subsystem in continuous code delivery.
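For context on what such tooling looks like in practice, here is a minimal scenario for Locust, one widely used open-source load-testing tool. The abstract does not name the specific tools surveyed, so this tool choice, the endpoint, and the user counts are our assumptions for illustration.

```python
# Minimal Locust scenario: each simulated user repeatedly requests the
# front page with a 1-3 second pause between actions.
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)  # think time between simulated user actions

    @task
    def load_front_page(self):
        self.client.get("/")  # hypothetical endpoint under test
```

Run, for example, with `locust -f loadtest.py --host https://example.com --users 100 --spawn-rate 10` to ramp up to 100 concurrent simulated users; Locust's distributed mode and web UI are one way to obtain the kind of scaled test runs and centralized metric reporting discussed above.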
{"title":"Development of the Architecture of the System of High-Load Testing","authors":"Lada Beniukh, A. Hlybovets","doi":"10.18523/2617-3808.2021.4.88-92","DOIUrl":"https://doi.org/10.18523/2617-3808.2021.4.88-92","url":null,"abstract":"Testing system performance and its importance at the same time is difficult to overestimate or underestimate. It would be much more correct to talk about the timeliness of this activity. Virtually any digital sys- tem built on modern approaches and technologies can work without any critical problems with its own performance. At the same time, for any system, especially when it becomes popular, it is very likely that there will be a time when it will not be able to cope with the ever-increasing load and become unstable. However, most companies that develop and maintain their own digital solutions – from websites to any other digital systems – often focus primarily on the functionality of the system and its compliance, rather than on the performance of the system as a whole. Such intentions are quite natural, because the system must properly perform the functions expected of it. When companies start to face performance problems, they try not to optimize the software as soon as possible, but to add more capacity – vertical and horizontal scaling. This strategy works, but it has limitations. After all, the addition of additional resources cannot be endless and sooner or later rests either on the architecture of the system, or in the capabilities of the company itself, and so on.Therefore it is recommended to carry out stress testing in advance, plan time and resources to have enough time to correct errors, and generally understand the boundaries of the system. At the same time, in order to organize full-fledged stress testing, trained specialists, tools and infrastructure are needed, especially when we are talking about heavy workload.As part of this work, an analysis of various tools for the implementation of stress testing and performance testing, scaling of such tests and centralized reporting of metrics. As a result, approaches and principles for the construction of a modern architecture for the implementation of the load testing subsystem in the continuous supply of code were proposed.","PeriodicalId":433538,"journal":{"name":"NaUKMA Research Papers. Computer Science","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132742082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Use of Augmented Reality to Create an iOS App with Watson Studio
Pub Date: 2021-12-10. DOI: 10.18523/2617-3808.2021.4.98-100
S. Gorokhovskyi, Yelyzaveta Pyrohova
With the rapid development of applications for mobile platforms, developers from around the world already understand the need to impress with new technologies and to create applications with which the consumer will plunge into the world of virtual or augmented reality. Some of the world’s most popular mobile operating systems, Android and iOS, already have well-known tools that make it easier to work with machine learning and augmented reality technology. However, it cannot be said that their use has already reached its peak, as these technologies are still at the stage of active study and development. Every year the demand for mobile application developers increases, and therefore more questions arise as to how, and from which side, it is better to approach immersion in augmented reality and machine learning. From a tourist’s point of view, there are already many applications that, with the help of these technologies, provide more information simply by pointing the camera at a specific object.

Augmented Reality (AR) is a technology that allows you to see the real environment right in front of you with a digital complement superimposed on it. Ivan Sutherland’s first display, created in 1968 under the name «Sword of Damocles», paved the way for the development of AR, which continues to this day.

Augmented reality can be divided into two forms: location-based and vision-based. Location-based AR provides a digital picture to the user moving through a physical area thanks to a GPS-enabled device; with a story or information, the user can learn more details about a particular location. With vision-based AR, certain user actions are performed only when the camera is aimed at the target object.

Thanks to the technological advances happening every day, easy access to smart devices can be seen as the main engine of AR technology. As the smartphone market continues to grow, consumers have the opportunity to use their devices to interact with all types of digital information. The experience of using a smartphone to combine the real and digital worlds is becoming more common. The success of AR applications in the last decade has been due to the proliferation of smartphones that have the capabilities needed to run such applications. If companies want to remain competitive in their field, it is advisable for them to consider work related to AR.

However, analyzing the market, one can see that there are no such applications for future entrants to higher education institutions – applications with which anyone could point a camera at a university building and learn important information about it. The UniApp application, based on the existing Swift and Watson Studio technologies, was developed to simplify obtaining information on higher education institutions.
{"title":"Use of Augmented Reality to Create an iOS App with Watson Studio","authors":"S. Gorokhovskyi, Yelyzaveta Pyrohova","doi":"10.18523/2617-3808.2021.4.98-100","DOIUrl":"https://doi.org/10.18523/2617-3808.2021.4.98-100","url":null,"abstract":"With the rapid development of applications for mobile platforms, developers from around the world already understand the need to impress with new technologies and the creation of such applications, with which the consumer will plunge into the world of virtual or augmented reality. Some of the world’s most popular mobile operating systems, Android and iOS, already have some well-known tools to make it easier to work with the machine learning industry and augmented reality technology. However, it cannot be said that their use has already reached its peak, as these technologies are at the stage of active study and development. Every year the demand for mobile application developers increases, and therefore more questions arise as to how and from which side it is better to approach immersion in augmented reality and machine learning. From a tourist point of view, there are already many applications that, with the help of these technologies, will provide more information simply by pointing the camera at a specific object.Augmented Reality (AR) is a technology that allows you to see the real environment right in front of us with a digital complement superimposed on it. Thanks to Ivan Sutherland’s first display, created in 1968 under the name «Sword of Damocles», paved the way for the development of AR, which is still used today.Augmented reality can be divided into two forms: based on location and based on vision. Location-based reality provides a digital picture to the user when moving through a physical area thanks to a GPS-enabled device. With a story or information, you can learn more details about a particular location. If you use AR based on vision, certain user actions will only be performed when the camera is aimed at the target object.Thanks to advances in technology that are happening every day, easy access to smart devices can be seen as the main engine of AR technology. As the smartphone market continues to grow, consumers have the opportunity to use their devices to interact with all types of digital information. The experience of using a smartphone to combine the real and digital world is becoming more common. The success of AR applications in the last decade has been due to the proliferation and use of smartphones that have the capabilities needed to work with the application itself. If companies want to remain competitive in their field, it is advisable to consider work that will be related to AR.However, analyzing the market, one can see that there are no such applications for future entrants to higher education institutions. This means that anyone can bring a camera to the university building and learn important information. The UniApp application based on the existing Swift and Watson Studio technologies was developed to simplify obtaining information on higher education institutions.","PeriodicalId":433538,"journal":{"name":"NaUKMA Research Papers. 
Computer Science","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127778147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Euclidean Algorithm for Sound Generation
Pub Date: 2021-12-10. DOI: 10.18523/2617-3808.2021.4.48-51
S. Gorokhovskyi, Artem Laiko
The Euclidean algorithm has been known to humanity for more than two thousand years. During this period many applications for it have been found, covering different disciplines, and music is one of them. The application of the algorithm in music first appeared in 2005, when researchers found a correlation between the rhythms of world music and the results of the Euclidean algorithm, defining the concept of Euclidean rhythms.

In the modern world, music can be created using many approaches. The first is simple analogue: an analogue signal is just a sound wave emitted by the vibration of a certain medium. A signal recorded onto a computer hard drive or other digital storage is called digital and can have methods of digital signal processing applied to it. The ability to convert an analogue signal, or to create and modulate digital sounds, opens many possibilities for sound design and production: sonic characteristics that were never accessible because of the limitations of analogue devices and instruments now become attainable. The sound generation process usually consists of modulating waveform and frequency and can be influenced by many factors such as oscillation, the FX pipeline, and so on. The programs that process a synthesised or recorded signal are called VST plugins, and they utilise the concepts of digital signal processing.

This paper aims to research the possible application of Euclidean rhythms and to integrate them into the sound generation process by creating a VST plugin that modulates the incoming signal with one of four basic wave shapes in order to achieve unique sonic qualities. The modulating function is one of four basic wave shapes – sine, triangle, square, and sawtooth – chosen depending on the value received from the Euclidean rhythm generator. Switching modulating functions introduces subharmonics, resulting in a richer and tighter sound, which can be seen in the spectrograms provided in the publication.
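For readers unfamiliar with Euclidean rhythms, the sketch below generates one with the classic “bucket” formulation, which distributes a given number of onsets as evenly as possible over a cycle of steps – the same distribution Bjorklund’s algorithm produces, up to rotation. The function name and the example values are ours, not the paper’s.

```python
def euclidean_rhythm(pulses: int, steps: int) -> list[int]:
    """Spread `pulses` onsets as evenly as possible over `steps` slots
    (1 = onset, 0 = rest), like a clock accumulating `pulses` per step."""
    pattern, bucket = [], 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)  # onset: the accumulator overflowed
        else:
            pattern.append(0)  # rest
    return pattern

# E(3, 8) – a rotation of the well-known Cuban tresillo [1,0,0,1,0,0,1,0]:
print(euclidean_rhythm(3, 8))  # [0, 0, 1, 0, 0, 1, 0, 1]
```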
{"title":"Euclidean Algorithm for Sound Generation","authors":"S. Gorokhovskyi, Artem Laiko","doi":"10.18523/2617-3808.2021.4.48-51","DOIUrl":"https://doi.org/10.18523/2617-3808.2021.4.48-51","url":null,"abstract":"Euclidean algorithm is known by humanity for more than two thousand years. During this period many applications for it were found, covering different disciplines and music is one of those. Such algorithm application in music first appeared in 2005 when researchers found a correlation between world music rhythm and the Euclidean algorithm result, defining Euclidean rhythms as the concept.In the modern world, music could be created using many approaches. The first one being the simple analogue, the analogue signal is just a sound wave that emitted due to vibration of a certain medium, the one that is being recorded onto a computer hard drive or other digital storage called digital and has methods of digital signal processing applied. Having the ability to convert the analogue signal or create and modulate digital sounds creates a lot of possibilities for sound design and production, where sonic characteristics were never accessible because of limitations in sound development by the analogue devices or instruments, nowadays become true. Sound generation process, which usually consists of modulating waveform and frequency and can be influenced by many factors like oscillation, FX pipeline and so on. The programs that influence synthesised or recorded signal called VST plugins and they are utilising the concepts of digital signal processing.This paper aims to research the possible application of Euclidean rhythms and integrate those in the sound generation process by creating a VST plugin that oscillates incoming signal with one of the four basic wave shapes in order to achieve unique sonic qualities. The varying function allows modulation with one out of four basic wave shapes such as sine, triangle, square and sawtooth, depending on the value received from the Euclidean rhythm generator, switching modulating functions introduces subharmonics, with the resulting richer and tighter sound which could be seen on the spectrograms provided in the publication.","PeriodicalId":433538,"journal":{"name":"NaUKMA Research Papers. Computer Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116975752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigation of the Relationship Between Software Metrics Measurements and its Maintainability Degree
Pub Date: 2020-12-28. DOI: 10.18523/2617-3808.2020.3.12-16
A. Hlybovets, O. Shapoval
The goal of this work is to practically apply methods of empirical software engineering and algorithms for data collection and analysis. The results include software measurement, analysis and selection of direct and indirect metrics for research, and identification of dependencies between direct and indirect metrics. Based on the received results, dependencies between software metrics and expert-estimated software properties, selected by individual variation, were built. For the analysis of measurement results, primary statistical analysis, expert estimations, and correlation and regression analysis were used.

Expert estimation is the dominant strategy when estimating software development effort. Typically, effort estimates are over-optimistic, and there is strong over-confidence in their accuracy. Primary data analysis is the process of comprehending the collected data in order to answer research questions or to support or reject the research hypotheses that the study was originally designed to evaluate. Correlation analysis makes it possible to draw conclusions about which metrics and expert estimations are strongly coupled and which are not. Regression analysis involves both graphical construction and analytical research, and makes it possible to conclude which metrics and expert estimations are the most strongly coupled. Analyzing regression lines for metrics with normal and non-normal distributions makes it possible to identify ‘metric – expert estimation’ pairs.

Metric relations have been calculated and measured to define the relation of such quality attributes as Understandability and Functionality Completeness. Understandability expresses the clarity of the system design: if the system is well designed, new developers are able to easily understand the implementation details and quickly begin contributing to the project. Functionality Completeness refers to the absence of omission errors in the program and database; it is evaluated against a specification of software requirements that defines the desired degree of generalization and abstraction. The relationship between metrics and expertise includes building direct relationships between a metric and expertise, and between indirect metrics and expertise. Additionally, it has been determined whether there are common trends in the relationships between direct metrics and expert estimates, and between indirect metrics and expert estimates.

The practical results of this work can be applied in software measurement to analyze which changes in the code (affecting a given metric) will cause an increase or decrease in which quality attribute.
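As a simple illustration of the correlation-and-regression step described above, the sketch below relates a hypothetical direct metric to expert estimations of Understandability. The data, the metric, and the 1–10 scale are invented for the example, not taken from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical measurements: a direct metric (say, average method length in
# lines) and expert estimations of Understandability on a 1-10 scale.
metric = np.array([12, 25, 8, 40, 15, 30, 22, 10])
expert = np.array([8, 5, 9, 3, 7, 4, 6, 9])

r, p_value = stats.pearsonr(metric, expert)  # correlation analysis
result = stats.linregress(metric, expert)    # regression line fit

print(f"Pearson r = {r:.2f} (p = {p_value:.4f})")
print(f"expert ≈ {result.slope:.3f} * metric + {result.intercept:.2f}")
```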
{"title":"Investigation of the Relationship Between Software Metrics Measurements and its Maintainability Degree","authors":"A. Hlybovets, O. Shapoval","doi":"10.18523/2617-3808.2020.3.12-16","DOIUrl":"https://doi.org/10.18523/2617-3808.2020.3.12-16","url":null,"abstract":"The goal of this work is to practically apply methods of empirical engineering software, algorithms for data collection and data analysis. The results include software measurement, analysis and selection of direct and indirect metrics for research and identification of dependencies between direct and indirect metrics. Based on the received results, there were built dependencies between software metrics and software expertise properties were selected by individual variation. For measurement results analysis there were used primary statistical analysis, expert estimations, correlation and regression analysis. Expert estimation is the dominant strategy when estimating software development effort. Typically, effort estimates are over-optimistic and there is a strong over-confidence in their accuracy. Primary data analysis is the process of comprehending the data collected to answer research questions or to support or reject research hypotheses that the study was originally designed to evaluate. Correlation analysis gives possibility to make some conclusions about which metrics and expert estimations are much coupled, and which are not. Regression analysis involves both graphical construction and analytical research and gives an ability to make a conclusion about which metrics and expert estimations are the most coupled. Analyzing regression lines for metrics of normal and nonnormal distributions give an ability to identify pairs of ‘metric – expert estimation’. There have been calculated and measured metrics relations for defining relation of such quality attributes as Understandability and Functionality Completeness. Understandability expresses the clarity of the system design. If the system is well designed, new developers are able to understand easily the implementation details and quickly begin contributing to the project. Functionality Completeness refers to the absence of omission errors in the program and database. It is evaluated against a specification of software requirements that define the desired degree of generalization and abstraction. Relationship between metric and expertise includes building direct relationships between the metric and expertise, indirect metrics and expertise. Additionally, it has been determined whether they have common trends of the relationship between those direct metrics and expert estimates, indirect metrics and expert estimates. The practical results of this work can be applied for software measurements to analyze what changes in the code (affecting given metric) will cause increasing or decreasing of what quality attribute.","PeriodicalId":433538,"journal":{"name":"NaUKMA Research Papers. Computer Science","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115560915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Data Room as a Storage of Confidential Corporate Documents
Pub Date: 2020-12-28. DOI: 10.18523/2617-3808.2020.3.56-61
T. Torba, N. Vovk
{"title":"Virtual Data Room as a Storage of Confidential Corporate Documents","authors":"T. Torba, N. Vovk","doi":"10.18523/2617-3808.2020.3.56-61","DOIUrl":"https://doi.org/10.18523/2617-3808.2020.3.56-61","url":null,"abstract":"","PeriodicalId":433538,"journal":{"name":"NaUKMA Research Papers. Computer Science","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126506426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Serverless Event-driven Applications Development Tools and Techniques
Pub Date: 2020-12-28. DOI: 10.18523/2617-3808.2020.3.36-41
I. Morenets, A. Shabinskiy
Serverless, a new cloud-based architecture, brings development and deployment flexibility to a new level by significantly decreasing the size of the deployment units. Nevertheless, it still hasn’t been clearly defined for which applications it should be employed and how to use it most effectively, and this is the focus of this research. The study uses Microsoft Azure Functions – one of the popular, mature tools – because of its stateful orchestrators, Durable Functions. The tool is used to present and describe four flexible serverless patterns with code examples.

The first pattern is HTTP nanoservices. The example demonstrates how flexible the Function-as-a-Service model, which uses relatively small functions as deployment units, can be. The second usage scenario described is a small logic layer between a few other cloud services. Thanks to its event-driven nature, serverless is well suited for tasks such as performing an action in one service after a specific event in another; new functions easily integrate with the API from the first example. The third scenario – distributed computing – relies on the ability of Durable Functions to launch a myriad of functions in parallel and then aggregate their results; a custom MapReduce implementation is presented in this section. The last pattern described in this research significantly simplifies concurrent work with mutable data by implementing the actor model. Durable Entities guarantee that messages are delivered reliably and in order, as well as the absence of deadlocks.

The results of this work can be used as a practical guide to the main serverless concepts and usage scenarios. The main topic of future research is the development of a full-fledged serverless application using typical patterns, in order to study the architecture in more depth.
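To give a flavour of the fan-out/fan-in (MapReduce-style) pattern from the third scenario, here is a minimal Durable Functions orchestrator sketch in Python. The article’s own examples may use another language, and the activity names `SplitInput` and `CountWords` are hypothetical.

```python
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    # Map phase (fan-out): start one activity per input chunk, in parallel.
    chunks = yield context.call_activity("SplitInput", None)
    tasks = [context.call_activity("CountWords", chunk) for chunk in chunks]
    # Reduce phase (fan-in): wait for all parallel activities, then aggregate.
    counts = yield context.task_all(tasks)
    return sum(counts)

main = df.Orchestrator.create(orchestrator_function)
```

The orchestrator only coordinates; the activities do the actual work and can be scaled out by the platform, which is what makes the pattern suitable for the distributed-computing scenario described above.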
{"title":"Serverless Event-driven Applications Development Tools and Techniques","authors":"I. Morenets, A. Shabinskiy","doi":"10.18523/2617-3808.2020.3.36-41","DOIUrl":"https://doi.org/10.18523/2617-3808.2020.3.36-41","url":null,"abstract":"Serverless, a new cloud-based architecture, brings development and deployment flexibility to a new level by significantly decreasing the size of the deployment units. Nevertheless, it still hasn’t been clearly defined for which applications it should be employed and how to use it most effectively, and this is the focus of this research. The study uses Microsoft Azure Functions – one of the popular mature tools – because of its stateful orchestrators – Durable Functions. The tool is used to present and describe four flexible serverless patterns with code examples. The first pattern is HTTP nanoservices. The example demonstrates how flexible can be the Function-asa-Service model, which uses relatively small functions as deployment units. The second usage scenario described is a small logic layer between a few other cloud services. Thanks to its event-driver nature, serverless is well-suited for such tasks as making an action in one service after a specific event from another one. New functions easily integrate with the API from the first example. The third scenario – distributed computing – relies on the ability of Durable Functions to launch a myriad of functions in parallel and then aggregate their results. and distributed computing. A custom MapReduce implementation is presented in this section. The last pattern described in this research significantly simplifies concurrent working with mutable data by implementing the actor model. Durable Entities guarantee that messages are delivered reliably and in order, and also the absence of deadlocks. The results of this work can be used as a practical guide to serverless main concepts and usage scenarios. Main topic of future research was chosen to be the development of a full-fledged serverless application using typical patterns to study the architecture in more depth.","PeriodicalId":433538,"journal":{"name":"NaUKMA Research Papers. Computer Science","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128832343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Customer Relationship Management System as a SaaS on Example of Insurance Broker Company
Pub Date: 2020-12-28. DOI: 10.18523/2617-3808.2020.3.31-35
A. Vasylenko, A. Hlybovets
Commercial and non-commercial organizations that have a large customer base need a system that allows them to work with such a base effectively. In contrast to European practice, insurance brokerage companies in Ukraine started to develop relatively recently. Today it is quite difficult to find a client base management system that meets all the needs of a brokerage company. Such companies are forced either to use universal customer base management systems, which often means a lack of specific functionality, or to use programs from an office suite (MS Excel, MS Access). Obviously, both options have significant drawbacks.

The paper describes the details of the requirements analysis and the implementation of a client base management system for brokerage companies in the field of insurance. Such a system must take into account the specifics of the companies and their business model in order to fully meet the requirements. The ability to work with a large number of records from any device, as well as the availability of forms and elements to display industry-specific indicators, is a key aspect in the design and implementation of the system.

Based on the results of the work, a CRM system was obtained which fully satisfies the necessary basic requirements. The need for specialized CRM systems for brokerage companies is acutely felt in the Ukrainian market today. The specifics of the companies, as well as the special requirements for the system, do not allow full use of universal CRM systems.

According to the analysis of insurance companies’ needs in the field of CRM systems and their functional content, SaaS is the model that will suit the vast majority of users. This model allows one to get a working version of the product at minimal cost. In addition, the use of SaaS eliminates the need to hire specialists to maintain the stability and development of the system.

The created version of the CRM system for an insurance brokerage company takes into account the basic requirements and can be used as a basis for further development of the system and its full-scale commercial use.

Manuscript received 09.06.2020
{"title":"Сustomer Relationship Management System as a SaaS on Example of Insurance Broker Company","authors":"A. Vasylenko, A. Hlybovets","doi":"10.18523/2617-3808.2020.3.31-35","DOIUrl":"https://doi.org/10.18523/2617-3808.2020.3.31-35","url":null,"abstract":"Commercial and non-commercial organizations that have a large customer base need a system that allows them to work effectively with such a base. In contrast to European practice, insurance brokerage companies in Ukraine have started to develop relatively recently. Today it is quite difficult to find a client base management system that meets all the needs of a brokerage company. Such companies are forced to either use universal customer base management systems, which often means the lack of specific functionality, or use programs from the office suite (MS Excel, MS Access). Obviously, both options have significant drawbacks.The paper describes the details of the requirements analysis and implementation of the client base management system for brokerage companies in the field of insurance. Such a system must take into account the specifics of companies and their business model in order to fully meet the requirements. The ability to work with a large number of records from any device, as well as the availability of forms and elements to display purely industry-specific indicators is a key aspect in the design and implementation of the system.Based on the results of the work, a CRM system was obtained, which fully satisfies the necessary basic requirements.The need for specialized CRM systems for brokerage companies today is acutely felt in the Ukrainian market. The specifics of the companies, as well as the special requirements for the system do not allow the full use of universal CRM systems.According to the results of the analysis of insurance needs of companies in the field of CRM-systems and its functional content, SaaS is the model that will suit the vast majority of users. This model allows you to get a working version of the product with minimal cost. In addition, the use of SaaS eliminates the need to hire specialists to maintain the stability and development of the system.The created version of the CRM system for an insurance brokerage company takes into account the basic requirements and can be used as a basis for further development of the system and its full-scale commercial use.Manuscript received 09.06.2020","PeriodicalId":433538,"journal":{"name":"NaUKMA Research Papers. Computer Science","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115776135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}