Estimating the fractional chromatic number of a graph
S. Szabó. Acta Universitatis Sapientiae Informatica, pp. 122-133, June 2021. DOI: 10.2478/ausi-2021-0006

Abstract: The fractional chromatic number of a graph is defined as the optimum of a rather unwieldy linear program. (Setting up the program requires generating all independent sets of the given graph.) Using combinatorial arguments we construct a more manageable linear program whose optimum value provides an upper estimate for the fractional chromatic number. In order to assess the feasibility of the proposal and to check the accuracy of the estimates, we carry out numerical experiments.
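The "unwieldy" exact program mentioned in the abstract can be written down directly for a tiny graph. The sketch below illustrates the exact LP (not the paper's more manageable relaxation): it computes the fractional chromatic number of the 5-cycle by enumerating every independent set and solving the resulting covering LP. The choice of graph and of SciPy as solver are assumptions for demonstration only.

```python
from itertools import combinations
from scipy.optimize import linprog

# 5-cycle C5: its fractional chromatic number is known to be 5/2
n = 5
edges = {(i, (i + 1) % n) for i in range(n)}

def independent(S):
    """True if no two vertices of S are adjacent."""
    return all((u, v) not in edges and (v, u) not in edges
               for u, v in combinations(S, 2))

# generate ALL independent sets -- the expensive step the paper avoids
ind_sets = [S for r in range(1, n + 1)
            for S in combinations(range(n), r) if independent(S)]

# minimize sum_I x_I  subject to  sum_{I : v in I} x_I >= 1  for every vertex v
c = [1.0] * len(ind_sets)
A_ub = [[-1.0 if v in S else 0.0 for S in ind_sets] for v in range(n)]
b_ub = [-1.0] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.fun)  # 2.5, i.e. chi_f(C5) = 5/2
```

For C5 the optimum 5/2 lies strictly below the ordinary chromatic number 3; the catch, as the abstract notes, is that the number of independent sets grows exponentially with the size of the graph.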
Confluence number of certain derivative graphs
J. Kok, J. Shiny. Acta Universitatis Sapientiae Informatica, pp. 21-38, June 2021. DOI: 10.2478/ausi-2021-0002

Abstract: This paper furthers the study of the confluence number of a graph. In particular, results for certain derivative graphs, such as the line graphs of trees, cactus graphs, linear Jaco graphs, and novel graph operations, are reported.
Fog-LAEEBA: Fog-assisted Link aware and energy efficient protocol for wireless body area network
K. Ullah, H. Khan. Acta Universitatis Sapientiae Informatica, pp. 180-194, June 2021. DOI: 10.2478/ausi-2021-0008

Abstract: The integration of Wireless Sensor Networks (WSN) and cloud computing brings several advantages. However, one of the main problems with existing cloud solutions is the latency involved in accessing, storing, and processing data. This limits the use of cloud computing for applications (for instance, patient health monitoring) that require real-time access and processing of data. To address the latency problem, we propose a fog-assisted Link Aware and Energy Efficient Protocol for Wireless Body Area Networks (Fog-LAEEBA). The proposed solution is based on the existing state-of-the-art protocol LAEEBA. We implement, test, evaluate and compare Fog-LAEEBA in terms of stability period, end-to-end delay, throughput, residual energy, and path loss. For the stability period, all nodes in the LAEEBA protocol die after 7445 rounds, while in our case the last node dies after 9000 rounds. For the same number of rounds, the end-to-end delay is 2 seconds for LAEEBA and 1.25 seconds for Fog-LAEEBA. In terms of throughput, our proposed solution increases the number of packets received by the sink node from 1.5 packets to 1.8 packets. The residual energy of the nodes in Fog-LAEEBA is also lower than in the LAEEBA protocol. Finally, our proposed solution improves the path loss by 24 percent.
Animal Farm - a complex artificial life 3D framework
A. Kiss, G. Pusztai. Acta Universitatis Sapientiae Informatica, pp. 60-85, June 2021. DOI: 10.2478/ausi-2021-0004

Abstract: The development of computer-generated ecosystem simulations is becoming more common due to the greater computational capabilities of machines. Because natural ecosystems are very complex, simplifications are required for implementation. These simulation environments offer a global view of the system and generate a lot of data to process and analyse, which is difficult or impossible to do in nature. 3D simulations, besides their scientific advantages in experiments, can be used for presentation, educational and entertainment purposes too. In our simulated framework (Animal Farm) we have implemented a few basic animal behaviors and mechanics to observe in 3D. All animals are controlled by an individual logic model, which determines the next action of the animal based on its needs and the surrounding environment.
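The abstract does not publish the actual Animal Farm decision rules, so the following is only a hedged illustration of the kind of per-animal logic model described: a needs-based rule that picks the next action from the animal's internal state and what it perceives. All needs, thresholds and action names are invented for this sketch.

```python
def next_action(animal, environment):
    """Toy needs-based decision rule: address the most urgent need first.
    Priority order (hunger > thirst > safety) is a deliberate design choice:
    a starving animal will risk feeding even with a predator nearby."""
    if animal["energy"] < 30:
        return "eat" if environment["food_visible"] else "search_food"
    if animal["thirst"] > 70:
        return "drink" if environment["water_visible"] else "search_water"
    if environment["predator_nearby"]:
        return "flee"
    return "wander"

wolf_near = {"food_visible": False, "water_visible": True, "predator_nearby": True}
print(next_action({"energy": 80, "thirst": 20}, wolf_near))  # flee
```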
Towards autoscaling of Apache Flink jobs
B. Varga, Márton Balassi, A. Kiss. Acta Universitatis Sapientiae Informatica, pp. 39-59, June 2021. DOI: 10.2478/ausi-2021-0003

Abstract: Data stream processing has been gaining attention in the past decade. Apache Flink is an open-source distributed stream processing engine that is able to process a large amount of data in real time with low latency. Computations are distributed among a cluster of nodes. Currently, provisioning the appropriate amount of cloud resources must be done manually ahead of time. A dynamically varying workload may exceed the capacity of the cluster, or leave resources underutilized. In our paper, we describe an architecture that enables the automatic scaling of Flink jobs on Kubernetes based on custom metrics, and describe a simple scaling policy. We also measure the effects of state size and target parallelism on the duration of the scaling operation, which must be considered when designing an autoscaling policy, so that the Flink job respects a Service Level Agreement.
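The abstract's "simple scaling policy" is not spelled out, so the sketch below is a hypothetical policy of the same flavor: derive a target parallelism from the observed incoming record rate and the measured per-slot processing rate, keeping expected utilization below a target. All names, metrics and numbers are illustrative assumptions, not the paper's policy.

```python
import math

def target_parallelism(records_in_per_sec, records_per_sec_per_slot,
                       current_parallelism, utilization_target=0.8,
                       min_p=1, max_p=32):
    """Size the job so that expected per-slot load stays below the
    utilization target; clamp to the cluster's parallelism bounds."""
    if records_per_sec_per_slot <= 0:       # no measurement yet: hold steady
        return current_parallelism
    needed = records_in_per_sec / (records_per_sec_per_slot * utilization_target)
    return max(min_p, min(max_p, math.ceil(needed)))

print(target_parallelism(10_000, 1_500, current_parallelism=4))  # 9
```

In a real deployment such a function would sit behind the custom-metrics pipeline the paper describes, and, as the abstract warns, the cost of the rescaling operation itself (which grows with state size) would need to be weighed before acting on the computed target.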
On ordering of minimal energies in bicyclic signed graphs
S. Pirzada, Tahir Shamsher, M. Bhat. Acta Universitatis Sapientiae Informatica, pp. 86-121, June 2021. DOI: 10.2478/ausi-2021-0005

Abstract: Let S = (G, σ) be a signed graph of order n and size m, and let x1, x2, ..., xn be the eigenvalues of S. The energy of S is defined as ε(S) = ∑_{j=1}^{n} |x_j|. A connected signed graph is said to be bicyclic if m = n + 1. In this paper, we determine the bicyclic signed graphs with the first 20 minimal energies for all n ≥ 30 and with the first 16 minimal energies for all 17 ≤ n ≤ 29.
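The energy definition above is easy to check numerically. The sketch below computes the energy of a small signed graph, an unbalanced 4-cycle chosen only as an example (it is not one of the paper's extremal bicyclic graphs), as the sum of the absolute values of the adjacency eigenvalues.

```python
import numpy as np

# signed adjacency matrix: +1 for a positive edge, -1 for a negative edge;
# this is a 4-cycle whose edge {0,3} is negative (an unbalanced cycle)
A = np.array([
    [ 0,  1,  0, -1],
    [ 1,  0,  1,  0],
    [ 0,  1,  0,  1],
    [-1,  0,  1,  0],
], dtype=float)

def energy(A):
    """Energy = sum of absolute values of the adjacency eigenvalues."""
    return np.abs(np.linalg.eigvalsh(A)).sum()

print(energy(A))  # 4*sqrt(2) ~ 5.657
```

The unbalanced 4-cycle has spectrum {±√2, ±√2}, so its energy is 4√2; flipping the negative edge back to positive changes the spectrum to {2, 0, 0, -2} and the energy to 4, showing that the signature σ genuinely matters.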
Differential privacy based classification model for mining medical data stream using adaptive random forest
Hayder K. Fatlawi, A. Kiss. Acta Universitatis Sapientiae Informatica, pp. 1-20, June 2021. DOI: 10.2478/ausi-2021-0001

Abstract: Most typical data mining techniques are developed by training on batch data, which makes mining a data stream a significant challenge. On the other hand, providing a mechanism to perform data mining operations without revealing the patient's identity is of increasing importance in the data mining field. In this work, a classification model with differential privacy is proposed for mining medical data streams using Adaptive Random Forest (ARF). The experimental results of applying the proposed model on four medical datasets show that ARF mostly has a more stable performance than the other six techniques.
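The abstract does not specify which differential-privacy mechanism is combined with ARF, so as a generic illustration of the underlying idea, here is the standard Laplace mechanism for privately releasing a count; the counting query, sensitivity and epsilon below are hypothetical, not taken from the paper.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise scaled to sensitivity / epsilon."""
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# e.g. privately release a count of positive diagnoses in the current window;
# a count has sensitivity 1 (one patient changes it by at most 1)
rng = np.random.default_rng(0)
noisy_count = laplace_mechanism(128, sensitivity=1, epsilon=0.5, rng=rng)
print(noisy_count)  # 128 plus Laplace(scale=2) noise
```

Smaller epsilon means stronger privacy but noisier answers; in a streaming setting the privacy budget must additionally be split across the repeated releases.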
Improving productivity in large scale testing at the compiler level by changing the intermediate language from C++ to Java
Izabella Ingrid Farkas, Kristóf Szabados, A. Kovács. Acta Universitatis Sapientiae Informatica, pp. 134-179, June 2021. DOI: 10.2478/ausi-2021-0007

Abstract: This paper is based on research results achieved by a collaboration between Ericsson Hungary Ltd. and the Large Scale Testing Research Lab of Eötvös Loránd University, Budapest. We present design issues and empirical observations on extending an existing industrial toolset with a new intermediate language. Context: The industry partner's toolset uses C/C++ as an intermediate language, providing good execution performance but "somewhat long" build times, offering a sub-optimal experience for users. Objective: In cooperation with our industry partner, our task was to perform an experiment with Java as a different intermediate language and to evaluate the results, to see if this could improve build times. Method: We extended the toolset to use Java as an intermediate language. Results: Our measurements show that using Java as an intermediate language improves build times significantly. We also found that, while the runtime performance of C/C++ is better in some situations, Java, at least in our testing scenarios, can be a viable alternative to improve developer productivity. Our contribution is unique in the sense that both ways of building and execution can use the same source code as input, written in the same language, generate intermediate codes with the same high-level structure, compile into executables that are configured using the same files, run on the same machine, show the same behaviour and generate the same logs. Conclusions: We created an alternative build pipeline that might enhance the productivity of our industry partner's test developers by reducing the length of builds during their daily work.
On degree sets in k-partite graphs
T. A. Naikoo, U. Samee, S. Pirzada, B. Rather. Acta Universitatis Sapientiae Informatica, pp. 251-259, December 2020. DOI: 10.2478/ausi-2020-0015

Abstract: The degree set of a k-partite graph is the set of distinct degrees of its vertices. We prove that every set of non-negative integers is the degree set of some k-partite graph.
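As a small computational companion to the definition (this is not the paper's construction, only a way to inspect degree sets), the sketch below builds a complete k-partite graph and reads off its degree set; a vertex in a part of size s has degree n - s, so K_{1,2,3} realizes the degree set {3, 4, 5}.

```python
from itertools import product

def complete_multipartite(part_sizes):
    """Vertex count and edge list of the complete k-partite graph
    whose parts have the given sizes (vertices numbered part by part)."""
    parts, start = [], 0
    for s in part_sizes:
        parts.append(range(start, start + s))
        start += s
    edges = [(u, v)
             for i in range(len(parts)) for j in range(i + 1, len(parts))
             for u, v in product(parts[i], parts[j])]
    return start, edges

def degree_set(n, edges):
    """Set of distinct vertex degrees."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return set(deg)

n, edges = complete_multipartite([1, 2, 3])
print(degree_set(n, edges))  # parts of size 1, 2, 3 give degrees 5, 4, 3
```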
Comparing epidemiological models with the help of visualization dashboards
Csaba Farkas, David Iclanzan, Boróka Olteán-Péter, Géza Vekov. Acta Universitatis Sapientiae Informatica, pp. 260-282, December 2020. DOI: 10.2478/ausi-2020-0016

Abstract: In 2020, due to the COVID-19 pandemic, various epidemiological models appeared in major studies [16, 22, 21], which differ in terms of complexity, type, etc. A common hypothesis is that a complex model is more accurate and gives more reliable results than a simpler one, because it takes more parameters into consideration. In this paper we study three different epidemiological models: a SIR, a SEIR and a SEIR-type model. Our aim is to set up differential equation models which rely on similar parameters; however, the systems of equations and the numbers of parameters deviate from each other. A visualization dashboard is implemented through this study, and thus we are able not only to study the models but also to help users understand the differences between the complexity of epidemiological models, and ultimately to share a more specific overview of models that are defined by differential equations [24]. To validate our results, we compare the three models against empirical data on COVID-19 infections from Northern Italy and Wuhan, calculating the values of the parameters with a least squares optimization algorithm.
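Of the three models compared, the SIR model is the simplest to reproduce. The sketch below integrates it with SciPy; the rates β and γ, the population size and the time horizon are hypothetical placeholders, not fitted values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Classic SIR right-hand side: S' = -beta*S*I/N, I' = beta*S*I/N - gamma*I,
    R' = gamma*I, so S + I + R stays constant."""
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

beta, gamma = 0.3, 0.1            # hypothetical infection / recovery rates
y0 = [9_990.0, 10.0, 0.0]         # S, I, R in a population of 10 000
sol = solve_ivp(sir, (0, 160), y0, args=(beta, gamma), dense_output=True)
S_end, I_end, R_end = sol.y[:, -1]
```

Fitting β and γ to case data, as the paper does, amounts to wrapping this solver in a least squares routine (e.g. scipy.optimize.least_squares) and minimizing the residual between the simulated I(t) and the observed infection counts; the SEIR variants add an exposed compartment and an incubation-rate parameter to the same scheme.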