HICOLM: High-Performance Platform of Physical Simulations by Using Low Computational Cost Methods
Pub Date: 2019-11-30 | DOI: 10.22456/2175-2745.92486
F. W. Fernandes
For decades, computational simulation models have been used by scientists in the search for new materials with technological applications in several areas of knowledge. To this end, software based on a variety of theoretical-computational models has been developed to analyze physical properties at the atomic level. The objective of this work is to propose widely functional software for analyzing the physical properties of carbon-based nanostructures and condensed systems using theories of low computational cost. To that end, a computational program written in Fortran, called HICOLM, was developed, whose theoretical foundations rest on two well-known models: tight-binding and molecular dynamics. The physical properties of condensed systems can be obtained in thermodynamic equilibrium in several statistical ensembles, making it possible to analyze a material's properties and their time evolution as a function of thermodynamic conditions such as temperature and pressure. Moreover, from the tight-binding model, HICOLM can also perform a physical analysis of carbon-based nanostructures by calculating the material's band structure.
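To give a concrete sense of the band-structure calculation mentioned above, the sketch below computes the two π-band energies of graphene in a nearest-neighbor tight-binding model, a textbook example of the kind of analysis described. It is not HICOLM's Fortran code; the hopping energy and lattice vectors are standard literature values assumed for illustration.

```python
# Minimal nearest-neighbor tight-binding sketch for graphene's pi bands.
# Illustrative only -- not HICOLM code; t and the lattice are assumed values.
import numpy as np

t = 2.7        # hopping energy in eV (typical literature value)
a = 1.42       # carbon-carbon distance in angstrom
# Lattice vectors of the underlying triangular Bravais lattice
a1 = a * np.array([1.5,  np.sqrt(3) / 2])
a2 = a * np.array([1.5, -np.sqrt(3) / 2])

def bands(k):
    """Return the two pi-band energies at 2D wave vector k."""
    f = 1 + np.exp(1j * np.dot(k, a1)) + np.exp(1j * np.dot(k, a2))
    e = t * np.abs(f)
    return -e, +e   # valence and conduction bands

# Sample the dispersion along a straight path in reciprocal space
for kx in np.linspace(0.0, np.pi / a, 5):
    ev, ec = bands(np.array([kx, 0.0]))
    print(f"kx={kx:6.3f} 1/angstrom   E-={ev:7.3f} eV   E+={ec:7.3f} eV")
```

At k = 0 the sketch reproduces the expected band edges at ±3t, which is a quick sanity check on the model.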
Road Surface Classification with Images Captured From Low-cost Camera - Road Traversing Knowledge (RTK) Dataset
Pub Date: 2019-11-30 | DOI: 10.22456/2175-2745.91522
Thiago Rateke, K. A. Justen, A. V. Wangenheim
The type of road pavement directly influences the way vehicles are driven. It is common to find papers that deal with path detection but do not take into account major changes in road surface patterns. The quality of the road surface has a direct impact on the comfort and, especially, the safety of road users. In emerging countries it is common to find unpaved roads or roads with no maintenance, and unpaved or damaged roads also result in higher fuel and vehicle maintenance costs. This kind of analysis can be useful both for road maintenance departments and for autonomous vehicle navigation systems that need to identify potential critical points. To carry out experiments on surface type and quality classification, we present a new dataset collected with a low-cost camera. The dataset has examples of good and bad asphalt (with potholes and other damage), other types of pavement, and many examples of unpaved roads (with and without potholes). We also provide several frames from our dataset manually labeled by surface type for verifying test accuracy. Our road type and quality classifier is a simple Convolutional Neural Network with few steps, and it presents promising results on different datasets.
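As a rough illustration of the kind of classifier described, the sketch below builds a small CNN for a three-way surface decision. The layer sizes, input resolution, and class set are assumptions for illustration, not the authors' architecture or the RTK dataset's exact label scheme.

```python
# A minimal small-CNN sketch for surface-type classification.
# Illustrative assumptions throughout: 128x128 RGB input, three classes.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # e.g., asphalt, other paved, unpaved (assumed labels)

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),          # low-resolution RGB frames
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then use frames labeled by surface type, e.g.:
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)
```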
Models to evaluate service Provisioning over Cloud Computing Environments - A Blockchain-As-A-Service case study
Pub Date: 2019-11-30 | DOI: 10.22456/2175-2745.91814
Carlos Melo, J. Dantas, Ronierison Maciel, P. Silva, P. Maciel
The strictness of Service Level Agreements (SLAs) is mainly due to a set of constraints related to performance and dependability attributes, such as availability. This paper shows that a system's availability may be improved by deploying services over a private environment, which can achieve better availability through improved management, security, and control. However, how much does a company need to spend to sustain this improved availability? As an additional activity, this paper compares the obtained availability values with infrastructure deployment expenses and establishes a cost × benefit relationship. As the system evaluation technique we chose modeling, and blockchain-as-a-service was selected as the service used to demonstrate the models' feasibility. This paper proposes and evaluates four different infrastructures hosting blockchains: (i) baseline; (ii) double redundant; (iii) triple redundant; and (iv) hyper-converged. The results point out that the hyper-converged architecture has an advantage over a fully triple redundant environment regarding availability and deployment cost.
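The availability arithmetic underlying such comparisons can be illustrated with reliability-block-diagram composition. The sketch below uses assumed MTTF/MTTR figures rather than the paper's measured parameters, and contrasts a baseline with a double-redundant arrangement:

```python
# Steady-state availability from MTTF/MTTR, composed RBD-style.
# The MTTF/MTTR numbers are illustrative assumptions, not the paper's data.
from math import prod

def availability(mttf_h, mttr_h):
    """Steady-state availability A = MTTF / (MTTF + MTTR)."""
    return mttf_h / (mttf_h + mttr_h)

def series(avails):
    """All components must be up: A = product of A_i."""
    return prod(avails)

def parallel(avails):
    """At least one component up: A = 1 - product of (1 - A_i)."""
    return 1 - prod(1 - a for a in avails)

node = availability(mttf_h=8760.0, mttr_h=8.0)     # one blockchain node
net  = availability(mttf_h=17520.0, mttr_h=4.0)    # network infrastructure

baseline  = series([node, net])                    # single node behind net
redundant = series([parallel([node, node]), net])  # double-redundant nodes

print(f"baseline:  {baseline:.6f}")
print(f"redundant: {redundant:.6f}")
```

Redundancy raises availability at the price of duplicated hardware, which is exactly the cost × benefit trade-off the paper quantifies.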
Iracema: a Python library for audio content analysis
Pub Date: 2019-09-25 | DOI: 10.5753/sbcm.2019.10418
T. Magalhaes, F. B. Barros, M. Loureiro
Iracema is a Python library that aims to provide models for the extraction of meaningful information from recordings of monophonic pieces of music, for purposes of research in music performance. With this objective in mind, we propose an architecture that provides users an abstraction level that simplifies the manipulation of different kinds of time series, as well as the extraction of segments from them. In this paper we: (1) introduce some key concepts at the core of the proposed architecture; (2) describe the current functionalities of the package; (3) give some examples of the application programming interface; and (4) give some brief examples of audio analysis using the system.
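To illustrate the abstraction the architecture is built around, the sketch below implements a bare-bones time series and segment from scratch. The class and method names are invented for this illustration and are not Iracema's documented API.

```python
# Illustrative time-series / segment abstractions, written from scratch.
# These are NOT Iracema's actual classes; names and methods are assumed.
import numpy as np

class TimeSeries:
    """Samples plus a sampling rate; sliceable by time."""
    def __init__(self, data, fs):
        self.data = np.asarray(data, dtype=float)
        self.fs = float(fs)              # sampling frequency in Hz

    @property
    def duration(self):
        return len(self.data) / self.fs

class Segment:
    """A [start, end) time interval that can slice any TimeSeries."""
    def __init__(self, start_s, end_s):
        self.start_s, self.end_s = start_s, end_s

    def extract(self, ts):
        i0 = int(self.start_s * ts.fs)
        i1 = int(self.end_s * ts.fs)
        return TimeSeries(ts.data[i0:i1], ts.fs)

# One Segment (e.g., one note) can slice the audio and any derived feature
# track alike, as long as each carries its own sampling rate.
audio = TimeSeries(np.random.randn(44100), fs=44100)  # 1 s of fake audio
note = Segment(0.25, 0.75)
print(note.extract(audio).duration)                    # -> 0.5
```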
Low-Latency f0 Estimation for the Finger Plucked Electric Bass Guitar Using the Absolute Difference Function
Pub Date: 2019-09-25 | DOI: 10.5753/sbcm.2019.10433
C. Fonseca, T. Tavares
Audio-to-MIDI conversion can be used to allow digital musical control through an analog instrument. Audio-to-MIDI converters rely on fundamental frequency estimators that are usually restricted to a minimum delay of two fundamental periods, a delay that is perceptible in the case of bass notes. In this dissertation, we propose a low-latency fundamental frequency estimation method that relies on specific characteristics of the electric bass guitar. By means of physical modeling and signal acquisition, we show that the method's assumptions generalize across electric basses. We evaluated our method on a dataset of musical notes played by diverse bassists. Results show that our method outperforms the Yin method in low-latency settings, which indicates its suitability for low-latency audio-to-MIDI conversion of the electric bass sound.
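The core of an absolute-difference estimator can be sketched generically: compute the average absolute difference between the signal and a lagged copy of itself, and pick the lag where it is smallest. The window length and frequency range below are assumptions for illustration; this is the generic formulation, not the authors' low-latency bass-specific algorithm.

```python
# Generic AMDF-style f0 estimation sketch (not the paper's exact method).
# Window length and the search range for bass notes are assumed values.
import numpy as np

def amdf_f0(x, fs, f_min=30.0, f_max=400.0):
    """Return fs / tau* where tau* minimizes the average abs difference."""
    tau_min = int(fs / f_max)          # smallest lag to test
    tau_max = int(fs / f_min)          # largest lag to test
    best_tau, best_d = tau_min, np.inf
    for tau in range(tau_min, tau_max + 1):
        d = np.mean(np.abs(x[:-tau] - x[tau:]))   # AMDF at this lag
        if d < best_d:
            best_tau, best_d = tau, d
    return fs / best_tau

# Synthetic test: a 55 Hz tone (open A string) sampled at 44.1 kHz
fs = 44100
t = np.arange(int(0.1 * fs)) / fs      # 100 ms analysis window
x = np.sin(2 * np.pi * 55.0 * t)
print(f"estimated f0: {amdf_f0(x, fs):.1f} Hz")
```

Note that the 100 ms window above spans several fundamental periods; the point of the paper is precisely to shrink that requirement for low-latency operation.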
Stochastic Models for Optimizing Availability, Cost and Sustainability of Data Center Power Architectures through Genetic Algorithm
Pub Date: 2019-08-03 | DOI: 10.22456/2175-2745.83498
Márcio Sergio Soares Austregésilo, G. Callou
In recent years, the growth of information technology has demanded higher reliability, accessibility, collaboration, and availability, as well as cost reductions, in data centers, driven by factors such as social networks, cloud computing, and e-commerce. These systems require redundant mechanisms in the data center infrastructure to achieve high availability, which may increase electric energy consumption, impacting both sustainability and cost. This work proposes a multi-objective optimization approach, based on Genetic Algorithms, to optimize the cost, sustainability, and availability of data center power infrastructures. The main goal is to maximize availability while minimizing cost and the exergy consumed (adopted to estimate environmental impact). To compute these metrics, this work adopts the energy flow model (EFM), reliability block diagrams (RBD), and stochastic Petri nets (SPN). Two case studies show the applicability of the proposed strategy: the first considers five typical data center architectures, optimized to validate the proposed strategy; the second applies the optimization strategy to two architectures classified by ANSI/TIA-942 (TIER I and II). In both case studies, significant improvements were achieved, with results very close to the optimum obtained by a brute-force algorithm that analyzes all possibilities and returns the optimal solution. It is worth mentioning that the time needed to obtain the results with the genetic algorithm approach was lower by a factor of 6,763,260 than with the strategy that enumerates all possible combinations to find the optimal result.
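One common way to drive a genetic algorithm with several competing objectives is to scalarize them into a single fitness value. The sketch below shows a weighted-sum fitness of that shape; the weights, the random stand-in for the EFM/RBD/SPN evaluation, and the candidate encoding are all illustrative assumptions, not the paper's models.

```python
# Weighted-sum multi-objective fitness sketch for a GA of this kind.
# evaluate() is a random stand-in for the EFM/RBD/SPN metric computation.
import random

random.seed(42)                         # reproducible toy run
W_AVAIL, W_COST, W_EXERGY = 0.5, 0.25, 0.25   # assumed weights

def evaluate(arch):
    """Stand-in for evaluating one power architecture's three metrics."""
    avail  = 0.99 + 0.00999 * random.random()  # availability in [0.99, 1)
    cost   = random.uniform(0.0, 1.0)          # normalized deployment cost
    exergy = random.uniform(0.0, 1.0)          # normalized exergy consumed
    return avail, cost, exergy

def fitness(arch):
    avail, cost, exergy = evaluate(arch)
    # Reward availability, penalize cost and exergy
    return W_AVAIL * avail - W_COST * cost - W_EXERGY * exergy

population = [f"arch-{i}" for i in range(20)]  # hypothetical candidates
best = max(population, key=fitness)
print("best candidate:", best)
```

A weighted sum is the simplest scalarization; Pareto-based selection schemes such as NSGA-II are a common alternative when the trade-off front itself is of interest.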
Performance Analysis of the Segment Transfer Rate of TCP-UEM
Pub Date: 2019-08-03 | DOI: 10.22456/2175-2745.82043
Airton Orlandini Junior, L. A. F. Martimiano
Using TCP (Transmission Control Protocol) in wireless networks can affect its performance due to its inability to identify packet losses properly, which triggers its congestion control mechanism. Several TCP variants have been proposed to improve this control, TCP-UEM being one of them. This variant evaluates link reliability in wireless networks over time intervals while preserving end-to-end semantics. TCP-UEM was implemented in the FreeBSD OS, and its performance with respect to segment transfer rate (in Mbps) was compared to two other variants, TCP-NEWRENO and TCP-CUBIC. This paper describes TCP-UEM and discusses the results of the tests and the statistical analysis carried out over two scenarios. For each scenario, 30 samples of 30 seconds of execution time were collected under different loss rates. The results show that TCP-UEM performed well, outperforming the other two variants in the majority of the tests across different loss rates.
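A typical statistical treatment for 30 independent throughput samples is a Student-t confidence interval around the mean. The sketch below shows that computation on synthetic placeholder numbers; it is not the paper's data or its exact analysis.

```python
# 95% confidence interval for mean throughput over 30 samples.
# The sample values are synthetic placeholders, not measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
samples_mbps = rng.normal(loc=5.2, scale=0.4, size=30)  # fake runs

mean = samples_mbps.mean()
sem = stats.sem(samples_mbps)              # standard error of the mean
lo, hi = stats.t.interval(0.95, len(samples_mbps) - 1,  # df = n - 1
                          loc=mean, scale=sem)
print(f"throughput: {mean:.2f} Mbps, 95% CI [{lo:.2f}, {hi:.2f}]")
```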
Data Fusion through Fuzzy-Bayesian Networks for Belief Generation in Cognitive Agents
Pub Date: 2019-08-03 | DOI: 10.22456/2175-2745.87085
Munyque Mittelmann, Jerusa Marchi, A. V. Wangenheim
Situation Awareness provides a theory for agent decision making that enables perception and comprehension of the agent's environment. However, the transformation of sensory stimuli into beliefs to feed the BDI reasoning cycle is still an unexplored subject. Autonomous agent projects often require multiple sensors to capture aspects of the environment. The natural variability of the physical world and the imprecision of the linguistic concepts used by humans mean that sensory data carry different types of uncertainty in their measurements. Thus, to achieve Situational Awareness for agents with physical sensors, it is necessary to define a data fusion process that handles this uncertainty. This paper presents a model for belief generation using fuzzy-Bayesian inference. An example in robot navigation and localization illustrates the proposal.
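The fusion step can be pictured as fuzzification followed by a Bayesian update, with membership degrees playing the role of soft evidence. The sketch below is a toy version of that idea; the membership functions, priors, and the near/far obstacle hypotheses are illustrative assumptions, not the paper's model.

```python
# Toy fuzzy-Bayesian fusion: fuzzify a reading, then update hypotheses.
# Membership functions, priors and hypotheses are assumed for illustration.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuse(distance_m):
    # Fuzzification: degrees to which the reading means "near" or "far"
    likelihood = {
        "obstacle_near": tri(distance_m, 0.0, 0.5, 1.5),
        "obstacle_far":  tri(distance_m, 0.5, 2.0, 4.0),
    }
    prior = {"obstacle_near": 0.3, "obstacle_far": 0.7}
    # Bayesian update, memberships standing in for P(reading | hypothesis)
    unnorm = {h: likelihood[h] * prior[h] for h in prior}
    z = sum(unnorm.values()) or 1.0
    return {h: p / z for h, p in unnorm.items()}

belief = fuse(distance_m=0.8)   # e.g., a sonar reading of 0.8 m
print(belief)                   # posterior used to assert a BDI belief
```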
Biclustering and coclustering: concepts, algorithms and viability for text mining
Pub Date: 2019-08-03 | DOI: 10.22456/2175-2745.89063
Alexandra Katiuska Ramos Diaz, S. M. Peres
Biclustering and coclustering are data mining tasks capable of extracting relevant information from data by applying similarity criteria simultaneously to the rows and columns of data matrices. Algorithms used to accomplish these tasks cluster objects and attributes simultaneously, enabling the discovery of biclusters or coclusters. Although similar, the two tasks differ in nature and aim, and coclustering can be seen as a generalization of biclustering. A careful study of the algorithms behind biclustering and coclustering is essential to solve real-world problems effectively, and determining appropriate values for the parameters of these algorithms is even harder when complex real-world data are analyzed. For example, when biclustering or coclustering is applied to textual data (i.e., in text mining), a representation through a vector space model is required. Such a representation usually generates vector spaces with a high number of dimensions and high sparsity, which influences the performance of many algorithms. This tutorial presents, didactically, concepts related to the biclustering and coclustering tasks and how two basic algorithms address them. In addition, experiments are presented on data with a high number of dimensions and high sparsity, represented by both a synthetic dataset and a corpus of real-world news. In general and comparative terms, the results show the coclustering algorithm (NBVD) to be the most appropriate for the experimental context. Although the biclustering algorithm (Cheng and Church) produced less relevant results on textual data than NBVD, its application to data with a high number of dimensions and high sparsity provided a suitable study environment for understanding its operation.
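As a concrete anchor for the concepts above, the sketch below computes the mean squared residue that the Cheng and Church algorithm minimizes when growing a bicluster; low residue means the selected rows and columns vary coherently. The toy matrix is an illustrative assumption.

```python
# Mean squared residue of a submatrix, the score at the heart of the
# Cheng and Church biclustering algorithm. Toy data for illustration.
import numpy as np

def mean_squared_residue(A, rows, cols):
    sub = A[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)   # a_iJ
    col_means = sub.mean(axis=0, keepdims=True)   # a_Ij
    overall = sub.mean()                          # a_IJ
    residue = sub - row_means - col_means + overall
    return float((residue ** 2).mean())

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 4.0],    # rows 0-1 differ by a constant shift:
              [9.0, 1.0, 5.0]])   # a perfectly coherent bicluster
print(mean_squared_residue(A, [0, 1], [0, 1, 2]))  # -> 0.0 (coherent)
print(mean_squared_residue(A, [0, 2], [0, 1, 2]))  # -> large (incoherent)
```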
Analysis of the Performance of Genetic Algorithm Parallelized with OpenMP Through Execution Traces
Pub Date: 2019-08-03 | DOI: 10.22456/2175-2745.85091
G. Andrade, M. C. Cera
Execution tracing makes it possible to identify issues affecting the performance of parallel applications. This work evaluates the OpenMP parallelization of a Genetic Algorithm applied to the Vehicle Routing Problem, whose measured performance fell short of expectations: a speedup of 1.4 times was obtained on the architecture used, still below the ideal. The general objective of this work is therefore to investigate the causes of the Genetic Algorithm's low performance through an analysis of its execution traces. Our results show that the parallelization of the Genetic Algorithm behaves consistently with the model in which it was implemented and with the set of instances of the target Vehicle Routing Problem used.
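The gap between measured and ideal performance can be made concrete with the usual trace-derived metrics. The sketch below computes speedup and parallel efficiency; the serial time and thread count are assumptions chosen so that the speedup matches the 1.4x figure reported in the abstract.

```python
# Speedup and parallel efficiency, the headline metrics read off traces.
# t1 and the thread count are assumed; only the 1.4x matches the abstract.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_threads):
    return speedup(t_serial, t_parallel) / n_threads

t1, tp, threads = 100.0, 71.4, 4      # 100 s serial vs ~71 s on 4 threads
s = speedup(t1, tp)                   # ~1.4, as reported
print(f"speedup    = {s:.2f}x (ideal: {threads}x)")
print(f"efficiency = {efficiency(t1, tp, threads):.0%} (ideal: 100%)")
```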