Pub Date: 2016-01-01. DOI: 10.12921/CMST.2016.22.02.003
M. Antkowiak, Ł. Kucharski, R. Matysiak, G. Kamieniarz
In this work we present the very efficient scaling of our two applications based on the quantum transfer matrix method, which we used to simulate the thermodynamic properties of the Cr9 and Mn6 molecules as examples of uniform and non-uniform molecular nanomagnets, respectively. The test runs were conducted on the IBM BlueGene/P supercomputer JUGENE, a machine of the Tier-0 performance class installed at the Jülich Supercomputing Centre.
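Strong-scaling results of this kind are usually summarized by speedup and parallel efficiency relative to a baseline run. A minimal sketch in Python; the core counts and timings below are purely illustrative placeholders, not the JUGENE measurements from the paper:

```python
# Strong-scaling speedup and parallel efficiency from wall-clock timings.
# The timing values below are illustrative placeholders, not measurements
# from the JUGENE runs described in the paper.

def scaling_metrics(cores, times):
    """Return (speedup, efficiency) lists relative to the smallest run."""
    base_cores, base_time = cores[0], times[0]
    speedup = [base_time / t for t in times]
    efficiency = [s * base_cores / c for s, c in zip(speedup, cores)]
    return speedup, efficiency

cores = [256, 512, 1024, 2048]
times = [100.0, 51.0, 26.5, 14.0]   # hypothetical wall-clock seconds
s, e = scaling_metrics(cores, times)
```

Efficiency close to 1.0 across the range is what "very efficient scaling" means in practice; it normally degrades as communication costs grow with the core count.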
Title: "Highly Scalable Quantum Transfer Matrix Simulations of Molecule-Based Nanomagnets on a Parallel IBM BlueGene/P Architecture". Computational Methods in Science and Technology, pp. 87-93.
Pub Date: 2016-01-01. DOI: 10.12921/CMST.2016.0000022
S. Saaidpour, F. Ghaderi
The quantitative structure-property relationship (QSPR) method is used to develop correlations between the structures of crude oil hydrocarbons and their physical properties. In this study, we used VolSurf+ descriptors for QSPR modeling of the boiling point, Henry's law constant and water solubility of eighty crude oil hydrocarbons. A subset of the calculated descriptors, selected using stepwise regression (SR), was used in the QSPR model development. Multiple linear regression (MLR) was used to construct the linear models. The predictions agree well with the experimental values of these properties. The comparison indicates the superiority of the presented models and shows that they can be used effectively to predict the boiling point, Henry's law constant and water solubility of crude oil hydrocarbons from the molecular structure alone. The stability and predictivity of the proposed models were validated using internal validation (leave-one-out and leave-many-out) and external validation. Application of the developed models to a test set of 16 compounds demonstrates that the new models are reliable, with good predictive accuracy and simple formulation.
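The pipeline described above, an MLR fit checked by leave-one-out internal validation, can be sketched as follows. The descriptor matrix and property values are synthetic stand-ins, not the VolSurf+ descriptors or experimental data used by the authors:

```python
import numpy as np

# Synthetic stand-in data: n "compounds" with p "descriptors" and a
# property that depends linearly on them plus a little noise.
rng = np.random.default_rng(0)
n, p = 30, 3
X = rng.normal(size=(n, p))
true_beta = np.array([2.0, -1.0, 0.5])
y = X @ true_beta + rng.normal(scale=0.1, size=n)

def fit_mlr(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def loo_q2(X, y):
    """Leave-one-out cross-validated Q^2 statistic."""
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta = fit_mlr(X[mask], y[mask])
        preds[i] = beta[0] + X[i] @ beta[1:]
    press = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss_tot

q2 = loo_q2(X, y)
```

A Q² close to 1 indicates that the model predicts held-out compounds nearly as well as it fits the training data, which is the point of the internal validation step.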
Title: "Quantitative Modeling of Physical Properties of Crude Oil Hydrocarbons Using Volsurf+ Molecular Descriptors". Computational Methods in Science and Technology, pp. 133-141.
Pub Date: 2016-01-01. DOI: 10.12921/CMST.2016.22.01.003
A. Werner-Juszczuk, P. Rynkowski
This paper aims to demonstrate the utility of the boundary element method (BEM) for modelling 2D heat transfer in complex multi-regions, particularly in thermal bridges. It proposes BEM as an alternative to the mesh methods (FEM, FDM) commonly applied in commercial software for simulating the temperature field and heat flux in thermal bridges. A BEM algorithm with the Robin boundary condition is developed for modelling 2D heat transfer in complex multi-regions, and the simulations are performed with the authors' own Fortran program. The developed mathematical algorithm and computer program are validated against the standard EN ISO 10211:2007. Two examples of complex thermal bridges that commonly appear in house building are presented. Analysis of the two reference cases listed in the ISO standard confirms the utility of the proposed BEM algorithm and Fortran program for the simulation of linear thermal bridges; the conditions quoted in the standard are satisfied with models containing a relatively small number of boundary elements. The performed validation constitutes the basis for further development of BEM as an efficient method for modelling heat transfer in building components, and for prospective application in commercial software.
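The paper's solver is a 2D boundary element method; as a much simpler illustration of the Robin (convective) boundary condition it employs, the sketch below solves steady 1D conduction through a homogeneous wall with convective surfaces by finite differences and checks the resulting heat flux against the analytic thermal transmittance. All material values are illustrative, not taken from the paper:

```python
import numpy as np

# Steady 1D conduction through a wall with Robin boundaries on both faces.
# Illustrative values only (an insulation-like layer), not the paper's cases.
k, L = 0.04, 0.1          # conductivity [W/(m K)], thickness [m]
h_i, h_e = 7.69, 25.0     # surface heat transfer coefficients [W/(m2 K)]
T_i, T_e = 20.0, -10.0    # interior / exterior air temperatures [degC]
n = 51                    # grid nodes
dx = L / (n - 1)

# Assemble the finite-difference system A T = b.
A = np.zeros((n, n)); b = np.zeros(n)
for j in range(1, n - 1):                 # interior nodes: T'' = 0
    A[j, j - 1], A[j, j], A[j, j + 1] = 1.0, -2.0, 1.0
# Robin at x=0: conduction into the wall equals convection from interior air,
#   k (T0 - T1)/dx = h_i (T_i - T0)
A[0, 0] = k / dx + h_i; A[0, 1] = -k / dx; b[0] = h_i * T_i
# Robin at x=L: conduction out of the wall equals convection to exterior air,
#   k (T_{n-2} - T_{n-1})/dx = h_e (T_{n-1} - T_e)
A[-1, -1] = k / dx + h_e; A[-1, -2] = -k / dx; b[-1] = h_e * T_e

T = np.linalg.solve(A, b)
flux = h_i * (T_i - T[0])                 # heat flux through the wall [W/m2]
U = 1.0 / (1.0 / h_i + L / k + 1.0 / h_e) # analytic transmittance [W/(m2 K)]
```

Because the steady 1D profile is linear, the discrete flux matches the analytic value U·(T_i − T_e) to machine precision; this kind of closed-form cross-check is the 1D analogue of validating a 2D solver against the EN ISO 10211 reference cases.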
Title: "BEM Utility for Simulation of Linear Thermal Bridges". Computational Methods in Science and Technology, pp. 31-40.
Pub Date: 2016-01-01. DOI: 10.12921/CMST.2016.22.02.002
L. Rondoni, G. Dematteis
Recently, novel ergodic notions have been introduced in order to find physically relevant formulations and derivations of fluctuation relations. These notions have subsequently been used in the development of a general theory of response for continuous-time deterministic dynamics. The key ingredient of this theory is the Dissipation Function Ω, which in nonequilibrium systems of physical interest can be identified with the energy dissipation rate, and which is used to determine exactly the evolution of ensembles in phase space. This constitutes an advance over the standard solution of the (generalized) Liouville equation, which is based on the physically elusive phase space variation rate. The response theory arising in this framework focuses on observables rather than on details of the dynamics and of the stationary probability distributions on phase space. In particular, this theory does not rest on metric transitivity, which amounts to standard ergodicity; it rests on the properties of the initial equilibrium in which a system is found before being perturbed away from that state. The theory is exact, not restricted to linear response, and applies to all dynamical systems. Moreover, it yields necessary and sufficient conditions for the relaxation of ensembles (as in usual response theory), as well as for the relaxation of single systems. We extend the continuous-time theory to discrete-time systems, illustrate our results with simple maps, and compare them with other recent theories.
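As a purely numerical illustration of the discrete-time setting, the sketch below evolves a phase-space ensemble, started far from the invariant measure, under the Arnold cat map and watches an observable average relax toward its equilibrium value. This is not the Dissipation Function formalism itself, only the kind of ensemble relaxation for simple maps that the theory addresses:

```python
import numpy as np

# Ensemble relaxation under a discrete-time map (Arnold cat map on the
# unit torus). The initial ensemble is concentrated in a small box, a
# strongly non-equilibrium state; mixing drives the ensemble average of
# the observable cos(2*pi*x) toward its equilibrium value, which is 0
# for the uniform (Lebesgue) invariant measure.

rng = np.random.default_rng(1)
N = 100_000
x = rng.uniform(0.0, 0.1, N)
y = rng.uniform(0.0, 0.1, N)

def cat_map(x, y):
    """One step of the Arnold cat map (x, y) -> (2x + y, x + y) mod 1."""
    return (2 * x + y) % 1.0, (x + y) % 1.0

obs0 = np.mean(np.cos(2 * np.pi * x))   # far from the equilibrium value 0
for _ in range(15):
    x, y = cat_map(x, y)
obs15 = np.mean(np.cos(2 * np.pi * x))  # close to 0 after mixing
```

The contrast between obs0 and obs15 is the relaxation-of-ensembles behaviour for which the paper's theory gives necessary and sufficient conditions.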
Title: "Physical Ergodicity and Exact Response Relations for Low-dimensional Maps". Computational Methods in Science and Technology, pp. 71-85.
Pub Date: 2015-12-12. DOI: 10.12921/CMST.2015.21.04.006
B. S. Gildeh, M. T. Ashkavaey
The goal of this work is to detect any potentially harmful change in a process. The reliability tests are assumed to generate type-I right-censored data following a log-logistic distribution with scale parameter η and shape parameter β. For this purpose, we have constructed a likelihood-ratio-based simultaneous cumulative sum (CUSUM) control chart that targets changes in both the failure mechanism and the characteristic life (a simultaneous CUSUM chart for detecting shifts in the shape and scale parameters). This control chart performs best for combinations with larger positive or negative shifts in the shape parameter, signaling on average within 5 samples in an out-of-control situation while targeting an in-control average run length of 370. The performance of the simultaneous CUSUM chart depends strongly on the value of β and on its interaction with the censoring rate and the shift size.
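The chart in the paper is a likelihood-ratio CUSUM for censored log-logistic lifetimes; the underlying CUSUM recursion, however, is easy to show on a standard one-sided chart for an upward mean shift. The reference value k and decision limit h below are conventional textbook choices, not the paper's design parameters:

```python
# One-sided (upper) CUSUM chart: accumulate deviations above the target,
# discounted by the reference value k, and signal when the statistic
# exceeds the decision limit h. This is the generic recursion; the paper
# replaces the increment with a likelihood ratio for censored
# log-logistic data.

def cusum_upper(data, target, k=0.5, h=5.0):
    """Return the sample index at which the chart signals, or None."""
    c = 0.0
    for i, z in enumerate(data):
        c = max(0.0, c + (z - target) - k)
        if c > h:
            return i
    return None

# Deterministic demo data: in control at the target, then a step shift.
data = [0.0] * 50 + [1.5] * 50
alarm_at = cusum_upper(data, target=0.0)
```

With these values each post-shift sample adds 1.5 − 0.5 = 1.0 to the statistic, so the chart crosses h = 5 six samples after the shift, at index 55; the in-control/out-of-control trade-off quoted in the abstract (ARL of 370 versus 5) is tuned through exactly these k and h parameters.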
Title: "Optimal Cusum Control Chart for Censored Reliability Data with Log-logistic Distribution". Computational Methods in Science and Technology.
Pub Date: 2015-05-06. DOI: 10.12921/cmst.2016.22.02.005
A. Patkowski
Some integrals of the Glaisher-Ramanujan type are established in a more general form than in previous studies. As an application we prove some Ramanujan-type series identities, as well as a new formula for the Dirichlet beta function at the value $s=3.$
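The closed form β(3) = π³/32 for the Dirichlet beta function is classical and easy to confirm numerically from the defining alternating series:

```python
import math

# Numerical check of the classical value beta(3) = pi^3 / 32 for the
# Dirichlet beta function. Because the series is alternating with
# decreasing terms, the truncation error is below the first omitted term.

def dirichlet_beta(s, terms=100_000):
    """Partial sum of beta(s) = sum_{n>=0} (-1)^n / (2n+1)^s."""
    return sum((-1) ** n / (2 * n + 1) ** s for n in range(terms))

approx = dirichlet_beta(3)
exact = math.pi ** 3 / 32
```

For s = 3 the first omitted term is about 10⁻¹⁶, so the partial sum agrees with π³/32 essentially to double precision.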
Title: "Some Remarks on Glaisher-Ramanujan Type Integrals". Computational Methods in Science and Technology, pp. 103-108.
Pub Date: 2015-01-01. DOI: 10.12921/CMST.2015.21.01.004
B. Porankiewicz
The paper examines the accuracy of shallow blind holes drilled in solid wood. Statistical dependencies of exponential form, with several pairwise interactions, were evaluated between the shift of the average hole diameter dN and the dispersion of hole diameters DH on the one hand, and the parameters of the Machine-Tool-Working element (M-T-W) set used in drilling solid wood on the other. Significant nonlinear dependencies of the shift of the average hole diameter dN and the dispersion of hole diameters DH on the height of the centering spike hCS, the drill lateral stiffness EL, the drill bit diameter DD, and the radial run-out of the main cutting edge RR were found.
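A model of the exponential form with an interaction term, y = exp(b0 + b1·x1 + b2·x2 + b12·x1·x2), becomes linear after taking logarithms and can then be fitted by ordinary least squares. A minimal sketch on synthetic data (not the drilling measurements from the paper):

```python
import numpy as np

# Fit an exponential-form model with one pairwise interaction by
# log-linearization. The predictors stand in for normalized M-T-W
# parameters; the data are synthetic, generated from known coefficients.

rng = np.random.default_rng(3)
n = 60
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
b_true = np.array([0.2, 1.0, -0.5, 0.8])
y = np.exp(b_true[0] + b_true[1] * x1 + b_true[2] * x2 + b_true[3] * x1 * x2)

# Log-transform turns the model into ordinary least squares.
A = np.column_stack([np.ones(n), x1, x2, x1 * x2])
b_fit, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
```

With noise-free synthetic data the coefficients are recovered exactly, which makes this a convenient self-check before fitting real measurements.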
Title: "An Attempt to Evaluate Accuracy of Diameter of Shallow Blind Holes Drilled in Solid Wood". Computational Methods in Science and Technology.
Pub Date: 2015-01-01. DOI: 10.12921/CMST.2015.21.03.004
T. Praczyk
Assembler Encoding is a neuro-evolutionary method that represents a neural network in the form of a linear program. The program consists of operations and data, and its goal is to produce a matrix containing all the information necessary to construct a network. In order for the programs to produce effective networks, evolutionary techniques are used: a genetic algorithm determines the arrangement of the operations and data in the program and the parameters of the operations. Implementations of the operations do not evolve; they are defined in advance by a designer. Since operations with predefined implementations could narrow the applicability of Assembler Encoding to a restricted class of problems, the method has been modified by applying evolvable operations. To verify the effectiveness of the new method, experiments on the predator-prey problem were carried out. In the experiments, the task of the neural networks was to control a team of underwater vehicles (predators) whose common goal was to capture an underwater vehicle (prey) behaving according to a simple deterministic strategy. The paper describes the modified method and reports the experiments.
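The core idea, a linear program whose operations fill a matrix that is then read as network weights, can be sketched as follows; the operation set and the network interpretation here are invented for illustration and differ from the paper's actual (evolvable) operations:

```python
import numpy as np

# Toy sketch of the Assembler Encoding idea: a linear "program" of
# operations writes into a matrix, and the finished matrix is interpreted
# as the weights of a neural network. In the real method a genetic
# algorithm arranges the operations and data; here the program is fixed.

def run_program(program, shape):
    """Execute (op, row, col, value) tuples to build a weight matrix."""
    M = np.zeros(shape)
    for op, r, c, v in program:
        if op == "SET":          # write a single entry
            M[r, c] = v
        elif op == "ADD_ROW":    # add v to an entire row
            M[r, :] += v
    return M

def forward(M, x):
    """Interpret M as a single-layer network with tanh activation."""
    return np.tanh(M @ x)

program = [("SET", 0, 0, 1.5), ("SET", 1, 1, -2.0), ("ADD_ROW", 0, 0, 0.5)]
W = run_program(program, (2, 2))
out = forward(W, np.array([1.0, 1.0]))
```

Evolving the program (and, in the modified method, the operation implementations themselves) rather than the weight matrix directly is what gives the encoding its compactness.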
Title: "Assembler Encoding with Evolvable Operations". Computational Methods in Science and Technology.
Pub Date: 2015-01-01. DOI: 10.12921/CMST.2015.21.04.004
S. Saaidpour, Asrin Bahmani, A. Rostami
In this article, a quantitative structure-property relationship (QSPR) model for estimating the normal boiling points of liquid amines is developed. A QSPR study based on multiple linear regression was applied to predict the boiling points of primary, secondary and tertiary amines. The geometry of each amine was optimized with the semi-empirical AM1 method and used to calculate different types of molecular descriptors. The molecular descriptors were calculated using the Molecular Modeling Pro Plus software, and stepwise regression was used to select the relevant descriptors. The linear models developed with Molegro Data Modeller (MDM) allow accurate estimation of the boiling points of amines using the molar mass (MM), Hansen dispersion forces (DF), molar refractivity (MR) and hydrogen bonding (HB; 1° and 2° amines) descriptors. The information encoded in the descriptors allows an interpretation of the studied boiling points in terms of intermolecular interactions. Multiple linear regression (MLR) was used to develop three linear models, for 1°, 2° and 3° amines, containing four and three variables, with root mean square errors of 15.92 K, 9.89 K and 15.76 K and squared correlation coefficients of 0.96, 0.98 and 0.96, respectively. The predictive power and robustness of the QSPR models were characterized by statistical validation and the applicability domain (AD).
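One common way to define an applicability domain is the leverage approach: a query compound with descriptor vector x is flagged when its hat value h = xᵀ(XᵀX)⁻¹x exceeds the warning limit h* = 3p/n. A sketch on synthetic descriptors; the paper does not state that it uses exactly this criterion, so take it as one representative AD definition:

```python
import numpy as np

# Leverage-based applicability domain check. X holds the training
# descriptors (n compounds x p descriptors); a query compound is outside
# the domain when its leverage exceeds the warning limit h* = 3p/n.
# The descriptor values here are synthetic stand-ins.

rng = np.random.default_rng(4)
n, p = 40, 4
X = rng.normal(size=(n, p))

XtX_inv = np.linalg.inv(X.T @ X)
h_star = 3.0 * p / n                     # conventional warning leverage

def leverage(x):
    """Hat value of a descriptor vector relative to the training set."""
    return float(x @ XtX_inv @ x)

inside = leverage(X[0])                  # a training compound
far_out = leverage(X[0] + 10.0)          # descriptors far outside training
```

Predictions for compounds beyond h* are extrapolations of the MLR model and should be reported with that caveat.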
Title: "Prediction the Normal Boiling Points of Primary, Secondary and Tertiary Liquid Amines from their Molecular Structure Descriptors". Computational Methods in Science and Technology.
Pub Date: 2015-01-01. DOI: 10.12921/CMST.2015.21.04.009
T. Praczyk
Autonomous underwater vehicles are vehicles that operate entirely or partly independently of human decisions. In order to achieve operational independence, the vehicles have to be equipped with a specialized control system whose main task is to move the vehicle along a path while avoiding collisions. Regardless of the logic embedded in the system, i.e. whether it works as a neural network, a fuzzy system, an expert system, an algorithmic system, or a hybrid of all these solutions, it is always parameterized, and the values of the system parameters affect its effectiveness. The paper reports experiments whose goal was to optimize an algorithmic control system of a biomimetic autonomous underwater vehicle. To this end, three different genetic algorithms were used: a canonical genetic algorithm, a steady-state genetic algorithm, and a eugenic algorithm.
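A canonical genetic algorithm of the kind compared in the paper can be sketched on a toy fitness function (OneMax, maximizing the number of ones in a bit string); the authors' actual fitness function and parameter encoding for the vehicle controller are, of course, far richer:

```python
import random

# Canonical GA: tournament selection, one-point crossover, bit-flip
# mutation, generational replacement. Fitness is OneMax, a stand-in for
# the controller-quality measure used in the paper's experiments.

def canonical_ga(n_bits=30, pop_size=40, generations=60,
                 p_cross=0.9, p_mut=0.02, seed=5):
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # binary tournament selection of two parents
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            c1, c2 = p1[:], p2[:]
            if rng.random() < p_cross:          # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1 = p1[:cut] + p2[cut:]
                c2 = p2[:cut] + p1[cut:]
            for c in (c1, c2):                  # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < p_mut:
                        c[i] ^= 1
                new_pop.append(c)
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

best = canonical_ga()
```

A steady-state GA replaces only a few individuals per step instead of the whole generation, which is the main structural difference among the variants the paper compares.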
Title: "Using Genetic Algorithms for Optimizing Algorithmic Control System of Biomimetic Underwater Vehicle". Computational Methods in Science and Technology.