Pub Date: 2023-12-03 | DOI: 10.3390/computation11120241
E. J. S. Pires, A. Cerveira, José Baptista
This work addresses the wind farm (WF) layout optimization problem with several substations. Given a set of wind turbines and a set of substations, the goal is to find the design that minimizes the infrastructure cost plus the cost of electrical energy losses over the wind farm lifetime. The turbine set is partitioned into subsets, one assigned to each substation. Within each subset, the cable types and the connections that collect the energy produced by the wind turbines and forward it to the corresponding substation are selected. The proposed technique combines a genetic algorithm (GA) with an integer linear programming (ILP) model. The GA partitions the turbine set and assigns each subset to a substation so as to optimize a fitness function corresponding to the minimum total cost of the WF layout. Evaluating the fitness function requires solving an ILP model for each substation to determine its optimal cable connection layout. The methodology is applied to four onshore WFs. The results show that the proposed approach achieves economic savings of up to 0.17% compared with a clustering-plus-ILP approach (an exact approach).
Title: Wind Farm Cable Connection Layout Optimization Using a Genetic Algorithm and Integer Linear Programming
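As a rough illustration of the hybrid scheme described above, the sketch below encodes a GA chromosome as a turbine-to-substation assignment. The per-substation ILP sub-problem is replaced here by a simple straight-line cable-cost heuristic, and all coordinates and GA parameters are invented for demonstration; this is a minimal sketch of the partitioning idea, not the authors' implementation.

```python
import random
import math

random.seed(0)

def cable_cost(turbines, substation):
    # Stand-in for the paper's ILP sub-problem: cost of connecting each
    # turbine straight to its substation (Euclidean length as a proxy).
    return sum(math.dist(t, substation) for t in turbines)

def fitness(assign, turbines, subs):
    # Total layout cost = sum of per-substation connection costs.
    total = 0.0
    for s_idx, sub in enumerate(subs):
        group = [turbines[i] for i, a in enumerate(assign) if a == s_idx]
        total += cable_cost(group, sub)
    return total

def ga(turbines, subs, pop=30, gens=50):
    n, k = len(turbines), len(subs)
    popn = [[random.randrange(k) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: fitness(a, turbines, subs))
        elite = popn[: pop // 2]                  # keep the best half
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, n)
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if random.random() < 0.2:             # mutation: reassign one turbine
                child[random.randrange(n)] = random.randrange(k)
            children.append(child)
        popn = elite + children
    return min(popn, key=lambda a: fitness(a, turbines, subs))

turbines = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(12)]
subs = [(2.0, 2.0), (8.0, 8.0)]
best = ga(turbines, subs)
print(fitness(best, turbines, subs))
```

In the paper, the inner cost evaluation is an exact ILP over cable types and connections; swapping that heuristic in for `cable_cost` would recover the proposed method's structure.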
Pub Date: 2023-12-03 | DOI: 10.3390/computation11120240
Helen Papadaki, E. Kaselouris, M. Bakarezos, M. Tatarakis, N. Papadogiannis, V. Dimitriou
The dynamic behavior of solid Si targets irradiated by nanosecond laser pulses is computationally studied with transient, thermomechanical, three-dimensional finite element method simulations. The dynamic phase changes of the target and the generation and propagation of surface acoustic waves (SAWs) around the laser focal spot are captured by a finite element model with a very fine, uniformly structured mesh, able to provide high-resolution results over short and long spatiotemporal scales. The dynamic changes in the Si material properties up to the melting regime are considered, and the simulation results provide a detailed description of the response of the irradiated area, together with the dynamics of ultrasonic wave generation and propagation. The findings indicate that, due to the low thermal expansion coefficient and high penetration depth of Si, the amplitude of the generated SAW is small, and the time and distance needed for the ultrasound to be generated are greater than in dense metals. Additionally, in the melting regime, the development of highly nonlinear thermal stresses leads to the generation of an irregular ultrasound. Understanding the interaction between nanosecond lasers and Si is pivotal for advancing a wide range of technologies related to material processing and characterization.
Title: A Computational Study of Solid Si Target Dynamics under ns Pulsed Laser Irradiation from Elastic to Melting Regime
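The paper's transient thermomechanical 3-D FEM model is far beyond a short snippet, but the core of laser-driven heating can be hinted at with a 1-D explicit finite-difference sketch. All material constants and the absorbed flux below are rough, assumed values (approximately Si-like), and the model ignores phase change, SAWs, and mechanics entirely.

```python
import numpy as np

# Illustrative 1-D explicit finite-difference sketch of pulsed-laser surface
# heating (NOT the authors' 3-D FEM model). Material constants are rough
# Si-like values chosen for demonstration only.
k = 150.0        # thermal conductivity, W/(m K)
rho = 2330.0     # density, kg/m^3
cp = 700.0       # specific heat, J/(kg K)
alpha = k / (rho * cp)          # thermal diffusivity, m^2/s

nx, L = 200, 50e-6              # 50 um slab, 200 nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha        # stability: dt <= dx^2 / (2 alpha)

T = np.full(nx, 300.0)          # initial temperature, K
flux = 5e9                      # absorbed laser flux, W/m^2 (assumed)
pulse = 10e-9                   # 10 ns pulse duration

t = 0.0
while t < 30e-9:
    Tn = T.copy()
    # interior nodes: explicit update of the heat equation
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    q = flux if t < pulse else 0.0
    Tn[0] = Tn[1] + q * dx / k  # flux boundary at the irradiated surface
    T = Tn
    t += dt

print(round(T[0], 1))  # surface temperature 30 ns after pulse start
```

A full treatment like the paper's would add temperature-dependent Si properties, latent heat at melting, and the coupled elastic equations that carry the SAW.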
Pub Date: 2023-12-02 | DOI: 10.3390/computation11120239
Luís Fonseca, F. Ribeiro, J. Metrôlho
In-bed posture classification has attracted considerable research interest and has significant potential to enhance healthcare applications. Recent works generally use approaches based on pressure maps and machine learning algorithms and focus mainly on obtaining high accuracy in posture classification. Typically, these solutions use different datasets with varying numbers of sensors and classify the four main postures (supine, prone, left-facing, and right-facing) or, in some cases, include some variants of those main postures. Accordingly, this article has three main objectives: first, fine-grained detection of the postures of bedridden people, identifying a large number of postures, including small variations (28 different postures, considerably more than in any related work, which helps identify the actual position of the bedridden person with higher accuracy); second, analysis of the impact of pressure map resolution on posture classification accuracy, which has also not been addressed in other studies; and third, use of the PoPu dataset, which includes pressure maps from 60 participants in 28 different postures. The dataset was analyzed using five distinct ML algorithms (k-nearest neighbors, linear support vector machines, decision tree, random forest, and multi-layer perceptron). The findings show that these algorithms achieve high accuracy in 4-posture classification (up to 99% for the MLP) on the PoPu dataset, with lower accuracies for the finer-grained 28-posture classification (up to 68% for the random forest). The results indicate that ML algorithms can be used in finer-grained applications to specify the patient's exact position to some degree, since the parent posture is still accurately classified. Furthermore, reducing the resolution of the pressure maps affects the classifiers only slightly, which suggests that a lower resolution may suffice for applications that do not need fine granularity.
Title: Effects of the Number of Classes and Pressure Map Resolution on Fine-Grained In-Bed Posture Classification
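To give a concrete feel for the resolution experiment, the toy sketch below classifies synthetic "pressure maps" with a hand-rolled k-nearest-neighbors routine at full and block-averaged (halved) resolution. The data generator, the blob "postures", and all parameters are invented; the real study uses the PoPu dataset and five trained classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_maps(n_per_class, shape=(16, 8)):
    # Synthetic stand-ins for pressure maps: each "posture" is a blob of
    # pressure at a class-specific location plus noise (illustrative only).
    X, y = [], []
    centers = [(4, 2), (4, 6), (12, 2), (12, 6)]   # 4 toy postures
    for label, (r, c) in enumerate(centers):
        for _ in range(n_per_class):
            m = rng.normal(0, 0.1, shape)
            m[r-2:r+2, c-1:c+2] += 1.0
            X.append(m)
            y.append(label)
    return np.array(X), np.array(y)

def downsample(X, factor):
    # Halve resolution by block-averaging, mimicking a coarser sensor grid.
    n, h, w = X.shape
    return X.reshape(n, h // factor, factor, w // factor, factor).mean(axis=(2, 4))

def knn_accuracy(Xtr, ytr, Xte, yte, k=3):
    # Minimal k-NN on flattened maps with majority vote.
    acc = 0
    A, B = Xtr.reshape(len(Xtr), -1), Xte.reshape(len(Xte), -1)
    for x, label in zip(B, yte):
        d = np.linalg.norm(A - x, axis=1)
        votes = ytr[np.argsort(d)[:k]]
        acc += np.bincount(votes).argmax() == label
    return acc / len(yte)

Xtr, ytr = make_maps(30)
Xte, yte = make_maps(10)
full = knn_accuracy(Xtr, ytr, Xte, yte)
low = knn_accuracy(downsample(Xtr, 2), ytr, downsample(Xte, 2), yte)
print(full, low)
```

With well-separated classes, accuracy survives the block-averaging almost unchanged, which is the qualitative effect the paper reports for lower-resolution maps.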
Pub Date: 2023-12-01 | DOI: 10.3390/computation11120237
P. Ashok, B. Bala Tripura Sundari
Stochastic circuits are used in applications that require low area and power consumption. Computing performed with these circuits is referred to as stochastic computing (SC). The arithmetic operations of SC can be realized with minimal logic circuits. SC allows a trade-off between computational accuracy and area; the main challenge in SC is therefore improving accuracy. The accuracy depends on the stochastic number generator (SNG) of the SC system, which provides the stochastic input streams required for computation. Hence, we explore the accuracy of various arithmetic operations performed with stochastic computing using logic circuits. The contributions of this paper are as follows. First, we perform stochastic computing for arithmetic components using two different SNGs: a traditional linear feedback shift register (LFSR)-based SNG and an S-box-based SNG. Second, the arithmetic components are combined into a combinational circuit that evaluates an algebraic expression in the stochastic domain using the two SNGs. Third, a computational analysis of the stochastic arithmetic components and the stochastic algebraic equation is conducted. Finally, accuracy is analyzed and measured for LFSR-based versus S-box-based computation. The novel aspects of this work are the use of an S-box-based SNG in stochastic computing for arithmetic components, the implementation of stochastic computing in a combinational circuit built from the developed basic arithmetic components, and an exploration of accuracy with respect to the SNGs used.
Title: Accuracy Analysis on Design of Stochastic Computing in Arithmetic Components and Combinational Circuit
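A minimal example of the LFSR-based SNG idea: a comparator turns an 8-bit maximal-length LFSR into a stochastic bitstream whose ones-density encodes a value, and a single AND gate then multiplies two such streams (unipolar SC). Sharing one LFSR and bit-reversing its state for the second comparator is a common low-cost decorrelation trick; the taps and seed here are illustrative, not the paper's design (which also studies S-box-based SNGs).

```python
def lfsr_states(seed=0xA5, n=4080):
    # 8-bit maximal-length Fibonacci LFSR (taps 8,6,5,4: x^8+x^6+x^5+x^4+1),
    # period 255, visiting every value in 1..255 once per period.
    state = seed
    for _ in range(n):
        yield state
        fb = ((state >> 7) ^ (state >> 5) ^ (state >> 4) ^ (state >> 3)) & 1
        state = ((state << 1) | fb) & 0xFF

def bitrev8(x):
    # Bit-reversal decorrelates the two comparator inputs sharing one LFSR.
    return int(f"{x:08b}"[::-1], 2)

def sc_multiply(a, b, n=4080):
    # Unipolar stochastic multiply: P(out=1) ~ (a/256)*(b/256), realized
    # with a single AND gate on two comparator-generated bitstreams.
    ones = 0
    for s in lfsr_states(n=n):
        bit_a = s < a              # SNG for a/256
        bit_b = bitrev8(s) < b     # SNG for b/256, decorrelated by reversal
        ones += bit_a & bit_b
    return ones / n

est = sc_multiply(128, 64)         # ~0.5 * 0.25
print(round(est, 4))               # → 0.1216 (exactly 31/255 over full periods)
```

The small deviation from the ideal 0.125 is the kind of SNG-dependent accuracy loss the paper quantifies for LFSR-based versus S-box-based generators.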
Pub Date: 2023-12-01 | DOI: 10.3390/computation11120238
Yunus Emre Orhan, Harun Pirim, Yusuf Akbulut
This study examines how U.S. senators strategically used hashtags to create political communities on Twitter during the 2022 Midterm Elections. We propose a way to model topic-based implicit interactions among Twitter users and introduce the concept of Building Political Hashtag Communities (BPHC). Using multiplex network analysis, we provide a comprehensive view of elites’ behavior. Through AI-driven topic modeling on real-world data, we observe that, at a general level, Democrats heavily rely on BPHC. Yet, when disaggregating the network across layers, this trend does not uniformly persist. Specifically, while Republicans engage more intensively in BPHC discussions related to immigration, Democrats heavily rely on BPHC in topics related to identity and women. However, only a select group of Democratic actors engage in BPHC for topics on labor and the environment—domains where Republicans scarcely, if at all, participate in BPHC efforts. This research contributes to the understanding of digital political communication, offering new insights into echo chamber dynamics and the role of politicians in polarization.
Title: Building Political Hashtag Communities: A Multiplex Network Analysis of U.S. Senators on Twitter during the 2022 Midterm Elections
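The notion of a topic layer in a multiplex hashtag network can be sketched as follows: two accounts are linked in layer t when they share a hashtag that a topic model assigned to t. All accounts, hashtags, and topic labels below are invented toy data; the study derives topics with AI-driven topic modeling over real senator tweets.

```python
from collections import defaultdict
from itertools import combinations

# Toy sketch (invented data): senators' hashtags, each hashtag mapped to a
# topic by some upstream topic model. A multiplex layer per topic links two
# accounts when they share at least one hashtag from that topic.
hashtag_topic = {
    "#BorderCrisis": "immigration", "#SecureTheBorder": "immigration",
    "#RoeVsWade": "identity", "#WomensRights": "identity",
    "#GreenJobs": "environment",
}
usage = {
    "senA": {"#BorderCrisis", "#SecureTheBorder"},
    "senB": {"#BorderCrisis"},
    "senC": {"#RoeVsWade", "#WomensRights", "#GreenJobs"},
    "senD": {"#WomensRights"},
}

layers = defaultdict(set)
for u, v in combinations(sorted(usage), 2):
    shared = usage[u] & usage[v]
    for tag in shared:
        layers[hashtag_topic[tag]].add((u, v))

for topic in sorted(layers):
    print(topic, sorted(layers[topic]))
```

Disaggregating by layer, as here, is what lets the study show that BPHC intensity differs by topic (e.g., immigration versus identity) rather than uniformly by party.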
Pub Date: 2023-11-25 | DOI: 10.3390/computation11120236
Gabriel Alexandru Constantin, M. Munteanu, G. Voicu, G. Paraschiv, E. Ştefan
The baking process in tunnel ovens is influenced by many parameters. Among the most important are baking time, the volume of the dough pieces, the texture and humidity of the dough, the temperature distribution inside the oven, and the flow of air currents in the baking chamber. To obtain consistent quality in bakery and pastry products, and to operate the oven efficiently, the designers' solution should be subjected to modelling, simulation, and analysis before manufacture; Computational Fluid Dynamics (CFD) numerical simulation is well suited to this task. In this study, we analyzed the air flow inside the baking chamber of an oven that is used very frequently on pastry lines. After modelling and simulation, the temperature distribution inside the oven was obtained in the longitudinal and transverse planes. To validate the simulation experimentally, the temperatures inside the analyzed electric oven were measured. The measured temperatures validated the simulation results with a maximum error of 7.6%.
Title: An Analysis of Air Flow in the Baking Chamber of a Tunnel-Type Electric Oven
Pub Date: 2023-11-22 | DOI: 10.3390/computation11120235
Dinara Akhmetsadyk, Arkady Ilyin, Nazim Guseinov, Gary Beall
SO2 (sulfur dioxide) is a toxic substance emitted into the environment by the burning of sulfur-containing fossil fuels in cars, factories, power plants, and homes. The issue is of grave concern because of its negative effects on the environment and human health. Therefore, the search for materials capable of detecting SO2, and research into developing effective gas-sensing materials, hold significant importance for environmental and health applications. One effective way to predict the structure and electronic properties of systems that can interact with a molecule is to use quantum mechanical approaches. In this work, the DFT (Density Functional Theory) program DMol3 in Materials Studio was used to study the interactions between the SO2 molecule and four systems. The adsorption energy, bond lengths, bond angles, charge transfer, and density of states of the SO2 molecule on pristine graphene, N-doped graphene, Ga-doped graphene, and -Ga-N- co-doped graphene were investigated using DFT calculations. The data indicate that the bonding between the SO2 molecule and pristine graphene is relatively weak, with a binding energy of −0.32 eV and a bond length of 3.06 Å, indicating physical adsorption. Next, adsorption of the molecule on the N-doped graphene system was considered; it is negligible, and in general the interaction of SO2 with this system does not significantly change the electronic properties. However, the adsorption energy of the gas molecule on Ga-doped graphene increased significantly relative to pristine graphene. The increased adsorption energy and decreased adsorption distance between SO2 and Ga-doped graphene are evidence of chemisorption. In addition, our results show that introducing -Ga-N- co-dopants in an "ortho" configuration into pristine graphene significantly affects the adsorption between the gas molecule and graphene. This approach is thus highly practical for the adsorption of SO2 molecules.
Title: Adsorption of SO2 Molecule on Pristine, N, Ga-Doped and -Ga-N- co-Doped Graphene: A DFT Study
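The binding energies quoted above follow the standard DFT adsorption-energy convention, which is easy to state explicitly. The total energies below are hypothetical placeholders, chosen only so that the difference reproduces the reported −0.32 eV for SO2 on pristine graphene.

```python
def adsorption_energy(e_complex, e_surface, e_molecule):
    # Standard DFT convention:
    #   E_ads = E(surface + molecule) - E(surface) - E(molecule)
    # Negative values mean binding is energetically favourable.
    return e_complex - e_surface - e_molecule

# Hypothetical total energies in eV (NOT from the paper), chosen so the
# difference matches the reported -0.32 eV for SO2 on pristine graphene.
e_ads = adsorption_energy(-1050.82, -500.00, -550.50)
print(round(e_ads, 2))  # → -0.32
```

A weakly negative E_ads with a long bond (3.06 Å here) indicates physisorption; the larger magnitudes and shorter distances on Ga-doped graphene are the chemisorption signature the abstract describes.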
Pub Date: 2023-11-17 | DOI: 10.3390/computation11110233
Moshibudi Ramoshaba, T. Mosuang
A full-potential, all-electron density functional method within the generalized gradient approximation is used herein to investigate correlations between the electronic, elastic, and thermo-electric transport properties of cubic copper sulphide and copper selenide. The electronic band structure and density of states suggest metallic behaviour with a zero-energy band gap for both materials. Elastic property calculations suggest stiff materials, with bulk to shear modulus ratios of 0.35 and 0.44 for Cu2S and Cu2Se, respectively. Thermo-electric transport properties were estimated using the Boltzmann transport approach. The Seebeck coefficient, electrical conductivity, thermal conductivity, and power factor all suggest potential p-type conductivity for α-Cu2S and n-type conductivity for α-Cu2Se.
Title: Correlations of the Electronic, Elastic and Thermo-Electric Properties of Alpha Copper Sulphide and Selenide
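The transport quantities named above combine into two standard thermoelectric figures: the power factor PF = S²σ and the dimensionless figure of merit zT = S²σT/κ. The numbers below are generic illustrative values, not results from the paper.

```python
def power_factor(seebeck, sigma):
    # PF = S^2 * sigma, in W m^-1 K^-2
    # (seebeck in V/K, sigma in S/m)
    return seebeck**2 * sigma

def figure_of_merit(seebeck, sigma, kappa, T):
    # zT = S^2 * sigma * T / kappa (dimensionless)
    return power_factor(seebeck, sigma) * T / kappa

# Illustrative values only (NOT from the paper): S = 200 uV/K,
# sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 300 K.
pf = power_factor(200e-6, 1e5)
zt = figure_of_merit(200e-6, 1e5, 1.5, 300.0)
print(round(pf, 6), round(zt, 3))  # → 0.004 0.8
```

The sign of the Seebeck coefficient in such calculations is what distinguishes p-type (S > 0) from n-type (S < 0) behaviour, as reported for α-Cu2S versus α-Cu2Se.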
Pub Date: 2023-11-16 | DOI: 10.3390/computation11110232
Ana Paula Aravena-Cifuentes, J. Nuñez-Gonzalez, A. Elola, Malinka Ivanova
This study presents a model for predicting photovoltaic power generation from meteorological, temporal and geographical variables, without using irradiance values, which have traditionally posed challenges for accurate prediction. Validation methods and evaluation metrics are used to analyse four approaches that differ in how the training and test data are split and in whether location-independent modelling is performed. The coefficient of determination, R2, measures the proportion of variation in photovoltaic power generation that the model’s variables can explain, while gCO2eq represents the CO2 emissions equivalent to each unit of power generation. Both are used to compare model performance and environmental impact. The results show significant differences between locations, with substantial improvements in some cases and limited improvements in others. The importance of customising the predictive model for each specific location is emphasised. Furthermore, it is concluded that environmental impact studies in model production are an additional step towards creating more sustainable and efficient models. Likewise, this research considers both the accuracy of solar energy predictions and the environmental impact of the computational resources used in the process, thereby promoting the responsible and sustainable progress of data science.
{"title":"Development of AI-Based Tools for Power Generation Prediction","authors":"Ana Paula Aravena-Cifuentes, J. Nuñez-Gonzalez, A. Elola, Malinka Ivanova","doi":"10.3390/computation11110232","DOIUrl":"https://doi.org/10.3390/computation11110232","url":null,"abstract":"This study presents a model for predicting photovoltaic power generation based on meteorological, temporal and geographical variables, without using irradiance values, which have traditionally posed challenges and difficulties for accurate predictions. Validation methods and evaluation metrics are used to analyse four different approaches that vary in the distribution of the training and test database, and whether or not location-independent modelling is performed. The coefficient of determination, R2, is used to measure the proportion of variation in photovoltaic power generation that can be explained by the model’s variables, while gCO2eq represents the amount of CO2 emissions equivalent to each unit of power generation. Both are used to compare model performance and environmental impact. The results show significant differences between the locations, with substantial improvements in some cases, while in others improvements are limited. The importance of customising the predictive model for each specific location is emphasised. Furthermore, it is concluded that environmental impact studies in model production are an additional step towards the creation of more sustainable and efficient models. 
Likewise, this research considers both the accuracy of solar energy predictions and the environmental impact of the computational resources used in the process, thereby promoting the responsible and sustainable progress of data science.","PeriodicalId":52148,"journal":{"name":"Computation","volume":"6 2","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139267540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
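The coefficient of determination used above to score the predictions can be computed directly from its definition. A minimal sketch with illustrative values, not data from the study:

```python
# R^2 = 1 - SS_res / SS_tot: the fraction of variance in the observed
# values that the model's predictions explain. Values close to 1 indicate
# a good fit; 0 means no better than predicting the mean.

def r_squared(y_true, y_pred):
    """Coefficient of determination for paired observed/predicted values."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1.0 - ss_res / ss_tot

# Illustrative PV power observations vs. model predictions (arbitrary units)
observed = [3.0, 5.0, 7.0, 9.0]
predicted = [2.8, 5.1, 7.2, 8.7]
print(round(r_squared(observed, predicted), 4))
```

In practice a library routine such as `sklearn.metrics.r2_score` does the same computation with edge-case handling.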
Pub Date : 2023-11-16DOI: 10.3390/computation11110231
Azad Arif Hama Amin, Aso M. Aladdin, Dler O. Hasan, Soran R. Mohammed-Taha, Tarik Ahmed Rashid
Analyzing stochastic algorithms for comprehensive performance and comparison across diverse contexts is essential. When algorithm effectiveness is evaluated across a wide spectrum of test functions, including both classical benchmarks and the CEC-C06 2019 conference functions, distinct performance patterns emerge in specific situations, underscoring the importance of choosing algorithms contextually. Additionally, researchers face a critical issue when they apply an arbitrarily chosen statistical model to determine significance values, without first conducting studies to select a model appropriate for evaluating performance outcomes. To address this concern, this study employs rigorous statistical testing to underscore substantial performance variations between pairs of algorithms, thereby emphasizing the pivotal role of statistical significance in comparative analysis. It also yields valuable insights into the suitability of algorithms for various optimization challenges, providing professionals with the information to make informed decisions. This is achieved by pinpointing algorithm pairs with favorable statistical distributions, facilitating practical algorithm selection. The study encompasses multiple nonparametric statistical hypothesis models, such as the Wilcoxon rank-sum test, single-factor analysis, and two-factor ANOVA tests. This thorough evaluation enhances our grasp of algorithm performance across various evaluation criteria. Notably, the research addresses discrepancies in previous statistical test findings in algorithm comparisons, enhancing result reliability for later research. The results show that significance outcomes differ across tests, as in the comparisons of Leo versus the FDO and the DA versus the WOA. This highlights the need to tailor the test model to the specific scenario, as p-value outcomes differ among tests applied to the same algorithm pair.
{"title":"Enhancing Algorithm Selection through Comprehensive Performance Evaluation: Statistical Analysis of Stochastic Algorithms","authors":"Azad Arif Hama Amin, Aso M. Aladdin, Dler O. Hasan, Soran R. Mohammed-Taha, Tarik Ahmed Rashid","doi":"10.3390/computation11110231","DOIUrl":"https://doi.org/10.3390/computation11110231","url":null,"abstract":"Analyzing stochastic algorithms for comprehensive performance and comparison across diverse contexts is essential. When algorithm effectiveness is evaluated across a wide spectrum of test functions, including both classical benchmarks and the CEC-C06 2019 conference functions, distinct performance patterns emerge in specific situations, underscoring the importance of choosing algorithms contextually. Additionally, researchers face a critical issue when they apply an arbitrarily chosen statistical model to determine significance values, without first conducting studies to select a model appropriate for evaluating performance outcomes. To address this concern, this study employs rigorous statistical testing to underscore substantial performance variations between pairs of algorithms, thereby emphasizing the pivotal role of statistical significance in comparative analysis. It also yields valuable insights into the suitability of algorithms for various optimization challenges, providing professionals with the information to make informed decisions. This is achieved by pinpointing algorithm pairs with favorable statistical distributions, facilitating practical algorithm selection. The study encompasses multiple nonparametric statistical hypothesis models, such as the Wilcoxon rank-sum test, single-factor analysis, and two-factor ANOVA tests. This thorough evaluation enhances our grasp of algorithm performance across various evaluation criteria. Notably, the research addresses discrepancies in previous statistical test findings in algorithm comparisons, enhancing result reliability for later research. 
The results show that significance outcomes differ across tests, as in the comparisons of Leo versus the FDO and the DA versus the WOA. This highlights the need to tailor the test model to the specific scenario, as p-value outcomes differ among tests applied to the same algorithm pair.","PeriodicalId":52148,"journal":{"name":"Computation","volume":"68 3","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139268081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}